# Beyond Static Datasets: A Deep Interaction Approach to LLM Evaluation

**arXiv:** [2309.04369](http://arxiv.org/abs/2309.04369v1) · **Authors:** Jiatong Li, Rui Li, Qi Liu · **Published:** 2023-09-08
###### Abstract.
Large Language Models (LLMs) have made progress in various real-world tasks, which stimulates the demand for the evaluation of LLMs. Existing LLM evaluation methods are mainly supervised signal-based, which depend on static datasets and cannot evaluate the ability of LLMs in dynamic real-world scenarios where deep interaction widely exists. Other LLM evaluation methods are human-based, which are costly and time-consuming and are incapable of large-scale evaluation of LLMs. To address the issues above, we propose a novel Deep Interaction-based LLM-evaluation framework. In our proposed framework, LLMs' performances in real-world domains can be evaluated from their deep interaction with other LLMs in elaborately designed evaluation tasks. Furthermore, our proposed framework is a general evaluation method that can be applied to a host of real-world tasks such as machine translation and code generation. We demonstrate the effectiveness of our proposed method through extensive experiments on four elaborately designed evaluation tasks. Our source code is available at [https://anonymous.4open.science/r/DeepEval-112F](https://anonymous.4open.science/r/DeepEval-112F).
Footnote †: Both authors contributed equally to this research.
## 1. Introduction
With the rapid growth of Large Language Models (LLMs), LLM-based applications have made progress and exceeded human performance in many real-world domains such as machine translation and code generation. The advancement of LLM-based applications also intensifies the need for the evaluation of LLMs. Due to the huge scale and the lack of interpretability of LLMs, the evaluation of LLMs mainly focuses on their skill sets in domain-specific tasks. The evaluation results can later guide users to choose appropriate LLMs for their unique requirements.
In the literature, traditional LLM-evaluation methods are either human-based or supervised signal-based, as shown in Figure 1. In human-based methods, human interrogators interact with LLMs, and the evaluation of LLMs depends on the judgment of the human interrogators. For example, in the Turing test (Zhu et al., 2017), a human interrogator can interact with two anonymous examinees (one is an LLM while the other is a human) and is required to distinguish the LLM from the human in a limited time. Despite their flexibility, human-based evaluation methods are too costly to be applied to the large-scale evaluation of LLMs on numerous tasks. In supervised signal-based evaluation methods, LLMs are required to generate correct outputs given dataset inputs. Compared to human-based evaluations, supervised signal-based evaluations are more automatic because evaluation results can be obtained by automatically calculating evaluation metrics based on dataset labels and LLM outputs. Thus supervised signal-based methods have been widely applied to large-scale evaluations of LLMs. However, these methods have two shortcomings. First, datasets in supervised signal-based evaluations are static, which makes it impossible for these methods to evaluate the deep interaction ability of LLMs. In real-world scenarios, LLMs often interact with users over multiple rounds to receive feedback and fix their outputs to meet the users' needs. As a result, static supervised signal-based methods cannot reflect the true performance of LLMs in these scenarios. Second, LLMs often act in different roles even within the same task. For example, in code generation, some users need LLMs to act as programmers to generate complete code, while others need LLMs to act as reviewers to optimize their own code. However, in supervised signal-based methods, an LLM can act in only one role per dataset. This limits the range of evaluated skills and the method's ability to discover the potential of LLMs to act in different roles.
To address the issues above, in this paper, we propose a novel deep interaction-based LLM-Evaluation Framework. Our motivation is that deep interactions between LLMs and users can be simulated by the interaction of various LLMs in elaborately designed evaluation tasks, and LLMs' deep interaction abilities and domain-specific skill proficiencies can be evaluated from their interaction records. Specifically, we first propose a general evaluation framework that can be integrated with various evaluation tasks. Next, we
Figure 1. Left: Turing test, an example of human-based evaluation methods. Right: mathematical question answering, an example of supervised signal-based evaluation methods.
propose a general evaluation running algorithm to ensure the correctness and fairness of the evaluation process. Furthermore, we analyze and give the principle of the design of deep interaction-based evaluation tasks. In experiments, we evaluate the performance of four well-known LLMs with our proposed framework in four elaborately designed evaluation tasks, i.e., public goods game, idioms solitaire, machine translation, and code generation.
In a word, our contributions can be summarized as follows:
* We propose a deep interaction-based evaluation framework for evaluating LLMs in real-world scenarios with deep interactions, which can be integrated with a host of tasks.
* We analyze and present the principles of the design of evaluation tasks, and give four examples for experiments.
* We demonstrate the effectiveness of our proposed method by extensive experiments on four elaborately designed tasks that evaluate both the deep interaction abilities and domain-specific skills of four well-known LLMs.
## 2. Related Works
### The Development of LLMs
Large language models (LLMs) are sophisticated artificial intelligence systems that are designed to understand and generate human-like text to aid humans in all kinds of real-world tasks. LLMs are trained on vast amounts of data and utilize deep learning techniques to learn patterns, language structures, and relationships between words and sentences. So far, LLMs have revolutionized the field of natural language processing (NLP) and have been the subject of extensive research and development. These models, such as BERT (Bidirectional Encoder Representations from Transformers) (Zhou et al., 2017) and GPT-3 (Generative Pre-trained Transformer 3), have demonstrated impressive capabilities in various NLP tasks, including text generation, machine translation, question answering, summarization, etc.
Recently, starting with the release of ChatGPT1, more and more LLMs have been proposed to solve various real-world challenges. For example, Claude2 is an LLM application proposed by Anthropic, which is good at many real-world tasks such as paper writing and coding. PaLM (Bordes et al., 2017) is a large-scale LLM proposed by Google, and shows strong capabilities in multilingual tasks and code generation. Indeed, the recent advancement of LLMs has shown their power in boosting people's productivity. However, despite their capabilities, LLMs are black boxes that lack explainability, and we cannot ensure their competitive performance in every real-world situation. As a result, researchers seek to evaluate LLMs in various scenarios to quantitatively measure their capabilities in different tasks.
Footnote 1: [https://openai.com/chatgpt](https://openai.com/chatgpt)

Footnote 2: [https://www.anthropic.com/claude-in-slack](https://www.anthropic.com/claude-in-slack)
### The Evaluation of LLMs
The recent advancement of LLMs in real-world scenarios stimulates the requirement for the evaluation of LLMs (Bordes et al., 2017). In the literature, the evaluation of LLMs can be human-based or supervised signal-based. Specifically, human-based evaluation depends on human interrogators to measure the performance of LLMs. For example, Likert scale (K
**Definition 3.1**.: **Deep interaction-based evaluation tasks.** A deep interaction-based evaluation task is defined as a collection of components listed as follows:

* A set \(\mathcal{P}=\{P_{1},P_{2},\ldots,P_{N}\}\) (the set of \(N\) LLMs).
* A history set \(H\) of sequences (finite or infinite), which can be represented by a gaming tree. \(H\) satisfies the following properties:
  1. The empty sequence \(\emptyset\) is a member of \(H\), i.e., \(\emptyset\in H\), which serves as the root node of the gaming tree.
  2. If a sequence \((a_{k})_{k=1,\ldots,K}\in H\) and \(L<K\), then \((a_{k})_{k=1,\ldots,L}\in H\). Further, if \((a_{k})_{k=1,\ldots,K+1}\notin H\), then \((a_{k})_{k=1,\ldots,K}\) is a **terminal** history.
  3. If an infinite sequence \((a_{k})_{k=1}^{\infty}\) satisfies \((a_{k})_{k=1,\ldots,L}\in H\) for every positive integer \(L\), then \((a_{k})_{k=1}^{\infty}\in H\).
* A terminal history set \(Z\) consisting of all terminal histories, i.e., \(Z=\{(a_{k})_{k=1,\ldots,K}\in H\mid(a_{k})_{k=1,\ldots,K+1}\notin H\}\).
* An LLM function \(P_{f}:H\backslash Z\to\mathcal{P}\) that assigns to each non-terminal history \(h\in H\backslash Z\) a member of \(\mathcal{P}\) (\(P_{f}(h)\) is the next LLM making a decision given the history sequence \(h\)).
* A payoff function \(u_{i}:Z\to\mathbb{R}\) for every LLM \(P_{i}\in\mathcal{P}\) (\(u_{i}(z)\) is the payoff of LLM \(P_{i}\) given the terminal history \(z\in Z\)).

Then a deep interaction-based evaluation task can be represented as a tuple, i.e., \(R=(\mathcal{P},H,Z,P_{f},U)\), where \(U=\{u_{1},\ldots,u_{N}\}\) is the set of payoff functions.
Given the definition of evaluation tasks, which can be represented as text and input to LLMs, LLMs can be asked to interact with other participants to maximize their payoffs. The abilities of LLMs can then be evaluated from their interaction histories. Specifically, let \(\Theta=(\theta_{1},\theta_{2},\ldots,\theta_{N})\) be the abilities of the target LLMs; then the LLM evaluation task can be defined as follows:
**Definition 3.2**.: **Deep interaction-based evaluation of LLMs.** Given a set of LLMs \(\mathcal{P}\) and an evaluation task \(R\), the goal of the deep interaction-based evaluation of LLMs is to evaluate LLMs' abilities \(\Theta\) from observed history sequences of LLMs in the game.
### Necessary Conditions of DeepEval
The top priority of deep interaction-based evaluation methods is to ensure the correctness of the evaluation. Thus it is necessary to ensure the fairness of the evaluation process and the stableness of evaluation results. To this end, we propose the fairness condition and the stableness condition of DeepEval, which constitute the prerequisites of the design of the framework.
**Condition 1**.: _Fairness condition. All LLMs that participate in the interaction process should be anonymous, and the delivery of LLMs' messages should be synchronous._
The fairness condition essentially ensures the fairness of the evaluation process from two aspects. First, the anonymity of LLMs avoids biased interaction policies of LLMs. For example, if one LLM learns the identity of another LLM, then it can use strategies targeting the flaws of the latter to win the evaluation task. Second, the synchronicity of the delivery of LLMs' messages guarantees that each LLM has an equal chance to collect information and make decisions.
Next, we introduce the stableness condition, which aims to obliterate the influence of uncertainties on evaluation results. The stableness condition is defined as follows.
**Condition 2**.: _Stableness condition. To evaluate a set of LLMs, the deep interaction-based evaluation process should be run independently multiple times until the evaluation results of the LLMs converge._
### The Structure of DeepEval
The structure of the deep interaction-based LLM evaluation framework (DeepEval) is shown in Figure 2. The motivation of the design of DeepEval is to ensure the fairness of the evaluation process through well-designed mechanisms of the framework. To this end, we propose two pivotal components of DeepEval, i.e., the message pool and the referee. We first introduce the message pool and the referee respectively, then we describe the workflow of DeepEval to illustrate how they work together to ensure the fairness of the evaluation process.
* **Message Pool.** The message pool in DeepEval is used for containing the interaction histories of LLMs, and serves as the history set \(H\) in our formal definition of deep interaction-based evaluation tasks. The message pool is managed by the referee, who can read and write messages. Indeed, the message pool blocks direct interaction between LLMs, which, together with the synchronous interaction algorithm introduced later, ensures the synchronicity of the interaction process.
* **Referee.** The referee in DeepEval is a pivotal role that is responsible for supervising the evaluation task process and evaluating the performance of LLMs based on interaction histories. Inspired by studies in LLM-as-a-judge (Zhu et al., 2017; Wang et al., 2018), the referee in DeepEval is acted by an LLM.
**The workflow of DeepEval.** DeepEval starts with an evaluation task box, which stores the information of evaluation tasks. To evaluate LLMs, DeepEval first selects an evaluation task from the evaluation task box and inputs the formalized rule to the message pool, as shown in the middle part of Figure 2. Then each LLM in DeepEval is assigned a unique ID by the referee (either an LLM or a human). Next, the referee starts and monitors the deep interaction process. In the deep interaction process, the interaction of LLMs is done through the public message pool, which ensures the synchronicity of communication. Once the referee judges that the task has come to an end, it collects and evaluates the performance of LLMs according to their interaction records. The running of the evaluation task is repeated multiple times to satisfy the stableness condition. DeepEval satisfies the fairness condition from two aspects. First, for anonymity, LLMs are assigned identity-irrelevant IDs in DeepEval. Second, for synchronicity, we propose a synchronous interaction algorithm, which will be introduced in the next part. Finally, after finishing the evaluation task, the scores of the LLMs are used to calculate the final evaluation results.
### Deep Interaction Algorithm
In DeepEval, LLMs interact with each other to collect information that helps them make decisions. To satisfy the fairness condition of DeepEval, we propose a synchronous interaction algorithm that ensures the synchronicity of the interaction process of LLMs, as shown in Algorithm 1. The basic idea is that (1) the entire interaction process of LLMs can be decomposed into interaction _rounds_, and (2) the communication of LLMs can be done through a public message pool managed by the referee (see Figure 2), which blocks LLMs from sending messages until all LLMs have received messages from the last round. Specifically, at the beginning of each round, each LLM sends its reply given its context (which is equivalent to a non-terminal history sequence \(h\in H\backslash Z\) of the game \(R=(\mathcal{P},H,Z,P_{f},U)\)) to the public message pool. Next, for each LLM that can receive messages from others according to the task rule, the referee selects messages from the message pool and sends these messages to the LLM. At the end of each round, the referee judges whether the game status has come to an end.
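As an illustration, the round-based loop can be sketched in Python as follows; `llm_reply`, `referee_select` and `referee_is_terminal` are hypothetical callables standing in for the LLM calls and the referee logic, neither of which is specified here:

```python
def run_rounds(llms, llm_reply, referee_select, referee_is_terminal,
               max_rounds=20):
    """Round-based synchronous interaction through a public message pool."""
    pool = []                                     # (round, llm_id, message)
    contexts = {i: [] for i in range(len(llms))}  # each LLM's visible history
    for rnd in range(max_rounds):
        # Phase 1: every LLM replies to its current context; replies are
        # buffered, so no LLM sees a same-round message before replying.
        replies = {i: llm_reply(llm, contexts[i]) for i, llm in enumerate(llms)}
        pool.extend((rnd, i, msg) for i, msg in replies.items())
        # Phase 2: the referee routes each round's messages according to
        # the task rule, only after all replies have been collected.
        for i in range(len(llms)):
            contexts[i].extend(referee_select(pool, rnd, i))
        # Phase 3: the referee judges whether the game has ended.
        if referee_is_terminal(pool):
            break
    return pool
```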
### How to Design Evaluation Tasks
The design of evaluation tasks in DeepEval is significant because it decides what to evaluate and how to evaluate. For the first aspect, the design of an evaluation task should be consistent with real-world tasks and require relevant skills, such as machine translation and code review. For the second aspect, the rule of the evaluation task regularizes the interaction of LLMs and thus defines how to evaluate. To this end, inspired by game theories, we propose the symmetric design and the asymmetric design of evaluation tasks, which evaluate LLMs from different perspectives.
* **Symmetric evaluation task**. In symmetric evaluation tasks, all LLMs play the same role with the same action set and task goals. Because the task goals of the LLMs are the same, this type of evaluation task can evaluate domain-specific skills of LLMs in a competitive manner. Symmetric evaluation tasks are suitable for evaluating non-generative abilities of LLMs, such as vocabulary.
* **Asymmetric evaluation task**. In asymmetric evaluation tasks, LLMs play different roles with different action sets and task goals. This type of evaluation task is close to real-world scenarios and can evaluate the performance of LLMs from different aspects depending on the roles they play. Especially in generative tasks such as code review and machine translation, the design of asymmetric evaluation tasks can follow a _writer-editor_ paradigm. In this paradigm, there are two participants in the evaluation task in total. One LLM acts as a writer that generates outputs to meet the task requirement. The other LLM acts as an editor that fixes and polishes the writer's output to fit the task requirement better. The writing-polishing process can run for multiple rounds until the writer and the editor reach a consensus. Next, the two LLMs swap their roles and repeat the task. Finally, the performance of the two LLMs can be evaluated by comparing their scores on the same role; thus both the writing ability and the polishing ability can be evaluated simultaneously.

Figure 2. The deep interaction-based LLM evaluation framework.
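A minimal sketch of one writer-editor evaluation, assuming hypothetical `generate`, `polish` and `judge` callables that stand in for LLM calls (with `judge` returning a pair of writer and editor scores):

```python
def writer_editor_task(task, a, b, generate, polish, judge, max_rounds=3):
    """Each of a, b writes once while the other edits; then roles swap."""
    scores = {}
    for writer, editor in ((a, b), (b, a)):
        draft = generate(writer, task)
        for _ in range(max_rounds):        # writing-polishing rounds
            revised = polish(editor, task, draft)
            if revised == draft:           # consensus reached
                break
            draft = revised
        scores[writer, editor] = judge(task, draft)
    return scores
```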
### Evaluation Metrics in DeepEval
#### 3.6.1. Symmetric Evaluation tasks
In symmetric evaluation tasks, suppose there are \(N\) LLMs. Let \(V=(v_{ij})_{N\times M}\) denote the payoff matrix calculated by the referee, where \(M\) denotes the number of repetitions and \(v_{ij}\) denotes LLM \(i\)'s payoff in the \(j\)-th run of the task. All components of \(V\) are comparable because the LLMs play the same role. So the evaluation result \(\Theta=(\theta_{1},\theta_{2},\ldots,\theta_{N})\) is defined as the mean score of each LLM:
\[\theta_{i}=\frac{1}{M}\sum_{j=1}^{M}v_{ij},\ i=1,2,\ldots,N. \tag{1}\]
#### 3.6.2. Asymmetric Evaluation tasks
In asymmetric evaluation tasks, not all components of the payoff matrix \(V\) are comparable because the LLMs' roles differ. We assume that there are \(L\) roles in a task (\(2\leq L\leq N\)). Let \(S=(s_{ij})_{N\times M}\) denote the role assignment matrix, where \(s_{ij}\in\{1,2,\ldots,L\}\) denotes LLM \(i\)'s role in the \(j\)-th run of the task. Then the evaluation result is an \(N\times L\) matrix, i.e., \(\Theta=(\theta_{1},\ldots,\theta_{N})=(\theta_{il})_{N\times L}\), where \(\theta_{il}\) denotes LLM \(i\)'s ability when playing role \(l\). Then \(\theta_{il}\) is defined as the mean score of LLM \(i\) over the runs in which it plays role \(l\):
\[\theta_{il}=\frac{\sum_{j=1}^{M}I(s_{ij}=l)\cdot v_{ij}}{\sum_{j=1}^{M}I(s_{ij} =l)},\ i=1,2,\ldots,N, \tag{2}\]
where \(I(\cdot)\) denotes the indicator function.
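For concreteness, both estimators can be computed from the payoff matrix \(V\) and the role-assignment matrix \(S\) in a few lines; this is a sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def symmetric_scores(V):
    """Equation (1): theta_i is the mean payoff over the M repetitions."""
    return V.mean(axis=1)

def asymmetric_scores(V, S, L):
    """Equation (2): theta_{il} is the mean payoff of LLM i over the runs
    where it played role l (roles encoded as 1, ..., L). Assumes every
    LLM plays every role at least once."""
    N = V.shape[0]
    theta = np.zeros((N, L))
    for l in range(1, L + 1):
        mask = (S == l)                    # the indicator I(s_ij = l)
        theta[:, l - 1] = (V * mask).sum(axis=1) / mask.sum(axis=1)
    return theta
```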
### Implementations of Evaluation Tasks
In this part, we propose four elaborately-designed evaluation tasks to show the feasibility of DeepEval. These evaluation tasks include the public goods game, idiom solitaire, code review and machine translation. An overview of these tasks is shown in Figure 3. The designs of these evaluation tasks are introduced in the following.
#### 3.7.1. Public Goods Game
The public goods game (PGG) (Game et al., 2017; Dwork et al., 2018) is a symmetric evaluation task that requires the decision-making ability of LLMs. Specifically, at the start of a PGG, each of the \(N\) LLMs has the same amount of goods (e.g., dollars). In each round, each LLM can decide whether or not to invest (part of or all of) its goods in the public goods pool. All invested goods are then summed and multiplied by a constant factor, and the resulting goods are shared equally by all LLMs. For example, if two of four LLMs invested 100 dollars in total and the constant factor is \(\alpha=1.2\), then the invested goods grow to \(100\times 1.2=120\) dollars, and every LLM will get \(120/4=30\) dollars, including those who did not invest. The payoff function of each LLM is the total amount of its private goods. The PGG is a classical example in game theory, and numerous studies have indicated that the PGG requires participants' decision-making ability in complex scenarios to maximize their payoff. Here, we consider two task modes for the public goods game (a sketch of the round payoff update follows the list below):
* Mode 1: After each round, the referee informs each participant of the earnings it received in that round.
* Mode 2: After each round, the referee informs each participant of the ordered sequence of all investment amounts for that round.
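A minimal sketch of the payoff update in one PGG round, matching the worked example above (the function name is ours):

```python
def pgg_round(private, investments, alpha=1.2):
    """Pooled investments are multiplied by alpha and shared equally
    among all participants, investors or not."""
    n = len(private)
    share = alpha * sum(investments) / n
    return [p - inv + share for p, inv in zip(private, investments)]

# Two of four LLMs invest 100 dollars in total; everyone receives 30.
assert pgg_round([100, 100, 100, 100], [60, 40, 0, 0]) == [70, 90, 130, 130]
```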
#### 3.7.2. Idiom Solitaire
Idiom solitaire (Idiom, 2015; Dwork et al., 2018) is a symmetric evaluation task to evaluate the Chinese idiom vocabulary of LLMs. Literally, idiom solitaire is a popular activity in China, where two players give Chinese idioms alternately, and the first Chinese character of the current idiom must be the last Chinese character of the previous idiom. To win the idiom solitaire task, an LLM needs not only a large enough Chinese idiom vocabulary, but also the ability to retrieve idioms that are consistent with the task rule and make it difficult for the other participant to retrieve the next idiom. In the idiom solitaire task, LLMs are randomly assigned the speaking order. LLMs then alternately give an idiom based on the last idiom given by the other participant. The evaluation score of idiom solitaire is the number of wins of each LLM.
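A referee-side check of the chaining rule might look as follows; the function name is ours, and the validity of each idiom itself would be judged separately:

```python
def solitaire_loser(start, idioms):
    """Players alternate turns; the first player whose idiom does not
    begin with the last character of the previous idiom loses."""
    prev = start
    for turn, idiom in enumerate(idioms):
        if idiom[0] != prev[-1]:
            return turn % 2    # 0-indexed player who broke the chain
        prev = idiom
    return None                # no loser within the recorded turns
```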
#### 3.7.3. Code Review
Inspired by code generation (Game et al., 2017; Dwork et al., 2018; Dwork et al., 2018), code review is an asymmetric evaluation task to evaluate the code generation and review abilities of LLMs in real-world scenarios. The design of our code review evaluation task follows the writer-editor paradigm. Specifically, the code review task requires a programmer LLM, who is responsible for generating code given natural language requirements, and a reviewer LLM, who is responsible for fixing the generated code. The performances of both the programmer LLM and the reviewer LLM are then evaluated by the referee LLM. At the beginning of a code review task, the referee broadcasts the description of the coding requirement to both the programmer and the reviewer. During the deep interaction process, the programmer and the reviewer communicate with each other through the message pool until they reach a consensus about the solution. Finally, the performances of both the programmer and the reviewer are rated by the referee.
#### 3.7.4. Machine Translation
Machine translation (Dwork et al., 2018; Dwork et al., 2018) is an asymmetric evaluation task to evaluate the natural language translation ability of LLMs in real-world scenarios. Similar to code review, the design of the machine translation task also follows the writer-editor paradigm, consisting of a translator and a proofreader. In the machine translation task, the referee first broadcasts the source text and the target language. Next, the translator translates the source text into the target language. Then, given the source text and the translation, the proofreader polishes the latter to improve its correctness and readability. Finally, the performances of both the translator and the proofreader are rated by the referee.
## 4. Experiments
### Experimental Setup
#### 4.1.1. Datasets and Evaluation Metrics
* Public Goods Game: For the two settings of this task, we conduct 10 repeated experiments for all LLMs to assess their capabilities in this task. Ultimately, we use the earnings of the LLMs during the game as the evaluation metric.
* Idiom Solitaire: We randomly sample 30 idioms from an existing idiom database as the initial idioms and conduct experiments on all model pairs. We also swap the order of the model pairs during the experiments to evaluate the capabilities of all models under consideration. The final evaluation metric is the number of times a model wins in the task.
* Code Review: We use the popular code generation evaluation dataset MBPP (Dosov et al., 2017). For each sample in the test set, we assign each pair of models as Programmer and Reviewer and switch roles. Finally, we use a judge model to score the dialogue between the Programmer and Reviewer as the evaluation metric.
* Machine Translation: We select a document-level dataset (Dosov et al., 2017) and use three language pairs for translation: English-Chinese, English-French, and German-English. We split the dataset into paragraph-level segments for the test set. For each sample in the test set, we assign each pair of models as Translator and Proofreader and switch roles. The final evaluation metric is the score given by the judge model to the dialogue between the Translator and Proofreader.
### Public Goods Game

Evaluation results are shown in Figure 4. Claude 2 is the state-of-the-art LLM in mode 2. Third, the performance of GPT-4 is more unstable than that of Claude 2, because the range between the minimum and maximum payoff of GPT-4 always contains that of Claude 2 in both mode 1 and mode 2.
### Idioms Solitaire
Evaluation results are shown in Table 1 and Table 2. The term "Early" denotes the early position in the interaction process, while the term "Late" denotes the late position. \(s_{E}\) and \(s_{L}\) respectively denote the score of the early participant and the score of the late participant. For example, in the first data row of Table 1, 0.33 denotes the winning rate of GPT-4 (the early position) versus ChatGPT (the late position), while 0.67 denotes that of ChatGPT. PaLM is excluded from Idiom Solitaire because it does not support Chinese input and output. From Table 1 and Table 2, we can observe that the discrepancy between \(\overline{s_{E}}\) and \(\overline{s_{L}}\) of the same LLM is small because Idiom Solitaire is a symmetric evaluation task where different participants have the same action set and goal. Moreover, we can observe that the average winning rate and successful hits of ChatGPT are always the largest, while those of Claude 2 are always the lowest. These results demonstrate that in terms of Chinese idiom vocabulary, ChatGPT is stronger than GPT-4, and GPT-4 is stronger than Claude 2.
### Code Review
Evaluation results are shown in Table 3. The term "Prog" denotes the programmer, and the term "Rev" denotes the reviewer. \(s_{P}\) and \(s_{R}\) respectively denote the score of the programmer and the score of the reviewer. Different from Idioms Solitaire, Code Review is an asymmetric task where the roles of the LLMs differ. As a result, the average score of an LLM as a programmer and that of the same LLM as a reviewer differ more. However, the evaluation results show a high consistency between the coding ability and the reviewing ability of LLMs. Specifically, GPT-4 reaches state-of-the-art performance as both the programmer and the reviewer. ChatGPT and Claude 2 have similar coding and reviewing abilities, which are better than those of PaLM.
### Machine Translation
Evaluation results in Machine Translation are presented in Table 4 (German to English), Table 5 (English to French) and Table 6 (English to Chinese). PaLM is excluded from this experiment because it supports only English. From Table 4 and Table 5, we can observe that GPT-4 reaches state-of-the-art performance in both tasks. This result indicates that GPT-4 has better translation and proofreading abilities than ChatGPT and Claude 2. However, GPT-4 does not perform as excellently in the English to Chinese
Table 1. Evaluation results in Idioms Solitaire (winning rate). Each cell shows \(s_{E}\) / \(s_{L}\) for the (Early, Late) pair.

| Early \ Late | GPT-4 | ChatGPT | Claude 2 | \(\overline{s_{E}}\) |
|---|---|---|---|---|
| GPT-4 | – | 0.33 / 0.67 | 0.57 / 0.43 | 0.45 |
| ChatGPT | 0.75 / 0.25 | – | 0.78 / 0.22 | **0.77** |
| Claude 2 | 0.30 / 0.70 | 0.25 / 0.75 | – | 0.28 |
| \(\overline{s_{L}}\) | 0.48 | **0.71** | 0.33 | |
Table 2. Evaluation results in Idioms Solitaire (successful hit). Each cell shows \(s_{E}\) / \(s_{L}\) for the (Early, Late) pair.

| Early \ Late | GPT-4 | ChatGPT | Claude 2 | \(\overline{s_{E}}\) |
|---|---|---|---|---|
| GPT-4 | – | 0.89 / 1.11 | 1.29 / 1.14 | 1.09 |
| ChatGPT | 1.12 / 0.75 | – | 1.11 / 0.78 | **1.12** |
| Claude 2 | 0.80 / 1.00 | 0.75 / 1.25 | – | 0.78 |
| \(\overline{s_{L}}\) | 0.88 | **1.18** | 0.96 | |
Table 5. Evaluation results in Machine Translation (EN-FR). Each cell shows \(s_{T}\) / \(s_{P}\) for the (Proofreader, Translator) pair.

| Proof \ Trans | GPT-4 | ChatGPT | Claude 2 | \(\overline{s_{P}}\) |
|---|---|---|---|---|
| GPT-4 | – | 7.90 / 9.15 | 7.84 / 9.12 | **9.14** |
| ChatGPT | 8.02 / 9.01 | – | 7.89 / 9.15 | 9.08 |
| Claude 2 | 8.07 / 9.01 | 8.09 / 8.98 | – | 9.00 |
| \(\overline{s_{T}}\) | **8.04** | 8.00 | 7.87 | |
Table 3. Evaluation results in Code Review. Each cell shows \(s_{P}\) / \(s_{R}\) for the (Reviewer, Programmer) pair.

| Rev \ Prog | GPT-4 | ChatGPT | Claude 2 | PaLM | \(\overline{s_{R}}\) |
|---|---|---|---|---|---|
| GPT-4 | – | 8.57 / 8.83 | 8.73 / 8.77 | 8.20 / 8.72 | **8.77** |
| ChatGPT | 8.96 / 8.60 | – | 8.83 / 8.89 | 8.96 / 8.73 | 8.74 |
| Claude 2 | 8.97 / 8.73 | 8.94 / 8.78 | – | 8.78 / 8.72 | 8.74 |
| PaLM | 8.99 / 8.09 | 9.04 / 8.95 | 9.01 / 8.50 | – | 8.52 |
| \(\overline{s_{P}}\) | **8.97** | 8.85 | 8.86 | 8.65 | |
Figure 4. Evaluation results in Public Goods Game.
Table 4. Evaluation results in Machine Translation (DE-EN). Each cell shows \(s_{T}\) / \(s_{P}\) for the (Proofreader, Translator) pair.

| Proof \ Trans | GPT-4 | ChatGPT | Claude 2 | \(\overline{s_{P}}\) |
|---|---|---|---|---|
| GPT-4 | – | 8.24 / 9.26 | 8.09 / 9.28 | **9.27** |
| ChatGPT | 8.27 / 9.26 | – | 8.18 / 9.23 | 9.25 |
| Claude 2 | 8.26 / 9.24 | 8.19 / 9.18 | – | 9.21 |
| \(\overline{s_{T}}\) | **8.27** | 8.22 | 8.14 | |
translation and proofreading. From Table 6, we can observe that ChatGPT reaches state-of-the-art performance in English to Chinese translation and proofreading. Indeed, this result is consistent with the experiment results in Idioms Solitaire, as shown in Table 1. In conclusion, considering both the aspects of idiom vocabulary and translation-proofreading, ChatGPT is the state-of-the-art LLM among the three participants, and GPT-4 ranks second.
### Case Study
In Figure 5, we separately show case examples of Idiom Solitaire and Machine Translation to help us better understand how our framework evaluates the capabilities of models in these tasks. Detailed cases are shown in Appendix A. For Idiom Solitaire, the model primarily needs to know what an idiom is and understand the rules of the Idiom Solitaire task. In the example, Claude 2 fails to come up with an idiom that starts with the required Chinese character, resulting in a failed chain.
For Machine Translation, as a translator, the model needs to translate a paragraph from the source language into the target language, while the proofreader needs to improve the translation. In the example, GPT-4, acting as the translator, accurately captured the meaning of the original text but still had some minor issues. Claude 2, serving as the proofreader, effectively improved GPT-4's translation: using the word "perform" instead of "operate" more accurately conveys the original text's semantics.
## 5. Conclusion
In this paper, we studied the evaluation of large language models (LLMs) in dynamic real-world scenarios and proposed a Deep Interaction-based LLM-Evaluation Framework (DeepEval). Specifically, we first clearly defined deep interaction-based evaluation tasks and the deep interaction-based evaluation process of LLMs. Next, we proposed the fairness condition and the stableness condition of DeepEval to ensure the correctness of evaluation results. We then described in detail the structure of DeepEval and the synchronous interaction algorithm to demonstrate how it evaluates the performance of LLMs in a deep interaction-based manner while keeping the fairness and stableness conditions. Furthermore, we introduced methods to design evaluation tasks and gave four implementations of evaluation tasks that can assess the performance of LLMs from various aspects. We demonstrated the effectiveness and evaluated the performance of four well-known LLMs through extensive experiments on the four elaborately-designed evaluation tasks. The evaluation results showed that GPT-4 has the state-of-the-art overall performance, and ChatGPT performs prominently in tasks related to Chinese.
As a brand-new LLM-evaluation framework, DeepEval is the first work that introduces the deep interaction process to the evaluation of LLMs. In the future, we hope to further explore the design of evaluation tasks and the rating mechanism along this line to obtain fairer and more stable evaluation results of LLMs. We also hope that the deep interaction mechanism proposed in this work can inspire studies on the training of LLMs and boost the application of LLMs in various real-world scenarios in the future.
Table 6. Evaluation results in Machine Translation (EN-ZH). Each cell shows \(s_{T}\) / \(s_{P}\) for the (Proofreader, Translator) pair.

| Proof \ Trans | GPT-4 | ChatGPT | Claude 2 | \(\overline{s_{P}}\) |
|---|---|---|---|---|
| GPT-4 | – | 7.87 / 9.01 | 7.71 / 8.95 | 8.98 |
| ChatGPT | 7.81 / 9.08 | – | 7.84 / 9.09 | **9.09** |
| Claude 2 | 7.84 / 9.05 | 7.98 / 9.00 | – | 9.03 |
| \(\overline{s_{T}}\) | 7.83 | **7.93** | 7.78 | |
Figure 5. Cases of Idioms Solitaire and Machine Translation.
---

# Descent representations and colored quasisymmetric functions

**arXiv:** [2309.13615](http://arxiv.org/abs/2309.13615v1) · **Author:** Vassilis Dionyssis Moustakas · **Published:** 2023-09-24
###### Abstract.
The quasisymmetric generating function of the set of permutations whose inverses have a fixed descent set is known to be symmetric and Schur-positive. The corresponding representation of the symmetric group is called the descent representation. In this paper, we provide an extension of this result to colored permutation groups, where Gessel's fundamental quasisymmetric functions are replaced by Poirier's colored quasisymmetric functions. For this purpose, we introduce a colored analogue of zigzag shapes and prove that the representations associated with these shapes coincide with colored descent representations studied by Adin, Brenti and Roichman in the case of two colors and Bagno and Biagioli in the general case. Additionally, we provide a colored analogue of MacMahon's alternating formula which expresses ribbon Schur functions in the basis of complete homogeneous symmetric functions.
2020 Mathematics Subject Classification: Primary: 05E05, 05E10, 05A05, 05A15. Secondary: 20C30 The author was partially co-financed by Greece and the European Union (European Social Fund-ESF) through Operational Programme "Human Resources Development, Education and Lifelong Learning" in the context of the project "Strengthening Human Resources Research Potential via Doctorate Research 2nd Cycle" (MIS-5000432), Implemented by the State Scholarships Foundation (IKY)
## 1. Introduction
The basis of Schur functions forms one of the most interesting bases of the space of symmetric functions [18, Chapter 7]. Schur functions appear in the representation theory of the symmetric group as characters of irreducible representations. A symmetric function is called Schur-positive if it is a linear combination of Schur functions with nonnegative coefficients. The problem of determining whether a given symmetric function is Schur-positive constitutes a major problem in algebraic combinatorics [19].
Adin and Roichman [4] highlighted a connection between Schur-positivity of certain quasisymmetric generating functions and the existence of formulas which express the characters of interesting representations as weighted enumerations of nice combinatorial objects. Quasisymmetric functions are certain power series in infinitely many variables that generalize the notion of symmetric functions. They first appeared in the work of Stanley and were later defined and systematically studied by Gessel [12] (see also [9]).
An example of this connection of particular interest involves the quasisymmetric generating function of inverse descent classes of the symmetric group and the characters of Specht modules of zigzag shapes, often called descent representations (for all undefined terminology we refer to Section 2). Adin, Brenti and Roichman [2] studied descent representations by using the coinvariant algebra as a representation space and provided an extension to the hyperoctahedral group, which was later generalized to every complex reflection group by Bagno and Biagioli [5].
Recently, Adin et al. [1] investigated an extension of the aforementioned connection to the hyperoctahedral setting, where Gessel's fundamental quasisymmetric functions were replaced by Poirier's signed quasisymmetric functions [16]. In particular, they proved [1, Proposition 5.5]
that the signed quasisymmetric generating function of signed inverse descent classes is Schur-positive in the hyperoctahedral setting, but without explicitly specifying the corresponding characters.
Motivated by the afore-mentioned result, in this paper, we aim to extend upon it in the case of colored permutation groups, a special class of complex reflection groups. In particular, we prove that the colored quasisymmetric generating function of inverse colored descent classes is Schur-positive in the colored setting and show that the corresponding characters are precisely the characters of colored descent representations studied by Bagno and Biagioli (see Theorem 5.2). For this purpose, we suggest a colored analogue of Gessel's zigzag shape approach to descent representations. Furthermore, we provide a colored analogue of a well-known formula due to MacMahon, popularized by Gessel [12], which expresses the Frobenius image of colored descent representations, usually called ribbon Schur functions, as an alternating sum of complete homogeneous symmetric functions in the colored context (see Theorem 5.3).
The paper is structured as follows. Section 2 discusses background on permutations, tableaux, compositions, zigzag diagrams, symmetric/quasisymmetric functions and descent representations. Section 3 reviews the combinatorics of colored compositions, colored permutations and colored quasisymmetric functions. Section 4 introduces and studies the notion of colored zigzag shapes and Section 5 proves the main results of this paper, namely Theorems 5.2 and 5.3.
## 2. Preliminaries
This section fixes notation and discusses background. Throughout this paper we assume familiarity with basic concepts in the theory of symmetric functions and representations of the symmetric group as presented, for example, in [18, Chapter 7]. For a positive integer \(n\), we write \([n]:=\{1,2,\ldots,n\}\) and denote by \(|S|\) the cardinality of a finite set \(S\).
### Permutations, tableaux, compositions and zigzag diagrams
A composition of a positive integer \(n\) is a sequence \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{k})\) of positive integers such that \(\alpha_{1}+\alpha_{2}+\cdots+\alpha_{k}=n\). Compositions of \(n\) are in one-to-one correspondence with subsets of \([n-1]\). In particular, let \(\mathrm{S}_{\alpha}:=\{r_{1},r_{2},\ldots,r_{k-1}\}\) be the set of partial sums \(r_{i}:=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{i}\), for all \(1\leq i\leq k\). Conversely, given a subset \(S=\{s_{1}<s_{2}<\cdots<s_{k}\}\subseteq[n-1]\), let \(\mathrm{co}(S)=(s_{1},s_{2}-s_{1},\ldots,s_{k}-s_{k-1},n-s_{k})\). The maps \(\alpha\mapsto\mathrm{S}_{\alpha}\) and \(S\mapsto\mathrm{co}(S)\) are bijections and mutual inverses.
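For concreteness, the two maps can be sketched in a few lines of Python (the function names are ours):

```python
from itertools import accumulate

def subset_of(alpha):
    """S_alpha: the partial sums r_1, ..., r_{k-1} (the full sum n excluded)."""
    return set(list(accumulate(alpha))[:-1])

def composition_of(S, n):
    """co(S): consecutive differences of s_1 < ... < s_k, together with n - s_k."""
    s = sorted(S)
    return tuple(b - a for a, b in zip([0] + s, s + [n]))

# With n = 9: S_{(2,1,2,3,1)} = {2, 3, 5, 8}, and the two maps invert each other.
assert subset_of((2, 1, 2, 3, 1)) == {2, 3, 5, 8}
assert composition_of({2, 3, 5, 8}, 9) == (2, 1, 2, 3, 1)
```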
Sometimes, it will be convenient to work with subsets of \([n]\) which contain \(n\). For this purpose, we will write \(S^{+}:=S\cup\{n\}\). In this case, \(\mathrm{S}_{\alpha}^{+}=\{r_{1},r_{2},\ldots,r_{k}\}\) and the maps \(\alpha\mapsto\mathrm{S}_{\alpha}^{+}\) and \(S^{+}\mapsto\mathrm{co}(S^{+})\) remain bijections and mutual inverses. We make this (non-standard) convention because we will later need to keep track of the color of the last coordinate of a colored permutation (see Section 3.1).
The set of all compositions of \(n\), written \(\mathrm{Comp}(n)\), becomes a poset with the partial order of reverse refinement. The covering relations are given by
\[(\alpha_{1},\ldots,\alpha_{i}+\alpha_{i+1},\ldots,\alpha_{k})\prec(\alpha_{1},\ldots,\alpha_{i},\alpha_{i+1},\ldots,\alpha_{k}).\]
The corresponding partial order on the set of all subsets of \([n-1]\) is inclusion of subsets. A partition of \(n\), written \(\lambda\vdash n\), is a composition \(\lambda\) of \(n\) whose parts appear in weakly decreasing order.
A zigzag diagram (also called border-strip, ribbon or skew hook) is a connected skew shape that does not contain a \(2\times 2\) square. Ribbons with \(n\) cells are in one-to-one correspondence with
compositions of \(n\). Given \(\alpha\in\operatorname{Comp}(n)\), let \(\operatorname{Z}_{\alpha}\) be the ribbon with \(n\) cells whose row lengths, when read from bottom to top, are the parts of \(\alpha\). For example, for \(n=9\)
\[\alpha=(2,1,2,3,1)\quad\longmapsto\quad\operatorname{Z}_{\alpha}=\ \text{[zigzag diagram whose row lengths, read from bottom to top, are }2,1,2,3,1\text{]}\]
The fundamental quasisymmetric function associated to \(\alpha\in\operatorname{Comp}(n)\) is defined by
\[F_{\alpha}(\boldsymbol{x}):=\sum_{\begin{subarray}{c}1\leq i_{1}\leq i_{2}\leq \cdots\leq i_{n}\\ j\in\operatorname{S}_{\alpha}\,\Rightarrow\,i_{j}<i_{j+1}\end{subarray}}x_{i_{1 }}x_{i_{2}}\cdots x_{i_{n}}.\]
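Restricted to finitely many variables, the monomials of \(F_{\alpha}\) can be enumerated by brute force directly from this definition; the following sketch (names are ours) lists the admissible index sequences \(i_{1}\leq i_{2}\leq\cdots\leq i_{n}\):

```python
from itertools import accumulate

def fundamental_monomials(alpha, m):
    """Index sequences of the monomials of F_alpha in x_1, ..., x_m:
    weakly increasing, with a strict increase after each position in S_alpha."""
    n = sum(alpha)
    strict = set(list(accumulate(alpha))[:-1])  # S_alpha
    seqs = [[]]
    for pos in range(1, n + 1):
        seqs = [s + [i] for s in seqs for i in range(1, m + 1)
                if not s or (i > s[-1] if pos - 1 in strict else i >= s[-1])]
    return seqs

# F_{(1,1)} = e_2: in two variables its only monomial is x_1 x_2.
assert fundamental_monomials((1, 1), 2) == [[1, 2]]
```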
We recall the following well-known expansion [18, Theorem 7.19.7]
\[s_{\lambda/\mu}(\boldsymbol{x})=\sum_{Q\in\operatorname{SYT}(\lambda/\mu)}F_{ \operatorname{co}(Q)}(\boldsymbol{x}), \tag{2.1}\]
for any skew shape \(\lambda/\mu\).
A subset \(\mathcal{A}\subseteq\mathfrak{S}_{n}\) is called Schur-positive if the quasisymmetric generating function
\[F(\mathcal{A};\boldsymbol{x}):=\sum_{\pi\in\mathcal{A}}F_{\operatorname{co}( \pi)}(\boldsymbol{x})\]
is Schur-positive. In this case, it follows that \(\operatorname{ch}(\varrho)(\boldsymbol{x})=F(\mathcal{A};\boldsymbol{x})\) for some non-virtual \(\mathfrak{S}_{n}\)-representation \(\varrho\) (see also [1, Corollary 3.3]) and we will say that \(\mathcal{A}\) is Schur-positive for \(\varrho\).
The skew Schur function \(r_{\alpha}(\boldsymbol{x}):=s_{\mathrm{Z}_{\alpha}}(\boldsymbol{x})\) is called the ribbon Schur function corresponding to \(\alpha\in\operatorname{Comp}(n)\). The (virtual) \(\mathfrak{S}_{n}\)-representation \(\varrho_{\alpha}\) such that \(\operatorname{ch}(\varrho_{\alpha})(\boldsymbol{x})=r_{\alpha}(\boldsymbol{x})\) is called the descent representation of the symmetric group. We remark that this definition is not the standard way to define descent representations in the literature. For more information on descent representations from a combinatorial representation-theoretic point of view we refer to [2]. In fact, descent representations are non-virtual \(\mathfrak{S}_{n}\)-representations, as Proposition 2.1 explains. Combining Proposition 2.1 and Equation (2.1) yields the following result of Gessel [12, Theorem 7] (see also [18, Corollary 7.23.4]).
**Proposition 2.2**.: _For every \(\alpha\in\operatorname{Comp}(n)\),_
\[r_{\alpha}(\boldsymbol{x})\ =\ F(\operatorname{D}_{\alpha}^{-1};\boldsymbol{x})\ =\ \sum_{\lambda\vdash n}\,c_{\lambda}(\alpha)\,s_{\lambda}( \boldsymbol{x}), \tag{2.2}\]
_where \(c_{\lambda}(\alpha)\) is the number of \(Q\in\operatorname{SYT}(\lambda)\) such that \(\operatorname{co}(Q)=\alpha\). In particular, inverse descent classes are Schur-positive for descent representations._
Descent representations in disguised form appear in Stanley's work [17] on group actions on posets. If \(\chi_{\alpha}\) denotes the character of \(\varrho_{\alpha}\), then [17, Theorem 4.3] is translated into the following alternating formula
\[\chi_{\alpha}=\sum_{\begin{subarray}{c}\beta\in\operatorname{Comp}(n)\\ \beta\preceq\alpha\end{subarray}}(-1)^{\ell(\alpha)-\ell(\beta)}\,1_{\beta} \uparrow_{\mathfrak{S}_{\beta}}^{\mathfrak{S}_{n}}, \tag{2.3}\]
where
* \(\ell(\alpha)\) denotes the number of parts of \(\alpha\), called length of \(\alpha\)
* \(\mathfrak{S}_{\alpha}:=\mathfrak{S}_{\alpha_{1}}\times\mathfrak{S}_{\alpha_{2 }}\times\cdots\) denotes the Young subgroup corresponding to \(\alpha\)
* \(1_{n}\) (resp. \(1_{\alpha}\)) denotes the trivial \(\mathfrak{S}_{n}\)-character (resp. \(\mathfrak{S}_{\alpha}\)-character)
* \(\uparrow\) denotes induction of characters.
Taking the Frobenius image, Equation (2.3) becomes
\[r_{\alpha}(\boldsymbol{x})=\sum_{\begin{subarray}{c}\beta\in\operatorname{ Comp}(n)\\ \beta\preceq\alpha\end{subarray}}(-1)^{\ell(\alpha)-\ell(\beta)}\,h_{\beta}( \boldsymbol{x}), \tag{2.4}\]
where \(h_{\beta}(\mathbf{x})\) denotes the complete homogeneous symmetric function corresponding to \(\beta\). As Gessel [12, page 293] points out, MacMahon was the first to study ribbon Schur functions by means of Equation (2.4).
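Since the coarsenings \(\beta\preceq\alpha\) correspond to subsets of \(\mathrm{S}_{\alpha}\), the signed terms of Equation (2.4) can be enumerated mechanically; a sketch (names are ours):

```python
from itertools import accumulate, combinations

def macmahon_terms(alpha):
    """Pairs (sign, beta) over all coarsenings beta of alpha, i.e. over
    all subsets of S_alpha; sign = (-1)^{l(alpha) - l(beta)}."""
    n = sum(alpha)
    s = list(accumulate(alpha))[:-1]  # S_alpha as a sorted list
    for k in range(len(s) + 1):
        for T in combinations(s, k):
            t = list(T)
            beta = tuple(b - a for a, b in zip([0] + t, t + [n]))
            yield (-1) ** (len(s) - k), beta

# alpha = (2, 2): Equation (2.4) gives r_{(2,2)} = h_{(2,2)} - h_{(4)}.
assert sorted(macmahon_terms((2, 2))) == [(-1, (4,)), (1, (2, 2))]
```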
In our running example, for \(n=4\) and \(\alpha=(2,2)\)
\[r_{\alpha}(\mathbf{x})=2F_{(2,2)}(\mathbf{x})+F_{(3,1)}(\mathbf{x})+F_{(1,3)}(\mathbf{x})+F_{(1,2,1)}(\mathbf{x})=s_{(2,2)}(\mathbf{x})+s_{(3,1)}(\mathbf{x}),\]
since the tableaux of shape \((2,2)\) and \((3,1)\) and descent set \(\{2\}\) are
\[\begin{array}{cc}1&2\\ 3&4\end{array}\qquad\text{and}\qquad\begin{array}{ccc}1&2&4\\ 3\end{array}\]
respectively, which is also in agreement with
\[r_{\alpha}(\mathbf{x})=h_{(2,2)}(\mathbf{x})-h_{(4)}(\mathbf{x}).\]
## 3. Combinatorics of colored objects
This section reviews the combinatorics of colored objects including colored permutations, colored compositions, \(r\)-partite tableaux, colored quasisymmetric functions and a colored analogue of the characteristic map. For the corresponding notions in the case of two colors we refer the reader to [1]. We fix a positive integer \(r\) and view the elements of \(\mathbb{Z}_{r}\), the cyclic group of order \(r\), as colors \(0,1,\ldots,r-1\), totally ordered by the natural order inherited from the integers. Also, we will write \(i^{j}\) instead of \((i,j)\) to represent colored integers, where \(i\) is the underlying integer and \(j\) is the color.
### Colored compositions and colored sets
An \(r\)-colored composition of a positive integer \(n\) is a pair \((\alpha,\epsilon)\) such that \(\alpha\in\operatorname{Comp}(n)\) and \(\epsilon\in\mathbb{Z}_{r}^{\ell(\alpha)}\) is a sequence of colors assigned to the parts of \(\alpha\). An \(r\)-colored subset of \([n]\) is a pair \((S^{+},\zeta)\) such that \(S\subseteq[n-1]\) and \(\zeta:S^{+}\to\mathbb{Z}_{r}\) is a color map. For the examples, we will represent colored compositions (resp. sets) as ordered tuples (resp. sets) of colored integers.
Colored compositions of \(n\) are in one-to-one correspondence with colored subsets of \([n]\). The correspondence is given as follows: Given a colored composition \((\alpha,\epsilon)\), let \(\sigma_{(\alpha,\epsilon)}:=(\mathrm{S}_{\alpha}^{+},\zeta)\) where \(\zeta:\mathrm{S}_{\alpha}^{+}\to\mathbb{Z}_{r}\) is defined by \(\zeta(r_{i}):=\epsilon_{i}\). Conversely, given a colored subset \((S^{+},\zeta)\) with \(S^{+}=\{s_{1}<\cdots<s_{k}<s_{k+1}=n\}\), let \(\mathrm{co}(S^{+},\zeta)=(\mathrm{co}(S),\epsilon)\) where \(\epsilon\in\mathbb{Z}_{r}^{k+1}\) is defined by letting \(\epsilon_{i}=\zeta(s_{i})\), for all \(1\leq i\leq k+1\). For example, for \(n=10\) and \(r=4\)
\[\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\longleftrightarrow\left\{2 ^{0},4^{1},5^{1},6^{3},9^{1},10^{2}\right\}.\]
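A short sketch of this correspondence (names are ours), verified on the running example:

```python
from itertools import accumulate

def colored_subset(alpha, eps):
    """sigma_(alpha, eps): the colored set {r_i^{eps_i}} as a map r_i -> color."""
    return dict(zip(accumulate(alpha), eps))

def colored_composition(zeta):
    """co(S^+, zeta): parts are consecutive differences of the sorted
    elements of S^+ (which contains n); colors are read off zeta."""
    s = sorted(zeta)
    parts = tuple(b - a for a, b in zip([0] + s[:-1], s))
    return parts, tuple(zeta[x] for x in s)

# Running example (n = 10, r = 4).
zeta = colored_subset((2, 2, 1, 1, 3, 1), (0, 1, 1, 3, 1, 2))
assert zeta == {2: 0, 4: 1, 5: 1, 6: 3, 9: 1, 10: 2}
assert colored_composition(zeta) == ((2, 2, 1, 1, 3, 1), (0, 1, 1, 3, 1, 2))
```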
Given a colored composition \((\alpha,\epsilon)\) of \(n\), we can extend \(\epsilon\) to a color vector \(\tilde{\epsilon}\in\mathbb{Z}_{r}^{n}\) by letting
\[\tilde{\epsilon}:=(\underbrace{\epsilon_{1},\epsilon_{1},\ldots,\epsilon_{1}} _{\alpha_{1}\text{ times}},\underbrace{\epsilon_{2},\epsilon_{2},\ldots,\epsilon_{2}} _{\alpha_{2}\text{ times}},\ldots,\underbrace{\epsilon_{k},\epsilon_{k},\ldots, \epsilon_{k}}_{\alpha_{k}\text{ times}}).\]
Similarly, given a colored subset \((S^{+},\zeta)\) of \([n]\) with \(S^{+}=\{s_{1}<\cdots<s_{k}<s_{k+1}:=n\}\), we can extend the color map to a color vector \(\tilde{\zeta}=(\tilde{\zeta}_{1},\tilde{\zeta}_{2},\ldots,\tilde{\zeta}_{n}) \in\mathbb{Z}_{r}^{n}\), by letting \(\tilde{\zeta}_{j}:=\zeta(s_{i})\) for all \(s_{i-1}<j\leq s_{i}\) where \(s_{0}:=0\). The corresponding color vector of our running example is
\[(0,0,1,1,1,3,1,1,1,2).\]
The set of all \(r\)-colored compositions of \(n\), written \(\operatorname{Comp}(n,r)\), becomes a poset with the partial order of reverse refinement on consecutive parts of constant color. The covering relations are given by
\[((\ldots,\alpha_{i}+\alpha_{i+1},\ldots),(\ldots,\epsilon_{i},\ldots))\prec(( \ldots,\alpha_{i},\alpha_{i+1},\ldots),(\ldots,\epsilon_{i},\epsilon_{i}, \ldots))\,.\]
The corresponding partial order on \(r\)-colored subsets of \([n]\) is inclusion of subsets with the same color vector. Notice that these posets are not connected, since each color vector gives rise to a unique connected component (see, for example [13, Figure 4]).
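For example, for \(n=10\) and \(r=4\), merging the two consecutive parts of color \(1\) in our running example yields the covering relation
\[\left(2^{0},3^{1},1^{3},3^{1},1^{2}\right)\prec\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right),\]
while the parts \(1^{1}\) and \(1^{3}\), having different colors, may not be merged.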
### Colored permutations and \(r\)-partite tableaux
The wreath product \(\mathbb{Z}_{r}\wr\mathfrak{S}_{n}\) is called the \(r\)-colored permutation group and we denote it by \(\mathfrak{S}_{n,r}\). It consists of all pairs \((\pi,\mathrm{z})\), called \(r\)-colored permutations, such that \(\pi\in\mathfrak{S}_{n}\) is the underlying permutation and \(\mathrm{z}=(\mathrm{z}_{1},\mathrm{z}_{2},\ldots,\mathrm{z}_{n})\in\mathbb{Z }_{r}^{n}\) is a color vector. When we consider specific examples, it will be convenient to write colored permutations in window notation, that is as words \(\pi_{1}^{\mathrm{z}_{1}}\pi_{2}^{\mathrm{z}_{2}}\cdots\pi_{n}^{\mathrm{z}_{n}}\) on colored integers.
The product in \(\mathfrak{S}_{n,r}\) is given by the rule
\[(\pi,\mathrm{z})(\tau,\mathrm{w})=(\pi\tau,\mathrm{w}+\tau(\mathrm{z}))\]
where \(\pi\tau\) is evaluated from right to left, \(\tau(\mathrm{z}):=(\mathrm{z}_{\tau_{1}},\mathrm{z}_{\tau_{2}},\ldots,\mathrm{ z}_{\tau_{n}})\) and the addition is coordinatewise modulo \(r\). The inverse (resp. conjugate) of \((\pi,\mathrm{z})\), written \((\pi,\mathrm{z})^{-1}\) (resp. \(\overline{(\pi,\mathrm{z})}\)) is the element \((\pi^{-1},-\pi^{-1}(\mathrm{z}))\) (resp. \((\pi,-\mathrm{z})\)).
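For instance, for \((\pi,\mathrm{z})=2^{1}1^{0}3^{1}\in\mathfrak{S}_{3,2}\) we have \(\pi^{-1}=213\) and \(\pi^{-1}(\mathrm{z})=(\mathrm{z}_{2},\mathrm{z}_{1},\mathrm{z}_{3})=(0,1,1)\), so that \((\pi,\mathrm{z})^{-1}=2^{0}1^{1}3^{1}\) and, as a quick sanity check of the product rule,
\[(\pi,\mathrm{z})(\pi,\mathrm{z})^{-1}=\left(\pi\pi^{-1},\,-\pi^{-1}(\mathrm{z})+\pi^{-1}(\mathrm{z})\right)=1^{0}2^{0}3^{0}.\]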
Colored permutation groups can be viewed as complex reflection groups (see, for example, [5, Sections 1-2]). Therefore, \(\mathfrak{S}_{n,r}\) can be realized as the group of all \(n\times n\) matrices such that
* the nonzero entries are \(r\)-th roots of unity, and
* there is exactly one nonzero entry in every row and every column.
For our purposes it is more convenient to view them as groups of colored permutations rather than groups of complex matrices.
The case \(r=2\) is of particular interest. In this case, it is often customary to write \(\mathfrak{B}_{n}:=\mathfrak{S}_{n,2}\) and identify colors \(0\) and \(1\) with signs \(+\) and \(-\), respectively. \(\mathfrak{B}_{n}\) coincides with the hyperoctahedral group, the symmetry group of the \(n\)-dimensional cube. The hyperoctahedral group is a real reflection group and its elements are called signed permutations. Much of what is presented in this paper is motivated by Adin et al.'s work [1] on character formulas and descents for \(\mathfrak{B}_{n}\).
The colored descent set of \((\pi,\mathrm{z})\in\mathfrak{S}_{n,r}\), denoted by \(\mathrm{sDes}(\pi,\mathrm{z})\), is the pair \((S^{+},\zeta)\) where
* \(S\) consists of all \(i\in[n-1]\) such that \(\mathrm{z}_{i}\neq\mathrm{z}_{i+1}\) or \(\mathrm{z}_{i}=\mathrm{z}_{i+1}\) and \(i\in\mathrm{Des}(\pi)\)
* \(\zeta:S^{+}\to\mathbb{Z}_{r}\) is the map defined by \(\zeta(i)=\mathrm{z}_{i}\) for all \(i\in S^{+}\).
In words, the colored descent set records the ending positions of increasing runs of constant color together with their colors. Notice that the color vector of the colored descent set of \((\pi,\mathrm{z})\) is the same as \(\mathrm{z}\). For example, for \(n=10\) and \(r=4\)
\[\mathrm{sDes}\left(2^{3}4^{3}6^{1}1^{1}5^{1}10^{3}3^{1}7^{1}9^{1}8^{0}\right) =\{2^{3},3^{1},5^{1},6^{3},9^{1},10^{0}\}.\]
The \(r\)-colored composition which corresponds to the colored descent set \(\mathrm{sDes}(\pi,\mathrm{z})\) is called colored descent composition of \((\pi,\mathrm{z})\) and is denoted by \(\mathrm{co}(\pi,\mathrm{z})\). It records the lengths of increasing runs of constant color together with their colors. In our running example, we have
\[\mathrm{co}\left(2^{3}4^{3}6^{1}1^{1}5^{1}10^{3}3^{1}7^{1}9^{1}8^{0}\right)= \left(2^{3},1^{1},2^{1},1^{3},3^{1},1^{0}\right).\]
For \((\alpha,\epsilon)\in\mathrm{Comp}(n,r)\), we define the colored descent class
\[\mathrm{D}_{(\alpha,\epsilon)}:=\{(\pi,\mathrm{z})\in\mathfrak{S}_{n,r}: \mathrm{co}(\pi,\mathrm{z})=(\alpha,\epsilon)\}\]
and the corresponding conjugate-inverse colored descent class
\[\overline{\mathrm{D}}_{(\alpha,\epsilon)}^{-1}:=\{(\pi,\mathrm{z})\in\mathfrak{ S}_{n,r}:\mathrm{co}\left(\overline{(\pi,\mathrm{z})}^{-1}\right)=(\alpha, \epsilon)\}.\]
For reasons that will become apparent in the sequel, instead of dealing with inverse descent classes it will be more convenient to deal with conjugate-inverse descent classes. Colored descent classes were introduced by Mantaci and Reutenauer [14] who called them shape classes and used them to introduce and study a colored analogue of Solomon's descent algebra. We remark that in the hyperoctahedral case, where we have only two colors, there is no need to consider conjugate-inverse elements because \(\mathfrak{B}_{n}\) is a real reflection group.
An \(r\)-partite partition of \(n\), written \(\boldsymbol{\lambda}\vdash n\), is an \(r\)-tuple \(\boldsymbol{\lambda}=(\lambda^{(0)},\lambda^{(1)},\ldots,\lambda^{(r-1)})\) of (possibly empty) integer partitions of total sum \(n\). For example,
\[\boldsymbol{\lambda}\ =\ ((2),(3,2,1),(1),(1))\]
is a \(4\)-partite partition of \(10\).
A standard Young \(r\)-partite tableau of shape \(\boldsymbol{\lambda}\) is an \(r\)-tuple \(\boldsymbol{Q}=(Q^{(0)},Q^{(1)},\ldots,Q^{(r-1)})\) of (possibly empty) tableaux, called parts, which are strictly increasing along rows and columns such that \(Q^{(i)}\) has shape \(\lambda^{(i)}\) and every element of \([n]\) appears exactly once as an entry of some \(Q^{(i)}\). We denote by \(\mathrm{SYT}(\boldsymbol{\lambda})\) the set of all standard Young \(r\)-partite tableaux of shape \(\boldsymbol{\lambda}\). To each \(r\)-partite tableau \(\boldsymbol{Q}\), we associate a color vector \(\mathrm{z}\), defined by letting \(\mathrm{z}_{i}=j\), where \(0\leq j\leq r-1\) is such that \(i\in Q^{(j)}\). For example, for \(n=10\) and \(r=4\)
\[\boldsymbol{Q}\ =\ \left(\ \begin{array}{c}\framebox{$1$}\framebox{$9$}\end{array}\,,\ \begin{array}{l}\framebox{$3$}\framebox{$5$}\framebox{$6$}\\ \framebox{$4$}\framebox{$10$}\\ \framebox{$7$}\end{array}\,,\ \begin{array}{c}\framebox{$2$}\end{array}\,,\ \begin{array}{c}\framebox{$8$}\end{array}\ \right)\]
has color vector
\[\mathrm{z}=(0,2,1,1,1,1,1,3,0,1)\]
The colored descent set of an \(r\)-partite tableau \(\boldsymbol{Q}\), denoted by \(\mathrm{sDes}(\boldsymbol{Q})\), is defined similarly to that for colored permutations. In this case, the colored descent set records the first element of a pair \((i,i+1)\) together with its color, such that \(i\) and \(i+1\) either belong to parts with different colors or they belong to the same part and \(i\) is a descent of this part. In our running example,
\[\mathrm{sDes}(\boldsymbol{Q})=\left\{1^{0},2^{2},3^{1},6^{1},7^{1},8^{3},9^{ 0},10^{1}\right\}.\]
Also, we write \(\mathrm{co}(\boldsymbol{Q}):=\mathrm{co}(\mathrm{sDes}(\boldsymbol{Q}))\).
### Colored quasisymmetric functions and the characteristic map
Consider \(r\) copies \(\boldsymbol{x}^{(0)},\boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(r-1)}\) of \(\boldsymbol{x}\), one for each color of \(\mathbb{Z}_{r}\) and let \(\mathrm{Sym}_{n}^{(r)}\) be the space of (homogeneous) formal power series of degree \(n\) in \(\boldsymbol{x}^{(0)},\boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(r-1)}\) which are symmetric in each variable \(\boldsymbol{x}^{(j)}\) separately. In particular,
\[\mathrm{Sym}_{n}^{(r)}=\bigoplus_{\begin{subarray}{c}a_{0},\ldots,a_{r-1} \in\mathbb{N}\\ a_{0}+\cdots+a_{r-1}=n\end{subarray}}\left(\mathrm{Sym}_{a_{0}}(\boldsymbol{ x}^{(0)})\otimes\cdots\otimes\mathrm{Sym}_{a_{r-1}}(\boldsymbol{x}^{(r-1)}) \right).\]
Drawing parallel to the classical case, for an \(r\)-partite partition \(\boldsymbol{\lambda}=(\lambda^{(0)},\lambda^{(1)},\ldots,\lambda^{(r-1)})\), we define
\[s_{\boldsymbol{\lambda}}:=s_{\lambda^{(0)}}(\boldsymbol{x}^{(0)})s_{\lambda^ {(1)}}(\boldsymbol{x}^{(1)})\cdots s_{\lambda^{(r-1)}}(\boldsymbol{x}^{(r-1)}).\]
The set \(\{s_{\boldsymbol{\lambda}}:\boldsymbol{\lambda}\vdash n\}\) forms a basis for \(\mathrm{Sym}_{n}^{(r)}\) which we call the Schur basis. An element of \(\mathrm{Sym}_{n}^{(r)}\) is called Schur-positive if all the coefficients in its expansion in the Schur basis are nonnegative.
It is well-known that (complex) irreducible \(\mathfrak{S}_{n,r}\)-representations are indexed by \(r\)-partite partitions of \(n\) (see, for example, [5, Section 5]). Poirier [16] introduced a colored analogue of the
characteristic map which we denote by \(\operatorname{ch}^{(r)}\). This map is a \(\mathbb{C}\)-linear isomorphism from the space of virtual \(\mathfrak{S}_{n,r}\)-representations to \(\operatorname{Sym}_{n}^{(r)}\) which sends the irreducible \(\mathfrak{S}_{n,r}\)-representation corresponding to \(\boldsymbol{\lambda}\vdash n\) to \(s_{\boldsymbol{\lambda}}\). In particular, it maps non-virtual \(\mathfrak{S}_{n,r}\)-representations to Schur-positive elements of \(\operatorname{Sym}_{n}^{(r)}\).
The colored (fundamental) quasisymmetric function associated to \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) is defined by
\[F_{(\alpha,\epsilon)}^{(r)}:=F_{(\alpha,\epsilon)}(\boldsymbol{x}^{(0)},\ldots,\boldsymbol{x}^{(r-1)}):=\sum_{\begin{subarray}{c}1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{n}\\ \epsilon_{j}\geq\epsilon_{j+1}\Rightarrow i_{\sigma_{j}}<i_{\sigma_{j}+1}\end{subarray}}x_{i_{1}}^{(\tilde{\epsilon}_{1})}x_{i_{2}}^{(\tilde{\epsilon}_{2})}\cdots x_{i_{n}}^{(\tilde{\epsilon}_{n})}, \tag{3.1}\]
where \(\sigma_{j}:=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{j}\) and the second restriction in the sum runs through all indices \(1\leq j\leq\ell(\alpha)-1\). For example, if \((m^{n})\) denotes the vector (or sequence) of length \(n\) and entries equal to \(m\), then
\[F_{((n),(k))}^{(r)} =\sum_{1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{n}}x_{i_{1}}^{(k)}x_{i_{2}}^{(k)}\cdots x_{i_{n}}^{(k)}=h_{n}(\boldsymbol{x}^{(k)})\] \[F_{((1^{n}),(k^{n}))}^{(r)} =\sum_{1\leq i_{1}<i_{2}<\cdots<i_{n}}x_{i_{1}}^{(k)}x_{i_{2}}^{(k)}\cdots x_{i_{n}}^{(k)}=e_{n}(\boldsymbol{x}^{(k)}),\]
where \(h_{n}\) (resp. \(e_{n}\)) denotes the \(n\)-th complete homogeneous (resp. elementary) symmetric function.
This colored analogue of Gessel's fundamental quasisymmetric function was introduced by Poirier [16] and has been studied by several people [1, 7, 10, 13, 15]. It seems that this is particularly suitable when we consider colored permutation groups as wreath products. A different signed analogue of quasisymmetric functions was introduced by Chow [11] which has found applications when one considers the hyperoctahedral group as a Coxeter group (see, for example, [6]).
Steingrimsson [21, Definition 3.2] introduced a notion of descents for colored permutations which reduces to the classical one and using it we can provide an alternative (and more convenient) description for colored quasisymmetric functions. The descent set of \((\pi,\mathrm{z})\in\mathfrak{S}_{n,r}\) is defined by
\[\operatorname{Des}(\pi,\mathrm{z}):=\{i\in[n]:\mathrm{z}_{i}>\mathrm{z}_{i+1 }\text{ or }\mathrm{z}_{i}=\mathrm{z}_{i+1}\,\text{and}\,i\in \operatorname{Des}(\pi)\},\]
where \(\pi_{n+1}:=0\) and \(\mathrm{z}_{n+1}:=0\). In particular, \(n\in\operatorname{Des}(\pi,\mathrm{z})\) if and only if \(\mathrm{z}_{n}>0\). With this in mind, Equation (3.1) for the colored descent composition of \((\pi,\mathrm{z})\) becomes
\[F_{(\pi,\mathrm{z})}^{(r)}\,:=\,F_{\mathrm{co}(\pi,\mathrm{z})}^{(r)}\,=\, \sum_{\begin{subarray}{c}1\leq i_{1}\leq i_{2}\leq\ldots\leq i_{n}\\ j\in\operatorname{Des}(\pi,\mathrm{z})\setminus\{n\}\Rightarrow\,i_{j}<i_{j+ 1}\end{subarray}}x_{i_{1}}^{(\mathrm{z}_{1})}x_{i_{2}}^{(\mathrm{z}_{2})} \cdots x_{i_{n}}^{(\mathrm{z}_{n})}. \tag{3.2}\]
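For example, for the colored permutation \(2^{3}4^{3}6^{1}1^{1}5^{1}10^{3}3^{1}7^{1}9^{1}8^{0}\) considered earlier, \(\mathrm{Des}(\pi)=\{3,6,9\}\) and
\[\mathrm{Des}\left(2^{3}4^{3}6^{1}1^{1}5^{1}10^{3}3^{1}7^{1}9^{1}8^{0}\right)=\{2,3,6,9\};\]
in contrast to \(\mathrm{sDes}\), position \(5\), where the color strictly increases, is not a descent, and \(10\notin\mathrm{Des}(\pi,\mathrm{z})\) since \(\mathrm{z}_{10}=0\).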
Adin et al. [1, Proposition 4.2] proved a signed analogue of Equation (2.1), which can be trivially extended to the general case.
**Proposition 3.1**.: _For \(\boldsymbol{\lambda}\vdash n\),_
\[s_{\boldsymbol{\lambda}}=\sum_{\boldsymbol{Q}\in\operatorname{SYT}(\boldsymbol{ \lambda})}F_{\mathrm{co}(\boldsymbol{Q})}^{(r)}. \tag{3.3}\]
Finally, a subset \(\mathcal{A}\subseteq\mathfrak{S}_{n,r}\) is called Schur-positive if the colored quasisymmetric generating function
\[F^{(r)}(\mathcal{A}):=\sum_{(\pi,\mathrm{z})\in\mathcal{A}}F_{(\pi,\mathrm{z})} ^{(r)}\]
is a Schur-positive element of \(\operatorname{Sym}_{n}^{(r)}\). In this case, it follows that \(\operatorname{ch}^{(r)}(\varrho)(\boldsymbol{x})=F^{(r)}(\mathcal{A})\) for some non-virtual \(\mathfrak{S}_{n,r}\)-representation \(\varrho\) (see also [1, Corollary 3.7]) and we will say that \(\mathcal{A}\) is Schur-positive for \(\varrho\).
## 4. Introducing colored zigzag shapes
This section introduces the notion of colored zigzag shapes and proves several properties which will be needed in the sequel.
Following Bergeron and Hohlweg [10, Section 2.1] (see also [13, Section 3.6]), the rainbow decomposition of a colored composition \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) is the unique concatenation \((\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)})\cdots(\alpha_{(m)},\epsilon_{(m)})\) of non-empty, monochromatic colored compositions \(\alpha_{(i)}\) of color \(\epsilon_{(i)}\) such that \(\epsilon_{(i)}\neq\epsilon_{(i+1)}\) for all \(1\leq i\leq m-1\). For example, for \(n=10\) and \(r=4\)
\[\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\ =\ (2)^{0}(2,1)^{1}(1)^{3}(3)^{1}( 1)^{2}.\]
Notice that each \(\epsilon_{(i)}\) is a single color rather than a sequence of colors.
**Definition 4.1**.: An \(r\)-colored zigzag shape with \(n\) cells is a pair \((Z,\epsilon)\), where \(Z=(Z_{1},\ldots,Z_{k})\) is a sequence of zigzag diagrams and \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{k})\in\mathbb{Z}_{r}^{k}\) is a sequence of colors assigned to the parts of \(Z\) such that \(\epsilon_{i}\neq\epsilon_{i+1}\) for every \(1\leq i\leq k-1\).
For example, there exist six \(2\)-colored zigzag shapes with \(2\) cells
\[\left(\mathrm{Z}_{(2)},0\right),\quad\left(\mathrm{Z}_{(2)},1\right),\quad\left(\mathrm{Z}_{(1,1)},0\right),\quad\left(\mathrm{Z}_{(1,1)},1\right),\quad\left(\left(\mathrm{Z}_{(1)},\mathrm{Z}_{(1)}\right),(0,1)\right),\quad\left(\left(\mathrm{Z}_{(1)},\mathrm{Z}_{(1)}\right),(1,0)\right).\]
In general, as the following proposition suggests, the number of \(r\)-colored zigzag shapes with \(n\) cells is equal to \(r(r+1)^{n-1}\), the cardinality of \(\operatorname{Comp}(n,r)\) (see [13, Table 1]).
**Proposition 4.2**.: _The set of \(r\)-colored zigzag shapes with \(n\) cells is in one-to-one correspondence with \(\operatorname{Comp}(n,r)\) and therefore with the set of all \(r\)-colored subsets of \([n]\)._
Proof.: Given a colored composition of \(n\) with rainbow decomposition
\[(\alpha,\epsilon)=(\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)}) \cdots(\alpha_{(m)},\epsilon_{(m)})\]
we form the following colored zigzag shape with \(n\) cells
\[\operatorname{Z}_{(\alpha,\epsilon)}:=\left(\left(\operatorname{Z}_{\alpha_{ (1)}},\operatorname{Z}_{\alpha_{(2)}},\ldots,\operatorname{Z}_{\alpha_{(m)} }\right),\left(\epsilon_{(1)},\epsilon_{(2)},\ldots,\epsilon_{(m)}\right) \right).\]
The map \((\alpha,\epsilon)\mapsto\operatorname{Z}_{(\alpha,\epsilon)}\) is the desired bijection.
For example, the \(4\)-colored zigzag shape with \(10\) cells corresponding to the \(4\)-colored composition of our running example is
\[\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\longleftrightarrow\left(\left(\mathrm{Z}_{(2)},\mathrm{Z}_{(2,1)},\mathrm{Z}_{(1)},\mathrm{Z}_{(3)},\mathrm{Z}_{(1)}\right),\left(0,1,3,1,2\right)\right).\]
Now, to each colored zigzag shape we can associate an \(r\)-partite (skew) shape and consider standard Young \(r\)-partite tableaux of this shape. In particular, given an \(r\)-colored zigzag shape \((Z,\epsilon)\) we define the \(r\)-partite skew shape \(\boldsymbol{\lambda}_{(Z,\epsilon)}:=\left(Z^{(0)},Z^{(1)},\ldots,Z^{(r-1)}\right)\), where
\[Z^{(j)}:=\bigoplus_{\begin{subarray}{c}1\leq i\leq k\\ \epsilon_{i}=j\end{subarray}}Z_{i}\]
for all \(0\leq j\leq r-1\). Here, the direct sum \(\lambda\oplus\mu\) of two (skew) shapes \(\lambda,\mu\) is the skew shape whose diagram is obtained by placing the diagram of \(\lambda\) and \(\mu\) in such a way that the upper-right vertex of \(\lambda\) coincides with the lower-left vertex of \(\mu\). In our running example, we have
\[\boldsymbol{\lambda}_{(Z,\epsilon)}\ =\ \left(\mathrm{Z}_{(2)},\ \mathrm{Z}_{(2,1)}\oplus\mathrm{Z}_{(3)},\ \mathrm{Z}_{(1)},\ \mathrm{Z}_{(1)}\right).\]
way we defined \(\mathbf{Q}\) implies \(\tilde{\zeta}=\pi^{-1}(\mathrm{z})\). These observations imply that \(\mathrm{sDes}(\overline{(\pi,\mathrm{z})}^{-1})\) and \(\mathrm{sDes}(\mathbf{Q})\) have the same color vector and therefore they record the same changes of colors. It remains to examine what happens in the case of constant color. Suppose that the second component of \(\mathrm{sDes}(\overline{(\pi,\mathrm{z})}^{-1})\) is \((\mathrm{z}_{i_{1}},\mathrm{z}_{i_{2}},\ldots,\mathrm{z}_{i_{k}})\), for some \(1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n\). If \(\mathrm{z}_{i_{j}}=\mathrm{z}_{i_{j+1}}\), then \(i_{j}\in\mathrm{Des}(\pi^{-1})\) which implies that \(i_{j}\) and \(i_{j+1}\) belong to the same part \(Q^{(\mathrm{z}_{i_{j}})}\) of \(\mathbf{Q}\) and that \(i_{j}\in\mathrm{Des}(Q^{(\mathrm{z}_{i_{j}})})\) which concludes the proof.
**Example 4.4**.: We illustrate the previous proof in a specific example for \(n=10\) and \(r=4\). Suppose
\[(\alpha,\epsilon)\ =\ \left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\ =\ (2^{0})(2^{1},1^{1})(1^{3})(3^{1})(1^{2}).\]
As we have already computed, its corresponding colored zigzag shape is
\[\left(\left(\mathrm{Z}_{(2)},\mathrm{Z}_{(2,1)},\mathrm{Z}_{(1)},\mathrm{Z}_{(3)},\mathrm{Z}_{(1)}\right),\left(0,1,3,1,2\right)\right)\]
and thus it corresponds to the following \(4\)-partite skew shape
\[\left(\mathrm{Z}_{(2)},\ \mathrm{Z}_{(2,1)}\oplus\mathrm{Z}_{(3)},\ \mathrm{Z}_{(1)},\ \mathrm{Z}_{(1)}\right).\]
Now, we pick an element of \(\mathrm{D}_{(\alpha,\epsilon)}\)
\[(\pi,\mathrm{z})\ =\ 2^{0}3^{0}7^{1}10^{1}5^{1}6^{3}1^{1}8^{1}9^{1}4^{2}\ =\ (23)^{0}(7 \,10\,5)^{1}(6)^{3}(189)^{1}(4)^{2}\]
and form the tableaux
\[Q_{(1)},\quad Q_{(2)},\quad Q_{(3)},\quad Q_{(4)},\quad Q_{(5)},\]
where \(Q_{(i)}\) is the filling of the zigzag diagram \(\mathrm{Z}_{\alpha_{(i)}}\) (here \(\mathrm{Z}_{(2)},\mathrm{Z}_{(2,1)},\mathrm{Z}_{(1)},\mathrm{Z}_{(3)},\mathrm{Z}_{(1)}\)) by the entries of the corresponding block (\(2\,3\), then \(7\,10\,5\), then \(6\), then \(1\,8\,9\) and finally \(4\)),
with corresponding colors
\[\epsilon_{(1)}\ =\ 0,\quad\epsilon_{(2)}\ =\ 1,\quad\epsilon_{(3)}\ =\ 3,\quad \epsilon_{(4)}\ =\ 1,\quad\epsilon_{(5)}\ =\ 2.\]
Taking the direct sum of tableaux of the same color yields the following 4-partite tableau
\[\boldsymbol{Q}\ =\ \left(Q_{(1)},\ Q_{(2)}\oplus Q_{(4)},\ Q_{(5)},\ Q_{(3)}\right)\]
with colored descent set
\[\mathrm{sDes}(\mathbf{Q})\ =\ \left\{1^{1},3^{0},4^{2},5^{1},6^{3},9^{1},10^{1}\right\}\]
which coincides with the colored descent set of the conjugate-inverse of \((\pi,\mathrm{z})\)
\[\overline{(\pi,\mathrm{z})}^{-1}\ =\ 7^{1}1^{0}2^{0}10^{2}5^{1}6^{3}3^{1}8^{1}9^{1}4^{ 1}.\]
## 5. Character formulas for colored descent representations
This section studies colored descent representations in the context of colored zigzag shapes and proves the main results of this paper. In particular, Theorem 5.2 proves that the colored quasisymmetric generating function of conjugate-inverse colored descent classes is Schur-positive and equals the Frobenius image of colored descent representations. Theorem 5.3 provides an alternating formula for the latter in terms of complete homogeneous symmetric functions in the colored context.
Bagno and Biagioli [5, Section 8] studied colored descent representations using the coinvariant algebra as a representation space, extending the techniques of Adin, Brenti and Roichman [2]. We are going to define colored descent representations by means of colored zigzag shapes and
prove that the two descriptions coincide by providing the decomposition into irreducible \(\mathfrak{S}_{n,r}\)-representations.
**Definition 5.1**.: Let \((\alpha,\epsilon)\) be an \(r\)-colored composition of \(n\) with rainbow decomposition \((\alpha,\epsilon)=(\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)}) \cdots\ (\alpha_{(m)},\epsilon_{(m)})\). The element
\[r_{(\alpha,\epsilon)}:=r_{(\alpha,\epsilon)}(\mathbf{x}^{(0)},\mathbf{x}^{(1)},\dots, \mathbf{x}^{(r-1)}):=r_{\alpha_{(1)}}(\mathbf{x}^{(\epsilon_{(1)})})r_{\alpha_{(2)}}( \mathbf{x}^{(\epsilon_{(2)})})\cdots r_{\alpha_{(m)}}(\mathbf{x}^{(\epsilon_{(m)})})\]
of \(\operatorname{Sym}_{n}^{(r)}\) is called the colored ribbon Schur function corresponding to \((\alpha,\epsilon)\) and the (virtual) \(\mathfrak{S}_{n,r}\)-representation \(\varrho_{(\alpha,\epsilon)}\) such that
\[\operatorname{ch}^{(r)}(\varrho_{(\alpha,\epsilon)})=r_{(\alpha,\epsilon)}\]
is called the colored descent representation corresponding to \((\alpha,\epsilon)\).
For example, for \(n=10\) and \(r=4\)
\[r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=r_{(2)}(\mathbf{x}^{(0)})r_{(2,1)}(\mathbf{ x}^{(1)})r_{(1)}(\mathbf{x}^{(3)})r_{(3)}(\mathbf{x}^{(1)})r_{(1)}(\mathbf{x}^{(2)}).\]
The first part of the following theorem shows that colored descent representations are actually non-virtual and coincide with the ones studied by Bagno and Biagioli [5, Theorem 10.5], while the second part extends and complements Adin et al.'s [1, Proposition 5.5(i)] to general colored permutation groups.
**Theorem 5.2**.: _For every \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\),_
\[r_{(\alpha,\epsilon)}\ =\ F^{(r)}(\overline{\operatorname{D}}_{(\alpha, \epsilon)}^{-1})\ =\ \sum_{\mathbf{\lambda}\vdash n}c_{\mathbf{\lambda}}(\alpha,\epsilon)\,s_{\mathbf{\lambda}}, \tag{5.1}\]
_where \(c_{\mathbf{\lambda}}(\alpha,\epsilon)\) is the number of \(\mathbf{Q}\in\operatorname{SYT}(\mathbf{\lambda})\) such that \(\operatorname{co}(\mathbf{Q})=(\alpha,\epsilon)\). In particular, conjugate-inverse colored descent classes are Schur-positive for colored descent representations._
The proof of Theorem 5.2 is essentially a colored version of that of Proposition 2.2. It is based on a colored analogue of the well-known Robinson-Schensted correspondence, first considered by White [22] and further studied by Stanton and White [20] (see also [17, Section 6] and [1, Section 5] for the case of two colors). It is a bijection from \(\mathfrak{S}_{n,r}\) to the set of all pairs of standard Young \(r\)-partite tableaux of the same shape and size \(n\). If \(w\mapsto(\mathbf{P},\mathbf{Q})\) under this correspondence, then
\[\operatorname{sDes}(w) =\operatorname{sDes}(\mathbf{Q})\] \[\operatorname{sDes}(\overline{w}^{-1}) =\operatorname{sDes}(\mathbf{P}).\]
Proof of Theorem 5.2.: The first equality of Equation (5.1) follows directly from Proposition 4.3. For the second equality, applying the colored analogue of the Robinson-Schensted correspondence yields
\[F^{(r)}(\overline{\operatorname{D}}_{(\alpha,\epsilon)}^{-1})=\sum_{\mathbf{ \lambda}\vdash n}\sum_{\begin{subarray}{c}\mathbf{P},\,\mathbf{Q}\in\operatorname{SYT }(\mathbf{\lambda})\\ \operatorname{co}(\mathbf{P})=(\alpha,\epsilon)\end{subarray}}F^{(r)}_{ \operatorname{co}(\mathbf{Q})},\]
and the proof follows from Equation (3.3).
In our running example, we see that
\[\begin{split} r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}&=s_{2}(\boldsymbol{x}^{(0)})s_{21}(\boldsymbol{x}^{(1)})s_{3}(\boldsymbol{x}^{(1)})s_{1}(\boldsymbol{x}^{(2)})s_{1}(\boldsymbol{x}^{(3)})\\ &=s_{2}(\boldsymbol{x}^{(0)})\left(s_{321}(\boldsymbol{x}^{(1)})+s_{411}(\boldsymbol{x}^{(1)})+s_{42}(\boldsymbol{x}^{(1)})+s_{51}(\boldsymbol{x}^{(1)})\right)s_{1}(\boldsymbol{x}^{(2)})s_{1}(\boldsymbol{x}^{(3)})\\ &=s_{(2,321,1,1)}+s_{(2,411,1,1)}+s_{(2,42,1,1)}+s_{(2,51,1,1)},\end{split}\]
where we omitted the parentheses and commas in (regular) partitions for ease of notation. There are many ways to make this computation, the most "powerful" of which is to implement the Littlewood-Richardson rule [18, Section 7.15]. Thus, the decomposition of the colored descent representation corresponding to \((2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})\) is the multiplicity free direct sum of the irreducible \(\mathfrak{S}_{10,4}\)-representations corresponding to the \(4\)-partite partitions \((2,321,1,1)\), \((2,411,1,1)\), \((2,42,1,1)\) and \((2,51,1,1)\).
We can express the colored ribbon Schur function as an alternating sum of elements of a basis of \(\operatorname{Sym}_{n}^{(r)}\) which can be viewed as the colored analogue of the basis of complete homogeneous symmetric functions. For an \(r\)-partite partition \(\boldsymbol{\lambda}=(\lambda^{(0)},\lambda^{(1)},\ldots,\lambda^{(r-1)})\), let
\[h_{\boldsymbol{\lambda}}:=h_{\boldsymbol{\lambda}}(\boldsymbol{x}^{(0)}, \boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(r-1)}):=h_{\lambda^{(0)}}( \boldsymbol{x}^{(0)})h_{\lambda^{(1)}}(\boldsymbol{x}^{(1)})\cdots h_{ \lambda^{(r-1)}}(\boldsymbol{x}^{(r-1)}).\]
The set \(\{h_{\boldsymbol{\lambda}}:\boldsymbol{\lambda}\vdash n\}\) forms a basis for \(\operatorname{Sym}_{n}^{(r)}\).
Similarly to the classical case, given \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) we can form an \(r\)-partite partition \(\boldsymbol{\lambda}_{(\alpha,\epsilon)}\) of \(n\) by first splitting its entries into colored components and then rearranging the entries of each component in weakly decreasing order. We write \(h_{(\alpha,\epsilon)}:=h_{\boldsymbol{\lambda}_{(\alpha,\epsilon)}}\).
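In our running example, splitting \((2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})\) into colored components yields
\[\boldsymbol{\lambda}_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=\left((2),(3,2,1),(1),(1)\right),\]
the \(4\)-partite partition we met in Section 3, so that \(h_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=h_{2}(\boldsymbol{x}^{(0)})h_{321}(\boldsymbol{x}^{(1)})h_{1}(\boldsymbol{x}^{(2)})h_{1}(\boldsymbol{x}^{(3)})\).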
**Theorem 5.3**.: _For every \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\),_
\[r_{(\alpha,\epsilon)}\ =\ \sum_{\begin{subarray}{c}(\beta,\delta)\in \operatorname{Comp}(n,r)\\ (\beta,\delta)\preceq(\alpha,\epsilon)\end{subarray}}(-1)^{\ell(\alpha)- \ell(\beta)}\,h_{(\beta,\delta)}. \tag{5.2}\]
Proof.: Let \((\alpha,\epsilon)\) be a colored composition of \(n\) with rainbow decomposition
\[(\alpha,\epsilon)=(\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)}) \cdots(\alpha_{(m)},\epsilon_{(m)}).\]
Expanding each term \(r_{\alpha_{(i)}}(\boldsymbol{x}^{(\epsilon_{(i)})})\) in the definition of the colored ribbon Schur function \(r_{(\alpha,\epsilon)}\) according to Equation (2.3) yields
\[\begin{split} r_{(\alpha,\epsilon)}&=r_{\alpha_{(1 )}}(\boldsymbol{x}^{(\epsilon_{(1)})})r_{\alpha_{(2)}}(\boldsymbol{x}^{( \epsilon_{(2)})})\cdots r_{\alpha_{(m)}}(\boldsymbol{x}^{(\epsilon_{(m)})})\\ &=\prod_{1\leq i\leq m}\sum_{\beta_{(i)}\preceq\alpha_{(i)}}(-1)^ {\ell(\alpha_{(i)})-\ell(\beta_{(i)})}h_{\beta_{(i)}}(\boldsymbol{x}^{( \epsilon_{(i)})})\\ &=\sum_{\begin{subarray}{c}1\leq i\leq m\\ \beta_{(i)}\preceq\alpha_{(i)}\end{subarray}}(-1)^{\ell(\alpha)-(\ell(\beta_{(1 )})+\cdots+\ell(\beta_{(m)}))}\,h_{\beta_{(1)}}(\boldsymbol{x}^{(\epsilon_{( 1)})})\cdots h_{\beta_{(m)}}(\boldsymbol{x}^{(\epsilon_{(m)})}),\end{split}\]
since \(\ell(\alpha)=\ell(\alpha_{(1)})+\cdots+\ell(\alpha_{(m)})\). The proof follows by considering the colored composition with rainbow decomposition \((\beta,\delta)=(\beta_{(1)},\epsilon_{(1)})\cdots(\beta_{(m)},\epsilon_{(m)})\) and noticing that the conditions \(\beta_{(i)}\preceq\alpha_{(i)}\) for all \(1\leq i\leq m\) are precisely equivalent to \((\beta,\delta)\preceq(\alpha,\epsilon)\) and that
\[\ell(\beta) =\ell(\beta_{(1)})+\cdots+\ell(\beta_{(m)})\] \[h_{(\beta,\delta)} =h_{\beta_{(1)}}(\mathbf{x}^{(\epsilon_{(1)})})\cdots h_{\beta_{(m)}} (\mathbf{x}^{(\epsilon_{(m)})}).\]
In our running example, we have
\[r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=h_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^ {2})}-h_{(2^{0},3^{1},1^{3},3^{1},1^{2})}\]
which is in agreement with the expansion in the Schur basis that we calculated above, since
\[h_{321}-h_{33}=s_{321}+s_{411}+s_{42}+s_{51}.\]
Finally, let us describe the representation-theoretic version of Equation (5.2). For this we need to introduce some notation. We fix a primitive \(r\)-th root of unity \(\omega\). For all \(0\leq j\leq r-1\), let \(\mathds{1}_{n,j}\) be the irreducible \(\mathfrak{S}_{n,r}\)-representation corresponding to the \(r\)-partite partition having all parts empty, except for the part of color \(j\) which is equal to \((n)\). Then,
\[\mathds{1}_{n,j}(\pi,\epsilon)=\omega^{j(\epsilon_{1}+\epsilon_{2}+\cdots+ \epsilon_{n})}\]
for all \((\pi,\epsilon)\in\mathfrak{S}_{n,r}\) (see, for example, [8, Section 4]).
For \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) of length \(k\), we define the following \(\mathfrak{S}_{\alpha,r}\)-representation
\[\mathds{1}_{(\alpha,\epsilon)}:=\mathds{1}_{\alpha_{1},\epsilon_{1}}\otimes \mathds{1}_{\alpha_{2},\epsilon_{2}}\otimes\cdots\otimes\mathds{1}_{\alpha_{k },\epsilon_{k}},\]
where \(\mathfrak{S}_{\alpha,r}:=\mathfrak{S}_{\alpha_{1},r}\times\mathfrak{S}_{ \alpha_{2},r}\times\cdots\times\mathfrak{S}_{\alpha_{k},r}\) is embedded in \(\mathfrak{S}_{n,r}\) in the obvious way. Since the colored characteristic map is a ring homomorphism, we have
\[\operatorname{ch}^{(r)}\left(\mathds{1}_{(\alpha,\epsilon)}\uparrow_{ \mathfrak{S}_{\alpha,r}}^{\mathfrak{S}_{n,r}}\right)\ =\ \operatorname{ch}^{(r)}(\mathds{1}_{\alpha_{1},\epsilon_{1}})\cdots\operatorname{ch}^{(r)}(\mathds{1}_{\alpha_{k},\epsilon_{k}})\ =\ h_{\alpha_{1}}(\mathbf{x}^{(\epsilon_{1})})h_{\alpha_{2}}(\mathbf{x}^{(\epsilon_{2} )})\cdots h_{\alpha_{k}}(\mathbf{x}^{(\epsilon_{k})}), \tag{5.3}\]
where the second equality follows from basic properties of the colored characteristic map [16, Corollary 3]
\[\operatorname{ch}^{(r)}(\mathds{1}_{\alpha_{i},\epsilon_{i}})=s_{(\alpha_{i} )}(\mathbf{x}^{(\epsilon_{i})})=h_{\alpha_{i}}(\mathbf{x}^{(\epsilon_{i})}).\]
Since the characteristic map is actually a ring isomorphism [16, Theorem 2], applying it to Equation (5.2) yields the following.
**Corollary 5.4**.: _For all \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\), the character \(\chi_{(\alpha,\epsilon)}\) of the colored descent representation corresponding to \((\alpha,\epsilon)\) satisfies_
\[\chi_{(\alpha,\epsilon)}=\sum_{\begin{subarray}{c}(\beta,\delta)\in \operatorname{Comp}(n,r)\\ (\beta,\delta)\preceq(\alpha,\epsilon)\end{subarray}}(-1)^{\ell(\alpha)-\ell( \beta)}\,\mathds{1}_{(\beta,\delta)}\uparrow_{\mathfrak{S}_{\beta,r}}^{ \mathfrak{S}_{n,r}}. \tag{5.4}\]
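For instance, in our running example the only colored composition strictly below \((2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})\) in the partial order of reverse refinement is \((2^{0},3^{1},1^{3},3^{1},1^{2})\), so Equation (5.4) reads
\[\chi_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=\mathds{1}_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}\uparrow^{\mathfrak{S}_{10,4}}-\ \mathds{1}_{(2^{0},3^{1},1^{3},3^{1},1^{2})}\uparrow^{\mathfrak{S}_{10,4}},\]
in agreement with the expansion of \(r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}\) in complete homogeneous symmetric functions computed above.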
Schur functions can also be viewed as generating functions of certain \(P\)-partitions, where \(P\) is a Schur labeled poset arising from the (possibly skew) Young diagram (see, for example, [12, Section 2] and [18, Section 7.19]). In view of this, we conclude with the following question.
**Question 5.5**.: Are colored ribbon Schur functions generating functions of certain colored \(P\)-partitions, where \(P\) arises from the corresponding colored zigzag shape, in the sense of [13]?
## Acknowledgments
The author would like to thank Christos Athanasiadis for suggesting the problem and providing Equation (5.2) in the hyperoctahedral case.
| The quasisymmetric generating function of the set of permutations whose inverses have a fixed descent set is known to be symmetric and Schur-positive. The corresponding representation of the symmetric group is called the descent representation. In this paper, we provide an extension of this result to colored permutation groups, where Gessel's fundamental quasisymmetric functions are replaced by Poirier's colored quasisymmetric functions. For this purpose, we introduce a colored analogue of zigzag shapes and prove that the representations associated with these shapes coincide with colored descent representations studied by Adin, Brenti and Roichman in the case of two colors and Bagno and Biagioli in the general case. Additionally, we provide a colored analogue of MacMahon's alternating formula which expresses ribbon Schur functions in the basis of complete homogeneous symmetric functions. |
2309.03406 | Distribution-Aware Prompt Tuning for Vision-Language Models | Pre-trained vision-language models (VLMs) have shown impressive performance
on various downstream tasks by utilizing knowledge learned from large data. In
general, the performance of VLMs on target tasks can be further improved by
prompt tuning, which adds context to the input image or text. By leveraging
data from target tasks, various prompt-tuning methods have been studied in the
literature. A key to prompt tuning is the feature space alignment between two
modalities via learnable vectors with model parameters fixed. We observed that
the alignment becomes more effective when embeddings of each modality are
`well-arranged' in the latent space. Inspired by this observation, we proposed
distribution-aware prompt tuning (DAPT) for vision-language models, which is
simple yet effective. Specifically, the prompts are learned by maximizing
inter-dispersion, the distance between classes, as well as minimizing the
intra-dispersion measured by the distance between embeddings from the same
class. Our extensive experiments on 11 benchmark datasets demonstrate that our
method significantly improves generalizability. The code is available at
https://github.com/mlvlab/DAPT. | Eulrang Cho, Jooyeon Kim, Hyunwoo J. Kim | 2023-09-06T23:49:11 | http://arxiv.org/abs/2309.03406v1 | # Distribution-Aware Prompt Tuning for Vision-Language Models
###### Abstract
Pre-trained vision-language models (VLMs) have shown impressive performance on various downstream tasks by utilizing knowledge learned from large data. In general, the performance of VLMs on target tasks can be further improved by prompt tuning, which adds context to the input image or text. By leveraging data from target tasks, various prompt-tuning methods have been studied in the literature. A key to prompt tuning is the feature space alignment between two modalities via learnable vectors with model parameters fixed. We observed that the alignment becomes more effective when embeddings of each modality are 'well-arranged' in the latent space. Inspired by this observation, we proposed distribution-aware prompt tuning (DAPT) for vision-language models, which is simple yet effective. Specifically, the prompts are learned by maximizing inter-dispersion, the distance between classes, as well as minimizing the intra-dispersion measured by the distance between embeddings from the same class. Our extensive experiments on 11 benchmark datasets demonstrate that our method significantly improves generalizability. The code is available at [https://github.com/mlvlab/DAPT](https://github.com/mlvlab/DAPT).
## 1 Introduction
In recent years, pre-trained vision-language models (VLMs) have shown great success in a wide range of applications in computer vision such as image classification [29, 33], object detection [7, 11, 44], captioning [21, 24, 42], and visual question answering (VQA) [9]. Notably, VLMs have shown promising generalization power and transferability in various downstream tasks. For instance, VLMs such as CLIP [29] and ALIGN [15] show outstanding performance in zero-shot and few-shot learning. These models opened the door for zero-shot image classification and zero-shot object detection. To further improve the pre-trained models' zero-shot generalization ability, _prompting_ has been proposed. For instance, in image classification, CLIP [29] suggests using a context text "A photo of a " in front of the class label [CLASS] to obtain text embeddings for target classes.
Prompting is an emerging research topic due to several advantages over fine-tuning, which is a conventional approach to utilize pre-trained deep-learning models. For a pre-trained VLM, fine-tuning is often practically challenging due to the large number of model parameters. In ad
Prompting is an emerging research topic due to several advantages over fine-tuning, which is a conventional approach to utilizing pre-trained deep-learning models. For a pre-trained VLM, fine-tuning is often practically challenging due to the large number of model parameters.
In addition, fine-tuning the entire VLM often results in overfitting due to the small amount of target domain data. Zhou _et al_. [47] have shown that more powerful context strings (_hard prompts_) exist. However, manually finding better hard prompts (prompt engineering) is time-consuming and suboptimal. Consequently, a line of work has proposed prompt tuning that optimizes _soft prompts_, learnable vectors [16, 20, 47]. The learnable vectors are concatenated with other inputs and numerically optimized by backpropagation while the pre-trained VLM's parameters are fixed.
Prompt tuning can be viewed as the alignment between the two latent spaces of text and image. Figure 1 shows that each latent space of CLIP is not suitable for feature alignment. The text embeddings of target classes obtained from the original CLIP in Figure 1(a) are gathered close together, which potentially leads to misclassification between nearby classes. In addition, the original CLIP's visual embeddings in Figure 1(c) are widely spread, and some regions overlap. To address this problem, we propose a prompt tuning method, **DAPT**1, that optimizes the distribution of embeddings in each modality for better feature alignment.
Footnote 1: Distribution-Aware Prompt Tuning
DAPT learns vectors (_i.e_., soft prompts) for both text and image encoders with additional loss terms - inter-dispersion loss and intra-dispersion loss. Specifically, we apply the inter-dispersion loss to the text prompts to spread text embeddings. On the other hand, intra-dispersion loss is applied to the visual prompts to minimize the variability of image embeddings of the same class.
To verify the effectiveness of DAPT, we conducted experiments on few-shot learning and domain generalization tasks with various benchmark datasets. For few-shot learning with one sample (1-shot) to 16 samples (16-shots) per class, the proposed method is evaluated on 11 benchmark datasets. For domain generalization, 4 benchmark datasets were used after few-shot learning on ImageNet [5]. Overall, we achieve a significant improvement over recent baselines for few-shot learning and domain generalization.
In summary, we propose DAPT, a prompt tuning method that is aware of the data distributions to improve the performance of VLMs in the few-shot learning setup. Unlike the orthodox prompt tuning method, DAPT optimizes text and visual prompts to find the appropriate distribution in each modality. In Section 3, we discuss the details of DAPT and show the various experiments in Section 4.
## 2 Related Work
**Pre-Trained Vision-Language Models.** Pre-trained vision-language models (VLMs) [15, 29, 40, 43] jointly learn text and image embeddings with large-scale noisy image-text paired datasets. Among these, CLIP [29] and ALIGN [15] optimize cross-modal representations between positive pairs via contrastive learning and demonstrate impressive performance in various downstream tasks [7, 11, 21, 24, 45]. In addition, succeeding research has explored approaches to enhance the ability of VLMs by adjusting the latent space. For instance, Wang _et al_. [38] claim that alignment and uniformity are two key properties to optimize. By extending these properties, Goel _et al_. [10] propose CyCLIP to mitigate inconsistent predictions in CLIP, fixing the CLIP embedding geometry.
**Prompt Tuning.** Prompting has been studied in natural language processing (NLP). Prompt tuning methods such as Petroni _et al_. [28], Shin _et al_. [32], and Jiang _et al_. [17] are proposed to construct suitable prompt templates. Under the influence of NLP, prompt tuning methods for vision-language models are actively studied in the computer vision domain. Unlike the hard prompts suggested in CLIP [29], several works have studied soft prompts by optimizing learnable vectors in the text or visual modality. CoOp [47] composes a prompt by concatenating learnable vectors with the label embedding and feeding it to the text encoder. CoCoOp [46] is an advanced version of CoOp and improves generalizability on unseen classes. Also, VPT [16] and VP [1] propose prompt tuning on the visual modality. VPT uses learnable vectors for prompt tuning in the Vision Transformer [6]. Different from prior works, VP suggests image pixel-level prompt tuning in the CLIP image encoder. Those prompt tuning methods show remarkable transferability and generalizability with only a few parameters. More recently, ProDA [22] and PLOT [3] use multiple prompts and demonstrate better performance than a single text prompt. Based on the recent success of prompt tuning, there are multimodal prompt tuning methods for VLMs. UPT [41] jointly optimizes modality-agnostic prompts with extra layers. MVLPT [31] focuses on multi-task prompting. MaPLe [18] improves the generalizability of VLMs via multimodal prompting.
## 3 Method
In this section, we briefly revisit CLIP [29] and several prompt tuning methods [16, 47] in Section 3.1. Then we describe our distribution-aware prompt tuning, DAPT, in detail in Section 3.2.
### Preliminaries
CLIP [29] is a vision-language model trained via contrastive learning on a massive number of image-text pairs. In general, CLIP consists of an image encoder \(f\) and a text encoder \(g\). Given an image \(\mathbf{x}\) and a text label \(\mathbf{t}\), the image embedding \(\mathbf{z}\) and the text embedding \(\mathbf{w}\) can be obtained as follows:
\[\mathbf{z}=f(\mathbf{x}) \tag{1}\] \[\mathbf{w}=g(\mathbf{t}). \tag{2}\]
Note that image embedding \(\mathbf{z}\) and text embedding \(\mathbf{w}\) are normalized. Given \(C\) image classes, the prediction probability can be calculated by softmax with the cosine similarity between the image embeddings and the corresponding text embeddings representing the image class given as:
\[p(y=c|\mathbf{x})=\frac{\exp{(\mathbf{z}^{\top}\mathbf{w}_{c}/\tau)}}{\sum_{j=1}^{C}\exp{(\bm {z}^{\top}\mathbf{w}_{j}/\tau)}}, \tag{3}\]
where \(\tau\) is a temperature parameter, and \(\mathbf{w}_{c}\) represents the text embedding of the class label \(\mathbf{t}_{c}\). Combining with cross-entropy, we define CLIP loss, \(\mathcal{L}_{\text{CLIP}}\), as follows:
\[\mathcal{L}_{\text{CLIP}}=-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp{(\mathbf{z}_{i }^{\top}\mathbf{w}_{y_{i}}/\tau)}}{\sum_{j=1}^{C}\exp{(\mathbf{z}_{i}^{\top}\mathbf{w}_{j} /\tau)}}, \tag{4}\]
where \(y_{i}\) denotes the class of \(i\)-th image \(\mathbf{x}\), and \(B\) is the batch of image-text pairs.
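For concreteness, the loss in (4) amounts to a standard cross-entropy over temperature-scaled cosine similarities. The following is a minimal PyTorch sketch (the function name and the temperature value are illustrative, not from the paper; the embeddings are assumed to be precomputed and \(\ell_{2}\)-normalized):

```
import torch
import torch.nn.functional as F

def clip_loss(z, w, labels, tau=0.01):
    # z: (B, d) normalized image embeddings; w: (C, d) normalized text embeddings
    # labels: (B,) ground-truth class indices y_i
    logits = z @ w.t() / tau                # cosine similarities z_i^T w_j scaled by 1/tau
    return F.cross_entropy(logits, labels)  # averages -log softmax over the batch, as in (4)
```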
**Text Prompt.** CoOp [47] is the first approach to apply prompt tuning to the text encoder of CLIP. In CoOp, the text prompt \(\mathbf{p}\) is represented as learnable vectors \(v\) combined with the class token. Then, the input of the text encoder is given as:
\[\mathbf{p}_{j}=[v_{1},v_{2},\cdots,v_{L},\texttt{CLASS}]. \tag{5}\]
The output of the text encoder with soft prompts is represented as:
\[\tilde{\mathbf{w}}_{j}=g(\mathbf{p}_{j}). \tag{6}\]
Note that \(\tilde{\mathbf{w}}\) is normalized. CoOp uses various configurations with respect to the length \(L\) and positions depending on datasets. They can be viewed as hyperparameters for prompt tuning. In our method, we fixed the hyperparameters for all settings. The learnable vectors of the text prompt are placed in front of CLASS with a length of \(L=16\).
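A minimal sketch of this construction is shown below; the module name and the embedding width are illustrative assumptions, with the context length fixed to \(L=16\) as above:

```
import torch
import torch.nn as nn

class TextPrompt(nn.Module):
    # L learnable context vectors v_1, ..., v_L shared across classes, as in (5)
    def __init__(self, L=16, dim=512):
        super().__init__()
        self.ctx = nn.Parameter(0.02 * torch.randn(L, dim))

    def forward(self, class_embeds):
        # class_embeds: (C, T, dim) token embeddings of the class names
        ctx = self.ctx.unsqueeze(0).expand(class_embeds.size(0), -1, -1)
        return torch.cat([ctx, class_embeds], dim=1)  # p_j = [v_1, ..., v_L, CLASS]
```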
**Visual Prompt.** In the computer vision domain, VPT [16] proposed a visual prompt tuning method for Vision Transformers (ViT) [6]. Similar to CoOp, VPT inserts the learnable vector \(u\) between class token \(\texttt{CLS}\) and image patch embeddings \(\mathbf{E}\) for the image encoder. Since CLIP [29] uses ViT backbone for the image encoder, we define the visual prompt in CLIP as below:
\[\mathbf{q}_{i}=[\texttt{CLS},u_{1},u_{2},\cdots,u_{L},\mathbf{E}]. \tag{7}\]
We set the length of learnable vectors of the visual prompt to \(L=16\), which is the same as the text prompt in (5). From (7), we can obtain output image embedding \(\tilde{\mathbf{z}}_{i}\) with visual prompt \(\mathbf{q}_{i}\) as:
\[\tilde{\mathbf{z}}_{i}=f(\mathbf{q}_{i}). \tag{8}\]
Note that \(\tilde{\mathbf{z}}\) is normalized.
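Continuing the sketch above, the visual prompt in (7) can be inserted between the class token and the patch embeddings as follows (the helper name is illustrative):

```
import torch

def add_visual_prompt(cls_token, prompt, patches):
    # cls_token: (B, 1, dim); prompt: (L, dim) learnable vectors u_1, ..., u_L
    # patches: (B, N, dim) patch embeddings E
    p = prompt.unsqueeze(0).expand(cls_token.size(0), -1, -1)
    return torch.cat([cls_token, p, patches], dim=1)  # q_i = [CLS, u_1, ..., u_L, E]
```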
**Prompt Tuning.** When fine-tuning CLIP with prompts, the weights of all layers in the image encoder and the text encoder are typically frozen. Therefore, only the prompts are optimized. For large-scale pre-trained models, prompt tuning is often more effective and efficient than traditional fine-tuning methods such as linear probing and full fine-tuning of all layers.
### Distribution-Aware Prompt Tuning
We present DAPT, which improves feature alignment between the text and visual modalities by optimizing the distributions of embeddings via the inter-dispersion and intra-dispersion losses. The overall pipeline of the proposed method is presented in Figure 2, and how the inter-dispersion and intra-dispersion losses optimize the text and visual latent spaces is depicted in Figures 2(b) and 2(c), respectively.
**Inter-Dispersion Loss for Text Prompts.** A small distance between text (label) embeddings may lead to misclassification and make it difficult to align visual features. To address this issue, we introduce an inter-dispersion loss on text embeddings based on _uniformity_, inspired by Wang _et al_. [38]. _Uniformity_ means that the feature embeddings are roughly uniformly distributed on the hypersphere. This minimizes the overlap between embeddings and enables better alignment. With normalized text embeddings \(\tilde{\mathbf{w}}\), we define the Gaussian potential kernel \(\mathrm{G}\) as follows:
\[\mathrm{G}(\tilde{\mathbf{w}}_{m},\tilde{\mathbf{w}}_{n}):=\exp{(-t\|\tilde{\mathbf{w}}_{m }-\tilde{\mathbf{w}}_{n}\|_{2}^{2})}, \tag{9}\]
where \(m,n\in C\) and \(m\neq n\).
Minimizing the Gaussian potential kernel \(\mathrm{G}\) above increases the distance between the text embeddings of prompt \(\mathbf{p}_{m}\) and \(\mathbf{p}_{n}\) on the hypersphere. To optimize the distribution of text embeddings encouraging _uniformity_, we define the inter-dispersion loss as follows:
\[\mathcal{L}_{\text{inter}} =\sum_{m\neq n}\mathrm{G}(\tilde{\mathbf{w}}_{m},\tilde{\mathbf{w}}_{n}) \tag{10}\] \[=\sum_{m\neq n}\exp{(-t\|\tilde{\mathbf{w}}_{m}-\tilde{\mathbf{w}}_{n}\|_ {2}^{2})}. \tag{11}\]
Note that we set hyperparameter \(t=2\) for all experiments in this paper.
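Since the text embeddings are normalized, \(\|\tilde{\mathbf{w}}_{m}-\tilde{\mathbf{w}}_{n}\|_{2}^{2}=2-2\,\tilde{\mathbf{w}}_{m}^{\top}\tilde{\mathbf{w}}_{n}\), which gives a simple vectorized form of (11); a sketch with an illustrative function name is:

```
import torch

def inter_dispersion(w_tilde, t=2.0):
    # w_tilde: (C, d) L2-normalized text embeddings produced with the text prompt
    d2 = 2.0 - 2.0 * w_tilde @ w_tilde.t()  # squared distances ||w_m - w_n||^2 for unit vectors
    g = torch.exp(-t * d2)                  # Gaussian potential kernel G of Eq. (9)
    return g.sum() - g.diagonal().sum()     # keep only the m != n terms of Eq. (11)
```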
**Intra-Dispersion Loss for Visual Prompts.** Given a class, unlike the uniquely defined text (label) embedding, multiple visual embeddings exist in the latent space. Specifically, due to multiple images per class in the dataset, various image embeddings are obtained from the image encoder.
For better alignment between the text and image embeddings given class \(\mathbf{t}_{c}\), image embeddings of the same class should be close to each other. To reduce the intra-class distance of image embeddings \(\tilde{\mathbf{z}}_{i}\) and \(\tilde{\mathbf{z}}_{j}\), we define the prototype \(\mathbf{s}\) motivated by PROTONET [34] with training samples
\(\mathcal{D}_{N}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) as follows:
\[\mathbf{s}_{c}=\frac{1}{N}\sum\limits_{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_{N}^{c}}\mathbf{z}_{i}, \tag{12}\]
where \(\mathbf{z}_{i}=f(\mathbf{x}_{i})\). Note that \(\mathcal{D}_{N}^{c}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_{N}|\mathbf{y}_{i}=c\}\) and \(N\) is the number of training samples. In order to cluster image embeddings with the same class, we assume that each embedding should be close to its prototype. Therefore, the intra-dispersion loss \(\mathcal{L}_{\text{intra}}\), which reduces the distance between the image embedding and the prototype \(\mathbf{s}\), is defined as follows:
\[\mathcal{L}_{\text{intra}}=\sum\limits_{c}\sum\limits_{i}\mathbb{1}_{[y_{i}= c]}\|\tilde{\mathbf{z}}_{i}-\mathbf{s}_{c}\|_{2}^{2}, \tag{13}\]
where \(c\) is the corresponding class index of input image \(\mathbf{x}_{i}\).
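The prototypes in (12) and the loss in (13) can be sketched as follows, assuming the few-shot setting where each class contributes the same number of training samples (function names are illustrative):

```
import torch

def class_prototypes(z, labels, num_classes):
    # z: (N, d) zero-shot CLIP image embeddings of the training set; labels: (N,)
    # s_c is the mean of the embeddings of class c, as in Eq. (12)
    return torch.stack([z[labels == c].mean(dim=0) for c in range(num_classes)])

def intra_dispersion(z_tilde, labels, prototypes):
    # z_tilde: (B, d) prompted image embeddings; prototypes: (C, d)
    return (z_tilde - prototypes[labels]).pow(2).sum()  # Eq. (13)
```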
**Optimization.** Combining the CLIP loss in (4), our dispersion losses in (11) and (13), DAPT optimizes text prompt \(\mathbf{p}\) from (5) and visual prompt \(\mathbf{q}\) from (7) by minimizing the following total loss:
\[\mathcal{L}=\mathcal{L}_{\text{CLIP}}+\beta_{t}\mathcal{L}_{\text{inter}}+ \beta_{v}\mathcal{L}_{\text{intra}}, \tag{14}\]
where \(\beta_{t}\) and \(\beta_{v}\) are hyperparameters for each dispersion loss.
```
0: Pre-trained CLIP image encoder \(f\) and text encoder \(g\), dataset \(\mathcal{D}_{N}\) with \(C\) classes
1:\(\mathbf{z}_{i}\leftarrow\ f(\mathbf{x}_{i})\)\(\triangleright\) See (1).
2:\(\mathbf{s}_{c}\leftarrow\ \frac{1}{N}\sum\limits_{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_{N}^{c}}\mathbf{z}_{i}\), for \(\forall c\)\(\triangleright\) See (12).
3:\(\tilde{\mathbf{z}}_{i}\leftarrow\ f(\mathbf{q}_{i})\)\(\triangleright\) See (7) and (8).
4:\(\tilde{\mathbf{w}}_{j}\gets\ g(\mathbf{p}_{j})\)\(\triangleright\) See (5) and (6).
5:for \((\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_{N}\) do
6:\(\mathcal{L}_{\text{CLIP}}\leftarrow\ -\frac{1}{B}\sum\limits_{i=1}^{B} \log\frac{\exp{(\tilde{\mathbf{z}}_{i}^{\top}\tilde{\mathbf{w}}_{y_{i}}/\tau)}}{\sum \limits_{j=1}^{C}\exp{(\tilde{\mathbf{z}}_{i}^{\top}\tilde{\mathbf{w}}_{j}/\tau)}}\)\(\triangleright\) See (4).
7:\(\mathcal{L}_{\text{inter}}\leftarrow\ \sum\limits_{m\neq n}\exp{(-t\|\tilde{\mathbf{w}}_{m}- \tilde{\mathbf{w}}_{n}\|_{2}^{2})}\)\(\triangleright\) See (11).
8:\(\mathcal{L}_{\text{intra}}\leftarrow\ \sum\limits_{c}\sum\limits_{i}1_{[y_{i}=c]}\| \tilde{\mathbf{z}}_{i}-\mathbf{s}_{c}\|_{2}^{2}\)\(\triangleright\) See (13).
9:\(\mathcal{L}\leftarrow\ \mathcal{L}_{\text{CLIP}}+\beta_{t}\mathcal{L}_{\text{inter}}+ \beta_{v}\mathcal{L}_{\text{intra}}\)
10:\(\mathcal{L}\).backward()
11:endfor
12:\(\tilde{\mathbf{z}}\).update()
13:\(\tilde{\mathbf{w}}\).update()
```
**Algorithm 1** DAPT
Algorithm 1 summarizes how DAPT optimizes the text and visual prompts with respect to the distribution of each modality in the latent spaces by minimizing the proposed loss in (14). To sum up, during training, the text and visual prompts are optimized via the combined loss, which consists of the inter-dispersion loss, the intra-dispersion loss, and the CLIP loss.
Figure 2: **Overall architecture of DAPT.** (a) DAPT consists of the CLIP [29] architecture combined with CoOp [47] and VPT [16]. The symbols denote the text and visual output embeddings (_i.e._, \(\mathbf{w}\) and \(\mathbf{z}\)), the text and visual outputs combined with prompts (_i.e._, \(\tilde{\mathbf{w}}\) and \(\tilde{\mathbf{z}}\)), and the prototype \(\mathbf{s}\). Following the prompt tuning manner, the text and image encoders are frozen during training, and only the prompts are updated. (b) The inter-dispersion loss \(\mathcal{L}_{\text{inter}}\), defined from the Gaussian potential kernel \(\mathrm{G}\), is applied to text prompts to expand the distance between text embeddings \(\tilde{\mathbf{w}}\) and avoid embedding collapse. (c) To aggregate image embeddings within the same class, we define the prototype \(\mathbf{s}\), which represents the image embeddings of each class, by calculating the average of the zero-shot CLIP image embeddings \(\mathbf{z}\). Then, the intra-dispersion loss \(\mathcal{L}_{\text{intra}}\) is applied to the visual prompt to gather image embeddings around the prototype \(\mathbf{s}\).
## 4 Experiments
**Datasets.** We evaluate DAPT on few-shot image classification and domain generalization settings. For few-shot learning, we evaluate on 11 public datasets, Food101 [2], DTD [4], ImageNet [5], Caltech101 [8], EuroSAT [12], StanfordCars [19], FGVCAircraft [23], Flowers102 [25], OxfordPets [26], UCF101 [35], and SUN397 [39], using 1, 2, 4, 8, and 16 shots per dataset. In the domain generalization setting, we set the source dataset to ImageNet and test on the target datasets - ImageNet-R [13], ImageNet-A [14], ImageNetV2 [30], and ImageNet-Sketch [37].
**Baselines.** In the experiments, we compare with zero-shot CLIP, linear probe CLIP, CoOp [47], and VPT [16]. In the case of zero-shot CLIP, we test with pre-trained CLIP without additional training. On the other hand, we fine-tune the classifier in the case of linear probe CLIP, following Radford _et al_. [29]. Because we demonstrate DAPT on CLIP with the ViT-B/16 image encoder backbone, we implement CoOp and VPT with ViT-B/16. Additionally, we observed that VPT shows varying accuracy gaps in the few-shot learning setting. For a fair comparison, we search hyperparameters (_i.e._, the learning rate) over the range from 0.002 to 20, as reported in Table 1. The reported values show the average accuracy over the 11 datasets in the 16-shot image classification setting for each learning rate. As a result, we set the learning rate to 0.2 for VPT in all experiments.
**Implementation Details.** We use pre-trained CLIP [29] with ViT-B/16 image encoder backbone from the official repository2. To construct prompts for text and visual encoders, we refer to CoOp and VPT open sources3,4. In all experiments, we evaluate three times on NVIDIA RTX-3090 GPUs and report the average value. More implementation details are included in the supplement.
Footnote 2: [https://github.com/openai/CLIP](https://github.com/openai/CLIP)
Footnote 3: [https://github.com/KaiyangZhou/CoOp](https://github.com/KaiyangZhou/CoOp)
Footnote 4: [https://github.com/KMnP/vpt](https://github.com/KMnP/vpt)
### Few-Shot Learning
Figure 4 summarizes the performance of DAPT in few-shot learning on 11 datasets and the average accuracy. Each plot compares DAPT with baselines. The experiments show DAPT outperforms baselines on most benchmark datasets. In addition, Figure 5 also shows that DAPT consistently outperforms previous prompt tuning methods - CoOp [47] and VPT [16] on all datasets.
**Comparison with Previous Prompt Tuning.** Figure 5(a) demonstrates the results of DAPT and CoOp [47] on 16 shots. As shown in the results, DAPT outperforms CoOp on all datasets. Furthermore, Figure 5(b) presents the comparison of DAPT and VPT [16] under the same experimental setting. As with the CoOp comparison, DAPT outperforms VPT on all datasets, and its accuracy is 12.43% higher than VPT on StanfordCars [19]. To sum up, prompt tuning with DAPT in each modality shows superior performance compared to conventional prompt tuning, as well as zero-shot CLIP and linear probe CLIP.
Figure 4: Few-shot learning in image classification on 11 datasets.
### Domain Generalization

The results are shown in Table 3. From the experimental results, DAPT achieves remarkable performance on unseen data compared to zero-shot CLIP and linear probe CLIP. Compared with CoOp [47] and VPT [16], it slightly decreases accuracy on ImageNet-A and ImageNet-R, respectively. In contrast, on the remaining datasets, ImageNet-V2 and ImageNet-Sketch, DAPT achieves superior performance with a significant accuracy gain.
### Ablation Study
**Effectiveness of Intra-dispersion Loss and Inter-dispersion Loss.** To verify the accuracy improvement from applying the inter-dispersion loss and the intra-dispersion loss, we run few-shot learning experiments on the 11 datasets with 16 training samples per class. As the baseline we use the model combining text prompts, _i.e._, CoOp [47], and visual prompts, _i.e._, VPT [16]. Table 5 shows the performance improvement from applying the inter-dispersion loss and the intra-dispersion loss: compared to the baseline, accuracy improves on most of the 11 datasets and on the average over all datasets. Both losses thus yield performance gains by reconstructing the embedding space across most datasets. In particular, the inter-dispersion loss is especially effective on FGVCAircraft [23], and the intra-dispersion loss on DTD [4]. Interestingly, although adding only one of the two losses slightly decreases accuracy on some datasets, _i.e._, Food101 [2], ImageNet [5], FGVCAircraft [23], Flowers102 [25], and SUN397 [39], DAPT with both losses increases accuracy.
The results support that jointly reconstructing and optimizing the embedding spaces is essential for feature alignment between the modalities. As noted in Table 4, this tendency can also be observed across various numbers of shots: DAPT shows superior performance compared with the combination of CoOp [47] and VPT [16].
**Exploration of Intra-dispersion Loss.** As discussed in Section 3, the prototype \(\mathbf{s}\) is defined as the average of image embeddings in DAPT. To evaluate the prototypes set by
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{\# of training samples in each class} \\ \cline{2-6} & 1 & 2 & 4 & 8 & 16 \\ \hline CoOp+VPT & 61.05 & 68.49 & 73.28 & 78.76 & 81.25 \\ DAPT & **61.42** & **69.95** & **74.91** & **78.98** & **81.62** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation study.** The average accuracy of CoOp+VPT and DAPT on 11 benchmark datasets is presented.
Figure 5: **Comparison with CoOp and VPT**. We show the results of image classification with 16 training samples per dataset. Our method shows an overall improvement compared with CoOp [47] and VPT [16].
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & Source & \multicolumn{4}{c}{Target} \\ \cline{2-6} & ImageNet & -V2 & -Sketch & -A & -R \\ \hline Zero-shot CLIP & 66.72 & 60.90 & 46.10 & 47.75 & 73.97 \\ Linear probe CLIP & 67.42 & 57.19 & 35.97 & 36.19 & 60.10 \\ CoOp & 71.93 & 64.22 & 47.07 & **48.97** & 74.32 \\ VPT & 69.31 & 62.36 & 47.72 & 46.20 & **75.81** \\ DAPT & **72.20** & **64.93** & **48.30** & 48.74 & 75.75 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison on domain generalization.
the mean of embeddings, we conduct an ablation study of the intra-dispersion loss with the prototype set instead by a randomly chosen sample's embedding (DAPT-R). Table 6 shows the image classification results for 1, 2, 4, 8, and 16 shots on the 11 datasets. From the results, we confirmed that clustering around the average of the samples (DAPT) is more effective than clustering around a random sample (DAPT-R), especially when the number of shots is not extremely small (_i.e._, 4, 8, and 16 shots).
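The two prototype choices compared in this ablation can be sketched as follows; the function signature and batching are illustrative assumptions, and only the mean-versus-random choice reflects the actual ablation.

```python
import torch
import torch.nn.functional as F

def class_prototypes(zeroshot_emb, labels, num_classes, random_sample=False):
    """Prototype s per class: the mean of zero-shot CLIP image embeddings (DAPT),
    or one randomly chosen embedding per class (the DAPT-R ablation)."""
    protos = []
    for c in range(num_classes):
        emb_c = zeroshot_emb[labels == c]               # embeddings of class c
        if random_sample:                               # DAPT-R
            protos.append(emb_c[torch.randint(len(emb_c), (1,)).item()])
        else:                                           # DAPT: class mean
            protos.append(emb_c.mean(dim=0))
    return F.normalize(torch.stack(protos), dim=-1)
```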
### Analysis
We investigate whether DAPT learns text and image embeddings as intended - spreading text embeddings and assembling image embeddings within the same classes - by quantitative and qualitative analyses.
**Quantitative Analysis.** We analyze the pairwise distance (pdist) and the area of the convex hull (cvx_hull) of the embeddings. Table 7 shows the average pairwise distance between embeddings relative to zero-shot CLIP, computed as the relative ratio \(\Delta\text{pdist}\) (\%) \(=(1-\text{DAPT}/\text{zero-shot CLIP})\times 100\). The results show that DAPT properly minimizes the average pairwise distance between image embeddings within a class (\(\approx\) intra-dispersion) and maximizes it for text (label) embeddings across classes (\(\approx\) inter-dispersion). This implies that DAPT learns prompts that shape the latent spaces for better feature alignment.
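For reference, the reported quantity can be computed as in the following minimal sketch (the averaging over classes and datasets is omitted, and the helper names are ours):

```python
import torch

def mean_pairwise_dist(emb):
    # average pairwise Euclidean distance within one class (off-diagonal only)
    d = torch.cdist(emb, emb)
    n = emb.shape[0]
    return d.sum() / (n * (n - 1))

def delta_pdist(dapt_emb, zeroshot_emb):
    # relative change (%) of the mean pairwise distance, as in Table 7
    return (1 - mean_pairwise_dist(dapt_emb) / mean_pairwise_dist(zeroshot_emb)) * 100
```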
**Qualitative Analysis.** We verify that DAPT clusters image embeddings more compactly via t-SNE [36] visualizations on several datasets. Figure 6 shows image embeddings for the Flowers102 [25] and UCF101 [35] benchmark datasets. Each point represents an image embedding, and the colors of the points indicate their classes. More t-SNE visualizations are provided in the supplement.
## 5 Limitations and Conclusion
We proposed a distribution-aware prompt tuning method called DAPT for pre-trained vision-language models (VLMs). By considering the distribution of embeddings for prompt-tuning, which is underexplored in the literature, the proposed method significantly improves performance
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{\# of training samples in each class} \\ \cline{2-6} & 1 & 2 & 4 & 8 & 16 \\ \hline CoOp+VPT & 61.05 & 68.49 & 73.28 & 78.76 & 81.25 \\ DAPT-R & **65.08** & **70.51** & 74.85 & 78.11 & 81.29 \\ DAPT & 61.42 & 69.95 & **74.91** & **78.98** & **81.62** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Average performance on 11 datasets.** DAPT-R refers to applying intra-dispersion loss \(\mathcal{L}_{\text{intra}}\) with the randomly defined prototype.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Dataset} & Image & \multicolumn{2}{c}{Text} \\ \cline{2-4} & \(\Delta\text{pdist}\) (\%) & \(\Delta\text{pdist}\) (\%) & \(\text{cvx\_hull}\) \\ \hline OxfordPets & 0.2 & 23.4 & 65.8 \\ Flowers102 & -0.3 & 36.4 & 145.1 \\ FGVCAircraft & -9.3 & 62.5 & 246.4 \\ DTD & -6.4 & 119.5 & 713.8 \\ EuroSAT & -22.2 & 117.1 & 921.9 \\ StanfordCars & -2.4 & 30.6 & 87.8 \\ Food101 & -4.6 & 27.8 & 115.8 \\ SUN397 & -9.9 & 33.2 & 148.3 \\ Caltech101 & -2.8 & 45.2 & 115.3 \\ UCF101 & -1.4 & 51.1 & 251.4 \\ ImageNet & -0.3 & 0.2 & -12.2 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Analysis of embeddings from DAPT.** Pairwise distances (pdist) and convex hull areas (cvx_hull) of embeddings from DAPT compared to zero-shot CLIP. The relative ratios \((1-\text{DAPT}/\text{zero-shot CLIP})\times 100\) are reported.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & OxfordPets & Flowers102 & FGVCAircraft & DTD & EuroSAT & StanfordCars \\ \hline CoOp+VPT & 91.90 & 96.89 & 46.06 & 69.86 & 91.77 & 82.78 \\ CoOp+VPT w/ \(\mathcal{L}_{\text{inter}}\) & 91.97 & 96.85 & 46.52 & 70.06 & 92.01 & 82.95 \\ CoOp+VPT w/ \(\mathcal{L}_{\text{intra}}\) & 91.97 & 97.03 & 45.90 & 70.76 & 92.16 & 83.14 \\ DAPT & **92.27** & **97.06** & **46.37** & **71.38** & **92.65** & **83.03** \\ \hline Method & Food101 & SUN397 & Caltech101 & UCF101 & ImageNet & Average \\ \hline CoOp+VPT & 86.52 & 75.88 & 95.70 & 84.23 & 72.14 & 81.25 \\ CoOp+VPT w/ \(\mathcal{L}_{\text{inter}}\) & 86.42 & 75.71 & 95.71 & 84.27 & 72.07 & 81.33 \\ CoOp+VPT w/ \(\mathcal{L}_{\text{intra}}\) & 86.47 & 75.90 & 95.74 & 84.35 & 72.15 & 81.41 \\ DAPT & **86.55** & **75.99** & **95.82** & **84.53** & **72.20** & **81.62** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Ablation study.** We compare DAPT with the baseline, _i.e._, CoOp [47]+VPT [16], on 11 benchmark datasets. We observe performance gains on most datasets by adding the losses one by one.
while maintaining the merits of existing prompt-tuning methods. In this paper, we present the inter-dispersion loss and intra-dispersion loss that appropriately optimize the text and visual latent spaces of VLMs, allowing us to achieve higher performance in downstream tasks using only prompts without additional layers. Although the proposed method significantly improves overall performance, it is still challenging to optimize prompts in the extreme few-shot settings, such as 1-shot and 2-shot. Lastly, it will be an interesting future direction to apply it to various downstream applications beyond image classification.
## Acknowledgments
This research was supported in part by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience program (IITP-2023-2020-0-01819) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation); the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2023R1A2C2005373); and the Google Cloud Research Credits program with the award (MQMD-JNER-1YQC-YAQ5).
| Pre-trained Vision-Language Models (VLMs) have shown remarkable performance on various downstream tasks by leveraging knowledge learned from large-scale data. In general, the performance of a VLM on a target task can be further improved by prompt tuning, which adds context to the input image or text using data from the target task, and a variety of prompt tuning methods have been studied in the literature. A key element of prompt tuning is the feature-space alignment between the two modalities via learnable vectors while the model parameters are kept frozen. Motivated by the observation that this alignment becomes more effective when the embeddings of each modality are well arranged in the latent space, we propose distribution-aware prompt tuning (DAPT), which is simple yet effective. |
2309.15264 | The $W(E_6)$-invariant birational geometry of the moduli space of marked
cubic surfaces | The moduli space $Y = Y(E_6)$ of marked cubic surfaces is one of the most
classical moduli spaces in algebraic geometry, dating back to the nineteenth
century work of Cayley and Salmon. Modern interest in $Y$ was restored in the
1980s by Naruki's explicit construction of a $W(E_6)$-equivariant smooth
projective compactification $\overline{Y}$ of $Y$, and in the 2000s by Hacking,
Keel, and Tevelev's construction of the KSBA stable pair compactification
$\widetilde{Y}$ of $Y$ as a natural sequence of blowups of $\overline{Y}$. We
describe generators for the cones of $W(E_6)$-invariant effective divisors and
curves of both $\overline{Y}$ and $\widetilde{Y}$. For Naruki's
compactification $\overline{Y}$, we further obtain a complete stable base locus
decomposition of the $W(E_6)$-invariant effective cone, and as a consequence
find several new $W(E_6)$-equivariant birational models of $\overline{Y}$.
Furthermore, we fully describe the log minimal model program for the KSBA
compactification $\widetilde{Y}$, with respect to the divisor
$K_{\widetilde{Y}} + cB + dE$, where $B$ is the boundary and $E$ is the sum of
the divisors parameterizing marked cubic surfaces with Eckardt points. | Nolan Schock | 2023-09-26T20:47:34 | http://arxiv.org/abs/2309.15264v2 | # The \(W(E_{6})\)-invariant birational geometry of the moduli space of marked cubic surfaces
###### Abstract.
The moduli space \(Y=Y(E_{6})\) of marked cubic surfaces is one of the most classical moduli spaces in algebraic geometry, dating back to the nineteenth century work of Cayley and Salmon. Modern interest in \(Y\) was restored in the 1980s by Naruki's explicit construction of a \(W(E_{6})\)-equivariant smooth projective compactification \(\overline{Y}\) of \(Y\) [10], and in the 2000s by Hacking, Keel, and Tevelev's construction of the KSBA stable pair compactification \(\widetilde{Y}\) of \(Y\) as a natural sequence of blowups of \(\overline{Y}\)[13]. We describe generators for the cones of \(W(E_{6})\)-invariant effective divisors and curves of both \(\overline{Y}\) and \(\widetilde{Y}\). For Naruki's compactification \(\overline{Y}\), we further obtain a complete stable base locus decomposition of the \(W(E_{6})\)-invariant effective cone, and as a consequence find several new \(W(E_{6})\)-equivariant birational models of \(\overline{Y}\). Furthermore, we fully describe the log minimal model program for the KSBA compactification \(\widetilde{Y}\), with respect to the divisor \(K_{\widetilde{Y}}+cB+dE\), where \(B\) is the boundary and \(E\) is the sum of the divisors parameterizing marked cubic surfaces with Eckardt points.
## 1. Introduction
The moduli space \(Y=Y(E_{6})\) of marked cubic surfaces is one of the most classical moduli spaces in algebraic geometry, essentially dating back to the original works of Cayley and Salmon studying the 27 lines on a cubic surface [11]. There is a natural action of the Weyl group \(W(E_{6})\) on \(Y\), permuting the 27 lines, and using ideas dating back to Cayley and Coble [12], in the 1980s Naruki explicitly constructed a remarkable \(W(E_{6})\)-equivariant smooth projective compactification \(\overline{Y}=\overline{Y}(E_{6})\) of \(Y\), with simple normal crossings boundary [10]. Naruki's compactification and related spaces have since been intensively studied from a number of different perspectives, see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20].
The goal of the present article is to study the \(W(E_{6})\)-invariant birational geometry of \(\overline{Y}\), together with a related space \(\widetilde{Y}=\widetilde{Y}(E_{6})\), a blowup of \(\overline{Y}\) along intersections of certain boundary divisors. The space \(\widetilde{Y}\) is of significant interest, as it is the KSBA stable pair compactification (cf. [18]) of the moduli space of pairs \((S,B)\), where \(S\) is a smooth cubic surface without Eckardt points, and \(B\) is the sum of its finitely many lines [13, Theorem 10.31]. (More generally, if \(c\in(1/9,1]\), then the KSBA compactification of the moduli space of weighted marked cubic surfaces \((S,cB)\) is either \(\overline{Y}\), for \(c\in(1/9,1/4]\), or \(\widetilde{Y}\), for \(c\in(1/4,1]\)[15, Theorem 1.2].)
To state our main results, let us first recall Naruki's compactification \(\overline{Y}\) in more detail. The space \(\overline{Y}\) is a 4-dimensional smooth projective variety with 76 irreducible boundary divisors, in bijection with certain root subsystems of \(E_{6}\): 36 \(A_{1}\) divisors \(\cong\overline{M}_{0,6}\) (parameterizing marked cubic surfaces with \(A_{1}\) singularities) and 40 \(A_{2}^{3}=A_{2}\times A_{2}\times A_{2}\) divisors \(\cong(\mathbb{P}^{1})^{3}\) (parameterizing marked cubic surfaces which are the union of three planes, e.g., \(xyz=0\)). A collection of such boundary divisors intersect if and only if the corresponding root subsystems are pairwise orthogonal or nested. If \(\Theta_{1},\dots,\Theta_{k}\) are a collection of such root subsystems, we say the corresponding boundary stratum of \(\overline{Y}\) is of type \(\Theta_{1}\cdots\Theta_{k}\). We write \(B_{\Theta_{1}\cdots\Theta_{k}}\) for the sum of the boundary strata of the given type; thus, for instance, \(B_{2A_{1}A_{2}^{3}}\) is the sum of the curves formed by the intersections of 2 \(A_{1}\) divisors and 1 \(A_{2}^{3}\) divisor.
Our first main result is a complete description of the cones of \(W(E_{6})\)-invariant effective divisors and curves on \(\overline{Y}\).
**Theorem 1.1** (Theorems 3.1 and 3.2).:
1. _The_ \(W(E_{6})\)_-invariant effective cone of_ \(\overline{Y}(E_{6})\) _is the closed cone spanned by the sum_ \(B_{A_{1}}\) _of the boundary divisors of type_ \(A_{1}\)_, and the sum_ \(B_{A_{2}^{3}}\) _of the boundary divisors of type_ \(A_{2}^{3}\)
2. _The_ \(W(E_{6})\)_-invariant cone of curves of_ \(\overline{Y}(E_{6})\) _is the closed cone spanned by the sum_ \(B_{3A_{1}}\) _of the curves of type_ \(3A_{1}\)_, and the sum_ \(B_{2A_{1}A_{2}^{3}}\) _of the curves of type_ \(2A_{1}A_{2}^{3}\)_. In particular, a_ \(W(E_{6})\)_-invariant divisor on_ \(\overline{Y}(E_{6})\) _is nef (resp. ample) if it intersects the curves of type_ \(3A_{1}\) _and_ \(2A_{1}A_{2}^{3}\) _nonnegatively (resp. positively)._
As a corollary, we also obtain a complete description of the \(W(E_{6})\)-invariant nef cone of \(\overline{Y}\).
**Corollary 1.2** (Corollary 3.3).: _The \(W(E_{6})\)-invariant nef cone of \(\overline{Y}\) is spanned by \(B_{A_{1}}+3B_{A_{2}^{3}}\) and \(B_{A_{1}}+B_{A_{2}^{3}}\)._
Recall the stable base locus of a \(\mathbb{Q}\)-divisor \(D\) is the set \(\mathbb{B}(D)=\bigcap_{m\geq 0}\operatorname{Bs}(mD)\), where \(m\) is taken to be sufficiently divisible so that \(mD\) is a \(\mathbb{Z}\)-divisor, and \(\operatorname{Bs}(mD)\) is the set-theoretic base locus of \(mD\). The variety \(\overline{Y}\) is rational, and linear, numerical, and homological equivalence coincide on \(\overline{Y}\) (see [10] and [11, Theorem 1.9]). Therefore, there is a well-behaved decomposition of the pseudoeffective cone of \(\overline{Y}\) into chambers determined by the stable base loci of the corresponding divisors. We completely describe this stable base locus decomposition for the \(W(E_{6})\)-invariant effective cone of \(\overline{Y}\).
For two \(\mathbb{Q}\)-divisors \(D_{1},D_{2}\), we write \((D_{1},D_{2}]\) for the set of divisors of the form \(\alpha D_{1}+\beta D_{2}\) with \(\alpha\geq 0\), \(\beta>0\), and similarly for \([D_{1},D_{2})\), \((D_{1},D_{2})\), \([D_{1},D_{2}]\). A \(\mathbb{Q}\)-divisor \(D\) is in \((D_{1},D_{2}]\) if and only if it lies in the cone spanned by the rays through \(D_{1}\) and through \(D_{2}\), but does not lie on the ray through \(D_{1}\).
**Theorem 1.3** (Theorem 3.5).: _Let \(D\) be a \(W(E_{6})\)-invariant effective \(\mathbb{Q}\)-divisor on \(\overline{Y}\)._
1. _If_ \(D\in[B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}+B_{A_{2}^{3}}]\) _then_ \(D\) _is nef and semi-ample. In particular,_ \(\mathbb{B}(D)=\emptyset\)_._
2. _If_ \(D\in(B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{2}^{3}}]\)_, then_ \(\mathbb{B}(D)=B_{A_{2}^{3}}\) _is the sum of the_ \(A_{2}^{3}\) _divisors._
3. _If_ \(D\in(B_{A_{1}}+B_{A_{2}^{3}},5B_{A_{1}}+3B_{A_{2}^{3}}]\)_, then_ \(\mathbb{B}(D)=B_{2A_{1}}\) _is the sum of the_ \(2A_{1}\) _surfaces._
4. _If_ \(D\in(5B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}]\)_, then_ \(\mathbb{B}(D)=B_{A_{1}}\) _is the sum of the_ \(A_{1}\) _divisors._
Theorem 1.3 is pictured in Fig. 1.
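As a sample computation in this decomposition, Proposition 2.6 below gives the class of the sum \(E\) of the 45 Eckardt divisors as \(E=\frac{25B_{A_{1}}+27B_{A_{2}^{3}}}{4}\), and by Table 1 below,
\[E\cdot C_{3A_{1}}=\frac{25(-2)+27(2)}{4}=1>0\quad\text{and}\quad E\cdot C_{2A_{1}A_{2}^{3}}=\frac{25(3)+27(-1)}{4}=12>0,\]
so \(E\) is ample on \(\overline{Y}\) by Theorem 1.1(2).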
For an effective \(\mathbb{Q}\)-divisor \(D\) on \(\overline{Y}\), let \(\overline{Y}(D)=\operatorname{Proj}\bigoplus_{m\geq 0}H^{0}(\overline{Y},mD)\) be the associated projective model (if it exists), where the sum is taken over \(m\) sufficiently divisible so that \(mD\) is a \(\mathbb{Z}\)-divisor. We completely describe the \(W(E_{6})\)-invariant birational models of \(\overline{Y}\) appearing in the stable base locus decomposition of the \(W(E_{6})\)-invariant effective cone of \(\overline{Y}\).
**Theorem 1.4** (Theorem 3.6).: _Let \(D\) be a \(W(E_{6})\)-invariant effective \(\mathbb{Q}\)-divisor on \(\overline{Y}\)._
1. _If_ \(D\in(B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}+B_{A_{2}^{3}})\)_, then_ \(D\) _is ample, and_ \(\overline{Y}(D)=\overline{Y}\)_._
2. _If_ \(D\in[B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{2}^{3}})\)_, then_ \(\overline{Y}(D)\) _is the GIT moduli space_ \(\overline{M}\) _of marked cubic surfaces. The morphism_ \(\pi:\overline{Y}\to\overline{M}\) _is given by contracting the 40_ \(A_{2}^{3}\) _divisors of_ \(\overline{Y}\) _to singular points, each locally isomorphic to the cone over the Segre embedding of_ \((\mathbb{P}^{1})^{3}\)_._
3. _If_ \(D\) _is a multiple of_ \(B_{A_{1}}+B_{A_{2}^{3}}\)_, then_ \(D\) _is semi-ample and the morphism_ \(\phi:\overline{Y}\to\overline{Y}(D)=:\overline{W}\) _associated to_ \(|mD|\) _for_ \(m\gg 0\) _is given by contracting each_ \(2A_{1}\) _surface_ \(S\cong Bl_{4}\mathbb{P}^{2}\) _to a singular line via the strict transform of the linear system of conics on_ \(\mathbb{P}^{2}\) _passing through the 4 blown up points._
4. _If_ \(D\in(B_{A_{1}}+B_{A_{2}^{3}},B_{A_{1}})\)_, then_ \(\overline{Y}(D)\) _is the_ \(D\)_-flip_ \(\overline{X}\) _of the small contraction_ \(\phi:\overline{Y}\to\overline{W}\) _of part (3)._
5. _If_ \(D\) _is a multiple of_ \(B_{A_{1}}\) _or_ \(B_{A_{2}^{3}}\)_, then_ \(\overline{Y}(D)\) _is a point._
Figure 1. The stable base locus decomposition of the \(W(E_{6})\)-invariant effective cone of \(\overline{Y}\).
In Theorem 1.4, the GIT moduli space \(\overline{M}\) of marked cubic surfaces is classically known; it is the natural \(W(E_{6})\)-cover of the GIT moduli space of cubic surfaces induced by choosing a marking, see for instance [15, Section 2]. The contraction \(\pi:\overline{Y}\to\overline{M}\) is explicitly constructed in Naruki's original work [16]. The other birational models in Theorem 1.4 appear to be new.
Our second moduli space of interest is the moduli space \(\widetilde{Y}=\widetilde{Y}(E_{6})\) of KSBA stable marked cubic surfaces [10]. The space \(\widetilde{Y}\) is the blowup of \(\overline{Y}\) along all intersections of \(A_{1}\) divisors, in increasing order of dimension. The boundary of \(\widetilde{Y}\) consists of 5 types of irreducible divisors, which we label by \(a,b,a_{2},a_{3},a_{4}\). These are, respectively, the strict transforms of the \(A_{1}\) and \(A_{2}^{3}\) divisors, and the (strict transforms of the) exceptional divisors over the intersections of \(2,3,4\)\(A_{1}\) divisors. The moduli space \(\widetilde{Y}\) also has 45 Eckardt divisors, parameterizing marked cubic surfaces with Eckardt points. We label the Eckardt divisors by type \(e\), and we choose not to consider them as part of the boundary. We follow similar notation for strata of \(\widetilde{Y}\) as for \(\overline{Y}\).
**Theorem 1.5** (Theorems 5.10 and 5.11).:
1. _The_ \(W(E_{6})\)_-invariant effective cone of_ \(\widetilde{Y}(E_{6})\) _is the closed cone spanned by the_ \(W(E_{6})\)_-invariant boundary divisors_ \(B_{a}\)_,_ \(B_{a_{2}}\)_,_ \(B_{a_{3}}\)_,_ \(B_{a_{4}}\)_, and_ \(B_{b}\)_._
2. _The_ \(W(E_{6})\)_-invariant cone of curves of_ \(\widetilde{Y}(E_{6})\) _is the closed cone spanned by the_ \(W(E_{6})\)_-invariant curves of types_ \[aa_{2}a_{3},aa_{2}a_{4},aa_{3}a_{4},a_{2}a_{3}a_{4},aa_{2}b,aa_{3}b,a_{2}a_{ 3}b,aa_{2}e.\] _(These are the boundary curves of_ \(\widetilde{Y}(E_{6})\)_, together with the curves of type_ \(aa_{2}e\) _formed by the intersection of one type_ \(a\) _divisor, one type_ \(a_{2}\) _divisor, and one Eckardt divisor.) In particular, a_ \(W(E_{6})\)_-invariant divisor on_ \(\widetilde{Y}(E_{6})\) _is nef (resp. ample) if it intersects the curves of the above types nonnegatively (resp. positively)._
The \(W(E_{6})\)-invariant effective cone of \(\widetilde{Y}\) is 5-dimensional and its stable base locus decomposition appears to be quite complicated in comparison to that of \(\overline{Y}\). We describe an interesting slice of this decomposition, by describing the log minimal models of the pair \((\widetilde{Y},cB+dE)\), with respect to the divisor \(K_{\widetilde{Y}}+cB+dE\), where \(B\) is the sum of the boundary divisors and \(E\) is the sum of the Eckardt divisors.
**Theorem 1.6** (Theorem 5.15).: _Fix \(0\leq c\leq 1\) and \(0\leq d\leq 2/3\). Then the pair \((\widetilde{Y},cB+dE)\) has log canonical singularities. The log canonical models of \((\widetilde{Y},cB+dE)\) are as follows._
1. _If_ \(4c+25d<1\)_, then_ \(K_{\widetilde{Y}}+cB+dE\) _is not effective._
2. _If_ \(4c+25d=1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is a point._
3. _If_ \(2c+12d\leq 1\) _and_ \(4c+25d>1\) _then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is the GIT moduli space_ \(\overline{M}\) _of marked cubic surfaces._
4. _If_ \(c+4d\leq 1\) _and_ \(2c+12d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is Naruki's compactification_ \(\overline{Y}\)_._
5. _If_ \(c+3d\leq 1\) _and_ \(c+4d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is the blowup_ \(\overline{Y}_{1}\) _of_ \(\overline{Y}\) _along the intersections of 4_ \(A_{1}\) _divisors._
6. _If_ \(c+2d\leq 1\) _and_ \(c+3d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is the blowup_ \(\overline{Y}_{2}\) _of_ \(\overline{Y}_{1}\) _along the strict transforms of the intersections of 3_ \(A_{1}\) _divisors._
7. _If_ \(c+2d>1\)_, then_ \(K_{\widetilde{Y}}+cB+dE\) _is ample, so_ \((\widetilde{Y},cB+dE)\) _is already its own log canonical model. Recall this is the blowup of_ \(\overline{Y}_{2}\) _along the strict transforms of the intersections of 2_ \(A_{1}\) _divisors._
Theorem 1.6 is pictured in Fig. 2. Note that when \(d=0\), the log canonical model of \((\widetilde{Y},B)\) is Naruki's compactification \(\overline{Y}\). (This is also shown in [10].) The log minimal model program for \((\overline{Y},cB_{\overline{Y}})\) is captured by the bottom line of Fig. 2; this is also captured by the left half of Fig. 1.
The analogue of Theorem 1.6 for moduli of curves is sometimes called the Hassett-Keel program, and has seen significant interest as a method to interpolate between different geometrically interesting compactifications of the moduli space of curves, see for instance [12, 13]. Recently, a version of the Hassett-Keel program for moduli of K3 surfaces, dubbed the _Hassett-Keel-Looijenga program_, has seen intensive study, see for instance [1, 2, 13]. Theorem 1.6 is another higher-dimensional generalization of the Hassett-Keel program.
### Context
The study of the birational geometry of moduli spaces is a topic of great interest, see for instance [15, 1
to fixing an isomorphism \(\operatorname{Pic}^{0}(D)\cong\mathbb{C}^{*}\) (cf. [12, Lemma 1.6]). The orthogonal complement \(D^{\perp}:=\langle D_{1},D_{2},D_{3}\rangle^{\perp}\) in \(\operatorname{Pic}S\) is isomorphic to the \(D_{4}\) root lattice \(\Lambda(D_{4})\) [13, Appendix by E. Looijenga]. It follows that \(\operatorname{Hom}(D^{\perp},\operatorname{Pic}^{0}(D))\cong\operatorname{Hom}(\Lambda(D_{4}),\mathbb{C}^{*})\) is identified with the algebraic torus \(T_{\Lambda(D_{4})}\cong(\mathbb{C}^{*})^{4}\) with character lattice \(\Lambda(D_{4})\). The period map \(Z\to\operatorname{Hom}(D^{\perp},\operatorname{Pic}^{0}(D))\) sending \((S,m,D)\) to \((\alpha\mapsto\alpha|_{D})\) induces an isomorphism of \(Z\) with the complement in \(T_{\Lambda(D_{4})}\) of the _root hypertori_ \(H_{\alpha}=\{\chi\mid\chi(\alpha)=1\}\), for \(\alpha\) a positive root of \(D_{4}\). (Note \(H_{\alpha}=H_{-\alpha}\).) For more details we refer to [13, Appendix]; see also related ideas in [14, 15, 16].
There is a natural compactification \(\overline{Z}\) of \(Z\), the so-called _minimal wonderful compactification_ (cf. [1]), which Naruki describes as follows [13]. Let \(X(\Sigma)\) be the \(D_{4}\) Coxeter toric variety, i.e., the toric variety with torus \(T_{\Lambda(D_{4})}\) whose maximal cones are the \(D_{4}\) Weyl chambers (cf. [1]). Connected components of intersections of the root hypertori \(H_{\alpha}\) in \(T_{\Lambda(D_{4})}\) correspond to root subsystems of \(D_{4}\) whose Dynkin diagrams look like Dynkin diagrams obtained by deleting vertices from the extended Dynkin diagram of \(D_{4}\)[1, Theorem 7.17]. Blowing up \(X(\Sigma)\) in the closures of the subvarieties of \(T_{\Lambda(D_{4})}\) corresponding to irreducible root subsystems of \(D_{4}\) produces a smooth projective variety \(\overline{Z}\) such that \(\overline{Z}\setminus Z\) is a simple normal crossings divisor. Explicitly, \(\overline{Z}\) is obtained from \(X(\Sigma)\) by blowing up \(1\) point (the identity of \(T_{\Lambda(D_{4})}\)) corresponding to the \(D_{4}\) root system itself, then \(12\) curves corresponding to \(A_{3}\subset D_{4}\) root subsystems, then \(16\) surfaces corresponding to \(A_{2}\subset D_{4}\) root subsystems.
Naruki shows that the divisors of \(\overline{Z}\) over the \(A_{3}\) curves are each isomorphic to \(\mathbb{P}^{1}\times\overline{M}_{0,5}\), and each can be blown down via the second projection \(\mathbb{P}^{1}\times\overline{M}_{0,5}\to\overline{M}_{0,5}\)[13]. The resulting variety is his "cross-ratio variety" \(\overline{Y}\), compactifying the moduli space of marked cubic surfaces. (On the interiors, the map \(\overline{Z}\to\overline{Y}\) is identified with the natural map forgetting the anticanonical cycle on the marked cubic surface.)
### Divisors on \(\overline{Y}\)
There are three natural types of divisors of interest on Naruki's compactification.
1. \(36\)\(A_{1}\) divisors (corresponding to \(A_{1}\subset E_{6}\) root subsystems), parameterizing marked cubic surfaces with \(A_{1}\) singularities. These appear on \(\overline{Y}\) as the images of the \(12\)\(A_{1}\subset D_{4}\) root hypersurfaces of \(\overline{Z}\), and \(24\) of the \(48\) boundary divisors of \(X(\Sigma)\), see [13, 14].
2. \(40\)\(A_{2}^{3}\) divisors (corresponding to \(A_{2}^{3}=A_{2}\times A_{2}\times A_{2}\subset E_{6}\) root subsystems), parameterizing marked cubic surfaces which are the reducible union of three planes meeting transversally (e.g., \(xyz=0\) in \(\mathbb{P}^{3}\)). These appear on \(\overline{Y}\) as the images of the \(16\)\(A_{2}\subset D_{4}\) divisors of \(\overline{Z}\), and the remaining \(24\) of the \(48\) boundary divisors of \(X(\Sigma)\), again see [13, 14].
3. \(45\) Eckardt divisors, parameterizing marked cubic surfaces with Eckardt points. These appear on \(\overline{Y}\) as the images of the \(D_{4}\) divisor of \(\overline{Z}\) (the strict transform of the exceptional divisor over the identity of the torus \(T_{\Lambda(D_{4})}\)), and \(44\) more divisors of \(\overline{Z}\) obtained from hypersurfaces of \(T_{\Lambda(D_{4})}\) which are explicitly written down in [13]. The Eckardt divisors are labeled as follows. Suppose \(S\) is a smooth cubic surface obtained by blowing up \(6\) points \(p_{1},\dots,p_{6}\) in \(\mathbb{P}^{2}\) in general position. Then the lines on \(S\) are labeled by \(e_{i}\) for \(i=1,\dots,6\) (the exceptional divisor over \(p_{i}\)), \(c_{i}\) for \(i=1,\dots,6\) (the conic passing through \(p_{j}\) for \(j\neq i\)), and \(\ell_{ij}\) for \(ij\subset[6]\) (the line through \(p_{i}\) and \(p_{j}\)). The possible Eckardt points on \(S\) are given by two types of triples of lines: \(30\) of the form \(\{e_{i},c_{j},\ell_{ij}\}\), and \(15\) of the form \(\{\ell_{ij},\ell_{kl},\ell_{mn}\}\). These triples give the labels of the \(45\) Eckardt divisors. We say that two Eckardt divisors have a common line if the corresponding triples have a common line. (In [13, 14, 15], Eckardt divisors are instead called _tritangent_ divisors.)
We write \(B_{A_{1}}\) for the sum of the \(A_{1}\) divisors and \(B_{A_{2}^{3}}\) for the sum of the \(A_{2}^{3}\) divisors.
From Naruki's construction of \(\overline{Y}\) one is able to read off a wealth of information concerning these divisors and their intersections, as summarized below. For more information we refer to [13, 15, 14].
**Notation 2.1**.: Recall that the divisors of \(\overline{M}_{0,n}\) are labeled by \(D_{I}=D_{I,I^{c}}\), with \(|I|,|I^{c}|\geq 2\), parameterizing stable genus \(0\) curves with \(2\) irreducible components, such that the points from \(I\) are on one component, and the remaining points on the other component. When we are not concerned with the particular boundary divisor, we write \(D_{|I|}\) to denote a boundary divisor of the form \(D_{I}\), and we write \(B_{|I|}\) for the sum of all such divisors.
We also recall the Keel-Vermeire divisors on \(\overline{M}_{0,6}\)[14]. For \((ij)(kl)(mn)\in S_{6}\), let \(\overline{M}_{0,6}\to\overline{\mathcal{M}}_{3}\) be the morphism sending a genus \(0\) curve with \(6\) marked points to a genus \(3\) curve by identifying the marked points \(i\) and \(j\), \(k\) and \(l\), and \(m\) and \(n\). The Keel-Vermeire divisor associated to \((ij)(kl)(mn)\) is the pullback of
the hyperelliptic locus of \(\overline{\mathcal{M}}_{3}\). There are 15 Keel-Vermeire divisors on \(\overline{M}_{0,6}\), and they are effective divisors which cannot be written as effective sums of boundary divisors.
**Proposition 2.2**.: _There are 36 \(A_{1}\) divisors on \(\overline{Y}\). Let \(D=D_{A_{1}}\) be an \(A_{1}\) divisor. Then \(D\cong\overline{M}_{0,6}\), and the nonempty intersections of \(D\) with the Eckardt divisors and the other boundary divisors of \(\overline{Y}\) are as follows._
1. \(D\) _intersects 15 other_ \(A_{1}\) _divisors,_ \(D_{A_{1}^{\prime}}\) _for_ \(A_{1}^{\prime}\) _orthogonal to_ \(A_{1}\)_, in the 15 divisors_ \(D_{ij}\) _on_ \(\overline{M}_{0,6}\)_._
2. \(D\) _intersects 10_ \(A_{2}^{3}\) _divisors,_ \(D_{A_{2}^{3}}\) _for_ \(A_{1}\subset A_{2}^{3}\)_, in the 10 divisors_ \(D_{ijk}\) _on_ \(\overline{M}_{0,6}\)_._
3. \(D\) _intersects every Eckardt divisor_ \(D_{e}\)_._ 1. _If the lines of_ \(D_{e}\) _are lines on the_ \(A_{1}\)_-nodal cubic surface generically parameterized by_ \(D\)_, then_ \(D_{e}\) _restricts to a Keel-Vermeire divisor on_ \(\overline{M}_{0,6}\)_. (There are 15 of these.)_ 2. _Otherwise, there is a unique second Eckardt divisor_ \(D_{e}^{\prime}\) _and a unique second_ \(A_{1}\) _divisor_ \(D_{A_{1}^{\prime}}\)_, such that_ \[D_{e}\cap D_{A_{1}}=D_{e}^{\prime}\cap D_{A_{1}^{\prime}}=D_{A_{1}}\cap D_{A_{1 }^{\prime}}=D_{ij}.\] _(There are 15 such pairs.)_
_The class of the restriction of \(D_{A_{1}}\) to itself is_
\[\frac{-B_{2}-3B_{3}}{5}.\]
Proof.: This all follows from Naruki's explicit construction of \(\overline{Y}\) [13], also described in greater detail in [11, 12, 13]. The only part not explained in detail in _loc. cit._ is the description of the intersections with the Eckardt divisors which give the Keel-Vermeire divisors on \(\overline{M}_{0,6}\). For this, it suffices to consider one particular example. Suppose \(D\) is the \(A_{1}\) divisor generically parameterizing cubic surfaces obtained by blowing up 6 points on a conic. Then the three lines \(\ell_{ij}\), \(\ell_{kl}\), \(\ell_{mn}\) intersect in a point precisely when the genus 0 curve given by the 6 points on the conic lies in the pullback of the hyperelliptic locus of \(\overline{\mathcal{M}}_{3}\). Thus the intersection with the Eckardt divisor given by \(\{\ell_{ij},\ell_{kl},\ell_{mn}\}\) is the Keel-Vermeire divisor of type \((ij)(kl)(mn)\).
**Proposition 2.3**.: _There are 40 \(A_{2}^{3}\) divisors on \(\overline{Y}\). Let \(D=D_{A_{2}^{3}}\) be an \(A_{2}^{3}\) divisor. Then \(D\cong(\mathbb{P}^{1})^{3}\), and the nonempty intersections of \(D\) with the Eckardt divisors and the other boundary divisors of \(\overline{Y}\) are as follows._
1. \(D\) _intersects 9_ \(A_{1}\) _divisors,_ \(D_{A_{1}}\) _for_ \(A_{1}\subset A_{2}^{3}\)_, in the 9 divisors_ \(p\times\mathbb{P}^{1}\times\mathbb{P}^{1}\)_,_ \(\mathbb{P}^{1}\times p\times\mathbb{P}^{1}\)_,_ \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times p\)_, for_ \(p\in\{0,1,\infty\}\)_._
2. \(D\) _is disjoint from the other_ \(A_{2}^{3}\) _divisors._
3. \(D\) _intersects 18 Eckardt divisors in 18 smooth hypersurfaces, 6 each of the classes_ \(h_{1}+h_{2}\)_,_ \(h_{1}+h_{3}\)_,_ \(h_{2}+h_{3}\)_. If one fixes the_ \(k\)_th_ \(\mathbb{P}^{1}\)_, then the hypersurfaces of class_ \(h_{i}+h_{j}\) _are determined uniquely by specifying that they pass through the three distinct points_ \(p_{1}\times q_{1},p_{2}\times q_{2},p_{3}\times q_{3}\) _in the remaining_ \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)_, where_ \(p_{l},q_{l}\in\{0,1,\infty\}\)_._
_The class of the restriction of \(D_{A_{2}^{3}}\) to itself is_
\[-h_{1}-h_{2}-h_{3}.\]
Proof.: Again this all follows from Naruki's construction [13], as described in detail in [11, 12, 13, 14] (see also [13, Proposition 5.4]).
**Proposition 2.4**.: _There are 45 Eckardt divisors on \(\overline{Y}\). Let \(D=D_{e}\) be an Eckardt divisor. Then \(D\) is isomorphic to the minimal wonderful compactification \(\overline{X}(D_{4})\) of the \(D_{4}\) hyperplane arrangement \(x_{i}=\pm x_{j}\) in \(\mathbb{P}^{3}\), obtained by blowing up the 12 points corresponding to \(A_{3}\subset D_{4}\) root subsystems, followed by the 16 lines corresponding to \(A_{2}\subset D_{4}\) hyperplanes. The intersections of \(D\) with the other Eckardt divisors and the boundary divisors of \(\overline{Y}\) are as follows._
1. \(D\) _intersects every_ \(A_{1}\) _divisor._ 1. _If the lines of_ \(D\) _are lines on the_ \(A_{1}\)_-nodal cubic surface generically parameterized by_ \(D_{A_{1}}\)_, then_ \(D_{A_{1}}\) _restricts to one of the_ \(D_{4}\) _hyperplanes on_ \(D\)_. There are 12 such intersections, each isomorphic to_ \(Bl_{3}\mathbb{P}^{2}\)_._ 2. _Otherwise, there is a unique second_ \(A_{1}\) _divisor_ \(D_{A_{1}^{\prime}}\) _such that_ \[D\cap D_{A_{1}}=D\cap D_{A_{1}^{\prime}}=D_{A_{1}}\cap D_{A_{1}^{\prime}}.\]
_On_ \(D\)_, this intersection is the strict transform of the exceptional divisor over one of the blown up points. There are 12 such pairs of intersections. Each intersection is isomorphic to_ \(Bl_{4}\mathbb{P}^{2}\cong\overline{M}_{0,5}\)_._
2. \(D\) _intersects 16_ \(A_{2}^{3}\) _divisors. These intersections occur precisely when the lines of_ \(D\) _all lie on the same one of the three planes in the reducible cubic surface generically parameterized by the given_ \(A_{2}^{3}\) _divisor. These intersections give the exceptional divisors over the 16 blown up lines on_ \(D\)_. Each such intersection is isomorphic to_ \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)_._
3. \(D\) _intersects every other Eckardt divisor._ 1. _If_ \(D^{\prime}\) _is an Eckardt divisor having no common lines with_ \(D\)_, then there is a unique third Eckardt divisor_ \(D^{\prime\prime}\) _such that the intersection of any 2 of_ \(D,D^{\prime},D^{\prime\prime}\) _is the same as the intersection of all 3. There are 16 such pairs, restricting in_ \(D\) _to the strict transforms of 16 smooth quadrics in_ \(\mathbb{P}^{3}\)_. Each such quadric is determined by one of the blown-up lines, by the condition that the quadric passes through all of the blown-up points except for the three lying on the chosen line_ _[_10_, Section 6.7]__. In particular, each such intersection is isomorphic to the blowup of_ \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) _at 9 points_ \(p\times q\) _for_ \(p,q\in\{0,1,\infty\}\)_._ 2. _If_ \(D\) _and_ \(D^{\prime}\) _have a common line, then their intersection is reducible. On_ \(D\) _it looks like the union of the strict transform of an_ \(F_{4}\) _hyperplane_ \(x_{i}=0\) _or_ \(x_{1}\pm x_{2}\pm x_{3}\pm x_{4}=0\)_, together with the strict transform of the exceptional divisor over one of the blown up points which is contained in the given_ \(F_{4}\) _hyperplane. There are 12 such intersections._
Proof.: This also follows from Naruki's construction [22], as described in detail in [10, 11]. However, we note that part of the result is misstated in [10, Section 6.7], where it is stated that if \(D\) and \(D^{\prime}\) have a common line, then their intersection consists only of the strict transform \(H\) of an \(F_{4}\) hyperplane, rather than the union of \(H\) with the strict transform \(P\) of an exceptional divisor over a blown up point. However, this is inconsistent with Proposition 2.2. Indeed, suppose \(P=D\cap D_{A_{1}}=D\cap D_{A^{\prime}_{1}}\), as in Proposition 2.2 and part 1(b) of the current proposition. Then by Proposition 2.2, \(D\cap D_{A_{1}}\cap D_{A^{\prime}_{1}}=D^{\prime}\cap D_{A_{1}}\cap D_{A^{ \prime}_{1}}\). This is of course false unless \(D\cap D^{\prime}\) contains \(P\). The corrected description follows from Naruki's construction and his explicit description of the Eckardt divisors starting from their equations in \(Z\subset T_{\Lambda(D_{4})}\cong(\mathbb{C}^{*})^{4}\)[22, Section 8]. Namely, Naruki puts coordinates \(\lambda,\mu,\nu,\rho\) on \((\mathbb{C}^{*})^{4}\) corresponding to a basis of simple roots of \(D_{4}\), and writes down explicit equations in terms of these coordinates for each Eckardt divisor [22, Table 3]. For instance, two such Eckardt divisors are given by \(\{\lambda\nu=1\}\) and \(\{\lambda=\nu\}\). The intersection of these two divisors has two connected components, \(\{\lambda=\nu=1\}\), \(\{\lambda=\nu=-1\}\). Neither of these components is blown up by \(\overline{Z}(D_{4})\to X(\Sigma)\), since the root subsystem of \(D_{4}\) spanned by the roots corresponding to \(\lambda\) and \(\nu\) is reducible, a \(2A_{1}\) subsystem. Tracing these intersections through Naruki's construction \(\overline{Y}\leftarrow\overline{Z}(D_{4})\to X(\Sigma)\), one sees the correct description.
**Example 2.5**.: Let \(S\) be a smooth cubic surface obtained as the blowup of 6 points in \(\mathbb{P}^{2}\), and write \(\operatorname{Pic}S=\langle h,e_{1},\ldots,e_{6}\rangle\), where \(h\) is the pullback of the hyperplane class and \(e_{i}\) is the class of the exceptional divisor over the \(i\)th point.
Recall that the root system \(E_{6}\) is given by
\[E_{6}=\{\alpha\in\operatorname{Pic}S\mid\alpha\cdot K_{S}=0,\alpha^{2}=-2\}.\]
Explicitly, we label the positive roots of \(E_{6}\) by \(ij=e_{i}-e_{j}\), \(ijk=h-e_{i}-e_{j}-e_{k}\), and \(7=\beta=2h-e_{1}-\cdots-e_{6}\) (cf. [1, Remark 4.9]); this gives \(15+20+1=36\) positive roots, matching the 36 \(A_{1}\) divisors. We identify \(A_{1}\) subsystems with positive roots.
We consider the Eckardt divisor \(D_{e}\cong\overline{X}(D_{4})\) in \(\overline{Y}\) corresponding to the triple of lines \(\{e_{5},c_{6},\ell_{56}\}\).
1. The divisors \(D_{A_{1}}\) such that the lines of \(D_{e}\) are lines on the \(A_{1}\)-nodal cubic parameterized by \(D_{A_{1}}\) are given by \(A_{1}=ij\) or \(ij6\), for \(ij\subset[4]\). These cut out the strict transforms of the 12 \(D_{4}\) hyperplanes on \(D_{e}\).
2. The remaining \(D_{A_{1}}\) divisors come in pairs of orthogonal \(A_{1}\)'s, cutting out the strict transforms of the exceptional divisors of \(\overline{X}(D_{4})\) over the 12 blown up points. For instance, if \(A_{1}=7\) and \(A^{\prime}_{1}=56\), then \(D_{e}\cap D_{A_{1}}=D_{e}\cap D_{A^{\prime}_{1}}\) is the exceptional divisor over the point \((1:1:1:1)\).
3. The Eckardt divisors sharing a line with \(D_{e}\) intersect \(D_{e}\) in the reducible union of the strict transform of an \(F_{4}\) hyperplane and the strict transform of the exceptional divisor over one of the 12 blown up points. For instance, if \(D^{\prime}_{e}\) is given by the triple of lines \(\{e_{6},c_{5},\ell_{56}\}\), then \(D_{e}\cap D^{\prime}_{e}\) is the union of the hyperplane \(\{x_{1}-x_{2}+x_{3}-x_{4}=0\}\) and the strict transform of the exceptional divisor over
the point \((1:1:1:1)\). Note that this and the previous part are consistent with the fact that \(D_{e}\cap D_{A_{1}}=D^{\prime}_{e}\cap D_{A_{1}}=D_{A_{1}}\cap D_{A^{\prime}_{1}}\) for \(A_{1}=7\), \(A^{\prime}_{1}=56\), as expected from Proposition 2.2.
### Intersection-theoretic results on \(\overline{Y}\)
The intersection theory of \(\overline{Y}\) has been studied in depth in [21]. Additional results, including a complete presentation of its Chow ring, are obtained in [1, Theorem 1.9]. In addition to the descriptions of intersections and self-intersections of boundary and Eckardt divisors above, we also have need of the following results.
**Proposition 2.6** ([21]).:
1. _We have_ \(\operatorname{Pic}(\overline{Y})^{W(E_{6})}\cong\mathbb{Z}^{2}\)_, generated by the sum_ \(B_{A_{1}}\) _of the_ \(A_{1}\) _divisors, and the sum_ \(B_{A_{2}^{3}}\) _of the_ \(A_{2}^{3}\) _divisors._
2. _The class of the sum_ \(E\) _of the Eckardt divisors on_ \(\overline{Y}\) _is given by_ \[E=\frac{25B_{A_{1}}+27B_{A_{2}^{3}}}{4}.\]
3. _The canonical class of_ \(\overline{Y}\) _is given by_ \[K_{\overline{Y}}=\frac{-B_{A_{1}}+B_{A_{2}^{3}}}{4}.\]
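For later reference, combining (3) with the definition \(B=B_{A_{1}}+B_{A_{2}^{3}}\) of the total boundary gives
\[K_{\overline{Y}}+\frac{1}{2}B=\frac{-B_{A_{1}}+B_{A_{2}^{3}}}{4}+\frac{B_{A_{1}}+B_{A_{2}^{3}}}{2}=\frac{B_{A_{1}}+3B_{A_{2}^{3}}}{4},\]
an identity used repeatedly below.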
Recall from the introduction that we label strata of \(\overline{Y}\) by juxtaposition, so for instance a stratum of type \(2A_{1}A_{2}^{3}\) is a curve formed by the intersection of \(2\)\(A_{1}\) and \(1\)\(A_{2}^{3}\) divisors.
**Proposition 2.7**.: _The 1-dimensional boundary strata of \(\overline{Y}\) are of types \(3A_{1}\) and \(2A_{1}A_{2}^{3}\). The intersection numbers of \(W(E_{6})\)-invariant boundary divisors on \(\overline{Y}(E_{6})\) with curves of these types are given in the following table._
Proof.: Let \(D\) be an irreducible boundary divisor of type \(A_{1}\). By Proposition 2.2, the 15 \(A_{1}\) divisors meeting \(D\) restrict to the divisors \(D_{ij}\) on \(D\cong\overline{M}_{0,6}\) (summing to \(B_{2}\)), the 10 \(A_{2}^{3}\) divisors meeting \(D\) restrict to the divisors \(D_{ijk}\) (summing to \(B_{3}\)), and \(D|_{D}=\frac{-B_{2}-3B_{3}}{5}\). Therefore
\[B_{A_{1}}|_{D}=\frac{4B_{2}-3B_{3}}{5}\ \ \ \text{and}\ \ \ B_{A_{2}^{3}}|_{D}=B_{3}.\]
The \(3A_{1}\) curves contained in \(D\cong\overline{M}_{0,6}\) are precisely the intersections of two \(D_{2}\) divisors on \(\overline{M}_{0,6}\); likewise, the \(2A_{1}A_{2}^{3}\) curves contained in \(D\cong\overline{M}_{0,6}\) are precisely the intersections of one \(D_{2}\) and one \(D_{3}\) divisor on \(\overline{M}_{0,6}\). The result now follows by standard intersection-theoretic computations on \(\overline{M}_{0,6}\), see [19, Corollary 2.6].
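Alternatively, the \(2A_{1}A_{2}^{3}\) column of Table 1 can be checked inside an \(A_{2}^{3}\) divisor \(D\cong(\mathbb{P}^{1})^{3}\): by Proposition 2.3 we have \(B_{A_{1}}|_{D}=3(h_{1}+h_{2}+h_{3})\) and \(B_{A_{2}^{3}}|_{D}=D|_{D}=-h_{1}-h_{2}-h_{3}\), while a \(2A_{1}A_{2}^{3}\) curve in \(D\) has class \(h_{1}h_{2}\) (up to permuting the factors), so
\[B_{A_{1}}\cdot C_{2A_{1}A_{2}^{3}}=3(h_{1}+h_{2}+h_{3})\cdot h_{1}h_{2}=3\quad\text{and}\quad B_{A_{2}^{3}}\cdot C_{2A_{1}A_{2}^{3}}=(-h_{1}-h_{2}-h_{3})\cdot h_{1}h_{2}=-1,\]
in agreement with Table 1.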
**Lemma 2.8**.: _Let \(D\) be a \(\mathbb{Q}\)-divisor on \(\overline{Y}\), and suppose that \(D\cdot C\in\mathbb{Z}\) for each 1-dimensional boundary stratum \(C\) of \(\overline{Y}\). Then \(D\) is actually a \(\mathbb{Z}\)-divisor on \(\overline{Y}\)._
Proof.: By [18, Theorem 1.9], the Kronecker duality map
\[A^{1}(\overline{Y})\to\operatorname{Hom}(A_{3}(\overline{Y}),\mathbb{Z}),\ \ D \mapsto(C\mapsto D\cdot C)\]
is an isomorphism. By the same theorem, \(A_{3}(\overline{Y})\) is generated by the 1-dimensional boundary strata of \(\overline{Y}\). So if \(D\in A^{1}(\overline{Y})_{\mathbb{Q}}\) and \(D\cdot C\in\mathbb{Z}\) for each 1-dimensional boundary stratum \(C\) of \(\overline{Y}\), then \(D\) defines an element of \(\operatorname{Hom}(A_{3}(\overline{Y}),\mathbb{Z})\), which lifts to an element \(D^{\prime}\in A^{1}(\overline{Y})\) by surjectivity of the Kronecker duality map. Injectivity then implies that \(D\) is linearly equivalent to \(D^{\prime}\).
## 3. \(W(E_{6})\)-invariant birational geometry of \(\overline{Y}\)
The goal of this section is to study the \(W(E_{6})\)-invariant birational geometry of Naruki's compactification \(\overline{Y}\). In Section 3.1 we describe the cones of \(W(E_{6})\)-invariant effective divisors and curves, in Section 3.2 we describe the stable base locus decomposition of the \(W(E_{6})\)-invariant effective cone of divisors, and in Section 3.3 we describe the \(W(E_{6})\)-invariant birational models of \(\overline{Y}\) appearing in this stable base locus decomposition.
\begin{table}
\begin{tabular}{|c||c|c|} \hline & \(3A_{1}\) & \(2A_{1}A_{2}^{3}\) \\ \hline \hline \(B_{A_{1}}\) & \(-2\) & \(3\) \\ \(B_{A_{2}^{3}}\) & \(2\) & \(-1\) \\ \hline \end{tabular}
\end{table}
Table 1. Intersection numbers on \(\overline{Y}(E_{6})\)
### The cones of \(W(E_{6})\)-invariant effective divisors and curves
In this subsection we describe the \(W(E_{6})\)-invariant cones of effective divisors and curves of \(\overline{Y}\), and as a consequence also describe the nef cone of \(\overline{Y}\).
**Theorem 3.1**.: _The \(W(E_{6})\)-invariant effective cone of \(\overline{Y}(E_{6})\) is the closed cone spanned by \(B_{A_{1}}\) and \(B_{A_{2}^{3}}\)._
Proof.: Note that \(B_{A_{1}}\) and \(B_{A_{2}^{3}}\) themselves are effective divisors. Let
\[B=c_{A_{1}}B_{A_{1}}+c_{A_{2}^{3}}B_{A_{2}^{3}}\]
be a \(W(E_{6})\)-invariant effective divisor on \(\overline{Y}(E_{6})\). We wish to show \(c_{A_{1}},c_{A_{2}^{3}}\geq 0\). By subtracting the fixed components of \(B\), we can assume that \(B\) does not contain any irreducible boundary divisor, so that the restriction of \(B\) to any irreducible boundary divisor is effective. By Proposition 2.2, we have
\[B|_{D_{A_{1}}}=\frac{4c_{A_{1}}B_{2}+(5c_{A_{2}^{3}}-3c_{A_{1}})B_{3}}{5}.\]
Since \(D_{A_{1}}\cong\overline{M}_{0,6}\) and \(D_{A_{2}^{3}}\cong(\mathbb{P}^{1})^{3}\) have symmetric effective cones generated by their symmetric boundary divisors [13], effectiveness of this restriction forces \(4c_{A_{1}}\geq 0\) and \(5c_{A_{2}^{3}}-3c_{A_{1}}\geq 0\), so \(c_{A_{1}}\geq 0\) and \(c_{A_{2}^{3}}\geq\frac{3}{5}c_{A_{1}}\geq 0\).
**Theorem 3.2**.: _The \(W(E_{6})\)-invariant cone of curves of \(\overline{Y}(E_{6})\) is the closed cone spanned by the sum \(B_{3A_{1}}\) of the curves of type \(3A_{1}\), and the sum \(B_{2A_{1}A_{2}^{3}}\) of the curves of type \(2A_{1}A_{2}^{3}\)._
Proof.: It follows by Theorem 3.1 and [13, Corollary 2.3] that the \(W(E_{6})\)-invariant cone of curves of \(\overline{Y}(E_{6})\) is generated by curves contained in the boundary. Since the curves of types \(3A_{1}\) and \(2A_{1}A_{2}^{3}\) generate the symmetric effective cones of \(\overline{M}_{0,6}\) and \((\mathbb{P}^{1})^{3}\) (cf. [13]), the result follows.
**Corollary 3.3**.: _The \(W(E_{6})\)-invariant nef cone of \(\overline{Y}(E_{6})\) is spanned by \(B_{A_{1}}+3B_{A_{2}^{3}}\) and \(B_{A_{1}}+B_{A_{2}^{3}}\)._
Proof.: The nef cone is the dual of the cone of curves under the intersection pairing, so this follows by a direct computation from Theorem 3.2 and Proposition 2.7.
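Explicitly, writing \(D=aB_{A_{1}}+bB_{A_{2}^{3}}\), Table 1 gives
\[D\cdot C_{3A_{1}}=-2a+2b\quad\text{and}\quad D\cdot C_{2A_{1}A_{2}^{3}}=3a-b,\]
so \(D\) is nef if and only if \(a\leq b\leq 3a\); the extremal rays are attained at \((a,b)=(1,1)\) and \((1,3)\), i.e., at \(B_{A_{1}}+B_{A_{2}^{3}}\) and \(B_{A_{1}}+3B_{A_{2}^{3}}\).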
**Remark 3.4**.: The first lattice point of a ray in the effective cone is the smallest \(\mathbb{Q}\)-divisor \(D\) lying on that ray such that \(D\) is actually a \(\mathbb{Z}\)-divisor. It follows from Lemma 2.8 and Proposition 2.7 that the first lattice points on the rays through \(B_{A_{1}}+3B_{A_{2}^{3}}\) and \(B_{A_{1}}+B_{A_{2}^{3}}\) are \(\frac{B_{A_{1}}+3B_{A_{2}^{3}}}{4}=K_{\overline{Y}}+\frac{1}{2}B\) and \(\frac{B_{A_{1}}+B_{A_{2}^{3}}}{2}=\frac{1}{2}B\), where \(B=B_{A_{1}}+B_{A_{2}^{3}}\) is the sum of all the boundary divisors.
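Indeed, by Table 1, \(t(B_{A_{1}}+3B_{A_{2}^{3}})\) meets the curves of types \(3A_{1}\) and \(2A_{1}A_{2}^{3}\) in \(4t\) and \(0\), while \(t(B_{A_{1}}+B_{A_{2}^{3}})\) meets them in \(0\) and \(2t\); by Lemma 2.8, these are \(\mathbb{Z}\)-divisors exactly when \(4t\in\mathbb{Z}\), respectively \(2t\in\mathbb{Z}\), giving the first lattice points at \(t=\frac{1}{4}\) and \(t=\frac{1}{2}\).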
### Stable base locus decomposition
**Theorem 3.5**.: _Let \(D\) be a \(W(E_{6})\)-invariant effective divisor on \(\overline{Y}\)._
1. _If_ \(D\in[B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}+B_{A_{2}^{3}}]\)_, then_ \(D\) _is nef and semi-ample. In particular,_ \(\mathbb{B}(D)=\emptyset\)_._
2. _If_ \(D\in(B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{2}^{3}}]\)_, then_ \(\mathbb{B}(D)=B_{A_{2}^{3}}\) _is the sum of the_ \(A_{2}^{3}\) _divisors._
3. _If_ \(D\in(B_{A_{1}}+B_{A_{2}^{3}},5B_{A_{1}}+3B_{A_{2}^{3}}]\)_, then_ \(\mathbb{B}(D)=B_{2A_{1}}\) _is the sum of the_ \(2A_{1}\) _surfaces._
4. _If_ \(D\in(5B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}]\)_, then_ \(\mathbb{B}(D)=B_{A_{1}}\) _is the sum of the_ \(A_{1}\) _divisors._
Proof.:
1. Let \(\Delta_{1}=\frac{1}{2}(B_{A_{1}}+B_{A_{2}^{3}})\) and \(\Delta_{2}=\frac{1}{2}B_{A_{1}}\), and for \(i=1,2\) let \(D_{i}=K+\Delta_{i}\). Then \(D_{1}=\frac{B_{A_{1}}+3B_{A_{2}^{3}}}{4}\) and \(D_{2}=\frac{B_{A_{1}}+B_{A_{2}^{3}}}{4}\) span the extremal rays of the \(W(E_{6})\)-invariant nef cone of \(\overline{Y}\) by Corollary 3.3. Thus it suffices to show that \(D_{i}\) is semi-ample for \(i=1,2\). But \(D_{i}\) is nef, and since it lies in the interior of the (pseudo)effective cone, \(D_{i}\) is big [15, Theorem 2.2.26]. (Alternatively, one may directly compute, using [10, Theorem 4.13], that \(D_{1}^{4}=27\) and \(D_{2}^{4}=12\) are both positive, hence \(D_{i}\) is big by [14, Theorem 2.2.16].) Thus \(2D_{i}-K-\Delta_{i}=D_{i}\) is nef and big. Since \((\overline{Y},\Delta_{i})\) is klt, it follows by the Basepoint-Free Theorem that \(D_{i}\) is semi-ample. (In Theorem 3.6 below we will explicitly describe the morphisms given by \(|mD_{i}|\) for \(m\gg 0\).)
2. Let \(D\in(B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{2}^{3}}]\). Since \(B_{A_{1}}+3B_{A_{2}^{3}}\) is semi-ample by part (1), we see that \(\mathbb{B}(D)\subset B_{A_{2}^{3}}\). Write \[D=\alpha B_{A_{1}}+(3\alpha+\beta)B_{A_{2}^{3}}\] where \(\alpha\geq 0\), \(\beta>0\). Then \(D\cdot C_{2A_{1}A_{2}^{3}}=-\beta<0\). Since the curves of type \(2A_{1}A_{2}^{3}\) cover \(B_{A_{2}^{3}}\), it follows that \(B_{A_{2}^{3}}\subset\mathbb{B}(D)\), hence \(\mathbb{B}(D)=B_{A_{2}^{3}}\).
3. Suppose \(D\in(B_{A_{1}}+B_{A_{2}^{3}},5B_{A_{1}}+3B_{A_{2}^{3}}]\). We first show that \(B_{2A_{1}}\subset\mathbb{B}(D)\). Let \(S\cong\overline{M}_{0,5}\cong Bl_{4}\mathbb{P}^{2}\) be an irreducible \(2A_{1}\) surface. Recall the correspondence between the boundary divisors of \(\overline{M}_{0,5}\) and \(Bl_{4}\mathbb{P}^{2}\): for \(ij\subset[4]\), \(D_{ij}=\ell_{ij}=h-e_{i}-e_{j}\) is the strict transform of the line through the \(i\)th and \(j\)th blown up points, and for \(i\in[4]\), \(D_{i5}=e_{i}\) is the exceptional divisor over the \(i\)th blown up point. The \(3A_{1}\) curves on \(S\) are precisely the divisors \(D_{ij}=\ell_{ij}\). Viewing \(S\) as \(\overline{M}_{0,5}\), let \(C\) be the moving curve obtained by fixing \(4\) points on \(\mathbb{P}^{1}\) and varying the \(5\)th point. Then \(C\cdot D_{i5}=1\) and \(C\cdot D_{ij}=0\). It follows that the class of \(C\) is \(2h-e_{1}-e_{2}-e_{3}-e_{4}=\ell_{ij}+\ell_{kl}\), for any partition \(ij\amalg kl=[4]\). By Proposition 2.7, \(D\cdot C_{3A_{1}}<0\), hence \(D\cdot C<0\). Since \(C\) covers \(S\), we conclude that \(B_{2A_{1}}\subset\mathbb{B}(D)\).
Now we show that the stable base locus of \(D\) is exactly \(B_{2A_{1}}\). For this, note that since \(B_{A_{1}}+B_{A_{2}^{3}}\) is semi-ample by part (1), \(\mathbb{B}(D)\) is contained in the stable base locus of the divisor \(\Delta=\frac{5B_{A_{1}}+3B_{A_{2}^{3}}}{4}\) lying on the other ray of the cone \((B_{A_{1}}+B_{A_{2}^{3}},5B_{A_{1}}+3B_{A_{2}^{3}}]\). Thus it suffices to show that \(\mathbb{B}(\Delta)=B_{2A_{1}}\).
Since \(\Delta\in(B_{A_{1}}+B_{A_{2}^{3}},B_{A_{1}}]\), and \(B_{A_{1}}+B_{A_{2}^{3}}\) is semi-ample, it follows that \(\mathbb{B}(D)\subset B_{A_{1}}\). If \(D_{A_{1}}\) is an \(A_{1}\) divisor on \(\overline{Y}\), then by Proposition 2.2, \(\Delta|_{D_{A_{1}}}=B_{2}\) is the sum of the \(D_{2}\) divisors on \(D_{A_{1}}\cong\overline{M}_{0,6}\); these are precisely the \(2A_{1}\) surfaces on \(\overline{Y}\) that are contained in \(D_{A_{1}}\). Thus \(\Delta|_{B_{A_{1}}}=B_{2A_{1}}\) is the sum of all the \(2A_{1}\) surfaces on \(\overline{Y}\). We claim that the restriction map
\[H^{0}(\overline{Y},\Delta)\to H^{0}(B_{A_{1}},\Delta|_{B_{A_{1}}})\]
is surjective. Since \(\Delta|_{B_{A_{1}}}=B_{2A_{1}}\) is a fixed effective divisor on \(B_{A_{1}}\), we have
\[H^{0}(B_{A_{1}},\Delta|_{B_{A_{1}}})\neq 0,\]
and any nonzero section of \(\Delta|_{B_{A_{1}}}\) vanishes exactly along \(B_{2A_{1}}\). Thus it follows from the claim that \(\mathbb{B}(\Delta)\subset B_{2A_{1}}\), thereby concluding the proof.
To prove the claim, note that by taking the long exact sequence of cohomology of the exact sequence
\[0\to\mathcal{O}_{\overline{Y}}(\Delta-B_{A_{1}})\to\mathcal{O}_{\overline{Y}} (\Delta)\to\mathcal{O}_{B_{A_{1}}}(\Delta|_{B_{A_{1}}})\to 0,\]
it suffices to prove that
\[H^{1}(\overline{Y},\Delta-B_{A_{1}})=0.\]
For this, observe that
\[\Delta-B_{A_{1}}=\frac{B_{A_{1}}+3B_{A_{2}^{3}}}{4}=K_{\overline{Y}}+\frac{1}{ 2}B,\]
where \(B=B_{A_{1}}+B_{A_{2}^{3}}\) is the sum of the boundary divisors of \(\overline{Y}\). The \(\mathbb{Q}\)-divisor \(\frac{1}{2}B\) is actually a nef and big integral divisor (cf. Corollary 3.3 and Remark 3.4), and its support has simple normal crossings. Therefore, by Kawamata-Viehweg vanishing,
\[H^{1}(\overline{Y},\Delta-B_{A_{1}})=H^{1}(\overline{Y},K_{\overline{Y}}+\frac {1}{2}B)=0.\]
(See also [10, Section 6.1].)
4. Let \(D\in(5B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}]\). Note that \(D\in(B_{A_{1}}+B_{A_{2}^{3}},B_{A_{1}}]\), and \(B_{A_{1}}+B_{A_{2}^{3}}\) is semi-ample by part (1), thus \(\mathbb{B}(D)\subset B_{A_{1}}\). Write \[D=(5\alpha+\beta)B_{A_{1}}+3\alpha B_{A_{2}^{3}},\] where \(\alpha\geq 0\), \(\beta>0\). By Proposition 2.2, the restriction of \(D\) to a given \(A_{1}\) divisor \(\cong\overline{M}_{0,6}\) is \[\frac{(20\alpha+4\beta)B_{2}+(-3\beta)B_{3}}{5}.\] Since \(-3\beta<0\), this is not effective by [11, Theorem 1.3(1)]. Thus \(B_{A_{1}}\subset\mathbb{B}(D)\), and hence \(\mathbb{B}(D)=B_{A_{1}}\).
### \(W(E_{6})\)-equivariant birational models of \(\overline{Y}\)
**Theorem 3.6**.: _Let \(D\) be a \(W(E_{6})\)-invariant effective \(\mathbb{Q}\)-divisor on \(\overline{Y}\)._
1. _If_ \(D\in(B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}+B_{A_{2}^{3}})\)_, then_ \(D\) _is ample, and_ \(\overline{Y}(D)=\overline{Y}\)_._
2. _If_ \(D\in[B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{2}^{3}})\)_, then_ \(\overline{Y}(D)\) _is the GIT moduli space_ \(\overline{M}\) _of marked cubic surfaces. The morphism_ \(\pi:\overline{Y}\to\overline{M}\) _is given by contracting the 40_ \(A_{2}^{3}\) _divisors of_ \(\overline{Y}\) _to singular points, each locally isomorphic to the cone over the Segre embedding of_ \((\mathbb{P}^{1})^{3}\)_._
3. _If_ \(D\) _is a multiple of_ \(B_{A_{1}}+B_{A_{2}^{3}}\)_, then_ \(D\) _is semi-ample and the morphism_ \(\phi:\overline{Y}\to\overline{Y}(D)=:\overline{W}\) _associated to_ \(|mD|\) _for_ \(m\gg 0\) _is given by contracting each_ \(2A_{1}\) _surface_ \(S\cong Bl_{4}\mathbb{P}^{2}\) _to a singular line via the strict transform of the linear system of conics on_ \(\mathbb{P}^{2}\) _passing through the 4 blown up points._
4. _If_ \(D\in(B_{A_{1}}+B_{A_{2}^{3}},B_{A_{1}})\)_, then_ \(\overline{Y}(D)\) _is the_ \(D\)_-flip_ \(\overline{X}\) _of the small contraction_ \(\phi:\overline{Y}\to\overline{W}\) _of part (3)._
5. _If_ \(D\) _is a multiple of_ \(B_{A_{1}}\) _or_ \(B_{A_{2}^{3}}\)_, then_ \(\overline{Y}(D)\) _is a point._
Proof.:
1. This first part follows from Corollary 3.3, since \((B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}}+B_{A_{2}^{3}})\) is the interior of the nef cone.
2. The described morphism \(\pi:\overline{Y}\to\overline{M}\) is explicitly constructed by Naruki in [10], and Colombo and van Geemen show that [11, Proposition 2.7] \[\pi^{*}\mathcal{O}(1)=\mathcal{O}\left(\frac{B_{A_{1}}+3B_{A_{2}^{3}}}{4} \right).\] Thus if \(D^{\prime}\) lies on the ray through \(B_{A_{1}}+3B_{A_{2}^{3}}\), then \(|mD^{\prime}|\) for \(m\gg 0\) gives the morphism \(\pi\). In general, if \(D\in(B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{2}^{3}})\), then \(|mD|\) for \(m\gg 0\) has fixed part \(B_{A_{2}^{3}}\) by Theorem 3.5. Therefore \(H^{0}(\overline{Y},mD)\cong H^{0}(\overline{Y},mD-B_{A_{2}^{3}})\cong H^{0}( \overline{Y},mD^{\prime})\), so \(\overline{Y}(D)=\overline{Y}(D^{\prime})=\overline{M}\).
3. Let \(D=\frac{B_{A_{1}}+B_{A_{2}^{3}}}{2}=\frac{1}{2}B\). As shown in (the proof of) Theorem 3.5, \(D\) is semi-ample, and ample outside of the sum \(B_{2A_{1}}\) of the \(2A_{1}\) surfaces. By Proposition 2.2, the restriction of \(D\) to an \(A_{1}\) divisor \(\cong\overline{M}_{0,6}\) is given by \[\frac{2B_{2}+B_{3}}{5}.\] It is shown in [19] that a sufficiently large multiple of this divisor gives the contraction \(\overline{M}_{0,6}\to\mathcal{I}_{4}\) to the Igusa quartic, obtained by contracting each irreducible \(D_{2}\) divisor \(\cong Bl_{4}\mathbb{P}^{2}\) via the strict transform of the linear system of conics through the 4 blown up points. Since the \(D_{2}\) divisors on \(\overline{M}_{0,6}\) are precisely the \(2A_{1}\) surfaces of \(\overline{Y}\), the result follows.
4. Note that \[K+\frac{1}{2}B_{A_{1}}=\frac{B_{A_{1}}+B_{A_{2}^{3}}}{4}\ \ \text{and}\ \ \ K+\frac{2}{3}B_{A_{1}}=\frac{5B_{A_{1}}+3B_{A_{2}^{3}}}{12}\] (these identities are unwound after the proof). Thus if \(D\in(B_{A_{1}}+B_{A_{2}^{3}},5B_{A_{1}}+3B_{A_{2}^{3}}]\), then \(D\) is a multiple of \(K+\alpha B_{A_{1}}\) for some \(\alpha\in(1/2,2/3]\). Since \(B_{A_{1}}\) is normal crossings and \(\alpha<1\), the pair \((\overline{Y},\alpha B_{A_{1}})\) is klt. Furthermore, \(-D\) is ample on the fibers of the small contraction \(\phi:\overline{Y}\to\overline{W}\) (since by the previous part these fibers are linear combinations of \(3A_{1}\) curves). Thus \(\overline{Y}(D)\) is the flip \(\overline{X}\) of \(\phi\), which exists by [12, Theorem 1.3.3]. If \(D\in(5B_{A_{1}}+3B_{A_{2}^{3}},B_{A_{1}})\), then \(|mD|\) for \(m\gg 0\) has fixed part supported on \(B_{A_{1}}\) by Theorem 3.5. Writing \(D=(5\alpha^{\prime}+\beta)B_{A_{1}}+3\alpha^{\prime}B_{A_{2}^{3}}\) with \(\alpha^{\prime},\beta>0\) as in the proof of Theorem 3.5, for suitably divisible \(m\gg 0\) the fixed part is \(m\beta B_{A_{1}}\), so \(H^{0}(\overline{Y},mD)\cong H^{0}(\overline{Y},mD-m\beta B_{A_{1}})\), and \(mD-m\beta B_{A_{1}}=m\alpha^{\prime}(5B_{A_{1}}+3B_{A_{2}^{3}})\), hence \(\overline{Y}(D)\cong\overline{X}\).
5. If \(D=B_{A_{1}}\) or \(B_{A_{2}^{3}}\), then \(D\) is a fixed divisor, so \(\overline{Y}(D)\) is a point.
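To unwind the two identities used at the start of part (4): assuming \(K_{\overline{Y}}=\frac{-B_{A_{1}}+B_{A_{2}^{3}}}{4}\) (the formula of Proposition 2.6, also used in Proposition 4.13 below),
\[K+\frac{1}{2}B_{A_{1}}=\frac{-B_{A_{1}}+B_{A_{2}^{3}}+2B_{A_{1}}}{4}=\frac{B_{A_{1}}+B_{A_{2}^{3}}}{4},\qquad K+\frac{2}{3}B_{A_{1}}=\frac{-3B_{A_{1}}+3B_{A_{2}^{3}}+8B_{A_{1}}}{12}=\frac{5B_{A_{1}}+3B_{A_{2}^{3}}}{12}.\]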
**Remark 3.7**.: There are two birational models of \(\overline{M}_{0,6}\) appearing in the \(S_{6}\)-invariant minimal model program for \(\overline{M}_{0,6}\)--the classically known Segre cubic and Igusa quartic threefolds, see [19]. The Segre cubic is obtained by contracting the \(D_{3}\) divisors on \(\overline{M}_{0,6}\) to singular points, and the Igusa quartic is obtained by contracting the \(D_{2}\) divisors on \(\overline{M}_{0,6}\) to singular lines. The \(W(E_{6})\)-invariant minimal model program for \(\overline{Y}(E_{6})\) contains the \(S_{6}\)-invariant minimal model program for \(\overline{M}_{0,6}\)--the restriction of the morphism \(\pi:\overline{Y}\to\overline{M}\) of Theorem 3.6(2) to an \(A_{1}\) divisor is the contraction of \(\overline{M}_{0,6}\) to the Segre cubic, and the
restriction of the morphism \(\phi:\overline{Y}\to\overline{W}\) of Theorem 3.6(3) to an \(A_{1}\) divisor is the contraction of \(\overline{M}_{0,6}\) to the Igusa quartic.
**Remark 3.8**.: We have been unsuccessful in explicitly constructing the flip \(\overline{X}\to\overline{W}\) of the small contraction \(\overline{Y}\to\overline{W}\). Intuitively, since the small contraction \(\overline{Y}\to\overline{W}\) is given by contracting each \(2A_{1}\) surface \(S\cong Bl_{4}\mathbb{P}^{2}\) to a line, \(S\to\mathbb{P}^{1}\), the natural guess for the flip is to blow up \(S\), replacing it with \(S\times\mathbb{P}^{1}\), and then contract it to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). However, since the \(2A_{1}\) surfaces do not all intersect transversally, we cannot do this for all surfaces simultaneously. First blowing up all intersections of the \(2A_{1}\) surfaces leads to the space \(\widetilde{Y}\) studied below, and there does not appear to be an evident contraction of \(\widetilde{Y}\) to the flip \(\overline{X}\) either.
## 4. The moduli space of stable marked cubic surfaces
Recall that the moduli space of stable marked cubic surfaces is the blowup \(\widetilde{Y}=\widetilde{Y}(E_{6})\) of \(\overline{Y}=\overline{Y}(E_{6})\) along all intersections of \(A_{1}\) divisors, in increasing order of dimension [10, Section 10]. In this section we gather the necessary results concerning divisors and intersection theory on \(\widetilde{Y}\), in preparation for the study of the birational geometry of \(\widetilde{Y}\) in the following sections.
### Divisors on \(\widetilde{Y}(E_{6})\)
There are 6 types of divisors of interest on \(\widetilde{Y}(E_{6})\), labeled by types \(a,b,e,a_{2},a_{3},a_{4}\). The divisors of types \(a,b,e\) are the strict transforms of the \(A_{1}\), \(A_{2}^{3}\), and Eckardt divisors on \(\overline{Y}\), respectively. The divisors of type \(a_{i}\), \(i=2,3,4\) are the (strict transforms of the) exceptional divisors over the intersections of \(i\)\(A_{1}\) divisors in \(\overline{Y}\). We refer to the divisors of types \(a,b,a_{2},a_{3},a_{4}\) as the boundary divisors of \(\widetilde{Y}\), and the divisors of type \(e\) as the Eckardt divisors of \(\widetilde{Y}\). We describe in detail each type of divisor and their intersections.
**Proposition 4.1**.: _There are 36 type \(a\) divisors on \(\widetilde{Y}(E_{6})\). A given type \(a\) divisor \(D_{a}\) is isomorphic to the blowup of \(\overline{M}_{0,6}\) at the 15 points \(D_{ij}\cap D_{kl}\cap D_{mn}\), and then at the strict transforms of the 45 lines \(D_{ij}\cap D_{kl}\). We write \(F_{ij,kl,mn}\) for the strict transform of the exceptional divisor over \(D_{ij}\cap D_{kl}\cap D_{mn}\), and \(F_{ij,kl}\) for the exceptional divisor over the strict transform of \(D_{ij}\cap D_{kl}\). Additionally, we write \(F_{ij}\) for the strict transform of the divisor \(D_{ij}\) on \(\overline{M}_{0,6}\), and \(F_{ijk}\) for the strict transform of the divisor \(D_{ijk}\) on \(\overline{M}_{0,6}\)._
_The nonempty intersections of \(D_{a}\) with the other divisors are as follows._
1. _15 type_ \(a_{4}\) _divisors, intersecting in the 15_ \(F_{ij,kl,mn}\)_'s._
2. _45 type_ \(a_{3}\) _divisors, intersecting in the 45_ \(F_{ij,kl}\)_'s._
3. _15 type_ \(a_{2}\) _divisors, intersecting in the 15_ \(F_{ij}\)_'s._
4. _10 type_ \(b\) _divisors, intersecting in the 10_ \(F_{ijk}\)_'s._
5. _15 Eckardt divisors, intersecting in the strict transforms of the 15 Keel-Vermeire divisors._
_The class of \(D_{a}|_{D_{a}}\) is_
\[\frac{-8\sum F_{ij,kl,mn}-7\sum F_{ij,kl}-6\sum F_{ij}-3\sum F_{ijk}}{5}.\]
Proof.: The description of \(D_{a}\) and its intersections with the other boundary divisors follows from Proposition 2.2 and the construction of \(\widetilde{Y}(E_{6})\). Recall from that proposition that there are two types of intersections of \(D_{A_{1}}\) with an Eckardt divisor. The first type gives a \(D_{ij}\) surface on \(D_{A_{1}}\cong\overline{M}_{0,6}\); this surface is blown up in the construction of \(\widetilde{Y}(E_{6})\), separating \(D_{a}\) from the given Eckardt divisor. The second type of intersection with an Eckardt divisor gives a Keel-Vermeire divisor on \(\overline{M}_{0,6}\), so after the blowups the intersection is the strict transform of the Keel-Vermeire divisor.
We explain the calculation of \(D_{a}|_{D_{a}}\). Recall from Proposition 2.2 that in \(\overline{Y}\), we have
\[D_{A_{1}}|_{D_{A_{1}}}=\frac{-\sum D_{ij}-3\sum D_{ijk}}{5}.\]
In the sequence of blowups \(\widetilde{Y}\to\overline{Y}\), \(D_{A_{1}}\) either contains each blown up center or is disjoint from it. Standard formulas for the normal bundle of a strict transform [11, B.6.10] therefore imply that
\[D_{A_{1}}|_{D_{A_{1}}}=\frac{-\sum D_{ij}-3\sum D_{ijk}}{5}-\sum F_{ij,kl,mn}-\sum F_{ij,kl}-\sum F_{ij},\]
where \(D_{ij}\), \(D_{ijk}\) denote the classes of the pullbacks of \(D_{ij}\) and \(D_{ijk}\). Since each \(D_{ij}\cap D_{kl}\cap D_{mn}\) is contained in 3 \(D_{ij}\)'s and disjoint from each \(D_{ijk}\), and each \(D_{ij}\cap D_{kl}\) is contained in 2 \(D_{ij}\)'s and intersects any \(D_{ijk}\) either trivially or transversally, it follows that
\[\sum D_{ij}=\sum F_{ij}+3\sum F_{ij,kl,mn}+2\sum F_{ij,kl},\ \ \text{and}\ \sum D_{ijk}=\sum F_{ijk}.\]
The result follows.
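In detail, substituting the relations above into the previous display gives
\[D_{a}|_{D_{a}}=\frac{-\big(\sum F_{ij}+3\sum F_{ij,kl,mn}+2\sum F_{ij,kl}\big)-3\sum F_{ijk}}{5}-\sum F_{ij,kl,mn}-\sum F_{ij,kl}-\sum F_{ij}=\frac{-8\sum F_{ij,kl,mn}-7\sum F_{ij,kl}-6\sum F_{ij}-3\sum F_{ijk}}{5}.\]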
**Proposition 4.2**.: _There are 40 type \(b\) divisors on \(\widetilde{Y}(E_{6})\). A given type \(b\) divisor \(D_{b}\) is isomorphic to the blowup of \((\mathbb{P}^{1})^{3}\) at the 27 points \(p\times q\times r\), and then at the strict transforms of the 27 lines \(p\times q\times\mathbb{P}^{1}\), \(p\times\mathbb{P}^{1}\times r\), \(\mathbb{P}^{1}\times q\times r\), \(p,q,r\in\{0,1,\infty\}\). We write \(e_{pqr}\) for the strict transform of the exceptional divisor over \(p\times q\times r\), and \(e_{pqx}\), \(e_{pxr}\), \(e_{xqr}\) respectively for the exceptional divisors over the strict transforms of the lines \(p\times q\times\mathbb{P}^{1}\), \(p\times\mathbb{P}^{1}\times r\), \(\mathbb{P}^{1}\times q\times r\)._
_The nonempty intersections of \(D_{b}\) with the other boundary divisors are as follows._
1. _27 type_ \(a_{3}\) _divisors, intersecting in the 27_ \(e_{pqr}\)_'s._
2. _27 type_ \(a_{2}\) _divisors, intersecting in the 27_ \(e_{pqx}\)_'s,_ \(e_{pxr}\)_'s, and_ \(e_{xqr}\)_'s._
3. _9 type_ \(a\) _divisors, intersecting in the strict transforms of hypersurfaces_ \(p_{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\)_,_ \(\mathbb{P}^{1}\times p_{2}\times\mathbb{P}^{1}\)_,_ \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times p_{3}\)_._
4. _18 Eckardt divisors, intersecting in the strict transforms of the smooth hypersurfaces described in Proposition_ 2.3_._
_The class of \(D_{b}|_{D_{b}}\) is_
\[-h_{1}-h_{2}-h_{3},\]
_where \(h_{i}\) denotes the class of the pullback of a point from the \(i\)th \(\mathbb{P}^{1}\) factor._
Proof.: Again everything follows directly from the blowup construction \(\widetilde{Y}\to\overline{Y}\) and Proposition 2.3. We note here that the class of \(D_{b}|_{D_{b}}\) is the pullback of the class of \(D_{A_{2}^{3}}|_{D_{A_{2}^{3}}}\), because the divisor \(D_{A_{2}^{3}}\) is either disjoint from or intersects transversally each blown up center.
**Remark 4.3**.: We observe for later use that the strict transform of the hypersurface of the form \(p_{i}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\), etc. on a type \(b\) divisor is isomorphic to the blowup \(Bl_{9}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) at the 9 points \(p\times q,p,q\in\{0,1,\infty\}\). We denote the exceptional divisor over \(p\times q\) by \(e_{pq}\).
Recall from Proposition 2.4 that an Eckardt divisor on \(\overline{Y}\) is isomorphic to the minimal wonderful compactification \(\overline{X}(D_{4})\) of the \(D_{4}\) hyperplane arrangement in \(\mathbb{P}^{3}\); this is obtained by sequentially blowing up \(\mathbb{P}^{3}\) at the 12 points and 16 lines where the hyperplane arrangement \(x_{i}=\pm x_{j}\) is not normal crossings. Eckardt divisors are labeled by triples of lines on marked cubic surfaces; we say that two Eckardt divisors have a common line if the corresponding two triples of lines have a common line.
For completeness we record the following description of the Eckardt divisors on \(\widetilde{Y}\), although this will not be used later in the article.
**Proposition 4.4**.: _There are 45 Eckardt (type \(e\)) divisors on \(\widetilde{Y}(E_{6})\). A given Eckardt divisor \(D_{e}\) on \(\widetilde{Y}(E_{6})\) is isomorphic to the blowup of the corresponding Eckardt divisor \(D\cong\overline{X}(D_{4})\) on \(\overline{Y}(E_{6})\) along a total of 48 points, then the strict transforms of 87 lines, as follows._
1. \(36=3\cdot 12\) _points: 3 points each on the exceptional divisors over the_ \(A_{3}\) _points (corresponding to intersections of 4_ \(A_{1}\) _divisors on_ \(D\)_, such as_ \(D_{7}\cap D_{56}\cap D_{34}\cap D_{12}\)_.)_
2. \(12\) _points:_ 1. _6 points_ \((1:1:0:0),\ldots,(0:0:1:1)\)_, corresponding to intersections of 3_ \(A_{1}\) _divisors such as_ \(D_{12}\cap D_{34}\cap D_{126}\)_, and_ 2. _6 points_ \((1:-1:0:0),\ldots,(0:0:1:-1)\)_, corresponding to intersections of 3_ \(A_{1}\) _divisors such as_ \(D_{126}\cap D_{346}\cap D_{12}\)_._
3. \(72=12\cdot 6\) _lines, given by the intersection of a_ \(D_{4}\) _hyperplane with an exceptional divisor over an_ \(A_{3}\) _point (corresponding to intersections of 3_ \(A_{1}\) _divisors such as_ \(D_{7}\cap D_{56}\cap D_{12}\)_.)_
4. \(15\) _lines:_ 1. _3 lines like_ \(x_{i}=x_{j},x_{k}=x_{l}\)_, corresponding to intersections of 2_ \(A_{1}\) _divisors such as_ \(D_{12}\cap D_{34}\)_,_ 2. _6 lines like_ \(x_{i}=x_{j},x_{i}=-x_{j}\)_, corresponding to intersections of 2_ \(A_{1}\) _divisors on_ \(D\) _such as_ \(D_{12}\cap D_{126}\)_,_ 3. _3 lines_ \(x_{i}=x_{j},x_{k}=-x_{l}\)_, corresponding to intersections of 2_ \(A_{1}\) _divisors on_ \(D\) _such as_ \(D_{12}\cap D_{346}\)_, and_
4. _3 lines_ \(x_{i}=-x_{j},x_{k}=-x_{l}\) _(corresponding to intersections of 2_ \(A_{1}\) _divisors on_ \(D\) _such as_ \(D_{126}\cap D_{346}\)_)._
_The nonempty intersections of \(D_{e}\) with other divisors are as follows._
1. _36_ \(a_{4}\) _divisors, intersecting in the 36 exceptional divisors in (1) above._
2. \(12+72=84\)__\(a_{3}\) _divisors, intersecting in the_ \(12+72\) _exceptional divisors in (2) and (3) above._
3. \(27\)__\(a_{2}\) _divisors._ 1. _15_ \(a_{2}\) _divisors, intersecting in the 15 exceptional divisors in (4) above._ 2. _12_ \(a_{2}\) _divisors, intersecting in the strict transforms of the exceptional divisors over the original 12_ \(A_{3}\) _points._
4. _12_ \(a\) _divisors, intersecting in the strict transforms of the_ \(D_{4}\) _hyperplanes._
5. _16_ \(b\) _divisors, intersecting in the strict transforms of the_ \(A_{2}\) _lines._
6. _Every Eckardt divisor:_ 1. _If_ \(D_{e}\) _and_ \(D_{e}^{\prime}\) _do not have a common line, then there is a unique third Eckardt divisor_ \(D_{e}^{\prime\prime}\) _such that the intersection of any two of_ \(D_{e},D_{e}^{\prime},D_{e}^{\prime\prime}\) _is the same as the intersection of all three. This is the strict transform of one of the 16 quadrics on_ \(\overline{X}(D_{4})\)_, as in Proposition_ 2.4_. (There are 16 of these pairs.)_ 2. _If_ \(D_{e}\) _and_ \(D_{e}^{\prime}\) _have a common line, then their intersection is the strict transform of an_ \(F_{4}\) _hyperplane_ \(x_{i}=0\) _or_ \(x_{1}\pm x_{2}\pm x_{3}\pm x_{4}=0\)_. (There are 12 of these.)_
Proof.: This again follows from the blowup construction \(\widetilde{Y}\to\overline{Y}\) and Proposition 2.4, as can be seen by considering a specific example, see Example 2.5. In the proposition we have described the corresponding blowups induced on this particular example.
**Notation 4.5**.: Let \(Bl_{7}\mathbb{P}^{2}\) be the blowup of \(\mathbb{P}^{2}\) at the four points \(p_{1}=(1:0:0)\), \(p_{2}=(0:1:0)\), \(p_{3}=(0:0:1)\), \(p_{4}=(1:1:1)\), followed by the 3 points \(p_{5}=(1:1:0)\), \(p_{6}=(1:0:1)\), \(p_{7}=(0:1:1)\). Denote the exceptional divisors by \(e_{1},\dots,e_{7}\). Let \(\ell_{ijk}\) denote the strict transform of the line through the points \(p_{i}\), \(p_{j}\), and \(p_{k}\) (so \(ijk=125\), 345, 136, 246, 147, or 237), and \(\ell_{ij}\) the strict transform of the line through the points \(p_{i}\) and \(p_{j}\), \(i,j\in\{5,6,7\}\).
**Proposition 4.6**.: _There are \(270\) type \(a_{2}\) divisors. A given type \(a_{2}\) divisor \(D_{a_{2}}\) is isomorphic to \(Bl_{7}\mathbb{P}^{2}\times\mathbb{P}^{1}\), where \(Bl_{7}\mathbb{P}^{2}\) is as in Notation 4.5. Let \(h_{1},h_{2}\) denote the classes of the pullbacks of general hyperplanes from \(\mathbb{P}^{2}\) and \(\mathbb{P}^{1}\), and let \(e_{i}\) denote the class of \(e_{i}\times\mathbb{P}^{1}\)._
_The nonempty intersections of \(D_{a_{2}}\) with the other boundary divisors are as follows._
1. _3 type_ \(a_{4}\) _divisors, intersecting in the 3 divisors_ \(e_{i}\times\mathbb{P}^{1}\) _for_ \(i=5,6,7\)_._
2. _6 type_ \(a_{3}\) _divisors, intersecting in the 6 divisors_ \(\ell_{ijk}\times\mathbb{P}^{1}\)_, where_ \(\ell_{ijk}\) _is as in Notation_ 4.5_. The class of such an intersection is_ \(h_{1}-e_{i}-e_{j}-e_{k}\)_._
3. _2 type_ \(a\) _divisors, intersecting in 2 distinct divisors_ \(Bl_{7}\mathbb{P}^{2}\times pt\) _of class_ \(h_{2}\)_._
4. _4 type_ \(b\) _divisors, intersecting in the 4 divisors_ \(e_{i}\times\mathbb{P}^{1}\)_, for_ \(i=1,2,3,4\)_._
5. _5 Eckardt divisors:_ 1. _3 intersecting in_ \(\ell_{ij}\times\mathbb{P}^{1}\)_, where_ \(\ell_{ij}\)_,_ \(i,j\in\{5,6,7\}\) _is the strict transform of the line through the_ \(i\)_th and_ \(j\)_th blown up points. The class of such an intersection is_ \(h_{1}-e_{i}-e_{j}\)_._ 2. _2 intersecting in 2 distinct divisors_ \(Bl_{7}\mathbb{P}^{2}\times pt\) _of class_ \(h_{2}\)_._
_The class of \(D_{a_{2}}|_{D_{a_{2}}}\) is_
\[-7h_{1}+3(e_{1}+e_{2}+e_{3}+e_{4})+e_{5}+e_{6}+e_{7}-h_{2}.\]
Proof.: The divisor \(D_{a_{2}}\) is obtained as the divisor over a \(2A_{1}\) surface \(S\) in \(\overline{Y}(E_{6})\). The surface \(S\) is isomorphic to \(\overline{M}_{0,5}\cong Bl_{4}\mathbb{P}^{2}\). Let \(D\) be an \(A_{1}\) divisor on \(\overline{Y}\) containing \(S\). By [21, Section 4], \(N_{D/\overline{Y}}|_{S}=N_{S/D}\cong\mathcal{O}(-1)\), the pullback to \(Bl_{4}\mathbb{P}^{2}\) of \(\mathcal{O}_{\mathbb{P}^{2}}(-1)\), thus from the standard exact sequence
\[0\to N_{S/D}\to N_{S/\overline{Y}}\to N_{D/\overline{Y}}|_{S}\to 0,\]
we see that \(N_{S/\overline{Y}}\cong\mathcal{O}(-1)^{2}\) (see also [16, Lemma 10.11]). Let \(S^{\prime}\) be the strict transform of \(S\) under the blowup of the \(4A_{1}\) points. Then \(S^{\prime}\cong Bl_{7}\mathbb{P}^{2}\), with normal bundle \(N^{\prime}=\mathcal{O}(-1)^{2}\otimes\mathcal{O}(-e_{5}-e_{6}-e_{7})\). Next let \(S^{\prime\prime}\) be the strict transform of \(S\) under the blowup of the \(3A_{1}\) curves. Then \(S^{\prime\prime}\) is obtained from
\(S^{\prime}\) by blowing up the strict transforms of the 6 lines \(x_{i}=0\), \(x_{i}=x_{j}\). Since these are divisors on \(S^{\prime}\), we see that \(S^{\prime\prime}\cong S^{\prime}\), but the normal bundle to \(S^{\prime\prime}\) is
\[N^{\prime\prime}=N^{\prime}\otimes\mathcal{O}(-6h_{1}+3(e_{1}+e_{2}+e_{3}+e_{4 })+2(e_{5}+e_{6}+e_{7})).\]
Thus, blowing up \(S^{\prime\prime}\), one obtains that \(D_{a_{2}}\cong S^{\prime\prime}\times\mathbb{P}^{1}\cong Bl_{7}\mathbb{P}^{2} \times\mathbb{P}^{1}\), with
\[D_{a_{2}}|_{D_{a_{2}}} =-h_{1}-h_{2}-(e_{5}+e_{6}+e_{7})-6h_{1}+3(e_{1}+e_{2}+e_{3}+e_{4})+2(e_{5}+e_{6}+e_{7})\] \[=-7h_{1}+3(e_{1}+e_{2}+e_{3}+e_{4})+(e_{5}+e_{6}+e_{7})-h_{2},\]
as desired. The intersections of \(D_{a_{2}}\) with the other divisors are immediately verified. We explain the 2 types of intersections with Eckardt divisors. Recall from Proposition 2.2 that the \(2A_{1}\) surface \(S\) in \(\overline{Y}\) is a divisor of the form \(D_{ij}\) on \(D\cong\overline{M}_{0,6}\). There are 2 types of intersections of \(D\) with Eckardt divisors on \(\overline{Y}\). The first type gives a Keel-Vermeire divisor; there are 3 of these intersecting \(D_{ij}\), giving the first type of intersection of \(D_{a_{2}}\) with an Eckardt divisor. The second type comes in pairs \(D_{e}\cap D=D^{\prime}_{e}\cap D=D_{ij}\), giving the second type of intersection of \(D_{a_{2}}\) with an Eckardt divisor.
**Proposition 4.7**.: _There are \(540\) type \(a_{3}\) divisors. A given type \(a_{3}\) divisor \(D_{a_{3}}\) is isomorphic to \(\mathbb{P}^{1}\times Bl_{3}\mathbb{P}^{2}\), where \(Bl_{3}\mathbb{P}^{2}\) is the blowup of \(\mathbb{P}^{2}\) at the 3 coordinate points, with exceptional divisors \(e_{1},e_{2},e_{3}\). Let \(h_{1},h_{2}\) be the classes of the pullbacks of general hyperplanes from \(\mathbb{P}^{1}\) and \(\mathbb{P}^{2}\), and let \(e_{i}\) be the class of the divisor \(\mathbb{P}^{1}\times e_{i}\)._
_The nonempty intersections of \(D_{a_{3}}\) with the other boundary divisors are as follows._
1. _1 type_ \(a_{4}\) _divisor, intersecting in a divisor_ \(pt\times Bl_{3}\mathbb{P}^{2}\) _of class_ \(h_{1}\)_._
2. _3 type_ \(a_{2}\) _divisors, intersecting in the 3 divisors_ \(\mathbb{P}^{1}\times e_{i}\) _of class_ \(e_{i}\)_,_ \(i=1,2,3\)_._
3. _3 type_ \(a\) _divisors, intersecting in the 3 divisors_ \(\mathbb{P}^{1}\times\ell_{ij}\)_, where_ \(\ell_{ij}\) _is the strict transform of the coordinate line through the_ \(i\)_th and_ \(j\)_th coordinate points. The class of such an intersection is_ \(h_{2}-e_{i}-e_{j}\)_._
4. _2 type_ \(b\) _divisors, intersecting in 2 distinct divisors_ \(pt\times Bl_{3}\mathbb{P}^{2}\) _of class_ \(h_{1}\)_._
5. _7 Eckardt divisors:_ 1. _1 intersecting in a divisor_ \(pt\times Bl_{3}\mathbb{P}^{2}\) _of class_ \(h_{1}\)_._ 2. _3 pairs, where the 2 divisors in a given pair intersect in_ \(\mathbb{P}^{1}\times\ell_{i}\)_,_ \(\mathbb{P}^{1}\times\ell^{\prime}_{i}\)_, with_ \(\ell_{i},\ell^{\prime}_{i}\) _the strict transforms of 2 general lines meeting the_ \(i\)_th blown up point. The class of such an intersection is_ \(h_{2}-e_{i}\)_._
_The class of \(D_{a_{3}}|_{D_{a_{3}}}\) is_
\[-2h_{1}-h_{2}.\]
Proof.: In \(\overline{Y}\), let \(C\) be a \(3A_{1}\) curve and \(S\) a \(2A_{1}\) surface containing \(C\). The curve \(C\) in \(S\cong Bl_{4}\mathbb{P}^{2}\) appears as the strict transform of one of the lines \(\ell_{ij}\), with class \(h-e_{i}-e_{j}\), through 2 of the blown up points. It follows that \(N_{C/S}=\mathcal{O}(-1)\). Since \(N_{S/\overline{Y}}=\mathcal{O}(-1)^{2}\) (see the proof of Proposition 4.6), it follows from the standard exact sequence
\[0\to N_{C/S}\to N_{C/\overline{Y}}\to N_{S/\overline{Y}}|_{C}\to 0\]
that \(N_{C/\overline{Y}}=\mathcal{O}(-1)^{3}\). Let \(C^{\prime}\) be the strict transform of \(C\) under the blowup of all \(4A_{1}\) points. Since \(C\) contains exactly one \(4A_{1}\) point, we see that \(C^{\prime}\cong C\cong\mathbb{P}^{1}\), but with normal bundle \(N^{\prime}\cong\mathcal{O}(-2)^{3}\). Therefore, blowing up \(C^{\prime}\), one obtains exceptional divisor \(F\cong\mathbb{P}^{1}\times\mathbb{P}^{2}\) with \(F|_{F}\)=\(-2h_{1}-h_{2}\). Observe that \(F\) intersects the strict transforms of 3 \(2A_{1}\) surfaces transversally in the 3 lines \(\mathbb{P}^{1}\times p_{i}\), where \(p_{i}\), \(i=1,2,3\) are the coordinate points on \(\mathbb{P}^{2}\). Thus, blowing up the \(2A_{1}\) surfaces, one obtains that the strict transform \(D_{a_{3}}\) of \(F\) is isomorphic to \(\mathbb{P}^{1}\times Bl_{3}\mathbb{P}^{2}\), with \(D_{a_{3}}|_{D_{a_{3}}}\)= \(-2h_{1}-h_{2}\). The intersections with the other divisors are immediately verified. (As in the proof of Proposition 4.6, the 2 types of intersections with Eckardt divisors are described, respectively, by Eckardt divisors which restrict to Keel-Vermeire or \(D_{ij}\) divisors on an \(A_{1}\) divisor \(D\) containing \(S\) in \(\overline{Y}\).)
**Proposition 4.8**.: _There are \(135\) type \(a_{4}\) divisors. A given type \(a_{4}\) divisor \(D_{a_{4}}\) is the blowup of \(\mathbb{P}^{3}\) at the 4 coordinate points and the 6 lines between them. Let \(h\) be the class of the pullback of a general hyperplane, \(e_{i}\) the class of the strict transform of the exceptional divisor over the \(i\)th blown up point, and \(e_{ij}\) the class of the exceptional divisor over the strict transform of the line through the \(i\)th and \(j\)th point._
_The nonempty intersections of \(D_{a_{4}}\) with the other boundary divisors and Eckardt divisors are as follows._
1. _4 type_ \(a_{3}\) _divisors, intersecting in the_ 4 \(e_{i}\)_'s._
_._
2. _6 type_ \(a_{2}\) _divisors, intersecting in the 6_ \(e_{ij}\)_'s._
3. _4 type_ \(a\) _divisors, intersecting in the strict transforms of the 4 coordinate hyperplanes. The class of such an intersection is_ \(h-e_{i}-e_{j}-e_{k}-e_{ij}-e_{ik}-e_{jk}\)_, where_ \(i,j,k\in[4]\) _are distinct._
4. _12 Eckardt divisors, coming in 6 pairs. The 2 Eckardt divisors in a given pair are the strict transforms of 2 general planes containing one of the blown up lines. (Note these planes are disjoint on_ \(D_{a_{4}}\)_.) The class of such an intersection is_ \(h-e_{i}-e_{j}-e_{ij}\)_._
_The class of \(D_{a_{4}}|_{D_{a_{4}}}\) is \(-h\)._
Proof.: Let \(p\) be a \(4A_{1}\) point in \(\overline{Y}\). Blowing up \(p\), one obtains the exceptional divisor \(F\cong\mathbb{P}^{3}\) with \(F|_{F}=-h\). Observe that \(F\) intersects 4 \(3A_{1}\) curves transversally in the 4 coordinate points on \(\mathbb{P}^{3}\), and 6 \(2A_{1}\) surfaces transversally in the 6 lines through 2 of the coordinate points on \(\mathbb{P}^{3}\). The result follows.
**Remark 4.9**.: The intersections of boundary divisors and Eckardt divisors on \(\widetilde{Y}\) can also be ascertained from the explicit descriptions of the surfaces parameterized by boundary divisors, see [10].
### Curve and surface strata
We label the types of strata in \(\widetilde{Y}\) by juxtaposition of the types of the corresponding divisors--thus for instance \(aa_{2}e\) denotes a curve stratum formed by the intersection of a divisor of type \(a\), a divisor of type \(a_{2}\), and a divisor of type \(e\). We say a stratum given by the intersection of divisors \(D_{1},\dots,D_{k}\) is a _boundary stratum_ if each \(D_{i}\) is a boundary divisor (i.e., of type \(a\), \(b\), \(a_{2}\), \(a_{3}\), or \(a_{4}\)), and a _mixed stratum_ if at least one \(D_{i}\) is an Eckardt divisor.
**Notation 4.10**.: Let \(\Gamma\) be the collection of 1-dimensional boundary strata as well as the 1-dimensional mixed strata of type \(aa_{2}e\). Explicitly, \(\Gamma\) is the collection of curve strata of \(\widetilde{Y}\) of the following types:
\[aa_{2}a_{3},aa_{2}a_{4},aa_{3}a_{4},a_{2}a_{3}a_{4},aa_{2}b,aa_{3}b,a_{2}a_{ 3}b,aa_{2}e.\]
We will see later that the curves in \(\Gamma\) generate the cone of curves of \(\widetilde{Y}\) (Theorem 5.11).
**Proposition 4.11**.: _The boundary surface strata as well as the curve strata in \(\Gamma\) are described in Table 2, where \(Bl_{7}\mathbb{P}^{2}\) and \(Bl_{9}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) are as in Notation 4.5 and Remark 4.3, respectively._
Proof.: This is a direct observation given the descriptions of the boundary and Eckardt divisors and their intersections in Section 4.1.
**Remark 4.12**.: Of course, there are more types of mixed strata than just the curve stratum \(aa_{2}e\)--we do not consider them as they will not be necessary for our purposes.
### Intersection-theoretic results on \(\widetilde{Y}(E_{6})\)
**Proposition 4.13**.:
1. _The_ \(W(E_{6})\)_-invariant Picard group of_ \(\widetilde{Y}(E_{6})\) _is generated by the classes_ \(B_{a}\)_,_ \(B_{b}\)_,_ \(B_{a_{2}}\)_,_ \(B_{a_{3}}\)_,_ \(B_{a_{4}}\) _of the sums of the divisors of the respective types._
2. _The class_ \(B_{e}\) _of the sum of the Eckardt divisors on_ \(\widetilde{Y}(E_{6})\) _is given by_ \[B_{e}=\frac{25B_{a}+42B_{a_{2}}+51B_{a_{3}}+52B_{a_{4}}+27B_{b}}{4}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{8}{c|}{_Curve type_} \\ \hline _Stratum_ & _Surface_ & \(aa_{2}a_{3}\) & \(aa_{2}a_{4}\) & \(aa_{3}a_{4}\) & \(a_{2}a_{3}a_{4}\) & \(aa_{2}b\) & \(aa_{3}b\) & \(a_{2}a_{3}b\) & \(aa_{2}e\) \\ \hline \(aa_{2}\) & \(Bl_{7}\mathbb{P}^{2}\) & \(\ell_{ijk}\) & \(e_{5},\dots,e_{7}\) & & & \(e_{1},\dots,e_{4}\) & & & \(\ell_{ij}\) \\ \(aa_{3}\) & \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) & \(\mathbb{P}^{1}\times pt\) & & \(pt\times\mathbb{P}^{1}\) & & & \(pt\times\mathbb{P}^{1}\) & & \\ \(aa_{4}\) & \(Bl_{3}\mathbb{P}^{2}\) & & \(\ell_{ij}\) & \(e_{i}\) & & & & & \\ \(a_{2}a_{3}\) & \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) & \(\mathbb{P}^{1}\times pt\) & & & \(pt\times\mathbb{P}^{1}\) & & & \(pt\times\mathbb{P}^{1}\) & \\ \(a_{2}a_{4}\) & \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) & & \(\mathbb{P}^{1}\times pt\) & & \(pt\times\mathbb{P}^{1}\) & & & & \\ \(a_{3}a_{4}\) & \(Bl_{3}\mathbb{P}^{2}\) & & & \(\ell_{ij}\) & \(e_{i}\) & & & & \\ \(ab\) & \(Bl_{9}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) & & & & & \(pt\times\mathbb{P}^{1}\), \(\mathbb{P}^{1}\times pt\) & \(e_{pq}\) & & \\ \(a_{2}b\) & \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) & & & & & \(\mathbb{P}^{1}\times pt\) & & \(pt\times\mathbb{P}^{1}\) & \\ \(a_{3}b\) & \(Bl_{3}\mathbb{P}^{2}\) & & & & & & \(\ell_{ij}\) & \(e_{i}\) & \\ \hline \end{tabular}
\end{table}
Table 2. Curves on surface strata.
3. _The canonical class of_ \(\widetilde{Y}\) _is given by_ \[K_{\widetilde{Y}}=\frac{-B_{a}+2B_{a_{2}}+5B_{a_{3}}+8B_{a_{4}}+B_{b}}{4}.\]
Proof.:
1. Since \(\widetilde{Y}\) is obtained from \(\overline{Y}\) by a sequence of smooth blowups, this follows from Proposition 2.6 and the usual formula for the Picard group of a smooth blowup [10, Exercise II.8.5]. (See also [1, Theorem 1.9].)
2. In \(\overline{Y}\), a given \(4A_{1}\) point is contained in \(4\) \(A_{1}\) divisors and \(12\) Eckardt divisors and is disjoint from all \(A_{2}^{3}\) divisors, a given \(3A_{1}\) curve is contained in \(3\) \(A_{1}\) divisors and \(6\) Eckardt divisors, and is either disjoint from or intersects transversally the remaining Eckardt and \(A_{2}^{3}\) divisors, and a given \(2A_{1}\) surface is contained in \(2\) \(A_{1}\) divisors and \(2\) Eckardt divisors, and is either disjoint from or intersects transversally the remaining Eckardt and \(A_{2}^{3}\) divisors. It follows that \[B_{a} =B_{A_{1}}-2B_{a_{2}}-3B_{a_{3}}-4B_{a_{4}},\] \[B_{b} =B_{A_{2}^{3}},\] \[B_{e} =E-2B_{a_{2}}-6B_{a_{3}}-12B_{a_{4}},\] where \(B_{A_{1}}\), \(B_{A_{2}^{3}}\), and \(E\) denote the pullbacks to \(\widetilde{Y}\) of the sums of the \(A_{1}\), \(A_{2}^{3}\), and Eckardt divisors on \(\overline{Y}\), respectively. The result follows using the formula for \(E\) from Proposition 2.6.
3. This is a direct computation using the formulas for \(B_{A_{1}}\), \(B_{A_{2}^{3}}\) above, the usual formula for the canonical class of a smooth blowup [10, Exercise II.8.5], and the formula for \(K_{\overline{Y}}\) from Proposition 2.6.
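For instance, for part (3): the blowups of the \(4A_{1}\) points, \(3A_{1}\) curves, and \(2A_{1}\) surfaces have centers of codimension 4, 3, and 2, contributing discrepancies 3, 2, and 1 respectively, so (assuming \(K_{\overline{Y}}=\frac{-B_{A_{1}}+B_{A_{2}^{3}}}{4}\), the formula of Proposition 2.6)
\[K_{\widetilde{Y}}=\pi^{*}K_{\overline{Y}}+3B_{a_{4}}+2B_{a_{3}}+B_{a_{2}}=\frac{-(B_{a}+2B_{a_{2}}+3B_{a_{3}}+4B_{a_{4}})+B_{b}+4B_{a_{2}}+8B_{a_{3}}+12B_{a_{4}}}{4}=\frac{-B_{a}+2B_{a_{2}}+5B_{a_{3}}+8B_{a_{4}}+B_{b}}{4}.\]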
**Proposition 4.14**.: _Intersection numbers of symmetric divisors on \(\widetilde{Y}(E_{6})\) and curves in \(\Gamma\) are given in the following table._
Proof.: This is a direct calculation using the descriptions of the curves in \(\Gamma\) from Proposition 4.11 and Section 4.1, and restricting to boundary divisors using the results of Section 4.1, analogous to the proof of Proposition 2.7.
We describe in detail the calculations for the last column. The curves of type \(aa_{2}e\) appear in the \(a_{2}\) divisors \(\cong Bl_{7}\mathbb{P}^{2}\times\mathbb{P}^{1}\) as the curves of the form \(\ell_{ij}\times pt\), where in keeping with Notation 4.5, \(\ell_{ij}\) is the line on \(Bl_{7}\mathbb{P}^{2}\) obtained as the strict transform of the line through \(p_{i}\) and \(p_{j}\), \(i,j\in\{5,6,7\}\) distinct. In the notation of Proposition 4.6, \(aa_{2}e\) has class \((h_{1}-e_{i}-e_{j})h_{2}\) in the Chow group of \(Bl_{7}\mathbb{P}^{2}\times\mathbb{P}^{1}\). Fix an \(a_{2}\) divisor \(D\). The restrictions of the symmetric boundary and Eckardt divisors to \(D\) are computed by Proposition 4.6 to be as follows.
\[B_{a}|_{B_{a_{2}}} =2h_{2},\] \[B_{a_{2}}|_{B_{a_{2}}} =-7h_{1}+3(e_{1}+e_{2}+e_{3}+e_{4})+(e_{5}+e_{6}+e_{7})-h_{2},\] \[B_{a_{3}}|_{B_{a_{2}}} =6h_{1}-3(e_{1}+e_{2}+e_{3}+e_{4})-2(e_{5}+e_{6}+e_{7}),\] \[B_{a_{4}}|_{B_{a_{2}}} =e_{5}+e_{6}+e_{7},\] \[B_{b}|_{B_{a_{2}}} =e_{1}+e_{2}+e_{3}+e_{4},\] \[B_{e}|_{B_{a_{2}}} =3h_{1}-2(e_{5}+e_{6}+e_{7})+2h_{2}.\]
Now by standard intersection-theoretic computations on \(Bl_{7}\mathbb{P}^{2}\times\mathbb{P}^{1}\), we find the intersection numbers with the \(aa_{2}e\) curve of class \((h_{1}-e_{5}-e_{6})h_{2}\) to be as in the last column of Table 3.
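As a sample entry: since \(h_{2}^{2}=0\), intersections with the curve class \((h_{1}-e_{5}-e_{6})h_{2}\) reduce to intersection numbers on \(Bl_{7}\mathbb{P}^{2}\), where \(h_{1}^{2}=1\), \(e_{i}^{2}=-1\), and \(h_{1}\cdot e_{i}=e_{i}\cdot e_{j}=0\) for \(i\neq j\). For example,
\[B_{a_{2}}\cdot(h_{1}-e_{5}-e_{6})h_{2}=\Big(-7h_{1}+3\sum_{i=1}^{4}e_{i}+(e_{5}+e_{6}+e_{7})-h_{2}\Big)\cdot(h_{1}-e_{5}-e_{6})h_{2}=-7+0+(1+1+0)-0=-5.\]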
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline & \(aa_{2}a_{3}\) & \(aa_{2}a_{4}\) & \(aa_{3}a_{4}\) & \(a_{2}a_{3}a_{4}\) & \(aa_{2}b\) & \(aa_{3}b\) & \(a_{2}a_{3}b\) & \(aa_{2}e\) \\ \hline \hline \(B_{a}\) & \(0\) & \(0\) & \(-1\) & \(2\) & \(0\) & \(-1\) & \(2\) & \(0\) \\ \(B_{a_{2}}\) & \(0\) & \(-1\) & \(2\) & \(-1\) & \(-3\) & \(2\) & \(-1\) & \(-5\) \\ \(B_{a_{3}}\) & \(-2\) & \(2\) & \(-1\) & \(0\) & \(3\) & \(-1\) & \(0\) & \(2\) \\ \(B_{a_{4}}\) & \(1\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(2\) \\ \(B_{b}\) & \(2\) & \(0\) & \(0\) & \(0\) & \(-1\) & \(0\) & \(0\) & \(0\) \\ \(B_{e}\) & \(1\) & \(2\) & \(2\) & \(2\) & \(0\) & \(2\) & \(2\) & \(-1\) \\ \hline \end{tabular}
\end{table}
Table 3. Intersection numbers of symmetric divisors on \(\widetilde{Y}(E_{6})\) and curves in \(\Gamma\).
**Remark 4.15**.: Note in particular that the curves of types \(aa_{3}a_{4}\) and \(aa_{3}b\) have the same intersection numbers with any \(W(E_{6})\)-invariant boundary divisor; likewise for the curves of types \(a_{2}a_{3}a_{4}\) and \(a_{2}a_{3}b\) (cf. Table 3).
## 5. \(W(E_{6})\)-invariant birational geometry of \(\widetilde{Y}(E_{6})\)
In this section we study the \(W(E_{6})\)-invariant birational geometry of \(\widetilde{Y}=\widetilde{Y}(E_{6})\), culminating in a description of the \(W(E_{6})\)-invariant cones of effective divisors and curves (Theorems 5.10 and 5.11), and a complete description of the log minimal model program for \(\widetilde{Y}\) with respect to the divisor \(K_{\widetilde{Y}}+cB+dE\), where \(B\) is the sum of the boundary divisors and \(E\) is the sum of the Eckardt divisors (Theorem 5.15).
In what follows we continue with the notation of Section 4.1.
### Two-dimensional boundary strata
**Lemma 5.1**.: _Let \(X=Bl_{7}\mathbb{P}^{2}\) be as in Notation 4.5. The effective cone of \(X\) is generated by the \(e_{i}\), \(\ell_{ijk}\), and \(\ell_{ij}\). In particular, a divisor of the form_
\[\Delta=c_{1}\sum_{i=1}^{4}e_{i}+c_{2}\sum_{i=5}^{7}e_{i}+c_{3}\sum\ell_{ijk}+ c_{4}\sum\ell_{ij}\]
_on \(X\) is effective if and only if \(c_{1},\dots,c_{4}\geq 0\)._
Proof.: See, for instance, [10, Section 7].
**Lemma 5.2**.: _Let \(X\) be the blowup of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) at the 9 points \(p\times q\) for \(p,q\in\{0,1,\infty\}\), as in Remark 4.3. Let \(\sum e_{pq}\) be the sum of the exceptional divisors, \(\sum h_{p}\) the sum of the strict transforms of the rulings \(p\times\mathbb{P}^{1}\) for \(p\in\{0,1,\infty\}\), and \(\sum k_{q}\) the sum of the strict transforms of the rulings \(\mathbb{P}^{1}\times q\) for \(q\in\{0,1,\infty\}\). A divisor on \(X\) of the form_
\[\Delta=c_{1}\sum e_{pq}+c_{2}\sum h_{p}+c_{3}\sum k_{q}\]
_is effective if and only if \(c_{1},c_{2},c_{3}\geq 0\)._
Proof.: By subtracting the fixed components, we can assume that \(\Delta\) does not contain any of the \(e_{pq},h_{p},k_{q}\). Then
\[\Delta\cdot e_{pq}=-c_{1}+c_{2}+c_{3}\geq 0,\] \[\Delta\cdot h_{p}=3c_{1}-3c_{2}\geq 0,\] \[\Delta\cdot k_{q}=3c_{1}-3c_{3}\geq 0.\]
The first inequality gives \(c_{2}+c_{3}\geq c_{1}\), and the latter 2 inequalities give \(c_{1}\geq c_{2},c_{3}\). So we see that \(c_{2}+c_{3}\geq c_{1}\geq c_{2},c_{3}\), implying that \(c_{2}\geq 0\) and \(c_{3}\geq 0\), hence \(c_{1}\geq 0\) as well.
For the sake of completeness we also recall the well-known descriptions of the effective cones of the other surface strata \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(Bl_{3}\mathbb{P}^{2}\). (For proofs, one may use for instance that these are smooth projective toric varieties, and apply [11, Lemma 15.1.8].)
**Lemma 5.3**.: _The effective cone of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) is generated by the divisor classes \(h_{1},h_{2}\) corresponding to the two rulings. In particular, a divisor of the form_
\[\Delta=c_{1}h_{1}+c_{2}h_{2}\]
_is effective if and only if \(c_{1},c_{2}\geq 0\)._
**Lemma 5.4**.: _The effective cone of \(Bl_{3}\mathbb{P}^{2}\) is generated by the 3 exceptional divisors \(e_{1},e_{2},e_{3}\), and the strict transforms \(\ell_{12},\ell_{13},\ell_{23}\) of the 3 lines passing through 2 of the blown-up points. In particular, a divisor of the form_
\[\Delta=c_{1}\sum e_{i}+c_{2}\sum\ell_{ij}\]
_is effective if and only if \(c_{1},c_{2}\geq 0\)._
### Boundary divisors
**Lemma 5.5**.: _Let \(X\) be the blowup of \(\mathbb{P}^{3}\) at 4 points in general position, then the 6 lines between them. The effective cone of \(X\) is generated by the strict transforms of the exceptional divisors \(e_{i}\) over the points, the exceptional divisors \(e_{ij}\) over the lines, and the strict transforms \(h_{ijk}\) of the hyperplanes passing through 3 of the points. In particular, a divisor of the form_
\[\Delta=c_{1}\sum e_{i}+c_{2}\sum e_{ij}+c_{3}\sum h_{ijk}\]
_is effective if and only if \(c_{1},c_{2},c_{3}\geq 0\)._
Proof.: Note \(X\) is toric and the described divisors are the torus-invariant boundary divisors. The result is standard, see [11, Lemma 15.1.8].
**Lemma 5.6**.: _Let \(X=\mathbb{P}^{1}\times Bl_{3}\mathbb{P}^{2}\), where \(Bl_{3}\mathbb{P}^{2}\) is the blowup of \(\mathbb{P}^{2}\) at 3 points in general position. In the notation of Proposition 4.7, the effective cone of \(X\) is generated by the classes \(h_{1}\), \(e_{i},i=1,2,3\), \(\ell_{ij},i,j\in\{1,2,3\}\) distinct, of the divisors of the forms \(pt\times Bl_{3}\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times e_{i}\), and \(\mathbb{P}^{1}\times\ell_{ij}\), respectively. In particular, a divisor of the form_
\[\Delta=c_{1}h_{1}+c_{2}\sum e_{i}+c_{3}\sum\ell_{ij}\]
_is effective if and only if \(c_{1},c_{2},c_{3}\geq 0\)._
Proof.: Again note \(X\) is toric and the described divisors are the torus-invariant boundary divisors, so the result follows by [11, Lemma 15.1.8].
**Lemma 5.7**.: _Let \(X=Bl_{7}\mathbb{P}^{2}\times\mathbb{P}^{1}\), where \(Bl_{7}\mathbb{P}^{2}\) is as in Notation 4.5. Then the effective cone of \(X\) is generated by the classes \(h_{2}\), \(e_{i},i=1,\ldots,7\), \(\ell_{ijk}\), and \(\ell_{ij}\), where the notation is as in Proposition 4.6. In particular, a divisor of the form_
\[\Delta=c_{1}\sum_{i=1}^{4}e_{i}+c_{2}\sum_{i=5}^{7}e_{i}+c_{3}\sum\ell_{ijk}+c _{4}\sum\ell_{ij}+c_{5}h_{2}\]
_is effective if and only if \(c_{1},c_{2},c_{3},c_{4},c_{5}\geq 0\)._
Proof.: Note \(\overline{\operatorname{Eff}}(X)=\overline{\operatorname{Eff}}(Bl_{7} \mathbb{P}^{2})\times\overline{\operatorname{Eff}}(\mathbb{P}^{1})\), so the result follows by Lemma 5.1.
**Lemma 5.8**.: _Let \(X\) be the blowup of \(\overline{M}_{0,6}\) at the 15 points \(D_{ij}\cap D_{kl}\cap D_{mn}\) and the 45 lines \(D_{ij}\cap D_{kl}\). In the notation of Proposition 4.1, a divisor on \(X\) of the form_
\[\Delta=c_{1}\sum F_{ij,kl,mn}+c_{2}\sum F_{ij,kl}+c_{3}\sum F_{ij}+c_{4}\sum F _{ijk}\]
_is effective if and only if \(c_{1},c_{2},c_{3},c_{4}\geq 0\)._
Proof.: We can assume without loss of generality that \(\Delta\) does not contain any of the \(F\)'s, so \(\Delta|_{F}\) is effective for every \(F\). The divisor \(F_{ij}\) is isomorphic to \(Bl_{7}\mathbb{P}^{2}\), and from the blowup construction of \(X\) we see that
\[F_{ij}|_{F_{ij}} =-h-e_{5}-e_{6}-e_{7}-\Big(6h-3\sum_{i=1}^{4}e_{i}-2\sum_{i=5}^{7}e_{i}\Big)\] \[=\frac{-7\sum\ell_{ijk}-3\sum_{i=1}^{4}e_{i}-8\sum_{i=5}^{7}e_{i}}{6}.\]
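The second equality above is a change of basis: each \(\ell_{ijk}\) has class \(h-e_{i}-e_{j}-e_{k}\) and passes through two of the first four and one of the last three blown up points, so
\[\sum\ell_{ijk}=6h-3\sum_{i=1}^{4}e_{i}-2\sum_{i=5}^{7}e_{i},\qquad\text{i.e.,}\qquad h=\frac{1}{6}\Big(\sum\ell_{ijk}+3\sum_{i=1}^{4}e_{i}+2\sum_{i=5}^{7}e_{i}\Big).\]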
Thus we compute that
\[\Delta|_{F_{ij}}=\frac{(6c_{2}-7c_{3})\sum\ell_{ijk}+(6c_{4}-3c_{3})\sum_{i=1 }^{4}e_{i}+(6c_{1}-8c_{3})\sum_{i=5}^{7}e_{i}}{6}.\]
It follows from Lemma 5.1 that \(\Delta|_{F_{ij}}\) is effective \(\iff\) \(6c_{2}\geq 7c_{3}\), \(6c_{4}\geq 3c_{3}\), \(6c_{1}\geq 8c_{3}\). Thus to show that \(c_{1},\ldots,c_{4}\geq 0\), it suffices to show that \(c_{3}\geq 0\). For this, we interpret \(X\) as the boundary divisor of \(\widetilde{Y}(E_{6})\) whose general point parameterizes the stable replacement of a marked cubic surface with a single \(A_{1}\) singularity, obtained by blowing up 6 points on a conic. Let \(C\) be the curve in \(X\) obtained by fixing the first 5 points in general position and varying the 6th point along the conic through the first 5. Then \(C\) is a moving curve in \(X\), so \(\Delta\cdot C\geq 0\). On the other hand, note that \(C\) intersects 5 of the boundary divisors (the \(F_{i6}\)'s, each transversally in one point)
when the 6th point coincides with one of the first 5, and otherwise \(C\) does not intersect any of the \(F\)'s. We conclude that
\[\Delta\cdot C=5c_{3}\geq 0,\]
so \(c_{3}\geq 0\).
**Lemma 5.9**.: _Let \(X\) be the blowup of \((\mathbb{P}^{1})^{3}\) at the 27 points \(p\times q\times r\) and the 27 lines \(p\times q\times\mathbb{P}^{1}\), \(p\times\mathbb{P}^{1}\times r\), \(\mathbb{P}^{1}\times q\times r\), \(p,q,r\in\{0,1,\infty\}\), as in Proposition 4.2. Let \(F_{p}=\sum e_{pqr}\) denote the sum of the strict transforms of the exceptional divisors over the blown up points, \(F_{\ell}=\sum e_{pqx}+\sum e_{pxr}+\sum e_{xqr}\) the sum of the exceptional divisors over the strict transforms of the lines, and \(F_{s}\) the sum of the strict transforms of the hypersurfaces \(p\times\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(\mathbb{P}^{1}\times q\times\mathbb{P}^{1}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times r\), \(p,q,r\in\{0,1,\infty\}\). Then a divisor on \(X\) of the form_
\[\Delta=c_{1}F_{p}+c_{2}F_{\ell}+c_{3}F_{s}\]
_is effective if and only if \(c_{1},c_{2},c_{3}\geq 0\)._
Proof.: We can assume without loss of generality that \(\Delta\) does not contain any of the given hypersurfaces, so that \(\Delta\) restricts to an effective divisor on each such hypersurface.
The strict transform of the exceptional divisor over a point is isomorphic to \(Bl_{3}\mathbb{P}^{2}\), and the restriction of \(\Delta\) to such an exceptional divisor is
\[\frac{(3c_{3}-c_{1})\sum\ell_{ij}+(3c_{2}-2c_{1})\sum e_{i}}{3},\]
hence \(3c_{3}\geq c_{1}\) and \(3c_{2}\geq 2c_{1}\). Thus it suffices to show \(c_{1}\geq 0\).
The exceptional divisor over a line is isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), and the restriction of \(\Delta\) to such an exceptional divisor is
\[(3c_{1}-3c_{2})h_{1}+(2c_{3}-c_{2})h_{2},\]
hence \(3c_{1}\geq 3c_{2}\) and \(2c_{3}\geq c_{2}\). Since \(3c_{2}\geq 2c_{1}\), the former inequality gives that \(c_{1}\geq 0\), and then the inequalities from the previous restriction give \(c_{2},c_{3}\geq 0\) as well.
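Explicitly, combining the inequalities from the two restrictions:
\[c_{1}\geq c_{2}\geq\tfrac{2}{3}c_{1}\ \Longrightarrow\ \tfrac{1}{3}c_{1}\geq 0\ \Longrightarrow\ c_{1}\geq 0,\qquad\text{and then}\qquad c_{2}\geq\tfrac{2}{3}c_{1}\geq 0,\quad c_{3}\geq\tfrac{1}{3}c_{1}\geq 0.\]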
### The \(W(E_{6})\)-invariant cones of effective divisors and curves on \(\widetilde{Y}(E_{6})\)
**Theorem 5.10**.: _The \(W(E_{6})\)-invariant effective cone of \(\widetilde{Y}(E_{6})\) is the closed cone spanned by the \(W(E_{6})\)-invariant boundary divisors \(B_{a}\), \(B_{a_{2}}\), \(B_{a_{3}}\), \(B_{a_{4}}\), and \(B_{b}\)._
Proof.: Let
\[\Delta=c_{a_{4}}B_{a_{4}}+c_{a_{3}}B_{a_{3}}+c_{a_{2}}B_{a_{2}}+c_{a}B_{a}+c_{ b}B_{b}\]
be a \(W(E_{6})\)-invariant effective divisor. We wish to show that all coefficients are nonnegative. We can assume without loss of generality that \(\Delta\) does not contain any boundary divisor, so that the restriction of \(\Delta\) to each irreducible boundary divisor is effective. In particular
\[\Delta|_{D_{a}}\!=\frac{(5c_{a_{4}}-8c_{a})\sum F_{ij,kl,mn}+(5c_{a_{3}}-7c_{ a})\sum F_{ij,kl}+(5c_{a_{2}}-6c_{a})\sum F_{ij}+(5c_{b}-3c_{a})\sum F_{ijk}}{5}\]
is effective. Then by Lemma 5.8 all coefficients of \(\Delta|_{D_{a}}\) are nonnegative, thus to show all coefficients of \(\Delta\) are nonnegative it suffices to show that \(c_{a}\geq 0\). For this, let \(C\) be the moving curve in \(\widetilde{Y}(E_{6})\) obtained by fixing 5 points in general position and varying the last point along a general line. This line intersects the conic through the first 5 points in 2 points, and the line through any 2 of the first 5 points in 1 point. It follows that \(C\cdot D_{7}=2\) and \(C\cdot D_{ij6}=1\) for \(ij\subset[5]\), and otherwise \(C\) does not intersect any boundary divisor of \(\widetilde{Y}\). Since \(C\) is moving, we conclude that \(\Delta\cdot C=12c_{a}\geq 0\), so \(c_{a}\geq 0\).
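For the record, the restriction displayed in the proof is assembled from Proposition 4.1: the divisors of types \(a_{4}\), \(a_{3}\), \(a_{2}\), \(b\) meet \(D_{a}\) in the \(F_{ij,kl,mn}\)'s, \(F_{ij,kl}\)'s, \(F_{ij}\)'s, and \(F_{ijk}\)'s respectively, no other type \(a\) divisor meets \(D_{a}\), and \(D_{a}|_{D_{a}}\) is the class computed there, so that
\[\Delta|_{D_{a}}=c_{a_{4}}\sum F_{ij,kl,mn}+c_{a_{3}}\sum F_{ij,kl}+c_{a_{2}}\sum F_{ij}+c_{b}\sum F_{ijk}+c_{a}\cdot\frac{-8\sum F_{ij,kl,mn}-7\sum F_{ij,kl}-6\sum F_{ij}-3\sum F_{ijk}}{5}.\]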
Recall that \(\Gamma\) is the set of curves in \(\widetilde{Y}(E_{6})\) of the types
\[aa_{2}a_{3},aa_{2}a_{4},aa_{3}a_{4},a_{2}a_{3}a_{4},aa_{2}b,aa_{3}b,a_{2}a_{3} b,aa_{2}e.\]
These are the 1-dimensional boundary strata of \(\widetilde{Y}(E_{6})\), together with the curves of type \(aa_{2}e\) involving the intersection with an Eckardt divisor. These latter curves appear on 2-dimensional boundary strata of type \(aa_{2}\), isomorphic to \(Bl_{7}\mathbb{P}^{2}\) as the strict transforms of the lines between 2 of the last 3 blown-up points (cf. Proposition 4.11).
**Theorem 5.11**.: _The \(W(E_{6})\)-invariant cone of curves of \(\widetilde{Y}(E_{6})\) is the closed cone spanned by the \(W(E_{6})\)-invariant curves of the types listed in \(\Gamma\)._
Proof.: Since by Theorem 5.10 the \(W(E_{6})\)-invariant effective cone of \(\widetilde{Y}\) is generated by the \(W(E_{6})\)-invariant boundary divisors, it follows by [13, Corollary 2.3] that the \(W(E_{6})\)-invariant cone of curves of \(\widetilde{Y}\) is generated by curves contained in the boundary. The restriction of a \(W(E_{6})\)-invariant boundary divisor of \(\widetilde{Y}\) to an irreducible boundary divisor \(D\) of \(\widetilde{Y}\) can be written as a symmetric divisor on \(D\) of the appropriate form given in the corresponding lemma in Section 5.2. Thus, applying that lemma and [13, Corollary 2.3], we see that the \(W(E_{6})\)-invariant cone of curves of \(\widetilde{Y}\) is generated by curves contained in the 2-dimensional boundary strata of \(\widetilde{Y}\). Restricting a \(W(E_{6})\)-invariant boundary divisor of \(\widetilde{Y}\) further to a 2-dimensional boundary stratum \(S\), we get a symmetric divisor on \(S\) of the appropriate form given in the corresponding lemma in Section 5.1. Then it follows from Proposition 4.11 and the lemmas in Section 5.1 that the effective cones of the 2-dimensional boundary strata are generated by the curves in \(\Gamma\), so the result follows.
**Corollary 5.12**.: _A \(W(E_{6})\)-invariant divisor \(\Delta\) on \(\widetilde{Y}(E_{6})\) is nef if and only if it intersects every curve in \(\Gamma\) nonnegatively._
Proof.: Since the nef cone is the dual of the cone of effective curves, this follows from Theorem 5.11.
### The log minimal model program for \(\widetilde{Y}(E_{6})\)
Let \(B\) be the sum of all boundary divisors and \(E\) the sum of all Eckardt divisors on \(\widetilde{Y}(E_{6})\). In this section we compute the log canonical models of the pair \((\widetilde{Y}(E_{6}),cB+dE)\), as one varies the coefficients \(c\) and \(d\).
**Lemma 5.13**.: _The pair \((\widetilde{Y}(E_{6}),cB+dE)\) has log canonical singularities as long as \(0\leq c\leq 1\) and \(0\leq d\leq 2/3\)._
Proof.: Observe that \(\widetilde{Y}(E_{6})\) is smooth and \(B+E\) has normal crossings except at the triple intersections of three Eckardt divisors which have no lines in common. Blowing up these triple intersections therefore gives a log resolution \(\pi:Z\to\widetilde{Y}(E_{6})\) of \((\widetilde{Y}(E_{6}),B+E)\). Let \(F\) be the exceptional locus of \(\pi\), i.e., the sum of the exceptional divisors over the triple intersections of Eckardt divisors. Then the class of the strict transform of \(E\) is \(\widetilde{E}=\pi^{*}E-3F\), and the class of the strict transform of \(B\) is \(\widetilde{B}=\pi^{*}B\). Thus
\[\pi^{*}(K_{\widetilde{Y}}+cB+dE)=\pi^{*}K_{\widetilde{Y}}+c\widetilde{B}+d \widetilde{E}+3dF,\]
so
\[K_{Z}=\pi^{*}K_{\widetilde{Y}}+F=\pi^{*}(K_{\widetilde{Y}}+cB+dE)-c\widetilde {B}-d\widetilde{E}+(1-3d)F.\]
It follows that \((\widetilde{Y}(E_{6}),cB+dE)\) is log canonical as long as \(c\leq 1\) and \(d\leq 2/3\).
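In terms of discrepancies: the exceptional divisor \(F\) appears in the last display with coefficient \(1-3d\), so the log canonical condition along \(F\) is
\[1-3d\geq-1\iff d\leq\tfrac{2}{3},\]
while the coefficients \(c\) and \(d\) of \(\widetilde{B}\) and \(\widetilde{E}\) are at most \(1\) by hypothesis.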
**Remark 5.14**.: Note analogously that a smooth weighted marked cubic surface \((S,dB)\) with an Eckardt point is log canonical if and only if \(d\leq 2/3\), see [11, Section 7.1].
**Theorem 5.15**.: _Fix \(0\leq c\leq 1\) and \(0\leq d\leq 2/3\)._
1. _If_ \(4c+25d<1\)_, then_ \(K_{\widetilde{Y}}+cB+dE\) _is not effective._
2. _If_ \(4c+25d=1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is a point._
3. _If_ \(2c+12d\leq 1\) _and_ \(4c+25d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is the GIT moduli space_ \(\overline{M}\) _of marked cubic surfaces, obtained by contracting the 40_ \(A_{2}^{3}\) _divisors_ \(\cong(\mathbb{P}^{1})^{3}\) _on_ \(\overline{Y}\) _to singular points._
4. _If_ \(c+4d\leq 1\) _and_ \(2c+12d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is Naruki's compactification_ \(\overline{Y}\)_._
5. _If_ \(c+3d\leq 1\) _and_ \(c+4d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is the blowup_ \(\overline{Y}_{1}\) _of_ \(\overline{Y}\) _along the intersections of 4_ \(A_{1}\) _divisors._
6. _If_ \(c+2d\leq 1\) _and_ \(c+3d>1\)_, then the log canonical model of_ \((\widetilde{Y},cB+dE)\) _is the blowup_ \(\overline{Y}_{2}\) _of_ \(\overline{Y}_{1}\) _along the strict transforms of the intersections of 3_ \(A_{1}\) _divisors._
7. _If_ \(c+2d>1\)_, then_ \(K_{\widetilde{Y}}+cB+dE\) _is ample, so_ \((\widetilde{Y},cB+dE)\) _is already its own log canonical model. Recall this is the blowup of_ \(\overline{Y}_{2}\) _along the strict transforms of the intersections of 2_ \(A_{1}\) _divisors._
Proof.: By Proposition 4.13, \(K+cB+dE\) is given by
\[\frac{(-1+4c+25d)B_{a}+(2+4c+42d)B_{a_{2}}+(5+4c+51d)B_{a_{3}}+(8+4c+52d)B_{a_{4}}+(1+4c+27d)B_{b}}{4}.\]
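This is the expansion of the formulas of Proposition 4.13:
\[K+cB+dE=\frac{-B_{a}+2B_{a_{2}}+5B_{a_{3}}+8B_{a_{4}}+B_{b}}{4}+c\big(B_{a}+B_{a_{2}}+B_{a_{3}}+B_{a_{4}}+B_{b}\big)+d\cdot\frac{25B_{a}+42B_{a_{2}}+51B_{a_{3}}+52B_{a_{4}}+27B_{b}}{4},\]
so that, for example, the coefficient of \(B_{a}\) is \(\frac{-1+4c+25d}{4}\).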
By Theorem 5.10 this is not effective for \(4c+25d<1\).
If \(4c+25d=1\), then the pushforward of \(K+cB+dE\) to \(\overline{M}\) is \(0\), so if we know that \(\overline{M}\) is the log canonical model for \(2c+12d\leq 1\) and \(4c+25d>1\), then it follows that the log canonical model for \(4c+25d=1\) is a point. Thus from now on we assume \(4c+25d>1\). Let \(\Delta_{c,d}=K+cB+dE\), and let \(\pi:\widetilde{Y}\to Z\) be the morphism to the proposed log canonical model. Then to show \(Z\) is indeed the log canonical model, it suffices to show that \(\Delta_{c,d}-\pi^{*}\pi_{*}(\Delta_{c,d})\) is effective, and \(\pi_{*}(\Delta_{c,d})\) is ample, see [22, 19] for more details and similar arguments.
For \(Z\neq\overline{M}\), the map \(\pi:\widetilde{Y}\to Z\) is a sequence of smooth blowups, from which one easily computes \(\pi^{*}\pi_{*}(\Delta_{c,d})\). For \(Z=\overline{M}\), the map \(\pi:\widetilde{Y}\to Z\) is the composition of the sequence of smooth blowups \(\widetilde{Y}\to\overline{Y}\) followed by the blowup \(g:\overline{Y}\to\overline{M}\) of the \(40\) singular points. If \(\overline{B}_{A_{1}}\) denotes the image of \(B_{A_{1}}\subset\overline{Y}\) in \(\overline{M}\), then \(g^{*}\overline{B}_{A_{1}}=B_{A_{1}}+3B_{A_{2}^{3}}\) by [13, Corollary 5.6(1)], from which one also computes \(\pi^{*}\pi_{*}(\Delta_{c,d})\) in this case. The resulting formulas are given in Table 4.
**Table 4.** Formulas for \(\pi^{*}\pi_{*}(\Delta_{c,d})\).
\begin{tabular}{|c|c|} \hline \(Z\) & \(\pi^{*}\pi_{*}(\Delta_{c,d})\) \\ \hline \hline \(\overline{M}\) & \(\frac{(-1+4c+25d)(B_{a}+2B_{a_{2}}+3B_{a_{3}}+4B_{a_{4}}+3B_{b})}{4}\) \\ \(\overline{Y}\) & \(\frac{(-1+4c+25d)(B_{a}+2B_{a_{2}}+3B_{a_{3}}+4B_{a_{4}})+(1+4c+27d)B_{b}}{4}\) \\ \(\overline{Y}_{1}\) & \(\frac{(-1+4c+25d)(B_{a}+2B_{a_{2}}+3B_{a_{3}})+(8+4c+52d)B_{a_{4}}+(1+4c+27d)B_{b}}{4}\) \\ \(\overline{Y}_{2}\) & \(\frac{(-1+4c+25d)(B_{a}+2B_{a_{2}})+(5+4c+51d)B_{a_{3}}+(8+4c+52d)B_{a_{4}}+(1+4c+27d)B_{b}}{4}\) \\ \(\widetilde{Y}\) & \(\Delta_{c,d}\) \\ \hline \end{tabular}
From Table 4, we compute \(\Delta_{c,d}-\pi^{*}\pi_{*}(\Delta_{c,d})\) to be as in Table 5, and we see that it is effective in the desired region.
**Table 5.** Formulas for \(\Delta_{c,d}-\pi^{*}\pi_{*}(\Delta_{c,d})\).
\begin{tabular}{|c|c|c|} \hline \(Z\) & \(\Delta_{c,d}-\pi^{*}\pi_{*}(\Delta_{c,d})\) & Effective region \\ \hline \hline \(\overline{M}\) & \((1-c-2d)B_{a_{2}}+(2-2c-6d)B_{a_{3}}+(3-3c-12d)B_{a_{4}}+(1-2c-12d)B_{b}\) & \(2c+12d\leq 1\) \\ \(\overline{Y}\) & \((1-c-2d)B_{a_{2}}+(2-2c-6d)B_{a_{3}}+(3-3c-12d)B_{a_{4}}\) & \(c+4d\leq 1\) \\ \(\overline{Y}_{1}\) & \((1-c-2d)B_{a_{2}}+(2-2c-6d)B_{a_{3}}\) & \(c+3d\leq 1\) \\ \(\overline{Y}_{2}\) & \((1-c-2d)B_{a_{2}}\) & \(c+2d\leq 1\) \\ \(\widetilde{Y}\) & \(0\) & Always \\ \hline \end{tabular}
It remains to show that \(\pi_{*}(\Delta_{c,d})\) is ample in the desired region. Using Proposition 4.14 and Table 4, we compute the intersection numbers of \(\pi^{*}\pi_{*}\Delta_{c,d}\) and the extremal rays of the \(W(E_{6})\)-invariant cone of curves of \(\widetilde{Y}\) to be as in Table 6. By Corollary 5.12 and the table, we see that for \(c,d\) in the desired region, \(\pi^{*}\pi_{*}\Delta_{c,d}\) is nef and zero precisely on the curves contracted by \(\pi\). The result follows.
**Table 6.** Intersection numbers of \(\pi^{*}\pi_{*}\Delta_{c,d}\) with the extremal rays of the \(W(E_{6})\)-invariant cone of curves of \(\widetilde{Y}(E_{6})\).
\begin{tabular}{|c||c|c|c|c|c|c|} \hline & \(aa_{2}a_{3}\) & \(aa_{2}a_{4}\) & \(aa_{3}a_{4}\), \(aa_{3}b\) & \(a_{2}a_{3}a_{4}\), \(a_{2}a_{3}b\) & \(aa_{2}b\) & \(aa_{2}e\) \\ \hline \hline \(\overline{M}\) & \(-1+4c+25d\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-1+4c+25d\) \\ \(\overline{Y}\) & \(1+d\) & \(0\) & \(0\) & \(0\) & \(-1+2c+12d\) & \(-1+4c+25d\) \\ \(\overline{Y}_{1}\) & \(4-3c-11d\) & \(-3+3c+12d\) & \(0\) & \(0\) & \(-1+2c+12d\) & \(5-2c+d\) \\ \(\overline{Y}_{2}\) & \(c+d\) & \(1-c\) & \(-2+2c+6d\) & \(0\) & \(5-4c-6d\) & \(9-6c-11d\) \\ \(\widetilde{Y}\) & \(c+d\) & \(2d\) & \(2d\) & \(-1+c+2d\) & \(2-c\) & \(4-c-d\) \\ \hline \end{tabular}
**Remark 5.16**.: In particular, when \(d=0\), one recovers the log minimal model program for \((\overline{Y},cB_{\overline{Y}})\): \(K_{\overline{Y}}+cB_{\overline{Y}}\) is ample for \(c>1/2\), one performs the contraction \(\overline{Y}\to\overline{M}\) when \(c=1/2\), and when \(c=1/4\) the log canonical model is a point (cf. Theorem 3.6).
| The moduli space $Y = Y(E_6)$ of marked cubic surfaces is one of the most classical moduli spaces in algebraic geometry, going back to nineteenth-century work of Cayley and Salmon. Modern interest in $Y$ was rekindled in the 1980s by Naruki's explicit construction of a $W(E_6)$-equivariant smooth projective compactification $\overline{Y}$ of $Y$, and in the 2000s Hacking, Keel, and Tevelev constructed the KSBA stable pair compactification $\widetilde{Y}$ of $Y$ as a sequence of natural blowups of $\overline{Y}$. We describe generators for the $W(E_6)$-invariant cones of effective divisors and curves on both $\overline{Y}$ and $\widetilde{Y}$. Naruki's compactification $\ |
2309.05077 | Generalization error bounds for iterative learning algorithms with
bounded updates | This paper explores the generalization characteristics of iterative learning
algorithms with bounded updates for non-convex loss functions, employing
information-theoretic techniques. Our key contribution is a novel bound for the
generalization error of these algorithms with bounded updates. Our approach
introduces two main novelties: 1) we reformulate the mutual information as the
uncertainty of updates, providing a new perspective, and 2) instead of using
the chaining rule of mutual information, we employ a variance decomposition
technique to decompose information across iterations, allowing for a simpler
surrogate process. We analyze our generalization bound under various settings
and demonstrate improved bounds. To bridge the gap between theory and practice,
we also examine the previously observed scaling behavior in large language
models. Ultimately, our work takes a further step toward developing practical
generalization theories. | Jingwen Fu, Nanning Zheng | 2023-09-10T16:55:59 | http://arxiv.org/abs/2309.05077v3 | # Generalization error bounds for iterative learning algorithms with bounded updates
###### Abstract
This paper explores the generalization characteristics of iterative learning algorithms with bounded updates for non-convex loss functions, employing information-theoretic techniques. Our key contribution is a novel bound for the generalization error of these algorithms with bounded updates. Our approach introduces two main novelties: 1) we reformulate the mutual information as the uncertainty of updates, providing a new perspective, and 2) instead of using the chaining rule of mutual information, we employ a variance decomposition technique to decompose information across iterations, allowing for a simpler surrogate process. We analyze our generalization bound under various settings and demonstrate improved bounds. To bridge the gap between theory and practice, we also examine the previously observed scaling behavior in large language models. Ultimately, our work takes a further step toward developing practical generalization theories.
## 1 Introduction
The majority of machine learning techniques utilize the empirical risk minimization framework. Within this framework, the optimization objective is to minimize empirical risk, which is the average risk over a finite set of training samples. In practice, the value of interest is the population risk, representing the expected risk across a population. Generalization error is the difference between the optimization objective (empirical risk) and the value of interest (population risk). The prevalence of machine learning techniques makes it essential to comprehend generalization error.
Previous studies (Russo and Zou, 2016, 2019; Xu and Raginsky, 2017) have established a relationship between mutual information, \(I(W;S_{n})\), and the generalization error, where \(S_{n}\) is a set containing \(n\) samples from a distribution \(\mu\), serving as the algorithm's input, and \(W\) represents the model's weights after training, serving as the algorithm's output. Information-theoretic tools are well-suited for analyzing iterative learning algorithms, as the chain rule of mutual information allows for a simple decomposition of \(I(W;S_{n})\) across iterations (i.e. \(I(W_{T};S_{n})\leq I(W_{1},\cdots W_{T};S_{n})\leq\sum_{t=1}^{T}I(W_{t};S_{n}|W _{t-1})\)). Leveraging this technique, Xu and Raginsky (2017) studies the generalization properties of stochastic gradient Langevin dynamics (SGLD). SGLD can be considered as introducing noise to SGD in each update step.
Since most commonly used algorithms in practice, such as SGD and Adam (Kingma and Ba, 2014), do not incorporate noise injection during the update process, recent research efforts have focused on integrating information-theoretic methods into these iterative algorithms without added noise. The challenge in this approach is that the value of \(I(W_{t};S_{n}|W_{t-1})\) becomes infinite when \(W_{t}\) is determined by \(S_{n}\) and \(W_{t-1}\). A potential solution involves utilizing surrogate processes (Negrea et al., 2020; Sefidgaran et al., 2022). Neu et al. (2021) derives generalization bounds for SGD by employing a "virtual SGD" surrogate process, in which noise is introduced during each update step of (S)GD. Their generalization bound consists of two components: the generalization bound for the surrogate process and the bound for the difference between the generalization errors of the surrogate and original processes.
This paper examines the mutual information \(I(S_{n},W)\) from alternative perspectives and reformulates the mutual information to relate to the uncertainty of the update. The uncertainty of the update
refers to how the update will vary for different datasets \(S_{n}\sim\mu^{\otimes n}\). Instead of applying the chaining rule of mutual information, we use a variance decomposition method to decompose information across iterations. From this perspective, we establish the generalization bound for general iterative algorithms with bounded updates by employing a surrogate process that adds noise exclusively to the original process's final update.
We analyze our generalization bound in different situations. Our work achieves a better vanishing-rate guarantee than the previous work of Neu et al. (2021). We also investigate the gap between our theoretical framework and practical applications by analyzing the previously discovered scaling behavior of large language models. Our work sheds light on developing practically useful generalization theories.
The contributions of our work can be summarized as follows:
* This paper offers a novel viewpoint for analyzing the mutual information \(I(W,S_{n})\) by focusing on the uncertainty of updates.
* A new generalization bound, derived from an information-theoretic approach, is presented. This bound is applicable to iterative learning algorithms with bounded updates.
* We investigate the generalization behavior of various types of bounded-update iterative learning algorithms. Additionally, we summarize the scaling rules of large language models from previous experimental findings to examine the gap between theoretical and practical aspects.
## 2 Related works
Existing works on generalization theory can be roughly divided into two categories: function-space-based methods and learning-algorithm-based methods. Function-space-based methods study generalization behavior based on the complexity of the function space. Many measures of this complexity have been proposed, e.g., the VC dimension (Vapnik and Chervonenkis, 2015), Rademacher complexity (Bartlett and Mendelson, 2002), and covering numbers (Shalev-Shwartz and Ben-David, 2014). These works fail when applied to overparameterized models, where the number of parameters is larger than the number of data samples, because the function space is then too large to deliver a non-trivial result (Zhang et al., 2021). To overcome this problem, recent works leverage properties of the learning algorithm to analyze generalization behavior. The most popular methods are algorithmic stability (Hardt et al., 2016) and information-theoretic analysis (Xu and Raginsky, 2017; Russo and Zou, 2016). Among them, algorithmic stability (Bousquet and Elisseeff, 2002) measures how changing one sample of the training data impacts the finally learned model weights, while information-theoretic generalization bounds (Russo and Zou, 2016; Xu and Raginsky, 2017) rely on the mutual information between the input (training data) and output (weights after training) of the learning algorithm. Although both the stability method and the information-theoretic method are general, obtaining generalization bounds for practical learning algorithms is non-trivial. Most stability-based generalization bounds focus on SGD (Hardt et al., 2016; Bassily et al., 2020; Nikolakakis et al., 2022); applying the stability-based method beyond SGD is complex and non-trivial (Nguyen et al., 2022; Ramezani et al., 2018). Most information-theoretic generalization bounds apply to stochastic gradient Langevin dynamics (SGLD), i.e., SGD with noise injected in each parameter-update step (Pensia et al., 2018; Negrea et al., 2019; Haghifam et al., 2020). Neu et al. (2021) extends information-theoretic generalization bounds to SGD by leveraging a surrogate process. **Our work advances the field by extending the information-theoretic method to learning algorithms beyond SGD in a simple way.** This represents a significant step towards developing practically useful generalization theories.
## 3 Preliminary
Let \(P,Q\) be probability measures on a measurable space. When \(Q\ll P\), meaning \(Q\) is absolutely continuous with respect to \(P\), \(\frac{\mathrm{d}Q}{\mathrm{d}P}\) represents the Radon-Nikodym derivative of \(Q\) concerning \(P\). The relative entropy (KL divergence) is calculated as \(\mathrm{KL}(P\|Q)=\int_{x}\mathrm{d}P(x)\log\left(\frac{\mathrm{d}P}{\mathrm{ d}Q}(x)\right)\). The
distribution of variable \(x\) is denoted as \(\mathbb{P}(x)\) or \(\mathbb{P}_{x}\). The product distribution between two variables \(x,y\) is denoted as \(\mathbb{P}(x)\otimes\mathbb{P}(y)\). The mutual information between two variables \(x,y\) is calculated as \(I(x;y)=\mathrm{KL}(\mathbb{P}(x,y)\|\mathbb{P}(x)\otimes\mathbb{P}(y))\). We use \(\|\cdot\|\) to denote the Euclidean norm. And we denote \(\{1,\cdots,k\}\) as \([k]\).
We consider the data distribution \(\mu\). The data \(Z\) is sampled from \(\mu\) and resides in the space \(\mathcal{Z}\). The training dataset is represented as \(S_{n}\sim\mu^{\otimes n}\). The learning algorithm is denoted as \(\mathcal{A}\); it takes \(S_{n}\) as input and outputs the weights of a parameterized function. The weights are denoted as \(W\in\mathbb{W}\), with dimension \(d\). The performance and behavior of these weights are evaluated using a loss function, represented as \(f(W,Z)\in\mathbb{R}_{+}\). We assume \(f(W,Z)\) is differentiable with respect to \(W\). The gradient and the Hessian matrix of \(f(W,Z)\) are denoted as \(\nabla f(W,Z)\) and \(\nabla^{2}f(W,Z)\), respectively. The value of interest is the population risk, which is calculated as
\[F_{\mu}(W)=\mathbb{E}_{z\sim\mu}f(W,z).\]
However, the population risk is often inaccessible. In the context of empirical risk minimization (ERM), the objective is to minimize the empirical risk. Given a data set \(S_{n}=\{z_{i}\}_{i=1}^{n}\sim\mu^{\otimes n}\), the empirical risk is calculated as
\[F_{S_{n}}(W)=\frac{1}{n}\sum_{i=1}^{n}f(W,z_{i}).\]
The empirical risk is determined by averaging all samples in a dataset \(S_{n}\). This paper primarily focuses on the generalization error, which represents the difference between empirical risk and population risk. The generalization error can be calculated as follows
\[gen(\mu,\mathbb{P}_{W|S_{n}})=\mathbb{E}_{S_{n}\sim\mu^{\otimes n},W\sim \mathbb{P}_{W|S_{n}}}\left[F_{S_{n}}(W)-F_{\mu}(W)\right].\]
The generalization error is calculated as the expectation with respect to the randomness of the data and the algorithm. In the learning problem, we iteratively update the weights of the parameterized function. We represent the weights at step \(t\) as \(W_{t}\). \(W_{t}\) is acquired by adding the update value to the previous weights, i.e., \(W_{t}=W_{t-1}+U_{t}\). Typically, \(U_{t}\) takes the form \(U_{t}=\eta_{t}u_{t}\), where \(\eta_{t}\) indicates the learning rate for the \(t\)-th step. We denote the accumulated update as \(U^{(t)}\triangleq\sum_{i=1}^{t}U_{i}\). The initial weights are obtained by sampling from a specific distribution, i.e. \(W_{0}\sim\mathbb{P}(W_{0})\). The final output of the \(T\)-step algorithm is \(W_{T}\). The variance of the update is defined as:
\[\mathbb{V}_{\mu,n}(U^{(t)}|W_{0})\triangleq\mathbb{E}_{W_{0}\sim\mathbb{P}_{ W_{0}}}\mathbb{E}\left[\left\|U^{(t)}-\mathbb{E}U^{(t)}\right\|^{2}|W_{0} \right],\]
where \(\mathbb{E}U^{(t)}\) takes the expectation over all randomness of \(U^{(t)}\), including the randomness caused by data sampling and the randomness of the learning algorithm. In a similar way, we define the covariance as
\[\mathbb{C}_{\mu,n}(U_{i},U_{j}|W_{0})\triangleq\mathbb{E}_{W_{0}\sim\mathbb{P} _{W_{0}}}\mathbb{E}\left[<\bar{U}_{i},\bar{U}_{j}>|W_{0}\right],\]
where \(\bar{U}_{i}=U_{i}-\mathbb{E}U_{i}\). When there is no ambiguity, we abbreviate \(\mathbb{V}_{\mu,n}(U^{(t)}|W_{0})\) as \(\mathbb{V}(U^{(t)})\) and \(\mathbb{C}_{\mu,n}(U_{i},U_{j}|W_{0})\) as \(\mathbb{C}(U_{i},U_{j})\).
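As a concrete illustration of these definitions, the following minimal sketch estimates \(\mathbb{V}(U^{(T)})\) by Monte Carlo over resampled datasets \(S_{n}\sim\mu^{\otimes n}\). The toy quadratic loss, the Gaussian data distribution, and all names are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, eta = 50, 5, 10, 0.1

def updates(S, w0):
    """Run T steps of gradient descent on the toy loss
    f(w, z) = ||w - z||^2 / 2 and return the per-step updates U_t."""
    w, us = w0.copy(), []
    for _ in range(T):
        grad = w - S.mean(axis=0)          # gradient of the empirical risk
        u = -eta * grad                    # U_t = eta_t * u_t
        us.append(u)
        w += u
    return np.array(us)                    # shape (T, d)

# Resample datasets S_n ~ mu^{x n} (here mu = N(0, I)) with a fixed W_0.
w0 = rng.normal(size=d)
runs = np.array([updates(rng.normal(size=(n, d)), w0) for _ in range(2000)])

U_T = runs.sum(axis=1)                     # accumulated update U^{(T)} per run
var_UT = ((U_T - U_T.mean(axis=0)) ** 2).sum(axis=1).mean()
print("V(U^(T)) ~", var_UT)                # Monte-Carlo estimate of the variance
```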
## 4 Generalization Bound
Our primary result is a bound on the generalization error of the weights \(W\) generated by a learning algorithm with bounded updates. We first analyze the mutual information from the perspective of update uncertainty. Subsequently, we provide a bound for learning algorithms with bounded updates.
### Generalization bounds with uncertainty of update
We begin by discussing the assumption used in our bound. \(R\)-sub-Gaussianity is defined as follows:
**Definition 4.1**.: A random variable \(X\) is \(R\)-sub-Gaussian if for every \(\lambda\in\mathbb{R}\), the following inequality holds:
\[\mathbb{E}[\exp\left(\lambda(X-\mathbb{E}X)\right)]\leq\exp\left(\frac{\lambda ^{2}R^{2}}{2}\right)\]
_Remark 4.2_.: If a variable \(X\in\mathbb{R}\) takes values in \([a,b]\), then the variable is \((b-a)/2\)-sub-Gaussian.
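Remark 4.2 can be checked numerically for a bounded variable; a minimal sketch, assuming an illustrative uniform distribution on \([0,1]\):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.0, 1.0
R = (b - a) / 2                      # sub-Gaussian parameter from Remark 4.2
x = rng.uniform(a, b, size=1_000_000)

for lam in [0.5, 1.0, 2.0, 5.0]:
    mgf = np.exp(lam * (x - x.mean())).mean()   # E[exp(lam (X - EX))]
    bound = np.exp(lam**2 * R**2 / 2)           # exp(lam^2 R^2 / 2)
    print(lam, mgf <= bound)                    # expected: True for each lambda
```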
Based on the definition of \(R\)-sub-Gaussianity, our assumption is:
**Assumption 4.3**.: Suppose \(f(w,Z)\) is \(R\)-sub-Gaussian with respect to \(Z\sim\mu\) for every \(w\in\mathcal{W}\).
With the \(R\)-sub-Gaussian assumption, we obtain the following generalization bound,
**Theorem 4.4**.: _Under Assumption 4.3, the following bound holds:_
\[\left|gen(\mu,\mathbb{P}_{W|S_{n}})\right|\leq\sqrt{\frac{2R^{2}}{n}[h(U^{(T) }|W_{0})-h(U^{(T)}|W_{0},S_{n})]}. \tag{1}\]
This bound transforms the original \(I(W;S_{n})\) into the difference between two update entropies. The update entropy can be interpreted as a measure of uncertainty: \(h(U^{(T)}|W_{0})-h(U^{(T)}|W_{0},S_{n})\) measures the contribution of the dataset \(S_{n}\) to the update uncertainty. A low generalization bound can be obtained if the learning algorithm takes a similar update given different \(S_{n}\sim\mu^{\otimes n}\).
We first consider the situation where \(h(U^{(T)}|W_{0},S_{n})\geq 0\). In this case, we can simply omit \(h(U^{(T)}|W_{0},S_{n})\), and we only need to derive an upper bound on \(h(U^{(T)}|W_{0})\).
**Theorem 4.5**.: _Under Assumption 4.3, for a high-randomness learning algorithm, i.e., one with \(h(U^{(T)}|W_{0},S_{n})\geq 0\), the generalization error of the final iteration satisfies_
\[\left|gen(\mu,\mathbb{P}_{W|S_{n}})\right|\leq\sqrt{\frac{2\pi eR^{2}\mathbb{V }(U^{(T)})}{n}}.\]
_Remark 4.6_.: \(h(U^{(T)}|W_{0},S_{n})\geq 0\) can be achieved if the learning algorithm has high randomness. High randomness can be obtained through 1) using a small batch size, 2) adding noise during the updates, as in SGLD, or 3) other methods.
The generalization bound in Theorem 4.4 cannot be calculated directly when \(h(U^{(T)}|W_{0},S_{n})<0\), because we do not know the distribution of \(U^{(T)}\). Both \(h(U^{(T)}|W_{0})\) and \(h(U^{(T)}|W_{0},S_{n})\) can be extremely small when the algorithm has minimal randomness. A natural approach is to associate the update entropy with the entropy of a Gaussian distribution, which can be calculated directly. Consequently, we introduce a surrogate process for our analysis:
Surrogate processWe consider the surrogate update \(\tilde{U}\) with noise added to the final update, i.e., \(\tilde{U}_{t}=U_{t}\) when \(t\neq T\) and \(\tilde{U}_{T}=U_{T}+\epsilon\), where \(\epsilon\) is random noise. Here we consider \(\epsilon\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\). Then we have \(\tilde{U}^{(T)}=U^{(T)}+\epsilon\).
Based on the surrogate process, we obtain the result:
**Theorem 4.7**.: _Under Assumption 4.3, for any \(\sigma\), the generalization error of the final iteration satisfies_
\[\left|gen(\mu,\mathbb{P}_{W|S_{n}})\right|\leq\sqrt{\frac{R^{2}\mathbb{V}(U^ {(T)})}{n\sigma^{2}}}+\Delta_{\sigma}, \tag{2}\]
_where \(\Delta_{\sigma}\triangleq|\mathbb{E}\left[(F_{\mu}(W_{T})-F_{\mu}(W_{T}+ \epsilon))-(F_{S}(W_{T})-F_{S}(W_{T}+\epsilon))\right]|\) and \(\epsilon\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\)._
_Remark 4.8_.: Compared to Theorem 4.5, Theorem 4.7 employs the surrogate process and, as a result, is more general. We give a further analysis of this theorem from a PAC-Bayes perspective in Appendix F to remove the sub-Gaussian assumption and obtain high-probability bounds.
Theorem 4.5 and Theorem 4.7 establish a connection between the generalization error and the variance of the update. Based on this, the generalization analysis of bounded-updates learning algorithms is given in the following section.
### Generalization bounds for bounded updates learning algorithms
Building on the results from the previous section, we derive the bound for bounded-updates learning algorithms in this part. We provide the formal definition of bounded updates as follows:
**Definition 4.9**.: (Bounded updates) A learning algorithm is said to have bounded updates with respect to function \(f(\cdot)\) and data distribution \(\mu\), if for all \(S_{n}\sim\mu^{\otimes n}\), there exists a constant \(L\), such that \(\|u_{t}\|\leq L\) for all \(t\leq T\), when the learning algorithm is operated on \(f(\cdot)\) and \(S_{n}\).
Comparison between the bounded-updates assumption and the \(L\)-Lipschitz assumptionThe \(L\)-Lipschitz assumption is widely used to analyze the convergence or generalization behavior of learning algorithms. The \(L\)-Lipschitz condition requires that \(\|\nabla f(w,Z)\|\leq L\) for all \(w,Z\). These two assumptions, \(L\)-Lipschitz and bounded updates, share some similarities. However, some fundamental differences exist: **1)**\(L\)-Lipschitz is a property of \(f(\cdot)\), while bounded updates are a joint property of the learning algorithm and \(f(\cdot)\); it is possible to achieve bounded-update behavior even when the function is not \(L\)-Lipschitz. **2)** The \(L\)-Lipschitz condition is a "global assumption," meaning that it must hold for all \(w\). On the other hand, the bounded-updates assumption is a local assumption: it is only required to hold for the weights encountered during the learning process.
Under the bounded updates assumption, we can obtain the result as follows:
**Theorem 4.10**.: _If the learning algorithm has bounded updates on data distribution \(\mu\) and loss function \(f(\cdot)\), then we have_
\[\mathbb{V}(U^{(T)})\leq\sum_{t=1}^{T}4\eta_{t}^{2}L^{2}+2L^{2}\sum_{t=1}^{T}\eta_{t}\sum_{i=1}^{t-1}\eta_{i}\]
_then under Assumption 4.3, we have_
\[gen(\mu,\mathbb{P}_{W|S_{n}})\leq\sqrt{\frac{R^{2}}{n\sigma^{2}}\left(\sum_{t=1}^{T}4\eta_{t}^{2}L^{2}+2L^{2}\sum_{t=1}^{T}\eta_{t}\sum_{i=1}^{t-1}\eta_{i}\right)}+\Delta_{\sigma}.\]
_If the learning algorithms have high randomness, i.e. satisfying \(h(U^{(T)}|W_{0},S_{n})\geq 0\), we have_
\[\left|gen(\mu,\mathbb{P}_{W|S_{n}})\right|\leq\sqrt{\frac{2\pi eR^{2}}{n}\left(\sum_{t=1}^{T}4\eta_{t}^{2}L^{2}+2L^{2}\sum_{t=1}^{T}\eta_{t}\sum_{i=1}^{t-1}\eta_{i}\right)}.\]
Proof sketch:The full proof is given in Appendix C; here we give the sketch. **Step 1** We use the identity \(\mathbb{V}(U^{(T)})=\sum_{t=1}^{T}\mathbb{V}(U_{t})+2\sum_{t=1}^{T}\mathbb{C}(U^{(t-1)},U_{t})\) to decompose \(\mathbb{V}(U^{(T)})\) into contributions along the learning trajectory. **Step 2** By the bounded-updates assumption, \(\mathbb{V}(U_{t})\leq 4\eta_{t}^{2}L^{2}\) and \(\mathbb{V}(U^{(t)})\leq L\sum_{i=1}^{t}\eta_{i}\). **Step 3** Combining the results above, we obtain the final bound.
Technical novelty:Most previous works employ the technique \(I(W_{T};S_{n})\leq\sum_{t=1}^{T}I(W_{t};S_{n}|W_{t-1})\) to decompose the information of the final weights into information along the learning trajectory. This method fails in our case because we do not add noise at every update step along the learning trajectory; as a result, \(I(W_{t};S_{n}|W_{t-1})\) becomes large in this situation. To address this challenge, we utilize another commonly used technique: \(\mathbb{V}(U^{(T)})=\sum_{t=1}^{T}\mathbb{V}(U_{t})+2\sum_{t=1}^{T}\mathbb{C}(U^{(t-1)},U_{t})\). This method is quite simple, but effective. We analyze its effectiveness by comparing it with Neu et al. (2021), which uses the technique \(I(W_{T};S_{n})\leq\sum_{t=1}^{T}I(W_{t};S_{n}|W_{t-1})\), in the following section.
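The decomposition used in Step 1 is the standard variance expansion of a sum, and it can be checked numerically on toy correlated updates; everything in the sketch below (the latent-variable construction, the constants) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, runs = 5, 3, 200_000

# Toy correlated update sequence: each U_t depends on a shared latent g.
g = rng.normal(size=(runs, d))
U = np.stack([0.5 * g + rng.normal(size=(runs, d)) for _ in range(T)], axis=1)

def V(x):                                  # V(x) = E ||x - Ex||^2
    return ((x - x.mean(axis=0)) ** 2).sum(axis=1).mean()

def C(x, y):                               # C(x, y) = E <x - Ex, y - Ey>
    return ((x - x.mean(axis=0)) * (y - y.mean(axis=0))).sum(axis=1).mean()

lhs = V(U.sum(axis=1))                     # V(U^{(T)})
rhs = sum(V(U[:, t]) for t in range(T)) + \
      2 * sum(C(U[:, :t].sum(axis=1), U[:, t]) for t in range(1, T))
print(lhs, rhs)                            # agree up to floating-point error
```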
## 5 Analysis
### Bounded updates learning algorithms

In this section, we discuss the bounded-updates behavior of commonly used algorithms.
**Proposition 5.1**.: _Adam (Kingma & Ba, 2014), Adagrad (Duchi et al., 2011), and RMSprop (Tieleman et al., 2012) have bounded updates with respect to every data distribution and function \(f(\cdot)\) when \(d=\mathcal{O}(1)\)._
This proposition suggests that when setting the dimension \(d\) as a constant, commonly used learning algorithms, such as Adam, Adagrad, and RMSprop, exhibit bounded updates. However, in real-world situations, we typically scale the model size based on the amount of data, which implies that \(d\) will increase along with \(n\). In this scenario, we do not have \(d=\Theta(1)\).
Then, we consider learning algorithms modified with update clipping. The update rule of a learning algorithm with update clipping is \(u_{t}=\min\{L,\|u_{t}^{\prime}\|\}\frac{u_{t}^{\prime}}{\|u_{t}^{\prime}\|}\), where \(u_{t}^{\prime}\) is the update value of the original learning algorithm without clipping.
**Proposition 5.2**.: _All learning algorithms with update clipping, and (S)GD with gradient clipping, have bounded updates with respect to every data distribution and function \(f(\cdot)\)._
Proof.: For algorithms with update clipping, we have \(\|u_{t}\|=\min\{L,\|u_{t}^{\prime}\|\}\frac{\|u_{t}^{\prime}\|}{\|u_{t}^{\prime}\|}\leq L\). For (S)GD, because \(u_{t}^{\prime}\) is the gradient on a batch of data, gradient clipping is equivalent to update clipping.
The gradient clipping technique is commonly employed in practice (Zhang et al., 2019; Qian et al., 2021). If a learning algorithm does not have bounded updates, it may be possible to incorporate an update-clipping technique so that it aligns with our theoretical framework.
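A minimal sketch of the update-clipping rule \(u_{t}=\min\{L,\|u_{t}^{\prime}\|\}\,u_{t}^{\prime}/\|u_{t}^{\prime}\|\) wrapped around plain gradient descent; the loss, step size, and clipping constant below are illustrative assumptions:

```python
import numpy as np

def clip_update(u_raw, L):
    """Rescale a raw update u'_t so that ||u_t|| <= L (cf. Proposition 5.2)."""
    norm = np.linalg.norm(u_raw)
    if norm <= L or norm == 0.0:
        return u_raw
    return (L / norm) * u_raw

def gd_step(w, grad, eta, L):
    """One gradient-descent step with the clipped update."""
    return w + clip_update(-eta * grad, L)

w = np.array([10.0, -10.0])
for _ in range(100):
    w = gd_step(w, grad=2 * w, eta=0.1, L=0.5)   # toy loss f(w) = ||w||^2
print(w)                                          # converges toward 0
```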
### \(d\) dependence of \(\Delta_{\sigma}\)
We consider situations where \(\sigma\) is small. As our analysis concentrates on the asymptotic behavior of the generalization error as \(n\) increases, we use the setting \(\lim\limits_{n\to\infty}\sigma=0\). In this situation, \(\sigma\) is small when a relatively large \(n\) is adopted.
For \(z\in\mathcal{Z}\), we have
\[\mathbb{E}[f(W_{T},z)-f(W_{T}+\epsilon,z)]\approx-\mathbb{E}[\langle\nabla f(W_{T},z),\epsilon\rangle]-\frac{1}{2}\mathbb{E}[\epsilon^{\mathrm{T}}\nabla^{2}f(W_{T},z)\epsilon]=-\frac{1}{2d}\mathbb{E}\left\|\epsilon\right\|^{2}\,\mathbb{E}\operatorname{Tr}(\nabla^{2}f(W_{T},z))=-\frac{\sigma^{2}}{2}\,\mathbb{E}\operatorname{Tr}(\nabla^{2}f(W_{T},z))\]
Then, according to the definition of \(\Delta_{\sigma}\), we have \(\Delta_{\sigma}\approx\frac{\sigma^{2}}{2}\left|\mathbb{E}\operatorname{Tr}\left(\nabla^{2}F_{\mu}(W_{T})-\nabla^{2}F_{S_{n}}(W_{T})\right)\right|\). Therefore, analyzing the \(d\) dependence of \(\Delta_{\sigma}\) is equivalent to analyzing the \(d\) dependence of \(\operatorname{Tr}\left(\nabla^{2}f(W_{T},z)\right)\).
**Worst case:**\(\Delta_{\sigma}=\Theta(d\sigma^{2})\). If we assume \(\beta\)-smoothness of the function \(f(w,z)\), then we have the upper bound \(\mathbb{E}\left|\operatorname{Tr}(\nabla^{2}f(W_{T},z))\right|\leq d\beta\). Equality holds when all eigenvalues of \(\nabla^{2}f(W_{T},z)\) equal \(\beta\).
**Benign case:** The benign case is possible when the distribution of eigenvalues of the Hessian matrix exhibits a long tail. In this situation, most eigenvalues are close to 0, which implies that \(\operatorname{Tr}(\nabla^{2}f(W_{T},z))\) remains stable as \(d\) increases. Such long-tailed distributions are commonly observed in neural networks (Ghorbani et al., 2019; Sagun et al., 2016; Zhou et al., 2022). We consider two cases in this context: **1)**\(\Delta_{\sigma}=\Theta(\sigma^{2}/\eta)\): This case may be achieved by leveraging the inductive bias of the training algorithm. Wu et al. (2022) find that SGD can only converge to a \(W_{T}\) where \(\operatorname{Tr}(\nabla^{2}f(W_{T},z))\) is smaller than a specific value; this value is dimension-independent but learning-rate-dependent (scaling as \(\frac{1}{\eta}\)). A similar learning-rate dependence of the maximum eigenvalue is also observed by Cohen et al. (2021, 2022). **2)**\(\Delta_{\sigma}=\Theta(\sigma^{2})\): This case may be achieved if the learning algorithm explicitly decreases \(\operatorname{Tr}(\nabla^{2}f(W_{T},z))\). The SAM learning algorithm (Foret et al., 2020) is specifically designed to reduce sharpness (the maximum eigenvalue of the Hessian matrix), and Wen et al. (2022) find that stochastic SAM minimizes \(\operatorname{Tr}(\nabla^{2}f(W_{T},z))\).
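Since the discussion above hinges on \(\operatorname{Tr}(\nabla^{2}f(W_{T},z))\), it is worth noting that this trace can be estimated without forming the Hessian, e.g. with a Hutchinson-style estimator over Hessian–vector products. A sketch under the assumption that an hvp oracle is available; the toy diagonal, long-tailed Hessian stands in for what autodiff would supply:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
H = np.diag(rng.exponential(scale=0.1, size=d))   # toy long-tailed spectrum

def hvp(v):
    """Hessian-vector product oracle; supplied by autodiff in practice."""
    return H @ v

# Hutchinson estimator: E[v^T H v] = Tr(H) for Rademacher probe vectors v.
samples = []
for _ in range(5000):
    v = rng.choice([-1.0, 1.0], size=d)           # Rademacher probe vector
    samples.append(v @ hvp(v))                    # unbiased sample of Tr(H)
print(np.mean(samples), np.trace(H))              # the two estimates are close
```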
### Compared with Neu et al. (2021)
Neu et al. (2021) consider a surrogate process with \(\tilde{U}_{t}=U_{t}+\epsilon_{t}\) for all \(t\in[T]\), where \(\epsilon_{t}\sim\mathcal{N}(0,\sigma_{t}^{2}\mathbf{I}_{d})\). They obtain the generalization error bound,
\[\left|gen(\mu,\mathbb{P}_{W|S_{n}})\right|=\mathcal{O}(\sqrt{\frac{R^{2}\eta^{ 2}T}{n}(dT+\frac{1}{b\sigma^{2}})}+\Delta_{\sigma_{1:T}}),\]
where \(b\) denotes batch size and \(\sigma_{1:T}=\sqrt{\sigma_{1}^{2}+\cdots+\sigma_{T}^{2}}\).
We consider two settings of \(d\) in this analysis. The first one is the underparameterized regime, where \(d=\Theta(1)\). In this regime, as we increase \(n\) to a large value, \(n\) will be significantly larger than \(d\). The second setting is the overparameterized regime, where \(d=\Theta(n)\). In this case, the ratio between \(d\) and \(n\) remains nearly constant as we increase \(n\). This setting is commonly employed in Large Language Models (Muennighoff et al., 2023; Hoffmann et al., 2022) when scaling \(n\). Table 1 examines the behavior of the generalization bound under different \(d\) values and various cases of \(\Delta_{\sigma}\). In this analysis, we fix \(\eta T=\Theta(1)\).
Last-iteration noise vs. whole-process noiseOur work and Neu et al. (2021) both utilize surrogate processes for analysis. The main difference lies in the surrogate process: our approach adds noise only to the final iteration, while Neu et al. (2021) adds noise throughout the entire process. **Our bound is better for analysis** because our bound only requires taking the infimum with respect to one variable \(\sigma\), whereas the bound of Neu et al. (2021) needs to take the infimum with respect to \(T\) variables, \(\sigma_{1},\cdots,\sigma_{T}\). **Our method exhibits weaker dependence on \(T\).** The \(\Delta_{\sigma}\) used in our bound does not have an explicit dependence on \(T\), while \(\Delta_{\sigma_{1:T}}\) increases with \(T\).
Applies to general learning algorithms.Our bound does not leverage any specific knowledge about particular learning algorithms, while the main theorem of Neu et al. (2021) applies only to (S)GD. Although the method of Neu et al. (2021) is general, which makes it possible to apply it to other learning algorithms, doing so is non-trivial. More information can be found in Section "5. Extension" of Neu et al. (2021).
### Compared with stability based method
Table 2 summarizes some recent stability-based studies on different learning algorithms. Our method has the following advantages:
* **Weaker assumptions.** Most stability-based works (Hardt et al., 2016; Ramezani et al., 2018; Nguyen et al., 2022) require Lipschitz and smoothness assumptions. Lei & Ying (2020)
\begin{table}
\begin{tabular}{c c|c c|c c} \hline \hline \multicolumn{2}{c|}{Settings} & \multicolumn{2}{c|}{Ours} & \multicolumn{2}{c}{Neu et al. (2021)} \\ \(d\) & \(\Delta_{\sigma}\) & \((\triangle)\) & \((\star)\) & \(b=\mathcal{O}(1)\) & \(b=\mathcal{O}(\sqrt{n})\) \\ \hline \(\Theta(1)\) & \(\Theta(d\sigma^{2})\), \(\Theta(\sigma^{2}/\eta)\), \(\Theta(\sigma^{2})\) & \(\mathcal{O}(1/n^{\frac{1}{3}})\) & \(\mathcal{O}(1/n^{\frac{1}{2}})\) & \(\mathcal{O}(1/n^{\frac{1}{3}})\) & \(\mathcal{O}(1/n^{\frac{1}{2}})\) \\ \hline \multirow{3}{*}{\(\Theta(n)\)} & \(\Theta(d\sigma^{2})\) & \(\mathcal{O}(1)\) & \multirow{3}{*}{\(\mathcal{O}(1/n^{\frac{1}{2}})\)} & \multirow{3}{*}{\(\mathcal{O}(1)\)} & \multirow{3}{*}{\(\mathcal{O}(1)\)} \\ & \(\Theta(\sigma^{2}/\eta)\) & \(\mathcal{O}(1/n^{\frac{1}{3}})\) & & & \\ & \(\Theta(\sigma^{2})\) & \(\mathcal{O}(1/n^{\frac{1}{3}})\) & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: The asymptotic analysis when increasing \(n\) under \(T\eta=\Theta(1)\) for various scenarios, with \(b\) denoting the batch size. \((\triangle)\) stands for \(h(U^{(T)}|W_{0},S_{n})<0\), and \((\star)\) is short for \(h(U^{(T)}|W_{0},S_{n})\geq 0\).
removes the Lipschitz assumption, but requires convexity. Our method only requires \(f(\cdot)\) to be sub-Gaussian.
* **Better results in the non-convex setting.** From Table 2, our method obtains better results than Nguyen et al. (2022) and Ramezani et al. (2018) under the setting \(\eta=\frac{1}{t}\) and \(T=\mathcal{O}(n)\). As for Hardt et al. (2016), our bound is better if \(\beta>1\), which holds in many situations (Cohen et al., 2021; Ghorbani et al., 2019; Zhou et al., 2022).
_Remark 5.3_.: We do not compare our results with Lei and Ying (2020) because 1) it relies on the convexity assumption and 2) it does not include results for the learning-rate setting \(\eta_{t}=\frac{1}{t}\). Haghifam et al. (2023) argues that all information-theoretic methods are worse than stability-based work in the convex case. We leave achieving better results in the convex case using information-theoretic methods as future work (detailed discussion in Section 8).
## 6 Connection to practice
In this section, we investigate practical concerns, specifically focusing on the scaling results for large language models (LLMs). Practically speaking, the population risk is unbiasedly estimated by the test loss. The test loss is assessed using a new dataset sampled from the same distribution \(\mu\) that was not observed during the training process. The test loss can be roughly decomposed as:
Test \(\text{Loss}=\text{Training loss}+\text{Generalization Error}\),
where the training loss refers to the loss on the dataset used as input to the learning algorithm.
Relation between test loss and generalization error.The test loss consists of two components: the generalization error and the training loss. The generalization error can accurately represent the test loss if the training loss is negligible compared to the generalization error. There are two scenarios in which this can occur: **1) The training loss is consistently numerically small compared to the generalization error**. In practice, small numerical values are often disregarded. Under these circumstances, the behavior of the generalization error dictates the pattern observed in the test loss. **2) The training loss diminishes at an equal or faster rate compared to the generalization error.** In this case, the rate of the test loss is determined by the rate of the generalization error. When analyzing how quickly the test loss decreases as we scale \(n\), only the rate of decrease is taken into account.
### Comparison between theory and practice
The setting \(d=\Theta(n)\) is preferred in practice. Hoffmann et al. (2022) found that optimal performance can be achieved with \(d=\Theta(n)\) (Table 2 in Hoffmann et al. (2022)). Additionally, Kaplan et al. (2020) discovers that \(n\gtrsim(5\times 10^{3})d^{0.74}\) can avoid overfitting behavior. It is clear that the \(d=\Theta(n)\) condition satisfies this inequality for relatively large \(n\). We argue that it is crucial to study the generalization behavior under \(d=\Theta(n)\) to better align theoretical work with practical applications.
Interpreting our results in practical situationsIf the training error can decrease to a value significantly lower than the generalization error, or if the training error's vanishing rate is faster than that of the generalization error, and \(\Delta_{\sigma}\) is not in the worst-case scenario, then an iterative learning algorithm with bounded updates achieves a vanishing test loss at a rate of \(\mathcal{O}(1/n^{\frac{1}{3}})\) in the worst case.
\begin{table}
\begin{tabular}{c|c|c|c} \hline & Setting & Test loss & Generalization error \\ \hline Kaplan et al. (2020) & \(n\gtrsim(5\times 10^{3})d^{0.74}\) & \(\mathcal{O}(1/n^{0.103})\) & \\ Hoffmann et al. (2022) & \(d=\Theta(n)\) & \(\mathcal{O}(1/n^{0.28})\) & \\ Muennighoff et al. (2023) & \(d=\Theta(n)\) & \(\mathcal{O}(1/n^{0.353})\) & \\ Ours & \(d=\Theta(n)\) & & \(\mathcal{O}(1/n^{\frac{1}{3}})\) or \(\mathcal{O}(1/n^{\frac{1}{2}})\) \\ \hline \end{tabular}
\end{table}
Table 3: Comparing the empirical results on the scaling of large language models with our theory. It is important to note that large language models are trained for only one epoch; therefore, the "training loss" on new batch data in their work is, in fact, the test loss. The actual training loss can be determined by re-evaluating the loss on the training data after training with fixed weights.
The asymptotic rate of lossThe generalization error result (\(\mathcal{O}(1/n^{1/3})\)) is similar to the experimental test-loss findings of Hoffmann et al. (2022) (\(\mathcal{O}(1/n^{0.28})\)) and Muennighoff et al. (2023) (\(\mathcal{O}(1/n^{0.353})\)).
### Gap between theory and practice
Bounded updateOur method requires that learning algorithms have bounded updates. However, practically employed learning algorithms may not always exhibit this property. To bridge this gap, future efforts should focus on: 1) analyzing the differences in behavior between learning algorithms with update clipping, which ensures bounded updates, and the original learning algorithms; 2) investigating the behavior of the update norm when scaling the dimension \(d\). It is possible for learning algorithms that do not guarantee bounded updates to still achieve bounded-update behavior if \(f(\cdot)\) has desirable properties. The lazy training phenomenon (Chizat et al., 2019; Allen-Zhu et al., 2019; Du et al., 2019; Zou et al., 2018) implies that such favorable properties exist.
Learning rate settingIn our analysis, we select \(T\eta=\Theta(1)\). Practically, the learning rate often decays throughout the training process. We further discuss the configuration \(T=\mathcal{O}(n)\) and \(\eta_{t}=\frac{c}{t}\) in Appendix D. The outcomes of this setting closely resemble those with \(T\eta=\Theta(1)\), except for an additional \(\log n\) term, which is negligible compared to polynomial terms in \(n\). However, real applications usually decay the learning rate at certain iterations and may leverage warm-up techniques. Therefore, future work is needed to bridge the gap.
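For intuition about the logarithmic term: the variance bound of Theorem 4.10 depends on the schedule only through \(\sum_{t}\eta_{t}^{2}\) and \(\sum_{t}\eta_{t}\sum_{i<t}\eta_{i}\), so the two schedules can be compared directly. A small sketch (the constants are illustrative assumptions):

```python
import numpy as np

L2 = 1.0                                          # L^2, illustrative constant
for T in [10**3, 10**4, 10**5]:
    for name, eta in [("T*eta=1", np.full(T, 1.0 / T)),
                      ("eta_t=1/t", 1.0 / np.arange(1.0, T + 1.0))]:
        S1, S2 = eta.sum(), (eta**2).sum()
        # Theorem 4.10 variance bound:
        #   sum_t 4 eta_t^2 L^2 + 2 L^2 sum_t eta_t sum_{i<t} eta_i,
        # using sum_t eta_t sum_{i<t} eta_i = (S1^2 - S2) / 2.
        bound = 4 * L2 * S2 + L2 * (S1**2 - S2)
        print(T, name, bound)                     # 1/t grows roughly like (log T)^2
```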
## 7 Future Work
Integrating the knowledge of learning trajectoryIncorporating information from the learning trajectory is crucial for gaining a deeper understanding of generalization behavior. Fu et al. (2023) employs learning trajectory data to establish a better generalization bound for SGD. Additionally, using learning trajectory information could potentially enhance the bounds of iterative learning algorithms with bounded updates.
## 8 Limitation
Haghifam et al. (2023) analyzes the behavior of information-theoretic generalization bounds and stability-based generalization bounds, finding that all information-theoretic-based generalization bounds do not achieve a min-max rate comparable to stability-based works in stochastic convex optimization problems. **Our work cannot overcome this limitation for the following reasons**: 1) Unlike stability-based work, information-theoretic methods, including our work, cannot directly leverage convex information. This makes the information-theoretic methods sub-optimal. 2) Some failure cases listed in Haghifam et al. (2023) are due to the work of Russo & Zou (2016), on which our study is based. Improving the limitations of Russo & Zou (2016) is beyond the scope of our paper. Given that all bounds of information-theoretic methods suffer from this limitation, it is an important direction for future research.
## 9 Conclusion
This paper presents a new generalization bound for general iterative learning algorithms with bounded updates. This result is more general than previous methods, which primarily focus on the SGD algorithm. To achieve these results, we introduce a new perspective by reformulating the mutual information \(I(W;S_{n})\) as the uncertainty of the update. Our generalization bound is analyzed under various settings. Our work achieves a better vanishing-rate guarantee than previous work (Neu et al., 2021) in the overparameterized regime where \(d=\Theta(n)\). Finally, we examine the gap between our theory and practice by analyzing the previously discovered scaling behavior of large language models. Our work sheds light on developing practically useful generalization theory.
2309.06556 | The Quantum and thermodynamic properties of dyonic RN-like black holes | The effect of magnetic fields on black hole superradiance is an exciting
topic with possible astrophysical applications. A dyonic RN-like black hole is
not asymptotically flat. It describes a black hole immersed in an
asymptotically uniform magnetic field. This paper discusses the superradiant
stability of binary RN black holes, asymptotically flat, band-like black holes.
This article introduces the above condition into dyonic RN-like black holes if
a dyonic RN-like black hole satisfies the requirement of $\mu=y\omega$, when
$\sqrt{2(B^2+Q^2)}/{r^2_+}< \omega< q\varPhi_H$, particularly $\mu \ge
\sqrt{2}(q\varPhi_H)$, the dyonic RN-like black hole is superradiantly stable at
that time. Scalars can be seen as combinations of positive/negative powers of a
base, much like the decimal system. This principle is key in math and
computing, from number systems to Fourier series (linked to $e^{i x}$ ). Dyonic
RN-like black holes show no phase transition. | Wen-Xiang Chen, Yao-Guang Zheng | 2023-09-11T01:41:13 | http://arxiv.org/abs/2309.06556v1 | # The Quantum and thermodynamic properties of dyonic RN-like black holes
###### Abstract
The effect of magnetic fields on black hole superradiance is an exciting topic with possible astrophysical applications. A dyonic RN-like black hole is not asymptotically flat. It describes a black hole immersed in an asymptotically uniform magnetic field. This paper discusses the superradiant stability of binary RN black holes, asymptotically flat, band-like black holes. This article introduces the above condition into dyonic RN-like black holes: if a dyonic RN-like black hole satisfies the requirement \(\mu=y\omega\), when \(\sqrt{2(B^{2}+Q^{2})}/r_{+}^{2}<\omega<q\Phi_{H}\), particularly \(\mu\geq\sqrt{2}(q\Phi_{H})\), the dyonic RN-like black hole is superradiantly stable. Scalars can be seen as combinations of positive/negative powers of a base, much like the decimal system. This principle is key in math and computing, from number systems to Fourier series (linked to \(e^{ix}\)). Dyonic RN-like black holes show no phase transition.
**Keywords: superradiantly stable, a new variable y, dyonic RN-like black hole, thermodynamic properties**
## I Introduction
The research on the stability of black holes can be traced back to 1957, when Regge and Wheeler found that Schwarzschild black holes are stable under small perturbations of the metric. In 1970, Zerilli further studied Schwarzschild black holes and RN black holes and reduced the perturbation problem to solving a Schrodinger-like (wave) equation [1; 2; 3; 4]. In 1972, Teukolsky studied the perturbations of various matter fields (gravitational, electromagnetic, and neutrino fields) in Kerr space-time and decoupled the field equations into independent wave equations [5; 6], laying the foundation for the study of external field perturbations of black holes. In 1983, Chandrasekhar's "Mathematical Theory of Black Holes" systematically expounded the perturbation theory of black holes. Superradiance is essentially a radiation-enhancement process, which plays an essential role in optics, quantum mechanics, and especially relativity and astrophysics. Dicke coined the term "superradiance" in the context of coherent emission in quantum optics [7], and the first high-resolution superradiance measurements were achieved using coherent synchrotron radiation [7; 8]. Zeldovich realized that a dissipative rotating body amplifies incident radiation, and Starobinsky, building on this, recognized the superradiance phenomenon of black holes: when the frequency of the incident radiation satisfies the superradiance condition, rotational energy can be extracted from the black hole [7; 8]. Black hole superradiance is closely related to the black hole area theorem, the Penrose process, tidal forces, and even Hawking radiation [9]. In general relativity, black hole superradiance extracts energy, charge, and angular momentum in a vacuum [9; 10]. This can be understood from the scattering problem of quantum mechanics: a plane wave with eigenfrequency \(\omega\) moves toward the center of the black hole and is scattered to infinity under the action of the black hole, and the scattered particles obey a specific angular distribution. Taking a scalar wave on a static spherically symmetric background as an example, the scalar field satisfies a Schrodinger-like equation of the following form:
\[\frac{d^{2}\psi_{lm}}{dx^{2}}+V_{\rm eff}\,\psi_{lm}=0, \tag{1}\]
where \(\psi_{lm}\) is the radial component of the field after separation of variables, \(x\) is the tortoise coordinate, and \(V_{\rm eff}\) depends on the theoretical model and the space-time background. In the case of spherical symmetry, we consider the scattering of monochromatic plane waves. Assuming that \(V_{\rm eff}\) is constant on the boundary, the asymptotic solution satisfies
\[\psi_{lm}\sim\begin{cases}\mathcal{T}e^{-ik_{H}r},&r\to r_{+}\\ \mathcal{I}e^{-ik_{\infty}r}+\mathcal{R}e^{ik_{\infty}r},&r\to\infty,\end{cases} \tag{2}\]
where
\[k_{H}=\omega-\omega_{c},\] \[k_{\infty}=\sqrt{\omega^{2}-\mu^{2}}. \tag{3}\]
\(\omega_{c}\) is the critical frequency. For a charged and rotating black hole, the critical frequency is
\[\omega_{c}=q\Phi_{H}+m\Omega_{H}=\frac{qQr_{p}+ma}{r_{p}^{2}+a^{2}}, \tag{4}\]
where \(r_{p}\) is the event horizon of the black hole (namely the outer horizon), \(\Omega_{H}\) is the angular velocity of the black hole, \(\Phi_{H}\) is the electric potential at the event horizon, \(m\) is the magnetic quantum number of the scalar field, and \(q\) is the charge of the scalar field. When the black hole is not rotating, the critical frequency degenerates to
\[\omega_{c}=q\Phi_{H}=\frac{qQ}{r_{p}}. \tag{5}\]
The Wronskian of \(\psi\) and its complex conjugate \(\psi^{*}\),

\[W\equiv\left|\begin{array}{cc}\psi&\psi^{*}\\ \psi^{\prime}&\psi^{*\prime}\end{array}\right|=\psi\psi^{*\prime}-\psi^{*}\psi^{\prime} \tag{6}\]
can be evaluated at infinity as
\[W=2ik_{\infty}\left(|\mathcal{R}|^{2}-|\mathcal{I}|^{2}\right) \tag{7}\]
and at the horizon it is
\[W=-2ik_{H}|\mathcal{T}|^{2}. \tag{8}\]
Since the Wronskian is a constant, we have
\[|\mathcal{R}|^{2}=|\mathcal{I}|^{2}-\frac{k_{H}}{k_{\infty}}|\mathcal{T}|^{2}. \tag{9}\]
It can be found that when \(\frac{k_{H}}{k_{\infty}}>0\), \(|\mathcal{R}|^{2}<|\mathcal{I}|^{2}\): the reflected wave amplitude is smaller than the incident wave amplitude, and the energy of the scalar field decreases. When \(\frac{k_{H}}{k_{\infty}}<0\), \(|\mathcal{R}|^{2}>|\mathcal{I}|^{2}\): the amplitude of the reflected wave is greater than that of the incident wave, and the energy of the scalar field increases. Therefore, the condition for superradiance is the condition for the scalar field energy to increase, that is,
\[0<\omega<\omega_{c}. \tag{10}\]
In the above equation, the critical angular frequency \(\omega_{c}\) is defined as
\[\omega_{c}=q\Psi, \tag{11}\]
where \(\Psi\) is the electromagnetic potential of the outer horizon of the dyonic RN black hole, \(\Psi=Q/r_{+}\). The superradiant condition for an electrically charged massive scalar perturbation on the dyonic RN black hole background is
\[\omega<\omega_{c}=\frac{qQ}{r_{+}}. \tag{12}\]
The bound state condition at spatial infinity for the scalar perturbation is
\[\omega^{2}<\mu^{2}. \tag{13}\]
When an incident wave with frequency \(\omega\) satisfies this formula, superradiant scattering occurs. At this time, the reflected wave carries more energy than the incident wave; this is the superradiance condition for a charged black hole in general relativity. It is worth noting that if the black hole is neither charged nor rotating, that is, a Schwarzschild black hole, there is no superradiance phenomenon; only rotating or charged black holes exhibit superradiance [8; 9; 10].
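For illustration, a minimal helper that evaluates the superradiance condition (12) and the bound-state condition (13) for a charged scalar; all parameter values below are hypothetical:

```python
def superradiant(omega, mu, q, Q, r_plus):
    """Check the superradiance condition 0 < omega < q Q / r_+  (Eq. 12)
    and the bound-state condition omega^2 < mu^2  (Eq. 13)."""
    omega_c = q * Q / r_plus              # critical frequency
    amplified = 0.0 < omega < omega_c     # superradiant scattering occurs
    bound_state = omega**2 < mu**2        # wave is trapped at spatial infinity
    return amplified, bound_state

print(superradiant(omega=0.3, mu=0.5, q=1.0, Q=0.8, r_plus=1.6))
# -> (True, True): amplified and trapped, the potentially unstable regime
```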
Hod proved [10] that the Kerr black hole is superradiantly stable under massive scalar perturbation when \(\mu\geq\sqrt{2}m\Omega_{H}\), where \(\mu\) is the mass of the scalar field.
The effect of magnetic fields on black hole superradiance is an exciting topic with possible astrophysical applications. A dyonic RN-like black hole is not asymptotically flat. It describes a black hole immersed in an asymptotically uniform magnetic field. This paper discusses the superradiant stability of binary RN black holes, asymptotically flat, band-like black holes. This article introduces the above condition into dyonic RN-like black holes: if a dyonic RN-like black hole satisfies the requirement \(\mu=y\omega\)[11; 12; 13; 14] and \(\mu\geq\sqrt{2}(q\Phi_{H})\), then the dyonic RN-like black hole is superradiantly stable.
Thermodynamic geometric analysis provides a unique lens through which the thermodynamic attributes of black holes can be explored by leveraging their geometric properties and intrinsic thermodynamic features. In recent times, there has been a surge in interest from the scientific community, focusing on the thermodynamic geometric analysis of Reissner-Nordstrom (RN) black holes in the realm of f(R) gravity.
RN black holes can be described as electrically charged cosmic entities where gravitational pull is counteracted by the electromagnetic repulsion originating from the charged particles. Diverging from Einstein's general relativity, the f(R) gravity theory introduces an innovative function of the Ricci scalar curvature, aiming to offer a more comprehensive portrayal of gravitational behavior across both quantum and cosmological scales.
When delving into the thermodynamics of black holes, common variables of focus include entropy, temperature, among others. Particularly within the framework of f(R) gravity, the study of RN black holes has employed the geometric techniques championed by Ruppeiner and Quevedo. These pioneering methods map the black hole's thermodynamic variables onto a thermodynamic plane, which serves as a geometric depiction of the thermodynamic state space.
The structure of this paper is organized as follows: In Section 2, we present the mathematical framework of dyonic RN-like black holes. Section 3 introduces a new class of action and field equations for dyonic RN-like black holes. In Section 4, we derive the radial equation of motion and the associated effective potential. Section 5 involves a detailed analysis of the shape of the effective potential, from which we determine the superradiant stability parameter region for the system. In Section 6, we discuss the limit y of the incident particle under the superradiance of these novel black holes. Section 7 offers a thorough examination of the thermodynamic geometry of the new-type black holes and their connection to superradiant stability. Section 8 is dedicated to establishing that there is no phase transition for the black hole in its general form. Finally, Section 9 concludes the paper.
## II Mathematical Framework
### Complex Function Theory
Complex function theory, also known as complex analysis, studies the functions of complex variables. The key concepts in our investigation include analytic functions, singularities, and residues.
We propose a methodology to construct the Lagrangian function using the radical solutions of a quartic equation. The generalized coordinates \(q\) and generalized velocities \(\dot{q}\) can be expressed in terms of the radical solutions:[8; 9; 10]
\[q=A+Bx+Cx^{2}+Dx^{3}+Ex^{4}, \tag{14}\]
\[\dot{q}=B+2Cx+3Dx^{2}+4Ex^{3}, \tag{15}\]
where \(A,B,C,D,\) and \(E\) are constants. We can then define the kinetic and potential energies in terms of the radical solutions:
\[T(q,\dot{q},t)=\frac{1}{2}m\dot{q}^{2}, \tag{16}\]
\[V(q,t)=\frac{1}{2}k(q-q_{0})^{2}, \tag{17}\]
where \(m\) is the mass, \(k\) is the spring constant, and \(q_{0}\) is the equilibrium position. We can construct the Lagrangian function using the expressions for the kinetic and potential energies:
\[L(q,\dot{q},t)=T(q,\dot{q},t)-V(q,t)=\frac{1}{2}m\dot{q}^{2}-\frac{1}{2}k(q-q_{0})^{2}. \tag{18}\]
We obtain the Lagrangian function in terms of the radical solutions of the quartic equation:
\[L(x)=\frac{1}{2}m\left(B+2Cx+3Dx^{2}+4Ex^{3}\right)^{2}-\frac{1}{2}k\left(A+Bx+Cx^{2}+Dx^{3}+Ex^{4}-q_{0}\right)^{2}. \tag{19}\]
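A symbolic sketch of this construction, with SymPy standing in for the algebra; the symbols mirror Eqs. (14)-(19), and nothing here is specific to the black-hole setup:

```python
import sympy as sp

x, q0 = sp.symbols("x q_0")
m, k = sp.symbols("m k", positive=True)
A, B, C, D, E = sp.symbols("A B C D E")

q = A + B*x + C*x**2 + D*x**3 + E*x**4        # generalized coordinate, Eq. (14)
qdot = sp.diff(q, x)                          # B + 2Cx + 3Dx^2 + 4Ex^3, Eq. (15)

T = sp.Rational(1, 2) * m * qdot**2           # kinetic energy, Eq. (16)
V = sp.Rational(1, 2) * k * (q - q0)**2       # potential energy, Eq. (17)
L = sp.simplify(T - V)                        # Lagrangian L(x), Eq. (19)
print(L)
```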
### Analytic Functions
An analytic function is a function f(z) that is differentiable at every point z in its domain. The Cauchy-Riemann equations are given by:[1; 2; 3; 4]
\[\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\qquad\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x} \tag{20}\]
where u and v are the real and imaginary parts of f(z), respectively.
### Singularities
Singularities occur in complex functions when a function is not analytic at a specific point. There are three types of singularities: removable, pole, and essential.
### Residues
The residue of an analytic function f(z) at a singularity z=a is given by:
\[\operatorname{Res}(f,a)=\frac{1}{2\pi i}\oint_{C}f(z)\,dz \tag{21}\]
where the integral is taken over a contour C enclosing the singularity a.
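The residue formula lends itself to a direct numerical check by discretizing a small circular contour around the singularity; a minimal sketch for \(f(z)=e^{z}/(z-a)\), whose residue at \(z=a\) is \(e^{a}\):

```python
import numpy as np

def residue(f, a, r=0.1, N=10_000):
    """Approximate (1 / 2 pi i) * contour integral of f over |z - a| = r."""
    theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    z = a + r * np.exp(1j * theta)                 # points on the contour
    dz = 1j * r * np.exp(1j * theta) * (2.0 * np.pi / N)
    return np.sum(f(z) * dz) / (2.0j * np.pi)

a = 0.5
print(residue(lambda z: np.exp(z) / (z - a), a))   # ~ e^{0.5} = 1.6487...
```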
The multi-pole Laurent series represents a meromorphic function as a series of terms and is suitable for complex functions that have multiple poles in the vicinity of certain points. It can be written in the following form:
\[f(z)=\sum_{n=-\infty}^{\infty}c_{n}\left(z-z_{0}\right)^{n} \tag{22}\]
Here, \(z_{0}\) is the pole of the function \(f(z)\), and \(c_{n}\) are the coefficients of the series. Unlike the Puiseux Laurent series, the multi-pole Laurent series allows for the presence of multiple poles near \(z_{0}\).
The multi-pole Laurent series can also be written in the following form:
\[f(z)=\sum_{j=1}^{k}\sum_{n=-\infty}^{\infty}c_{n,j}\left(z-z_{0,j}\right)^{n} \tag{23}\]
Here, \(z_{0,j}\) is the \(j\)th pole of the function \(f(z)\), \(c_{n,j}\) are the coefficients of the series, and \(k\) is the total number of poles.
We get that
\[T(f(z))=\int D\phi\exp(iS(\phi))/\left(z-z_{0}\right), \tag{24}\]
where \(S\) is the action of the path \(x(t)\) or \(\phi(x(t))\), the time integral of the Lagrangian \(L(t,x,\dot{x})\):
\[S=\int L(t,x,\dot{x})dt \tag{25}\]
Our study revolves around the most basic background geometry of a black hole, in tandem with a \(U(1)\) gauge field interacting with a charged scalar field. The expression for the Lagrangian density is given by:

\[\mathcal{L}=R+\frac{6}{L^{2}}-\gamma\left(\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\left|\nabla_{\mu}\psi-iqA_{\mu}\psi\right|^{2}+m^{2}|\psi|^{2}\right) \tag{26}\]
In this formula, \(R\) signifies the Ricci scalar, while \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) characterizes the electromagnetic field strength. The variable \(\psi\) stands for a scalar field possessing a charge \(q\) and mass \(m\), and \(A\) symbolizes the gauge field. We
define the AdS radius as \(L\), and \(q\) as the charge, and for this work, we consider them both to be unity. Finally, \(\gamma\) is a variable that quantifies the intensity of the backreaction.
The SU(2) linear non-autonomous quantum system refers to a quantum system whose Hamiltonian is a linear combination of SU(2) generators with time-dependent superposition coefficients. It is a time-dependent quantum system with important practical value. One of the most important achievements of nonlinear science at present lies in understanding chaotic phenomena. Therefore, discussing the chaos of SU(2) linear non-autonomous quantum systems has theoretical significance and practical value.
The chaos problem in SU(2) linear non-autonomous quantum systems is discussed using the SU(2) algebraic dynamics equations, and a very important and interesting result is found: complementary chaos exists in the SU(2) linear non-autonomous quantum system, and the box dimension of the fractal graph is calculated.
Example: SU(2) group: \(\rm SU(2)=\left\{J_{0},J_{+},J_{-}\right\}\), \(n=3\), \(l=1\). The group chain is \(su(2)\supset u(1)\), \(\rm CSCO\text{-}II=\{\hat{J}^{2},\hat{J}_{0}\}\); their common eigenfunctions \(y_{Jm}\) form the basis vectors of the entire Hilbert space.
The Chern-Simons theory related to the group \(SU(2)\) is a gauge theory for the three-dimensional manifold \(M\) governed by action[15; 16]
\[S_{k}[A]=\frac{k}{4\pi}\int_{M}\left\langle A\wedge dA+\frac{2}{3}A\wedge A \wedge A\right\rangle \tag{27}\]
where \(k\) is called the level of the action, \(A\) is the local \(SU(2)\)-connection field, and \(\left\langle\cdot,\cdot\right\rangle\) denotes the \(\mathfrak{su}(2)\) Killing form. Chern-Simons theory became very important when it was first shown to be closely related to gravity in three dimensions, especially when Witten demonstrated [16] its surprising relationship to manifold and knot invariants. The Chern-Simons path integral is a manifold invariant, and the mean of quantum observables naturally leads to Jones polynomials. For all these reasons, Chern-Simons theory has been the center of much interest, and its quantization is now well understood when the gauge group is compact, especially when the gauge group is SU(2).
In this article, we fabricate a black hole metric that conforms to the SU(2) group structure and compute the scalar curvature of the thermodynamic geometry of this black hole. We conclude that there is no phase transition for SU(2) black holes.
## III New class of action and field equations
We set the system to be in the interval from 0 to 1. Since the interval from 0 to 1 can be mapped to the interval from 0 to infinity, the ordering discussed in this paper is unchanged. This theory aims to show that the modified Einstein gravitational equation has a Reissner-Nordstrom solution in a vacuum. First, we can consider the following equation (the modified Einstein gravitational equation).
The line element in spherical coordinates is [13] (the metric is written in exponential form)
\[ds^{2}=-e^{G(t,r)}dt^{2}+e^{-G(t,r)}dr^{2}+\left[r^{2}d\theta^{2}+r^{2}\sin^{ 2}\theta d\varphi^{2}\right] \tag{28}\]
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda(\left(g^{\theta\theta}\right)^{2}) g_{\mu\nu}=-\frac{8\pi G}{C^{4}}T_{\mu\upsilon} \tag{29}\]
In this work, the action(we set \(8G=c=1\) ) is given by the following relation, which in the special case, reduces to the Einstein-Maxwell dilaton gravity:[10]
\[S=\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}(-2\Lambda(\left(g^{\theta\theta} \right)+R) \tag{30}\]
where \(\Lambda\) is a function of the Ricci scalar \(R\), and \(\Phi\) is the representation of the dilatonic field, also similar to f(R)(We will now consider non-pathological functional forms of \(f(R)\) that can be expanded in a Taylor series of the form \(f(R)=a_{0}+R+a_{2}R^{2}+a_{3}R^{3}+\ldots a_{n}R^{n}+\ldots\) where we have normalized all coefficients concerning the coefficient of the linear term). Variation of the action for the metric \(g_{\mu\nu}\), the gauge \(A_{\mu}\) and dilaton field \(\Phi\) gives the following field equations:
This leads to the following:
\[\frac{1}{2}R\Lambda^{\prime}(R)-\Lambda=0. \tag{31}\]
\[R_{kl}\Lambda^{\prime}(R)-\frac{1}{2}g_{kl}\Lambda=0 \tag{32}\] \[\nabla_{\sigma}\left[\sqrt{-g}\Lambda^{\prime}(R)g^{\mu\nu}\right]=0.\]
In this relationship, we get
\[\mathbf{\Lambda}=B(\mathbf{p}\times\mathbf{r})/r^{4}, \tag{33}\]
where \(B\) is an algebraic parameter and \(\mathbf{p}\) is the momentum (or momentum operator).
A dyonic-like RN black hole is a static, spherically symmetric space-time geometry that solves the Einstein-Maxwell theory [14]. Using spherical coordinates \((t,r,\theta,\phi)\), the line element can be expressed as follows (we use natural units, where \(G=c=\hbar=1\)). We set up a geometric entity, and \(B\) takes a specific value for the parametric algebra so that the following formula holds.
\[ds^{2}=-\frac{\Delta}{r^{2}}dt^{2}+\frac{r^{2}}{\Delta}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}, \tag{34}\]
where
\[\Delta=-2Mr+r^{2}+Q^{2}+B^{2}, \tag{35}\]
\(M\) is the mass of the black hole, and \(Q\) and \(B\) are the electric and magnetic charges of the black hole, respectively. The dyonic RN black hole has an outer horizon at \(r_{+}\) and an inner horizon at \(r_{-}\),
\[r_{+}=M+\sqrt{M^{2}-Q^{2}-B^{2}},\quad r_{-}=M-\sqrt{M^{2}-Q^{2}-B^{2}}. \tag{36}\]
They satisfy the following relation
\[\Delta=\left(r-r_{+}\right)\left(r-r_{-}\right),\quad r_{+}r_{-}=Q^{2}+B^{2}, \quad r_{+}+r_{-}=2M. \tag{37}\]
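These horizon relations are easy to check numerically. The following is a minimal sanity check of Eqs. (36)-(37); the values of \(M\), \(Q\), \(B\) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper); they must
# satisfy M^2 > Q^2 + B^2 for two real horizons to exist.
M, Q, B = 1.0, 0.5, 0.3

r_plus = M + np.sqrt(M**2 - Q**2 - B**2)   # outer horizon, Eq. (36)
r_minus = M - np.sqrt(M**2 - Q**2 - B**2)  # inner horizon, Eq. (36)

# Vieta-like relations of Eq. (37)
assert np.isclose(r_plus * r_minus, Q**2 + B**2)
assert np.isclose(r_plus + r_minus, 2 * M)

# Delta factorizes as (r - r_plus)(r - r_minus) for any r
r = 2.7
assert np.isclose(r**2 - 2 * M * r + Q**2 + B**2,
                  (r - r_plus) * (r - r_minus))
print(f"r_+ = {r_plus:.4f}, r_- = {r_minus:.4f}")
```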
The equation of motion of a charged massive scalar perturbation \(\Phi\) in the dyonic RN black hole background is described by the covariant Klein-Gordon (KG) equation
\[\left(D^{\nu}D_{\nu}-\mu^{2}\right)\Phi=0, \tag{38}\]
where \(D^{\nu}=\nabla^{\nu}-iqA^{\nu}\) and \(D_{\nu}=\nabla_{\nu}-iqA_{\nu}\) are covariant derivatives, and \(q\) and \(\mu\) are the charge and mass of the scalar field, respectively. The following vector potential describes the electromagnetic field of the dyonic black hole
\[A_{\nu}=\left(-\frac{Q}{r},0,0,B(\cos\theta\mp 1)\right), \tag{39}\]
where the upper minus sign applies to the northern hemisphere of the black hole, and the lower plus sign applies to the southern hemisphere.
The solution of the KG equation can be decomposed into the following form
\[\Phi(t,r,\theta,\phi)=R(r)Y(\theta)e^{im\phi}e^{-i\omega t}, \tag{40}\]
where \(\omega\) is the angular frequency of the scalar perturbation and \(m\) is the azimuthal harmonic index. \(Y(\theta)\) is the angular part of the solution and \(R(r)\) is the radial part. Substituting the above decomposition into the KG equation, we obtain the radial and angular equations of motion. Considering the different electromagnetic potentials in the northern and southern hemispheres, the angular equation of motion is discussed below in two cases.
## IV The radial equation of motion and effective potential
A new radial wave function is defined as[11; 12; 13; 14]
\[\psi_{lm}\equiv\Delta^{\frac{1}{2}}R_{lm}. \tag{41}\]
to recast the radial equation of motion as a Schrödinger-like equation
\[\frac{d^{2}\psi_{lm}}{dr^{2}}+(\omega^{2}-V)\psi_{lm}=0, \tag{42}\]
where
\[\omega^{2}-V=\frac{U+M^{2}-a^{2}-Q^{2}}{\Delta^{2}}, \tag{43}\]
in which \(V\) denotes the effective potential.
Considering the superradiance condition, i.e., \(\omega<\omega_{c}\), and the bound-state condition, the system of a Kerr-Newman black hole and a charged massive scalar perturbation is superradiantly stable when the effective potential has no trapping well [12]. Therefore, the shape of the effective potential \(V\) is analyzed next to investigate the presence of trapping wells.
The asymptotic behaviors of the effective potential \(V\) around the inner and outer horizons and at spatial infinity can be expressed as
\[V(r\rightarrow+\infty)\rightarrow\mu^{2}-\frac{2(2M\omega^{2}-qQ\omega-M\mu^{ 2})}{r}+\mathcal{O}(\frac{1}{r^{2}}), \tag{44}\]
\[V(r\to r_{+})\rightarrow-\infty,\ \ V(r\to r_{-})\rightarrow-\infty. \tag{45}\]
If a Kerr black hole satisfies the condition \(\mu=y\omega\), it will be superradiantly stable when \(\mu\geq\sqrt{2}m\Omega_{H}\). In this article, we introduce the above condition into dyonic-like black holes. Therefore, the formula for the asymptotic behaviors is written as
\[V(r\rightarrow+\infty)\to y^{2}\omega^{2}-\frac{2[M(2-y^{2})\omega^{2}-qQ \omega]}{r}+\mathcal{O}(\frac{1}{r^{2}}), \tag{46}\]
\[V(r\to r_{+})\rightarrow-\infty,\ \ V(r\to r_{-})\rightarrow-\infty. \tag{47}\]
It is concluded from the equations above that the effective potential approaches a constant at spatial infinity, and that the number of extrema between the inner and outer horizons cannot be less than one. The asymptotic behavior of the derivative of the effective potential \(V\) at spatial infinity can be expressed as
\[V^{\prime}(r\rightarrow+\infty)\rightarrow\frac{2[M(2-y^{2})\omega^{2}-qQ \omega]}{r^{2}}+\mathcal{O}(\frac{1}{r^{3}}), \tag{48}\]
The derivative of the effective potential has to be negative to satisfy the no trapping well condition,
\[2M(2-y^{2})\omega^{2}-2Qq\omega<0. \tag{49}\]
## V Analysis of Superradiant Stability
In this section, we will find the regions in the parameter space where the system of a dyonic RN black hole and a massive scalar perturbation is superradiantly stable. We determine the parameter regions by considering the extrema of the effective potential in the range \(r_{-}<r<+\infty\).
Now we define a new variable \(z=r-r_{-}\). The numerator of the derivative of the effective potential \(V^{\prime}(z)\) is the quartic polynomial
\[g(z)=a_{1}z^{4}+b_{1}z^{3}+c_{1}z^{2}+d_{1}z+e_{1}, \tag{52}\]
whose coefficients \(d_{1}\) and \(e_{1}\) read
\[\begin{split} d_{1}=&-2\left(4M-3r_{+}\right)\left(2M-r_{+} \right)^{3}\omega^{2}+2qQ\left(7M-5r_{+}\right)\left(2M-r_{+}\right)^{2}\omega \\ &+2q^{2}\left(-B^{2}\left(M^{2}-5Q^{2}\right)+4Q^{4}+B^{4}\right) -2q^{2}Q^{2}\left(3M\left(r_{+}-2M\right)\right.\\ &\left.+2\mu^{2}\left(2M^{2}-3Mr_{+}+r_{+}^{2}\right)^{2}+2 \left(Q^{2}+B^{2}\right)\right)-12Mq^{2}Q^{2}r_{-}\\ &+2\left(M-r_{+}\right)^{2}\left(\lambda-1\right)\\ e_{1}=&\left(r_{+}-r_{-}\right)\left(qQ-\omega r_{- }\right)^{2}r_{-}^{2}+\frac{1}{4}\left(r_{+}-r_{-}\right)^{3},\end{split} \tag{53}\]
where \(\lambda=l(l+1)\), \(l>qB\) [14]. Since we set the system to range from 0 to 1, \(qB>q^{2}Q^{2}\).
As stated above, \(g(z)\) denotes the numerator of the derivative of the effective potential \(V^{\prime}(z)\). This quartic polynomial in \(z\) allows us to study the existence of trapping wells beyond the horizon by analyzing the properties of the roots of the equation. We use \(z_{1}\), \(z_{2}\), \(z_{3}\) and \(z_{4}\) to represent the four roots of \(g(z)=0\). The relationships among them follow from Vieta's theorem.
\[z_{1}z_{2}z_{3}z_{4}=\frac{e_{1}}{a_{1}},z_{1}z_{2}+z_{1}z_{3}+z_{1}z_{4}+z_{2} z_{3}+z_{2}z_{4}+z_{3}z_{4}=\frac{c_{1}}{a_{1}}. \tag{54}\]
When \(z>0\), from the asymptotic behavior of the effective potential at the inner and outer horizons and at spatial infinity, it can be inferred that the equation \(V^{\prime}(z)=0\) (or \(g(z)=0\)) cannot have fewer than two positive roots. These two positive roots are written as \(z_{1},z_{2}\).
Analysis shows that
\[e_{1}>0. \tag{55}\]
When
\[e_{1}>0,\ \ c_{1}<0, \tag{56}\]
the remaining two roots of \(g(z)=0\), namely \(z_{3}\) and \(z_{4}\), are both negative.
When \(y^{2}>2\) (so that \(a_{1}>0\)), we have \(e_{1}>0\) and \(c_{1}<0\); hence the equation \(V^{\prime}(z)=0\) cannot have more than two positive roots, and there is no trapping well outside the outer horizon. The dyonic RN-like black hole is therefore superradiantly stable in this regime.
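This root-counting argument can be illustrated numerically. The sketch below uses purely illustrative quartic coefficients satisfying \(a_{1}>0\), \(c_{1}<0\), \(e_{1}>0\) (the paper's coefficients are complicated functions of \(M\), \(Q\), \(B\)) and checks the positive-root count together with the first Vieta relation of Eq. (54).

```python
import numpy as np

# Illustrative coefficients with a1 > 0, c1 < 0, e1 > 0 (assumed values)
a1, b1, c1, d1, e1 = 1.0, -3.0, -2.0, 1.0, 0.5

roots = np.roots([a1, b1, c1, d1, e1])
real_roots = roots[np.abs(roots.imag) < 1e-9].real
positive_roots = real_roots[real_roots > 0]

# At most two positive roots, consistent with superradiant stability
assert len(positive_roots) <= 2
# First Vieta relation of Eq. (54): product of all four roots = e1/a1
assert np.isclose(np.prod(roots).real, e1 / a1)
print("positive roots:", positive_roots)
```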
## VI The limit \(y\) of the incident particle under the superradiance of novel black holes
We will investigate the physical and mathematical properties of linearized massive scalar field configurations (scalar clouds) with nontrivial coupling to the electromagnetic-like fields of novel black holes. The space-time line element of a new type of spherically symmetric black hole can be expressed as [12]
\[ds^{2}=-g(r)dt^{2}+(1/g(r))dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}),\]
where
\[g(r)=1-2M/r+(B^{2}+Q^{2})/r^{2}. \tag{57}\]
The effective potential \(V\) then becomes [12]
\[V(r)=\left(1-\frac{2M}{r}+\frac{(B^{2}+Q^{2})}{r^{2}}\right)\left[\mu^{2}+ \frac{l(l+1)}{r^{2}}+\frac{2M}{r^{3}}-\frac{2(B^{2}+Q^{2})}{r^{4}}-\frac{ \alpha(B^{2}+Q^{2})}{r^{4}}\right] \tag{58}\]
As we will show, a Schrödinger-like equation governs the radial behavior of the spatially bounded, nonminimally coupled massive scalar field configurations of the novel black-hole spacetime. Here we use the integral relation \(\int_{0}^{1}dx\sqrt{1/x-1}=\pi/2\). With \(\mu=1/(n+\frac{1}{2})\), the WKB condition as \(V(r\rightarrow+\infty)\) reads
\[\int_{(y^{2})_{t-}}^{(y^{2})_{t+}}d(y^{2})\sqrt{\omega^{2}-V1(y;M,B,l,\mu, \alpha 1)}=\left(n+\frac{1}{2}\right)\cdot\pi\mu/2=\pi/2\ \ \ ;\ \ \ \ n=0,1,2,.... \tag{59}\]
The two integration boundaries \(\{y_{t-},y_{t+}\}\) are the classical turning points, with \(V(y_{t-})=V(y_{t+})=0\), of the composed black-hole-field binding potential. The resonance parameter \(n\) (with \(n\in\{0,1,2,...\}\)) labels the discrete family \(\{\alpha_{n}(\mu,l,M,B)\}_{n=0}^{\infty}\) of the system.
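The integral relation quoted above is readily verified; a minimal numerical check:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the relation used in the WKB analysis:
# int_0^1 sqrt(1/x - 1) dx = pi/2 (the endpoint singularity at
# x = 0 is integrable, and quad handles it).
value, error = quad(lambda x: np.sqrt(1.0 / x - 1.0), 0.0, 1.0)
assert np.isclose(value, np.pi / 2)
print(value, np.pi / 2)
```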
Using the relationship between the radial coordinates \(y\) and \(r\), the WKB resonance equation can be expressed as
\[\int_{r_{t-}}^{r_{t+}}dr\frac{\sqrt{-V(r;M,B,l,\mu,\alpha)}}{g(r)}=\big{(}n+ \frac{1}{2}\big{)}\cdot\pi\ \ \ \ ;\ \ \ \ n=0,1,2,...\, \tag{60}\]
where the two polynomial relations
\[1-\frac{2M}{r_{t-}}+\frac{B^{2}+Q^{2}}{r_{t-}^{2}}=0 \tag{61}\]
and
\[\frac{l(l+1)}{r_{t+}^{2}}+\frac{2M}{r_{t+}^{3}}-\frac{2\left(B^{2}+Q^{2} \right)}{r_{t+}^{4}}-\frac{\alpha\left(B^{2}+Q^{2}\right)}{r_{t+}^{4}}=0. \tag{62}\]
determine the radial turning points \(r_{t-},r_{t+}\) of the composed black-hole-field binding potential.
We set
\[x\equiv\frac{r-r_{+}}{r_{+}}\ \ \ \ ;\ \ \ \ \tau\equiv\frac{r_{+}-r_{-}}{r_{+}}. \tag{63}\]
In this regard, the combined black hole-mass field interaction term has the form of a combined potential well,
\[V[x(r)]=-\tau\Big{(}\frac{\alpha(B^{2}+Q^{2})}{r_{+}^{4}}-\mu^{2}\Big{)}\cdot x +\Big{[}\frac{\alpha(B^{2}+Q^{2})(5r_{+}-6r_{-})}{r_{+}^{5}}-\mu^{2}\big{(}1- \frac{2r_{-}}{r_{+}}\big{)}\Big{]}\cdot x^{2}+O(x^{3})\, \tag{64}\]
in the near-horizon region
\[x\ll\tau. \tag{65}\]
From the near-horizon expression of the black-hole-field binding potential, one obtains the dimensionless expressions
\[x_{t-}=0 \tag{66}\]
and
\[x_{t+}=\tau\cdot\frac{\frac{\alpha(B^{2}+Q^{2})}{r_{+}^{4}}-\mu^{2}}{\frac{ \alpha(B^{2}+Q^{2})(5r_{+}-6r_{-})}{r_{+}^{5}}-\mu^{2}\big{(}1-\frac{2r_{-}}{ r_{+}}\big{)}} \tag{67}\]
for the classical turning points of the WKB integral relation.
We find that our analysis is valid if (here \(\alpha 1\) corresponds to the transformation of \(y\))
\[\alpha\simeq\frac{\mu^{2}r_{+}^{4}}{(B^{2}+Q^{2})},\alpha 1\simeq\sqrt{\frac{ \mu^{2}r_{+}^{4}}{(B^{2}+Q^{2})}} \tag{68}\]
In this case, the near horizon binding potential and its outer turning point can be approximated by a very compact expression
\[V(x)=-\tau\Big{[}\Big{(}\frac{\alpha(B^{2}+Q^{2})}{r_{+}^{4}}-\mu^{2}\Big{)} \cdot x-4\mu^{2}\cdot x^{2}\Big{]}+O(x^{3}) \tag{69}\]
and
\[x_{t+}=\frac{1}{4}\Big{(}\frac{\alpha(B^{2}+Q^{2})}{\mu^{2}r_{+}^{4}}-1\Big{)}. \tag{70}\]
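As a consistency check, the outer turning point (70) is the nonzero root of the quadratic-order near-horizon potential (69); a small symbolic sketch:

```python
import sympy as sp

# Verify that Eq. (70) is the nonzero root of the truncated
# near-horizon potential of Eq. (69).
x, tau, alpha, mu, rp, B, Q = sp.symbols('x tau alpha mu rp B Q',
                                         positive=True)

A = alpha * (B**2 + Q**2) / rp**4 - mu**2
V = -tau * (A * x - 4 * mu**2 * x**2)        # Eq. (69), up to O(x^3)

roots = sp.solve(sp.Eq(V, 0), x)
x_tp = sp.Rational(1, 4) * (alpha * (B**2 + Q**2) / (mu**2 * rp**4) - 1)
assert any(sp.simplify(rt - x_tp) == 0 for rt in roots)
```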
In addition, one finds the near-horizon relation
\[p(x)=\tau\cdot x+(1-2\tau)\cdot x^{2}+O(x^{3}). \tag{71}\]
We know that
\[\frac{1}{\sqrt{\tau}}\int_{0}^{x_{t+}}dx\sqrt{\frac{\alpha(B^{2}+Q^{2})/r_{+}^{2}-\mu^{2}r_{+}^{2}}{x}-4\mu^{2}r_{+}^{2}}=\big{(}n+\frac{1}{2}\big{)}\cdot \pi\ \ \ \ ;\ \ \ \ n=0,1,2,.... \tag{72}\]
Defining the dimensionless radial coordinate
\[z\equiv\frac{x}{x_{t+}}\;, \tag{73}\]
and we get
\[\frac{2\mu r_{+}x_{t+}}{\sqrt{\tau}}\int_{0}^{1}dz\sqrt{\frac{1}{z}-1}=\big{(}n +\frac{1}{2}\big{)}\cdot\pi\;\;\;\;;\;\;\;\;n=0,1,2,...\;, \tag{74}\]
which yields the relation
\[\frac{\mu r_{+}x_{t+}}{\sqrt{\tau}}=n+\frac{1}{2}\;\;\;\;;\;\;\;\;\;n=0,1,2,... \tag{75}\]
From the contour integral formula, it can be seen that there is a certain extremum forming a ring,
\[1/y^{2}\rightarrow\alpha \tag{76}\]
where \(y\) takes values in the interval from 0 to 1.
The physical parameter \(y\) is defined by a dimensionless relation, and in this regime \(y\) is greater than \(\sqrt{2}\),
\[y\equiv\alpha 1/\sqrt{2}. \tag{77}\]
Here the critical parameter y is given by the simple relation
\[y/\mu\equiv\frac{r_{+}^{2}}{\sqrt{2(B^{2}+Q^{2})}}. \tag{78}\]
When
\[\sqrt{2(B^{2}+Q^{2})}/r_{+}^{2}<\omega<q\Phi_{H}, \tag{79}\]
the new type of black hole is superradiantly stable.
## VII The thermodynamic geometry of new type black holes and their relation to superradiant stability
We can rewrite the action[12]
\[S[\varphi]=\frac{a}{2\Omega_{H}}\int d^{4}x\sin\theta\varphi\left(-\frac{1}{f( r)}\partial_{\xi}^{2}+\partial_{r}f(r)\partial_{r}\right)\varphi \tag{80}\]
When \(\sin\theta=0\), the Lagrangian equation of the action still conforms to the above form, but the boundary term becomes 0. The action can always be reduced to a negative-power expansion. It can be seen that Hawking radiation is consistent with superradiance.
Thermodynamic geometry of the new type of black holes: the entropy \(S\) is [12]
\[S\to 4\pi\left[2M\left(M+\sqrt{M^{2}-B^{2}-Q^{2}}\right)-Q^{2}\right] \tag{81}\]
Through the thermodynamic geometric metric, we obtain the expression of the new type black hole Ruppeiner metric
\[g_{ab}^{R}=\frac{\partial^{2}}{\partial x^{a}\partial x^{b}}S(M,Q)\;\;\;\;(a, b=1,2) \tag{82}\]
where \(x^{1}=M,x^{2}=Q\). By calculation, we get the metric components
\[g_{11}^{R}=8\pi\left(2-\frac{M^{3}}{\left(-B^{2}+M^{2}-Q^{2}\right)^{3/2}}+ \frac{3M}{\sqrt{-B^{2}+M^{2}-Q^{2}}}\right) \tag{83}\]
\[g_{12}^{R}=\frac{8\pi Q\left(B^{2}+Q^{2}\right)}{\left(-B^{2}+M^{2}-Q^{2}\right)^ {3/2}} \tag{84}\]
\[g_{21}^{R}=\frac{8\pi Q\left(B^{2}+Q^{2}\right)}{\left(-B^{2}+M^{2}-Q^{2}\right) ^{3/2}} \tag{85}\]
\[g_{22}^{R}=8\pi\left(-1+\frac{M\left(B^{2}-M^{2}\right)}{\left(-B^{2}+M^{2}-Q^{ 2}\right)^{3/2}}\right) \tag{86}\]
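The metric components (83)-(86) can be cross-checked symbolically as the Hessian of the entropy (81) with respect to \((M,Q)\); a short sympy sketch:

```python
import sympy as sp

# Check that Eqs. (83)-(86) are the second derivatives of the
# entropy S(M, Q) of Eq. (81), at fixed magnetic charge B.
M, Q, B = sp.symbols('M Q B', positive=True)
S = 4 * sp.pi * (2 * M * (M + sp.sqrt(M**2 - B**2 - Q**2)) - Q**2)

D = sp.sqrt(-B**2 + M**2 - Q**2)
g11 = 8 * sp.pi * (2 - M**3 / D**3 + 3 * M / D)      # Eq. (83)
g12 = 8 * sp.pi * Q * (B**2 + Q**2) / D**3           # Eqs. (84)-(85)
g22 = 8 * sp.pi * (-1 + M * (B**2 - M**2) / D**3)    # Eq. (86)

assert sp.simplify(sp.diff(S, M, 2) - g11) == 0
assert sp.simplify(sp.diff(S, M, Q) - g12) == 0
assert sp.simplify(sp.diff(S, Q, 2) - g22) == 0
```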
The curvature scalar of a new type black hole is
\[\hat{R}=g_{ab}R^{ab}\to 1/(16\pi(B^{2}-M^{2}+Q^{2})(2B^{4}+4M^{4}-5M^{2}Q^{2}+Q ^{4}+ \tag{87}\] \[4M^{3}\sqrt{-B^{2}+M^{2}-Q^{2}}-3MQ^{2}\sqrt{-B^{2}+M^{2}-Q^{2}} +B^{2}(-7M^{2}+3Q^{2}-5M\sqrt{-B^{2}+M^{2}-Q^{2}}))^{5})\]
According to the rewritten action, when the superradiation condition is established, and the black hole does not undergo a thermodynamic phase transition, the superradiation stability condition of the black hole is found.
## VIII No phase transition for the dyonic RN-like black hole in general form
The SU(2) black hole metric in general form is [15; 16]
\[\mathrm{d}s^{2}=-e^{2z_{1}}\ \mathrm{d}t^{2}+e^{-2z_{1}}\ \mathrm{d}r^{2}+r^{2}e^{2z_{2}}d \theta^{2}+r^{2}e^{-2z_{2}}\sin^{2}\theta d\varphi^{2}, \tag{88}\]
where \(z_{1}\) and \(z_{2}\) are complex numbers of modulus at most 1; they are both functions of the entropy. Taking a specific value converts the metric to the simple form
\[\mathrm{d}s^{2}=-e^{2z_{1}}\ \mathrm{d}t^{2}+e^{-2z_{1}}\ \mathrm{d}r^{2}+e^{2z_{2}}d \theta^{2}+e^{-2z_{2}}d\varphi^{2}, \tag{89}\]
The non-zero Christoffel symbols are
\[\begin{split}\Gamma_{00}^{1}&=z_{1}^{\prime}e^{4z_{1}}\\ \Gamma_{10}^{0}&=z_{1}^{\prime}\\ \Gamma_{11}^{1}&=-z_{1}^{\prime}\\ \Gamma_{21}^{2}&=z_{2}^{\prime}\\ \Gamma_{22}^{1}&=-z_{2}^{\prime}e^{2(z_{2}+z_{1})}\\ \Gamma_{31}^{3}&=-z_{2}^{\prime}\\ \Gamma_{33}^{1}&=z_{2}^{\prime}e^{2(z_{1}-z_{2})}\end{split} \tag{90}\]
The Ricci tensors are
\[\begin{split} R_{00}&=\left(z_{1}^{\prime\prime}+2z_{1}^{\prime 2}\right)e^{4z_{1}}\\ R_{11}&=-z_{1}^{\prime\prime}-2z_{1}^{\prime 2}-2z_{2}^{\prime 2}\\ R_{22}&=-\left(z_{2}^{\prime\prime}+2z_{1}^{\prime}z_{2}^{\prime}\right)e^{2(z_{1}+z_{2})}\\ R_{33}&=\left(z_{2}^{\prime\prime}+2z_{1}^{\prime}z_{2}^{\prime}\right)e^{2(z_{1}-z_{2})}\end{split} \tag{91}\]
The Ricci scalar is
\[R=-2\left(z_{1}^{\prime\prime}+2z_{1}^{\prime 2}+z_{2}^{\prime 2}\right)e^{2z_{1}} \tag{92}\]
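These curvature quantities can be cross-checked symbolically. The sketch below computes the Christoffel symbols and the Ricci scalar for the metric (89) from scratch, assuming \(z_{1}\) and \(z_{2}\) depend only on \(r\):

```python
import sympy as sp

# Symbolic cross-check of Eqs. (90)-(92) for the metric (89),
# assuming z1 = z1(r) and z2 = z2(r).
t, r, th, ph = sp.symbols('t r theta phi')
x = [t, r, th, ph]
z1, z2 = sp.Function('z1')(r), sp.Function('z2')(r)

g = sp.diag(-sp.exp(2*z1), sp.exp(-2*z1), sp.exp(2*z2), sp.exp(-2*z2))
ginv = g.inv()

def Gamma(a, b, c):  # Christoffel symbols of the second kind
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d])) for d in range(4)))

G = [[[Gamma(a, b, c) for c in range(4)] for b in range(4)]
     for a in range(4)]

def Ricci(b, c):
    return sp.simplify(sum(
        sp.diff(G[a][b][c], x[a]) - sp.diff(G[a][b][a], x[c])
        + sum(G[a][a][d] * G[d][b][c] - G[a][c][d] * G[d][b][a]
              for d in range(4)) for a in range(4)))

R = sp.simplify(sum(ginv[a, b] * Ricci(a, b)
                    for a in range(4) for b in range(4)))
print(R)  # expected: -2*(z1'' + 2*z1'**2 + z2'**2)*exp(2*z1)
```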
Since \(z_{1}\) and \(z_{2}\) are functions of the entropy, we perform a generalized thermodynamic geometric analysis on them and find that this curvature scalar contains no divergent term. Hence the thermodynamic geometry of the black hole metric constructed here, which conforms to the SU(2) group structure, shows no phase transition for SU(2) black holes.
## IX Summary
In this paper, we introduce \(\mu=y\omega\) [12; 13] into dyonic RN-like black holes and discuss the superradiant stability of dyonic RN-like black holes. We adopt the variable-separation method to divide the minimally coupled scalar perturbation equations of motion in dyonic RN black holes into two parts: angular and radial.
Hod proved [10] that when \(\mu\geq\sqrt{2}m\Omega_{H}\) (where \(\mu\) is the mass), Kerr black holes are superradiantly stable under massive scalar perturbations. In this paper, a new variable \(y\) is added to extend those results.
When \(\sqrt{2(B^{2}+Q^{2})}/r_{+}^{2}<\omega<q\Phi_{H}\), particularly \(\mu\geq\sqrt{2}(q\Phi_{H})\), and \((16\pi(B^{2}-M^{2}+Q^{2})(2B^{4}+4M^{4}-5M^{2}Q^{2}+Q^{4}+4M^{3}\sqrt{-B^{2}+M^{2}-Q^{2}}-3MQ^{2}\sqrt{-B^{2}+M^{2}-Q^{2}}+B^{2}(-7M^{2}+3Q^{2}-5M\sqrt{-B^{2}+M^{2}-Q^{2}}))^{5})\neq 0\), this dyonic RN-like black hole is superradiantly stable.
Any scalar can be regarded as a linear combination of positive and negative powers of a base. This is essentially how the decimal (or any other base) number system works. This concept is fundamental to many areas of mathematics and computing, from number representation to Fourier series, where functions are represented as a sum of sines and cosines (which can be thought of as positive and negative powers of the complex number \(e^{ix}\)). We conclude that there is no phase transition for the dyonic RN-like black holes.
The influence of magnetic fields on black-hole superradiance is an interesting topic with possible astrophysical applications. The dyonic RN-like black hole is not asymptotically flat; it describes a black hole immersed in an asymptotically uniform magnetic field. This paper discusses the superradiant stability of dyonic RN black holes, in contrast with asymptotically flat black holes, and introduces the above condition into dyonic RN-like black holes. If a dyonic RN-like black hole satisfies the condition \(\mu=y\omega\), then when \(\sqrt{2(B^2+Q^2)}/{r^2_+}< \omega< q\varPhi_H\), and in particular when \(\mu \ge\sqrt{2}(q\varPhi_H)\), the dyonic RN-like black hole is superradiantly stable.
2309.09665 | Uplink Power Control for Distributed Massive MIMO with 1-Bit ADCs | We consider the problem of uplink power control for distributed massive
multiple-input multiple-output systems where the base stations (BSs) are
equipped with 1-bit analog-to-digital converters (ADCs). The scenario with a
single-user equipment (UE) is first considered to provide insights into the
signal-to-noise-and-distortion ratio (SNDR). With a single BS, the SNDR is a
unimodal function of the UE transmit power. With multiple BSs, the SNDR at the
output of the joint combiner can be made unimodal by adding properly tuned
dithering at each BS. As a result, the UE can be effectively served by multiple
BSs with 1-bit ADCs. Considering the
signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE
scenario, we aim at optimizing the UE transmit powers and the dithering at each
BS based on the min-power and max-min-SINDR criteria. To this end, we propose
three algorithms with different convergence and complexity properties.
Numerical results show that, if the desired SINDR can only be achieved via
joint combining across multiple BSs with properly tuned dithering, the optimal
UE transmit power is imposed by the distance to the farthest serving BS (unlike
in the unquantized case). In this context, dithering plays a crucial role in
enhancing the SINDR, especially for UEs with significant path loss disparity
among the serving BSs. | Bikshapathi Gouda, Italo Atzeni, Antti Tölli | 2023-09-18T11:02:52 | http://arxiv.org/abs/2309.09665v1 | # Uplink Power Control for Distributed Massive MIMO with 1-Bit ADCs
###### Abstract
We consider the problem of uplink power control for distributed massive multiple-input multiple-output systems where the base stations (BSs) are equipped with 1-bit analog-to-digital converters (ADCs). The scenario with a single user equipment (UE) is first considered to provide insights into the signal-to-noise-and-distortion ratio (SNDR). With a single BS, the SNDR is a unimodal function of the UE transmit power. With multiple BSs, the SNDR at the output of the joint combiner can be made unimodal by adding properly tuned dithering at each BS. As a result, the UE can be effectively served by multiple BSs with 1-bit ADCs. Considering the signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE scenario, we aim at optimizing the UE transmit powers and the dithering at each BS based on the min-power and max-min-SINDR criteria. To this end, we propose three algorithms with different convergence and complexity properties. Numerical results show that, if the desired SINDR can only be achieved via joint combining across multiple BSs with properly tuned dithering, the optimal UE transmit power is imposed by the distance to the farthest serving BS (unlike in the unquantized case). In this context, dithering plays a crucial role in enhancing the SINDR, especially for UEs with significant path loss disparity among the serving BSs.
+
Footnote †: publication: pubid: 979-8-3503-1090-0/23/$31.00 © 2023 IEEE
## I Introduction
Fully digital massive multiple-input multiple-output (MIMO) is widely recognized for its ability to realize flexible beamforming and large-scale spatial multiplexing [1]. As we move towards higher carrier frequencies in the next-generation wireless systems, the number of antennas at the base station (BS) is bound to increase significantly [2]. Fully digital massive MIMO arrays with high-resolution analog-to-digital converters (ADCs), however, are exceedingly complex and power hungry. Low-resolution and 1-bit ADCs can be used to considerably decrease the power consumption without compromising the performance [3, 4, 5, 6]. Moreover, adopting low-resolution and 1-bit ADCs in a distributed/cell-free massive MIMO setting allows to reduce the backhaul signaling overhead (among the BSs or between the BSs and the central processing unit) associated with the network-wide beamforming design [7, 8].
The works [4, 5] investigated the uplink channel estimation and achievable signal-to-interference-plus-noise-and-distortion ratio (SINDR) using 1-bit ADCs in single-BS systems. In addition, the spectral and energy efficiency of massive MIMO systems with low-resolution ADCs is studied as a function of the UE transmit power in [9]. These studies employ the Gaussian approximation for the quantization distortion (QD) to evaluate system performance, which accurately represents the behavior in scenarios with low UE transmit power or a large number of UEs. The performance of 1-bit ADC systems with a low number of UEs is evaluated in [10]; it deteriorates at high UE transmit power due to the QD. However, the introduction of properly tuned dithering at the BS enhances the performance at high transmit powers [10]. Regarding cell-free massive MIMO systems, the performance of 1-bit ADCs is evaluated in [11], while the energy efficiency of such systems with low-resolution ADCs is analyzed in conjunction with an uplink power control scheme for max-min fairness in [12]. It is important to note that the works in [11, 12] also adopt the Gaussian approximation for the QD, which becomes more accurate as the number of ADC bits increases. However, the approximation may not fully capture the true performance characteristics of 1-bit ADC systems.
In this paper, we consider the uplink power control problem in the distributed massive MIMO systems with 1-bit ADCs. We first provide insight into the signal-to-noise-and-distortion ratio (SNDR) of a UE in single and multi-BS scenarios with 1-bit ADCs. Our analysis reveals that the SNDR of the UE is non-monotonic and highly dependent on the UE transmit powers and location. Specifically, in the single-BS case, the SNDR is unimodal, whereas, in the multi-BS case, the SNDR is non-unimodal. However, we show that by tuning the Gaussian dithering (i.e., the noise level), the SNDR can be made unimodal even in the multi-BS scenario. We also optimize the transmit powers of UEs by considering the min-power and max-min-SINDR optimization problems in a multi-UE scenario. To solve these optimization problems, we use gradient, fixed-point, and block coordinate descent (BCD) methods. Our numerical results indicate that in a single BS system, the SINDR reaches a saturation point beyond a certain power of the UEs. Moreover, in a multi-BS scenario, if achieving the desired target SINDR involves reception from multiple BSs using a large number of antennas and dithering, then the UE transmit power heavily depends on the distance to the farthest serving BS. In this context, employing dithering at the BSs becomes crucial in enhancing the SINDR for UEs that exhibit a substantial variation in path loss among the serving BSs.
## II System Model
We consider an uplink distributed/cell-free massive MIMO system where a set of BSs \(\mathcal{B}\triangleq\{1,\ldots,B\}\) with \(M\) antennas serves a set of single-antenna UEs \(\mathcal{K}\triangleq\{1,\ldots,K\}\). Each | We consider the uplink power control problem for distributed massive multiple-input multiple-output systems with base stations (BSs) equipped with 1-bit analog-to-digital converters (ADCs). First, we consider a single-user equipment (UE) scenario to provide insights into the signal-to-noise-and-distortion ratio (SNDR). With a single BS, the SNDR is an unimodal function of the UE transmit power. With multiple BSs, the SNDR at the output of the joint combiner can be made unimodal by adding properly tuned dithering at each BS. As a result, the UE can be effectively served by multiple BSs with 1-bit ADCs. Considering the signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE scenario, we aim at optimizing the UE transmit powers and the dithering at each BS based on the min-power and max |
2309.16745 | Efficient Training of One Class Classification-SVMs | This study examines the use of a highly effective training method to conduct
one-class classification. The existence of both positive and negative examples
in the training data is necessary to develop an effective classifier in common
binary classification scenarios. Unfortunately, this criterion is not met in
many domains. Here, there is just one class of examples. Classification
algorithms that learn from solely positive input have been created to deal with
this setting. In this paper, an effective algorithm for dual soft-margin
one-class SVM training is presented. Our approach makes use of the Augmented
Lagrangian (AL-FPGM), a variant of the Fast Projected Gradient Method. The FPGM
requires only first derivatives, which for the dual soft margin OCC-SVM means
computing mainly a matrix-vector product. Therefore, AL-FPGM, being
computationally inexpensive, may complement existing quadratic programming
solvers for training large SVMs. We extensively validate our approach over
real-world datasets and demonstrate that our strategy obtains statistically
significant results. | Isaac Amornortey Yowetu, Nana Kena Frempong | 2023-09-28T15:35:16 | http://arxiv.org/abs/2309.16745v1 | # Efficient Training of One Class Classification - SVMs
###### Abstract
This study examines the use of a highly effective training method to conduct one-class classification. The existence of both positive and negative examples in the training data is necessary to develop an effective classifier in common binary classification scenarios. Unfortunately, this criterion is not met in many domains. Here, there is just one class of examples. Classification algorithms that learn from solely positive input have been created to deal with this setting. In this paper, an effective algorithm for dual soft-margin one-class SVM training is presented. Our approach makes use of the Augmented Lagrangian (AL-FPGM), a variant of the Fast Projected Gradient Method. The FPGM requires only first derivatives, which for the dual soft margin OCC-SVM means computing mainly a matrix-vector product. Therefore, AL-FPGM, being computationally inexpensive, may complement existing quadratic programming solvers for training large SVMs. We extensively validate our approach over real-world datasets and demonstrate that our strategy obtains statistically significant results.
**Keywords:** Support Vector Machine (SVM), One-Class Classification (OCC), Support Vector Data Description (SVDD)
## 1 Introduction
Learning algorithms for common binary classification tasks assume the availability of both positive and negative examples. Such a strong requirement often cannot be met in the context of real-world applications. In actuality, labeling data is an expensive, time-consuming task that necessitates a high level of domain knowledge. In some cases this operation is quick, but usually defining reliable labels for each data example is a hard task [1].
The goal of one-class classification (OCC) methods is to create classification models when the negative class is either nonexistent, poorly sampled, or poorly specified [2]. Thus, this technique creates class boundaries with knowledge of the positive class only. Examples of one-class classification include outlier detection and novelty detection, where the outlier elements are identified independently of all the other data elements. This one-class classification problem happens in a variety of situations, including:
* **Outlier Detection:** The objective is to find samples from an unlabeled dataset that are outliers. Outliers in the training set are observations that deviate significantly from the others. Outlier estimators ignore the deviating observations and attempt to fit the majority of the training data under the region. An alternative name for it is unsupervised anomaly detection.
* **Novelty Detection:** Consider training data with no outliers, where we want to determine whether an incoming, fresh observation is an outlier or not. In this situation, the outlier can be referred to as a novelty. This can be regarded as semi-supervised anomaly detection.
* **Information Retrieval:** The purpose of this classification is to find samples in the unlabeled dataset that are related to the ones provided by the user.
* **One-vs-rest:** This is taken into account in this case when the negative class is too diversified and it is difficult to collect and label numerous negative samples [3].
This technique can find outliers that deviate from the training set in some way. It is highly helpful in resolving classification problems where samples for one class are abundant and samples for the other classes are scarce [4]. The primary objective of OCC is to estimate the support of the data distribution, which is very helpful in unsupervised learning, particularly in high-dimensional feature spaces where density estimation is exceedingly challenging [3]. OCC arises in several real problems, including novelty detection and outlier detection. Among the first authors to develop OCC algorithms were [5] and [6]. A classifier that finds the least-radius hypersphere enclosing the data is proposed by [6], whereas [5] proposes a classifier that finds the hyperplane separating the data from the origin with the largest margin. [5] establishes that, despite the differences between these two approaches, they both yield the same results for translation-invariant kernels like
the Gaussian kernel. With the aim of addressing large-scale non-linear optimization problems, first-order approaches have been developed and used. Several methods that require the solution of linear systems of equations, such as the Exterior-Point Method (EPM) and Interior-Point Methods (IPM) [7; 8], have been used to address quadratic programming problems. Such techniques, for instance, work well for medium-sized problems with a few thousand variables.
In recent years, first-order algorithms such as [9] have been used to solve nonlinear optimization problems involving very large numbers of variables or very large datasets. [10] suggested combining the Fast Projected Gradient Method (FPGM) with the Augmented Lagrangian (AL) method to train SVMs, modeled as large-scale convex quadratic optimization problems with linear constraints and simple bounds. That study omitted the analysis of the AL-FPGM's convergence, despite the encouraging results of the model. The convergence of the AL-FPGM was theoretically investigated in [9].
The three main contributions of this paper are (i) applying a technique based on the fast projected gradient method [10] for training dual soft-margin OCC-SVMs, (ii) creating and implementing an AL-FPGM-based QP solver in Python for training OCC-SVMs, and (iii) testing the QP solver by training OCC-SVMs on a few datasets used in PU-learning problems [3].
The remaining parts of this paper is organized as follows: Section 2 describes the dual soft margin OCC-SVMs training problem, Section 3 describes the augmented Lagrangian method, Section 4 describes the fast projected gradient method, Section 5 presents numerical results for training the OCC-SVMs with the AL-FPGM and Section 6 presents concluding remarks.
## 2 The dual soft-margin SVM problem
[5] developed a method called "one-class classification" that extends the SVM methodology to handle training with just positive input. Only positive data can be used with the suggested Schölkopf mechanism. The algorithm checks for "outliers" within the positive instances and uses them as negative examples [11]. After transforming the features via a kernel, the one-class classification method of [5] treats the origin as the unique member of the second class. The image of the one class is then separated from the origin using "relaxation parameters." After that, the conventional OCC-SVM algorithms are applied [11].
The following quadratic programming problem must be solved in order to separate the data set from the origin:
\[\min\frac{1}{2}\|w\|^{2}+\frac{1}{vn}\sum_{i=1}^{n}\zeta_{i}-\beta \tag{1}\]
subject to:
\[(w\cdot\phi(x_{i}))\geq\beta-\zeta_{i}\ \ \ i=1,2,\ldots,n\ \ \zeta_{i}\geq 0\]
Here, \(v\in(0,1)\) is a parameter whose meaning will become clear later. Since nonzero slack variables \(\zeta_{i}\) are penalized in the objective function, we can expect that if w and \(\beta\) solve this problem, then the decision function \(f(x)=sign((w\cdot\phi(x))-\beta)\) will be positive for most examples \(x_{i}\) contained in the training set, while the SV type regularization term \(\|w\|\) will still be small. The actual trade-off between these two goals is controlled by \(v\). It is possible to demonstrate that the solution has an SV expansion by deriving the dual problem and applying the kernel transformation.
Using Lagrange multiplier, constraint optimization problem can further be expressed as Equation (2).
\[\mathcal{L}(w,\zeta,\alpha,\beta)=\frac{1}{2}\|w\|^{2}+\frac{1}{vn}\sum_{i=1}^{n} \zeta_{i}-\beta-\sum_{i=1}^{n}\alpha_{i}((w\cdot\phi(x_{i})-\beta)+\zeta_{i}) \tag{2}\]
The Lagrange multipliers \(\alpha_{i}\) must be greater than or equal to zero. For simplification, we write \(\frac{1}{vn}\) as \(C\). We then take the partial derivatives of the Lagrangian \(\mathcal{L}\) with respect to \(\zeta,w\) and \(\beta\):
\[\frac{\partial\mathcal{L}}{\partial w} =w-\sum_{i=1}^{n}\alpha_{i}x_{i}=0\implies w=\sum_{i=1}^{n} \alpha_{i}x_{i} \tag{3}\] \[\frac{\partial\mathcal{L}}{\partial\beta} =-1+\sum_{i=1}^{n}\alpha_{i}=0\implies\sum_{i=1}^{n}\alpha_{i}=1\] (4) \[\frac{\partial\mathcal{L}}{\partial\zeta} =C-\alpha_{i}=0\implies C=\alpha_{i} \tag{5}\]
The optimal value will be obtained using \(\zeta,w\) and \(\beta\) from the equations above, and this gives rise to Equation 6.
\[\mathcal{L}(w,\beta,\zeta,\alpha) =\frac{1}{2}w^{T}w+C\sum_{i=1}^{n}\zeta_{i}-\beta-w\sum_{i=1}^{n} \alpha_{i}x_{i}+\beta\sum_{i=1}\alpha_{i}-\sum_{i=1}^{n}\alpha_{i}\zeta_{i} \tag{6}\] \[=\frac{1}{2}w^{T}w-w^{T}w\] (7) \[\max\mathcal{L}(\alpha) =-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}x_{i }^{T}x_{j} \tag{8}\]
where \(x_{i}^{T}x_{j}=K(x_{i},x_{j})\) is the kernel to be computed. Maximizing Equation (8) is equivalent to minimizing:
\[f(\alpha)=\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}x_{i}^{T} x_{j} \tag{9}\]
Equation 9 can further be expressed in a more compact form as:
\[f(\alpha)=\frac{1}{2}\alpha^{T}K(x,x)\alpha \tag{10}\]
The patterns \(x_{i}\) with nonzero \(\alpha_{i}\) are called SVs, where the coefficients are found as the solution of the dual problem:
\[f(\alpha)=\frac{1}{2}\alpha^{T}K(x,x)\alpha\ \ \mbox{subject to}\ \ \ \ \sum_{i}^{n}\alpha_{i}=1 \tag{11}\]
and the bounded set:
\[Box=\{\alpha\in\mathbf{R}^{n}:0\leq\alpha_{i}\leq\frac{1}{vn},\ \ \ \ i=1,\cdots,n\}\]
Then the optimization problem 11 can be rewritten as follows:
\[\min f(\alpha)\ \ \mbox{s.t}\ \ h(\alpha)=0 \tag{12}\]
The augmented Lagrangian can be written as follows:
\[\mathcal{L}(\alpha)=f(\alpha)+\mu h(\alpha)+0.5ch(\alpha)^{2} \tag{13}\]
where \(h(\alpha)=\sum_{i=1}^{n}\alpha_{i}-1\) is the equality constraint from (11), \(\mu\in\mathbb{R}\) is the unknown Lagrange multiplier that corresponds to the equality constraint, and \(c>0\) is the scaling parameter.
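A minimal sketch of these quantities in code, assuming a Gaussian kernel as in Section 5 (the function and variable names below are illustrative, not the authors' implementation):

```python
import numpy as np

def gaussian_kernel(X, gamma):
    """K(x_i, x_j) = exp(-gamma * ||x_i - x_j||), as used in Section 5."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * np.sqrt(d2))

def f(alpha, K):
    return 0.5 * alpha @ K @ alpha            # dual objective, Eq. (11)

def h(alpha):
    return np.sum(alpha) - 1.0                # equality constraint

def aug_lagrangian(alpha, mu, c, K):
    return f(alpha, K) + mu * h(alpha) + 0.5 * c * h(alpha)**2  # Eq. (13)
```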
## 3 Augmented Lagrangian Method Algorithm
The augmented Lagrangian method consists of a sequence of approximate minimizations of \(\mathcal{L}_{c}(\alpha,\mu)\) in \(\alpha\) over the Box set
\[\hat{\alpha}\approx\alpha(\hat{\mu})=\arg\min_{\alpha\in Box}\mathcal{L}_{c}( \alpha,\mu) \tag{14}\]
each followed by an update of the Lagrange multiplier \(\mu\). As the stopping criterion, we use the merit function \(acc(\alpha,\mu)\), which measures the first-order optimality conditions for problem (14).
Algorithm 1 provides a preliminary sketch of an augmented Lagrangian technique.
```
Step 1: Set \(\alpha:=0,\ \mu:=0,\ rec:=acc(\alpha,\mu)\). Select \(c>0,\ 0<\theta<1,\ \delta>1\)
Step 2: Find \(\hat{\alpha}\approx\arg\min_{\alpha\in Box}\mathcal{L}_{c}(\alpha,\mu)\) such that \(acc(\hat{\alpha},\mu)\leq\theta*rec\), using FPGM
Step 3: Set \(\hat{\mu}:=\mu+ch(\hat{\alpha})\)
Step 4: Set \(\alpha:=\hat{\alpha},\ \mu:=\hat{\mu},\ rec:=acc(\alpha,\mu),\ c:=\delta c\)
Step 5: If \(rec>\varepsilon\) then Goto Step 2
Step 6: End
```
**Algorithm 1** Augmented Lagrangian Methods
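A Python sketch of this outer loop, assuming an inner FPGM solver `fpgm` (see Section 4) and an optimality measure `acc`; both names are placeholders for components described elsewhere in the paper, and this is a sketch rather than the authors' exact implementation:

```python
import numpy as np

def al_method(K, C, acc, fpgm, c=0.1, theta=0.99, delta=1.01, eps=1e-6):
    """Sketch of Algorithm 1 under the stated assumptions."""
    n = K.shape[0]
    alpha, mu = np.zeros(n), 0.0
    rec = acc(alpha, mu, c, K)
    while rec > eps:
        # Step 2: approximate minimization over the Box set via FPGM
        alpha = fpgm(alpha, mu, c, K, C, tol=theta * rec)
        # Step 3: multiplier update with h(alpha) = sum(alpha) - 1
        mu = mu + c * (np.sum(alpha) - 1.0)
        # Step 4: refresh the optimality measure and grow c
        rec = acc(alpha, mu, c, K)
        c = delta * c
    return alpha, mu
```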
Standard QP methods can be used to solve problem (11). The offset \(\beta\) may be obtained by exploiting that, for any \(\alpha_{i}\) which is not at the upper or lower bound, the corresponding pattern \(x_{i}\) satisfies:
\[\beta=(w\cdot x_{i})=\sum_{j}\alpha_{j}k(x_{j},x_{i}) \tag{15}\]
Note that the upper limits of the Lagrange multipliers go to infinity as \(v\to 0\).
## 4 Fast Projected Gradient Method (FPGM)
The Lipschitz constant \(L>0\) of the gradient of \(\mathcal{L}_{c}\) must be estimated for the fast projected gradient method (FPGM); it is defined by requiring the inequality
\[\|\nabla_{\alpha}\mathcal{L}_{c}(\alpha_{{}_{1}},\lambda)-\nabla_{\alpha} \mathcal{L}_{c}(\alpha_{{}_{2}},\lambda)\|\leq L\|\alpha_{{}_{1}}-\alpha_{{}_{ 2}}\| \tag{16}\]
to hold \(\forall\alpha_{1},\alpha_{2}\in\mathbb{R}^{n}\)
The gradient and the Hessian of \(\mathcal{L}_{c}(\alpha,\lambda)\) for the OCC-SVM can be written as follows:
\[\nabla_{\alpha}\mathcal{L}_{c}(\alpha,\lambda)=K\alpha+\left(\lambda+c(\alpha^{T}e-1)\right)e \tag{17}\]
\[\nabla_{\alpha\alpha}^{2}\mathcal{L}_{c}(\alpha,\lambda)=K+cee^{T} \tag{18}\]
where K is an \(n\times n\) matrix,
\[K_{i,j} =k(x_{i},x_{j})\] \[e =(1,...,1)^{T}\in\mathbb{R}^{n}\]
\(\mathcal{L}_{c}\) is a quadratic form with respect to \(\alpha\)
\[L=\|\nabla_{\alpha\alpha}^{2}\mathcal{L}_{c}(\alpha,\lambda)\|=\|K+cee^{T}\| \tag{19}\]
where the matrix spectral norm is the largest singular value of the matrix; hence \(L\) is a constant that depends only on the kernel matrix \(\mathbf{K}\) and the scaling parameter c [10].
In terms of the Gaussian kernel we have:
\[L\leq trace(\nabla_{\alpha\alpha}^{2}\mathcal{L}_{c}( \alpha,\lambda)) =trace(K+cee^{T}) \tag{20}\] \[=trace(K)+trace(cee^{T}) \tag{21}\]
In the estimation above, we used the facts that the trace of a matrix is the sum of its eigenvalues and that, when all eigenvalues are non-negative, the biggest eigenvalue is less than or equal to the sum of all eigenvalues.
Therefore, the estimation \(L\approx trace(K)+trace(cee^{T})\) is used in the training. Similar kernel-specific bounds can be determined for other kernels; for the dot-product kernel, the polynomial kernel and others, for example, the same estimation \(L\approx trace(K)+trace(cee^{T})\) can be applied. The most computationally expensive term of \(\mathcal{L}_{c}(\alpha,\lambda)\) is the matrix-vector product \(K\alpha\), which requires \(\mathcal{O}(n^{2})\) basic operations when the matrix under consideration is dense (Steps 3 and 7 in Algorithm 3). The projection operator \(P_{Box}:\mathbb{R}^{n}\mapsto Box\) (Step 3) is not computationally expensive (see Algorithm 2) and requires only \(\mathcal{O}(n)\) basic arithmetic operations. Fewer than a dozen arithmetic operations are needed in the remaining steps together. Therefore, one iteration of the FPGM requires \(\mathcal{O}(n^{2})\) operations. Algorithm 3 describes the FPGM used in Step 2 of Algorithm 1. For the sequence \(\{\alpha_{s}\}\) generated by the FPGM for a convex quadratic problem, the following bound holds [10]
\[\mathcal{L}_{c}(\alpha_{s},\lambda)-\mathcal{L}_{c}(\alpha(\lambda),\lambda) \leq\frac{2L\|\alpha_{0}-\alpha(\lambda)\|^{2}}{(s+1)^{2}} \tag{22}\]
where \(L>0\) is the Lipschitz constant and \(\alpha(\lambda)\) is the exact solution of the problem (14) under consideration. As a result, the FPGM converges to the minimum of the augmented Lagrangian with an error of order \(\mathcal{O}(s^{-2})\), where \(s\) is the number of iterations. Furthermore, using the estimation of \(L\) for the various kernels and the fact that both \(\alpha_{0}\) and \(\alpha(\lambda)\in Box\), the number of steps expected for the FPGM to converge to the minimum of the augmented Lagrangian \(\mathcal{L}_{c}(\alpha(\lambda),\lambda)\) can be estimated [10].
Since \(\{\mathcal{L}(\alpha^{k})\}\) is a decreasing sequence (because \(\alpha^{k}\neq\alpha^{k+1}\ \forall k\geq 0\) and the objective function is decreased at every iteration) and is also bounded below (by the existence of a global optimum), it converges. A Cauchy-sequence argument can also be applied to prove that it is convergent.
Using this fact, and considering that \(\|\mathcal{L}(\alpha^{k})-\mathcal{L}(\alpha^{k+l})\|>\|\alpha^{k}-\alpha^{k+l }\|_{2}\ \forall k,l\geq 0\), it can be inferred that \(\{\alpha^{k}\}\) is a Cauchy sequence. Since the sequence also lies in a closed feasible set, it is convergent; that is, as \(k\mapsto\infty\), \(\alpha^{k}\mapsto\alpha^{k+1}\), and Algorithm 1 yields a convergent sequence of points.
```
1:Loop over all \(i=1,\ldots,n\)
2:If\(\alpha_{i}<0\) then Set \(\alpha_{i}=0\)
3:If\(\alpha_{i}>C\) then Set \(\alpha_{i}=C\)
4:Return \(\alpha\)
```
**Algorithm 2** Operator \(P_{Box}\): Projection of \(\alpha\in\mathbb{R}^{n}\) onto the set Box
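In code, the Box projection of Algorithm 2 is a componentwise clip, and a Nesterov-type inner loop in the spirit of the FPGM (Algorithm 3) can be sketched as follows; this is a minimal illustration under the Lipschitz estimate above, not the authors' exact implementation:

```python
import numpy as np

def project_box(alpha, C):
    """Algorithm 2: project alpha onto Box = {0 <= alpha_i <= C}."""
    return np.clip(alpha, 0.0, C)

def fpgm(alpha0, mu, c, K, C, tol, max_iter=5000):
    n = K.shape[0]
    e = np.ones(n)
    L = np.trace(K) + c * n         # Lipschitz estimate, Eqs. (20)-(21)
    alpha_prev = alpha0.copy()
    y, tk = alpha0.copy(), 1.0
    for _ in range(max_iter):
        grad = K @ y + (mu + c * (np.sum(y) - 1.0)) * e  # cf. Eq. (17)
        alpha = project_box(y - grad / L, C)
        if L * np.linalg.norm(alpha - y) <= tol:  # projected-gradient test
            return alpha
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * tk * tk))
        y = alpha + ((tk - 1.0) / t_next) * (alpha - alpha_prev)
        alpha_prev, tk = alpha, t_next
    return alpha
```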
The AL method falls into the class of proximal algorithms, which have been extensively investigated in [12, 13]. One possible path for the AL-FPGM convergence analysis is to use the theory developed in [12]. The importance of using the proximal-point theory is the possibility of demonstrating the convergence of the AL-FPGM for any \(c>0\), which would explain the possibility of obtaining convergence for small \(c\).
## 5 Numerical results and discussion
In this section, we report numerical results of training OCC-SVMs with the AL-FPGM on datasets that were used to solve PU-learning problems [3]. To assess the improvement in efficiency of the AL-FPGM relative to optimization methods that solve linear systems at every iteration, we compared its efficiency to the OCC-SVM solver in Python. It was observed that both approaches converge in less than 1 second. We selected the OCC-SVM solver in Python for a few reasons. First, it happens to be the most widely used solver built for OCC-SVMs. Second, it is an efficient and accurate method that solves linear systems at each step while keeping the number of iterations low. Lastly, the numerical results of testing the OCC-SVM solver are published and available for comparison. The OCC-SVM implemented in Python was developed for anomaly detection. For the training set, 25% of the positives were used, while the remaining 75% of the positives and all the negatives were used for testing. The Gaussian kernel \(K(x_{i},x_{j})=e^{-\gamma\|x_{i}-x_{j}\|}\) with \(\gamma\) values of 0.1, 0.5 and 1 was used, and the results are presented in Table 1. For the AL-FPGM we used the scaling parameter value \(c=0.1\) and the stopping criterion of \(10^{-6}\). The other parameters were \(\theta=0.99\) and \(\delta=1.01\) for all runs.
## 6 Conclusion
This manuscript presents numerical results on training dual soft-margin OCC-SVMs with the AL-FPGM. We developed and implemented a QP solver based on the AL-FPGM and tested it by training on medium-sized data sets from the USMO paper. The numerical results presented here indicate that the AL-FPGM is an efficient training algorithm for OCC-SVMs. | This study examines the use of a highly effective training method for one-class classification. In common binary classification scenarios, both positive and negative examples must exist in the training data to develop an effective classifier, but in many domains this condition is difficult to meet; only one class of examples exists. Classification algorithms that learn solely from positive input have been created to handle this setting. In this paper, an effective algorithm for dual soft-margin one-class SVM training is proposed. Our method uses the Augmented Lagrangian variant of the Fast Projected Gradient Method (AL-FPGM). Since the FPGM requires only first derivatives, which for the dual soft-margin OCC-SVM means mainly computing a matrix-vector product, the AL-FPGM is computationally inexpensive and may complement existing quadratic programming solvers for training large SVMs. We extensively validate our approach on real-world datasets and demonstrate that our strategy obtains statistically significant results.
2309.07890 | Exploring the covariant form factors for spin-1 particles | The spin-1 particle is an admirable two-quark bound state system to
understand electromagnetic properties of hadronic states. These systems are
generally relativistic, and therefore, need an approach using quantum field
theory. In the present work, we will use both the quantum field theory at the
instant form, as well as quantum field theory on the light-front (LFQFT). In
general, it is used to calculate the electromagnetic properties of spin-1
vector particles in the LFQFT formalism, with the plus component of the
electromagnetic current. In the present work, we use, in addition to the plus
component of the electromagnetic current, the minus component of the current,
and we use these components of the current to extract the covariant form
factors, showing that to have an equivalence between them we need to add
non-valence terms to the electromagnetic current, in order to restore the
covariance, and obtain exactly the same results when using the instant form
quantum field theory. | J. P. B. C. de Melo | 2023-09-14T17:42:03 | http://arxiv.org/abs/2309.07890v1 | # Exploring the covariant form factors for spin-1 particles
###### Abstract
The spin-1 particle is an admirable two-quark bound state system for understanding electromagnetic properties of hadronic states. These systems are generally relativistic and therefore need an approach based on quantum field theory. In the present work, we will use both quantum field theory in the instant form and quantum field theory on the light-front (LFQFT). In general, the electromagnetic properties of spin-1 vector particles are calculated in the LFQFT formalism with the plus component of the electromagnetic current. In the present work, we use, in addition to the plus component of the electromagnetic current, the minus component of the current, and we use these components of the current to extract the covariant form factors, showing that to have an equivalence between them we need to add non-valence terms to the electromagnetic current in order to restore the covariance and obtain exactly the same results as when using instant-form quantum field theory.
Vector particles, Electromagnetic currents, Electromagnetic form factors, Light-front approach, Zero-modes +
Footnote †: preprint: **LFTC-23-08/81**
Quantum field theory is the theory of the subatomic world, describing both the strong and the electroweak interactions. In the case of the electromagnetic interaction, the theory is Quantum Electrodynamics (QED). On the other hand, the theory of the strong interaction based on quantum field theory is Quantum Chromodynamics (QCD). The main purpose of QCD is to explain bound states in terms of the basic constituents of matter, quarks and gluons. In the present work, we use light-front quantum field theory, or light-front dynamics, based on QCD, to describe hadronic bound states, mesons or baryons, because the kinematic boost properties are relatively simple [1; 2; 3]. The theory underlying this approach is LFQFT, because in this way we can incorporate two important aspects: QCD and the constituent-quark picture.
Zero modes, or non-valence terms, which need to be added to the electromagnetic current in order to preserve covariance in the light-front approach, have been explored in the literature for some time, both for scalar particles [4; 5] and pseudoscalar mesons [6; 7; 8; 9; 10; 11], as well as vector particles [12; 13; 14; 15; 16; 17; 18; 19]. In the case of pseudoscalar particles, such as the pion and kaon mesons, both components of the electromagnetic current were explored to extract the observables of these mesons, such as electromagnetic form factors, electromagnetic radii and electroweak decay constants [10]. In this work, we will explore the calculation of the covariant electromagnetic form factors, \(F_{1},F_{2}\) and \(F_{3}\), and the related electromagnetic form factors of charge, \(G_{0}(Q^{2})\), magnetic \(G_{1}(Q^{2})\), and quadrupole \(G_{2}(Q^{2})\), with the plus and minus components of the electromagnetic current at the rho meson scale, both at equal time and in the light-front approach. To the best of our knowledge, the minus component of the electromagnetic current has not yet been explored for spin-1 particles, nor has its relationship to the covariant form factors.
The most general expression for the spin-1 electromagnetic current, taking into account parity and time-reversal invariance (see references [20; 21; 15] for details), is given below for both components of the electromagnetic current [21; 22],
\[\begin{split}J^{\pm}_{\lambda^{\prime}\lambda}=&(p^{{}^{\prime}\pm}+p^{\pm})\left[F_{1}(q^{2})(\epsilon_{\lambda^{\prime}}\cdot\epsilon_{\lambda})-\frac{F_{2}(q^{2})}{2m_{v}^{2}}(q\cdot\epsilon_{\lambda^{\prime}})(q\cdot\epsilon_{\lambda})\right]\\ &-F_{3}(q^{2})\left((q\cdot\epsilon_{\lambda^{\prime}})\epsilon_{\lambda}^{\pm}-(q\cdot\epsilon_{\lambda})\epsilon_{\lambda^{\prime}}^{\pm}\right)\.\end{split} \tag{1}\]
In the equation above, Eq.(1), \(F_{1},F_{2}\) and \(F_{3}\) are the covariant electromagnetic form factors for spin-1 particles, \(m_{v}\) is the vector bound state mass, and \(\lambda\), \(\lambda^{\prime}\) are the polarization indices, i.e., \((x,y,z)\). The matrix elements of the electromagnetic current are calculated in the Breit frame, with \(p^{\mu}=(p_{0},-q/2,0,0)\) for the initial state and \(p^{\prime\mu}=(p_{0},q/2,0,0)\) for the final state. The transfer momentum is \(q^{\mu}=(0,q,0,0)\); the polarizations in the Cartesian basis are \(\epsilon_{x}^{\mu}=(-\sqrt{\eta},\sqrt{1+\eta},0,0),\epsilon_{y}^{\mu}=(0,0,1,0),\epsilon_{z}^{\mu}=(0,0,0,1)\) for the vector meson in the initial state, and \(\epsilon_{x}^{\prime\mu}=(\sqrt{\eta},\sqrt{1+\eta},0,0),\epsilon_{y}^{\prime\mu}=(0,0,1,0),\epsilon_{z}^{\prime\mu}=(0,0,0,1)\) in the final state, where we define \(\eta=q^{2}/4m_{v}^{2}\).
The light-front coordinates are defined through the four-vector \(a^{\mu}=(a^{+}=a^{0}+a^{3},a^{-}=a^{0}-a^{3},(a_{x},a_{y})=a_{\perp})\). The scalar product is given by \(a^{\mu}b_{\mu}=\frac{1}{2}(a^{+}b^{-}+a^{-}b^{+})-\vec{a}_{\perp}\cdot\vec{b}_{\perp}\), and the Dirac matrices on the light-front are \(\gamma^{+}=\gamma^{0}+\gamma^{3}\) and \(\gamma^{-}=\gamma^{0}-\gamma^{3}\) [3]. These Dirac matrices are related to the plus component of the electromagnetic current, \(J^{+}_{ji}\), and the minus component of the electromagnetic current, \(J^{-}_{ji}\), respectively [15]. With the plus component of the electromagnetic current, \(J^{+}_{ji}\), the following relations between the matrix elements of the electromagnetic current and the covariant form factors are obtained
below,
\[J^{+}_{xx} = 2p^{+}\left(-F_{1}(1+2\eta)-\frac{F_{2}}{2m_{v}^{2}}q^{2}(1+\eta) \right)-F_{3}2q\sqrt{\eta}\sqrt{1+\eta},\] \[J^{+}_{yy} = -2p^{+}F_{1},\] \[J^{+}_{zz} = -2p^{+}F_{1},\] \[J^{+}_{zx} = -F_{3}q\sqrt{1+\eta},\] \[J^{+}_{xz} = F_{3}q\sqrt{1+\eta},\] \[J^{+}_{yx} = J^{+}_{xy}=J^{+}_{zy}=J^{+}_{yz}\ =\ 0. \tag{2}\]
With the matrix elements of the electromagnetic current given above, Eq.(2), it is possible to extract the covariant form factors as functions of the matrix elements of the electromagnetic current; for the plus component of the electromagnetic current, they are written as
\[F_{1}^{+} = -\frac{J^{+}_{yy}}{2p^{+}}\ =\ -\frac{J^{+}_{zz}}{2p^{+}},\] \[F_{2}^{+} = \frac{m_{v}^{2}}{p^{+}q^{2}(1+\eta)}\left[J^{+}_{yy}(1+2\eta)-J^{ +}_{xx}+J^{+}_{zx}2\sqrt{\eta}\right],\] \[F_{3}^{+} = -\frac{J^{+}_{zx}}{q\sqrt{1+\eta}}\ =\ \frac{J^{+}_{xz}}{q\sqrt{1+\eta}}. \tag{3}\]
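The consistency of Eqs. (2) and (3) can be verified symbolically; a short sympy check:

```python
import sympy as sp

# Insert the matrix elements of Eq. (2) into Eq. (3) and recover
# the covariant form factors F1, F2, F3.
F1, F2, F3, q, mv, p, eta = sp.symbols('F1 F2 F3 q m_v p eta',
                                       positive=True)

Jxx = 2*p*(-F1*(1 + 2*eta) - F2*q**2*(1 + eta)/(2*mv**2)) \
      - 2*F3*q*sp.sqrt(eta)*sp.sqrt(1 + eta)
Jyy = -2*p*F1
Jzx = -F3*q*sp.sqrt(1 + eta)

F1_rec = -Jyy / (2*p)
F2_rec = mv**2/(p*q**2*(1 + eta)) * (Jyy*(1 + 2*eta) - Jxx
                                     + 2*sp.sqrt(eta)*Jzx)
F3_rec = -Jzx / (q*sp.sqrt(1 + eta))

assert sp.simplify(F1_rec - F1) == 0
assert sp.simplify(F2_rec - F2) == 0
assert sp.simplify(F3_rec - F3) == 0
```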
Using the same strategy for the minus component of the electromagnetic current, \(J^{-}_{ji}\), we have the following relations,
\[J^{-}_{xx} = 2p^{-}\left(-F_{1}(1+2\eta)-\frac{F_{2}}{2m_{v}^{2}}q^{2}(1+\eta) \right)-F_{3}2q\sqrt{\eta}\sqrt{1+\eta},\] \[J^{-}_{yy} = -2p^{-}F_{1},\] \[J^{-}_{zz} = -2p^{-}F_{1},\] \[J^{-}_{zx} = F_{3}q\sqrt{1+\eta},\] \[J^{-}_{xz} = -F_{3}q\sqrt{1+\eta},\] \[J^{-}_{yx} = J^{-}_{xy}=J^{-}_{zy}=J^{-}_{yz}\ =\ 0. \tag{4}\]
As done before for the plus component of the electromagnetic current, we write below the covariant form factors, \(F_{1},F_{2},\) and \(F_{3}\), in terms of the minus component of the electromagnetic current, namely,
\[F_{1}^{-} = -\frac{J^{-}_{yy}}{2p^{-}}\ =\ -\frac{J^{-}_{zz}}{2p^{-}},\] \[F_{2}^{-} = \frac{m_{v}^{2}}{p^{-}q^{2}(1+\eta)}\left[J^{-}_{yy}(1+2\eta)-J^{ -}_{xx}-J^{-}_{zx}2\sqrt{\eta}\right],\] \[F_{3}^{-} = \frac{J^{-}_{zx}}{q\sqrt{1+\eta}}\ =\ -\frac{J^{-}_{xz}}{q\sqrt{1+\eta}}. \tag{5}\]
In the equations above, we work in the Breit frame with the Drell-Yan condition (\(q^{+}=0\)); we then have the identity \(p^{+}=p^{\prime+}=p^{-}=p^{\prime-}=p_{0}\). For the rest of the present work, this equality will be used. Obviously, the covariant electromagnetic form factors are independent of which component of the electromagnetic current we use to perform the calculations; in instant-form (equal-time) quantum field theory, this statement holds without difficulty. Nonetheless, in the light-front approach, such a statement is not necessarily
Using the polarization vectors in the spherical basis, we have, in the instant-form spin basis,
\[\epsilon^{\mu}_{\pm} = \mp\frac{\epsilon^{\mu}_{x}\pm\imath\,\epsilon^{\mu}_{y}}{\sqrt{2}},\] \[\epsilon^{\mu}_{0} = \epsilon^{\mu}_{z}\, \tag{6}\]
The matrix elements of the electromagnetic current for the plus and minus components, obtained from Eq. (1), are written below as,
\[J^{\pm}=\frac{1}{2}\left(\begin{array}{ccc}J^{\pm}_{xx}+J^{\pm}_{yy}&\sqrt{2}J^{\pm}_{zx}&J^{\pm}_{yy}-J^{\pm}_{xx}\\ -\sqrt{2}J^{\pm}_{zx}&2J^{\pm}_{zz}&\sqrt{2}J^{\pm}_{zx}\\ J^{\pm}_{yy}-J^{\pm}_{xx}&-\sqrt{2}J^{\pm}_{zx}&J^{\pm}_{xx}+J^{\pm}_{yy}\\ \end{array}\right)\, \tag{7}\]
with the spin projections in the order \(m=(+,0,-)\) for the initial and final state polarizations. In the case of the light front, the matrix elements \(I^{\pm}\) for the plus and minus components of the electromagnetic current are,
\[I^{\pm}_{m^{\prime}m}=\left(\begin{array}{ccc}I^{\pm}_{11}&I^{\pm}_{10}&I^{ \pm}_{1-1}\\ -I^{\pm}_{10}&I^{\pm}_{00}&I^{\pm}_{10}\\ I^{\pm}_{1-1}&-I^{\pm}_{10}&I^{\pm}_{11}\\ \end{array}\right)\, \tag{8}\]
The relations between the matrix elements of the electromagnetic current in the Cartesian spin basis and in the light-front spin (helicity) basis are given by the Melosh rotation [6; 24]; for the plus component of the electromagnetic current, \(I^{+}_{m^{\prime}m}\), \((m^{\prime};m)=(-1,0,1;-1,0,1)\), they are given below by the following relations [15; 25],
\[I^{+}_{11}=\frac{J^{+}_{xx}+(1+\eta)J^{+}_{yy}-\eta J^{+}_{zz}-2 \sqrt{\eta}J^{+}_{zx}}{2(1+\eta)},\] \[I^{+}_{10}=\frac{\sqrt{2\eta}J^{+}_{xx}+\sqrt{2\eta}J^{+}_{zz}- \sqrt{2}(\eta-1)J^{+}_{zx}}{2(1+\eta)},\] \[I^{+}_{1-1}=\frac{-J^{+}_{xx}+(1+\eta)J^{+}_{yy}+\eta J^{+}_{zz} +2\sqrt{\eta}J^{+}_{zx}}{2(1+\eta)},\] \[I^{+}_{00}=\frac{-\eta J^{+}_{xx}+J^{+}_{zz}-2\sqrt{\eta}J^{+}_{ zx}}{(1+\eta)}\, \tag{9}\]
for the plus component of the electromagnetic current; and, for the minus component of the electromagnetic current, \(I^{-}_{m^{\prime}m}\), we have the relations below,
\[I^{-}_{11}=\frac{J^{-}_{xx}+(1+\eta)J^{-}_{yy}-\eta J^{-}_{zz}-2 \sqrt{\eta}J^{-}_{zx}}{2(1+\eta)},\] \[I^{-}_{10}=\frac{\sqrt{2\eta}J^{-}_{xx}+\sqrt{2\eta}J^{-}_{zz}- \sqrt{2}(\eta-1)J^{-}_{zx}}{2(1+\eta)},\] \[I^{-}_{1-1}=\frac{-J^{-}_{xx}+(1+\eta)J^{-}_{yy}+\eta J^{-}_{zz}+ 2\sqrt{\eta}J^{-}_{zx}}{2(1+\eta)},\] \[I^{-}_{00}=\frac{-\eta J^{-}_{xx}+J^{-}_{zz}-2\sqrt{\eta}J^{-}_{ zx}}{(1+\eta)}. \tag{10}\]
We can see that the two equations above, Eq. (9) and Eq. (10), for the plus and minus components of the electromagnetic current written in terms of the light-front basis, have essentially the same structure. However, these two equations only produce the same result if the calculations are performed in the equal-time approach. In the light-front formalism, in addition to the valence contributions to the matrix elements of the electromagnetic current, non-valence contributions, or zero-mode contributions, also appear [15; 24; 26] (see Figs. 1 and 2). The vertex model for the spinor structure of the composite spin-one particle, \((m_{v}-q\bar{q})\), comes from the model proposed in Ref. [15]; still, in the present work, we use a simplified version of the full vertex, i.e., \(\Gamma(k,p)=\gamma^{\mu}\), in order to simplify the exploration of the minus component of the electromagnetic current; the complete vertex is under consideration.
In order to compute the electromagnetic current for spin-1 particles in the impulse approximation, we use the Mandelstam formula [15] for the plus and minus components of the electromagnetic current, \(J^{\pm}=J^{0}\pm J^{3}\), below,
\[J^{\pm}_{ji}=\imath\int\frac{d^{4}k}{(2\pi)^{4}}\frac{Tr\left[ \Gamma\Gamma\right]^{\pm}_{ji}\Lambda(k,p_{f})\,\Lambda(k,p_{i})}{((k-p_{i})^ {2}-m^{2}+\imath\epsilon)(k^{2}-m^{2}+\imath\epsilon)((k-p_{f})^{2}-m^{2}+ \imath\epsilon)}, \tag{11}\]
For the electromagnetic current, Eq. (11) above, we have the following Dirac trace in the numerator,
\[Tr\left[\Gamma\Gamma\right]^{\pm}_{ji}=Tr\left[\epsilon_{j} \cdot\Gamma(k,p_{f})(k\!\!\!/-p\!\!\!/_{f}+m)\gamma^{\pm}(k\!\!\!/-p\!\!\!/_{i} +m)\epsilon_{i}\cdot\Gamma(k,p_{i})(k\!\!\!/+m)\right]. \tag{12}\]
The regularization function in Eq. (11) is given by \(\Lambda(k,p)=N/[(k-p)^{2}-m_{R}^{2}+\imath\epsilon]^{2}\), which is chosen to render the loop integration finite [15].
In the light-front approach, the matrix elements of the electromagnetic current must satisfy the equation below, called the "angular condition" [14; 15; 26; 27; 28]; after using the relation between the matrix elements of the electromagnetic current \(I^{+}_{m^{\prime}m}\) (light-front basis) and the matrix elements of the electromagnetic current in the Cartesian basis, \(J^{+}_{ji}\), we have the following expression,
\[\Delta^{+}(Q^{2}) = (1+2\eta)I^{+}_{11}+I^{+}_{1-1}-\sqrt{8\eta}I^{+}_{10}-I^{+}_{00} \tag{13}\] \[= (1+\eta)(J^{+}_{yy}-J^{+}_{zz})=0;\]
for the plus component of the electromagnetic current; and also, for the minus component of the electromagnetic current below,
\[\Delta^{-}(Q^{2}) = (1+2\eta)I^{-}_{11}+I^{-}_{1-1}-\sqrt{8\eta}I^{-}_{10}-I^{-}_{00} \tag{14}\] \[= (1+\eta)(J^{-}_{yy}-J^{-}_{zz})=0.\]
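The second equalities in Eqs. (13) and (14) follow directly from the Melosh relations, Eqs. (9) and (10). A short sympy sketch of this step (our own verification, not part of the original derivation):

```python
import sympy as sp

Jxx, Jyy, Jzz, Jzx, eta = sp.symbols('J_xx J_yy J_zz J_zx eta', positive=True)

# Melosh-rotated matrix elements, Eq. (9) (same form for the minus component)
I11  = (Jxx + (1 + eta)*Jyy - eta*Jzz - 2*sp.sqrt(eta)*Jzx) / (2*(1 + eta))
I10  = (sp.sqrt(2*eta)*Jxx + sp.sqrt(2*eta)*Jzz
        - sp.sqrt(2)*(eta - 1)*Jzx) / (2*(1 + eta))
I1m1 = (-Jxx + (1 + eta)*Jyy + eta*Jzz + 2*sp.sqrt(eta)*Jzx) / (2*(1 + eta))
I00  = (-eta*Jxx + Jzz - 2*sp.sqrt(eta)*Jzx) / (1 + eta)

delta = (1 + 2*eta)*I11 + I1m1 - sp.sqrt(8*eta)*I10 - I00
assert sp.simplify(delta - (1 + eta)*(Jyy - Jzz)) == 0
```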
In Fig. 4, we can verify that, for the matrix elements of both the plus and the minus components of the electromagnetic current, Eq. (13) and Eq. (14) are satisfied immediately at equal time, with the condition \(J^{\pm}_{yy}=J^{\pm}_{zz}\) holding for the whole range of \(Q^{2}\). On the other hand, because of the zero modes, or non-valence contributions, that condition is not satisfied in the light-front approach; but, after all contributions are added to the matrix elements of the electromagnetic currents, the angular condition equations are respected [15; 23; 24].
In the equal-time calculation, the Feynman amplitude is integrated in four dimensions; in the present work the Breit frame is used, with \(q^{+}=q^{0}+q^{3}=0\), the integral in \(k^{0}\) is performed analytically first, and the remaining dimensions are integrated numerically. In the light-front approach, the first integration is done in the internal-loop light-front energy, \(k^{-}\), and the following integrations, over \(k^{+}\) and \(\vec{k}_{\perp}\), are performed numerically.
In the case of the covariant, or equal-time, calculation, the plus and minus components of the electromagnetic current produce exactly the same numerical results for the respective polarizations, Eq. (11) (see Figs. 1 and 2); and the covariant form factors give the same numerical results independent of which component of the electromagnetic current is utilized (see Fig. 3). In principle, electromagnetic form factors are Lorentz invariant. But for the light-front calculations, the matrix elements of the current calculated with both the \(J^{+}\) and \(J^{-}\) components of the electromagnetic current need the extra contributions, the zero modes (sometimes called non-valence contributions in the literature), to be added in order to have the full covariance respected. For the angular condition equation, we can also use the relations for the covariant form factors \(F_{1}^{\pm}\), Eq. (3) and Eq. (5),
\[J_{yy}^{\pm} = -2p^{\pm}F_{1}^{\pm},\] \[J_{zz}^{\pm} = -2p^{\pm}F_{1}^{\pm}. \tag{15}\]
Inserting the relations above, Eq. (15), into the equations for the angular condition, Eq. (13) and Eq. (14), we have, for both components of the electromagnetic current, the following result,
\[\Delta(Q^{2}) = (1+\eta)(J_{yy}^{\pm}-J_{zz}^{\pm}) \tag{16}\] \[= (1+\eta)(-2p^{\pm}F_{1}^{\pm}+2p^{\pm}F_{1}^{\pm})=0.\]
It is interesting to note that, when we calculate the covariant form factors using either the plus or the minus component of the electromagnetic current, we obtain the same values for these covariant electromagnetic form factors; in particular, the form factor \(F_{1}\), which is directly linked to the angular condition in the light-front formalism, makes this condition immediately satisfied for both components of the electromagnetic current (see Eq. (15) and Fig. 3).
But with the light-front approach the results are very different, mostly because of the rotational symmetry breaking and the contributions coming from the non-valence terms, also called zero modes [6; 15; 17; 24], both for the \(J_{ji}^{+}\) component and for the \(J_{ji}^{-}\) component of the electromagnetic current, as can be seen in the figures for the matrix elements of the electromagnetic current for each polarization, Figs. 1 and 2.
The electromagnetic form factors are expressed in terms of the matrix elements of the electromagnetic current in the light-front and instant-form bases, with the prescription of Ref. [27], for the plus and minus components of the matrix elements of the electromagnetic current, as
\[G_{0}^{GK} = \frac{1}{3}[(3-2\eta)I_{11}^{\pm}+2\sqrt{2\eta}I_{10}^{\pm}+I_{1- 1}^{\pm}]\] \[= \frac{1}{3}[J_{xx}^{\pm}+(2-\eta)J_{yy}^{\pm}+\eta J_{zz}^{\pm}],\] \[G_{1}^{GK} = 2[I_{11}^{\pm}-\frac{1}{\sqrt{2\eta}}I_{10}^{\pm}]=J_{yy}^{\pm}- J_{zz}^{\pm}-\frac{J_{zx}^{\pm}}{\sqrt{\eta}},\] \[G_{2}^{GK} = \frac{2\sqrt{2}}{3}[\sqrt{2\eta}I_{10}^{\pm}-\eta I_{11}^{\pm}-I _{1-1}^{\pm}]=\frac{\sqrt{2}}{3}[J_{xx}^{\pm}-(1+\eta)J_{yy}^{\pm}+\eta J_{zz} ^{\pm}]. \tag{17}\]
As explored recently in [23; 24], the choice to use the formulation above is due to the fact that the prescription from Ref. [27] eliminates, among all possible combinations of the four matrix elements of the electromagnetic current \(I_{m^{\prime}m}^{+}\), the component \(I_{00}^{+}\). This is very important because this component of the electromagnetic current is responsible for the appearance of the zero-mode, or non-valence, contributions to the electromagnetic current; thus the expressions above for the electromagnetic form factors of spin-1 particles give exactly the same results as the equal-time calculations.
However, as we can see in Fig. 5, even using the prescription of Grach et al., the results obtained with the minus component of the electromagnetic current in the light-front approach differ from the instant form; that is, we have zero-mode, or non-valence, contributions in the calculation of the electromagnetic form factors when we use the minus component of the electromagnetic current. In other words, the valence sector is not enough to extract the electromagnetic form factors for spin-1 particles with the minus component of the electromagnetic current. When we add these non-valence terms, or zero modes, to the matrix elements of the minus component of the electromagnetic current, we restore the covariance, and it can be verified that both components of the electromagnetic current produce the same results.
In the case of spin-one particles, like the deuteron [20; 21], the charge electromagnetic form factor of the \(\rho\)-meson has a zero, i.e., the charge form factor changes sign [14; 15; 29]. In the present work, that zero is at about \(Q_{zero}^{2}\sim 3~{}GeV^{2}\) in Fig. 5 for the charge electromagnetic form factor, \(G_{0}(Q_{zero}^{2})=0\), with the plus and minus components of the electromagnetic current calculated in the instant form. As we can see from the figure for the charge electromagnetic form factor, \(G_{0}(Q^{2})\), the calculation performed on the light front with the plus component of the electromagnetic current, using the prescription of Grach et al. [15; 27], numerically produces the same results as the equal-time calculations, because with this prescription the zero modes, or non-valence contributions, do not contribute to the electromagnetic current [23; 24]. Taking the universal ratios between the electromagnetic form factors, given in Refs. [24; 30], we get an estimate for the zero of the electromagnetic charge form factor, \(Q^{2}_{zero}=6m_{\rho}^{2}\sim 3.6\ GeV^{2}\), for the experimental bound-state mass of the rho meson, \(m_{\rho}=0.775\ GeV\) [31]; the value predicted in this work is close to this estimate.
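As a one-line arithmetic check of this estimate (values taken from the text):

```python
m_rho = 0.775          # GeV, experimental rho-meson mass [31]
print(6 * m_rho**2)    # 3.60375 GeV^2: the universal-ratio estimate for Q^2_zero
```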
For the calculation with the minus component of the electromagnetic current, the zero of the charge electromagnetic form factor, \(G_{0}(Q_{0}^{2})\), occurs around \(Q_{0}^{2}\sim 1.55\ GeV^{2}\) when only the valence part of the electromagnetic current is used to calculate that form factor (see Fig. 5, dashed red line, upper left panel). After including the non-valence terms (or zero modes) in the matrix elements of the minus component of the electromagnetic current, we obtain the same value for this zero as in the instant form and also as with the plus component of the electromagnetic current. In the light front, the position of that zero has some dependence on which light-front prescription is adopted [15; 24]; but, after the zero modes are added to the matrix elements of both the plus and minus components of the electromagnetic current, the zero positions are the same [24]. We recall that in the present work we used the prescription of Grach et al. [27], which was demonstrated in previous works to be free from pair terms, or zero modes, for the plus component of the electromagnetic current [23; 24].
As in the case of the charge electromagnetic form factor \(G_{0}(Q^{2})\), for the magnetic form factor \(G_{1}(Q^{2})\) calculated at equal time we get exactly the same result with the plus and with the minus component of the electromagnetic current, as can be seen in Fig. 5, upper right panel. In the light-front formalism, however, the calculation of this electromagnetic form factor using the minus component of the electromagnetic current gives a very different result (dashed red line) when compared with the calculation at equal time, or on the light front with the plus component of the electromagnetic current (the other curves in the figure, which are impossible to distinguish, as they provide the same numerical results). After adding the non-valence terms (or zero modes) to the minus component of the electromagnetic current, we get exactly the same results with the minus component of the electromagnetic current; it is impossible to distinguish these curves, as these calculations produce the same numerical results (see Fig. 5).
The quadrupole moment of the rho meson is obtained when \(Q^{2}\) approaches zero; the quadrupole electromagnetic form factor is negative for other values of \(Q^{2}\), as can be seen in Fig. 5. As for the charge and magnetic form factors, calculations performed at equal time with both components of the electromagnetic current give the same numerical results, as we can see in the figure: the black and blue curves are totally indistinguishable. On the other hand, the calculation on the light front also reproduces the same results with the prescription of Ref. [27] and the plus component of the electromagnetic current (orange line in the figure). When calculating this electromagnetic form factor using the minus component of the electromagnetic current, taking into account only the valence matrix elements, we have a very significant difference in the results (dashed red line in the figure).
However, taking into account the non-valence terms (or zero modes) in the matrix elements of the minus component of the electromagnetic current, we see in Fig. 5 (solid green curve) exactly the same results as when we use the plus component of the electromagnetic current, both at equal time and in the light front [23; 24].
For the numerical calculations, the vector bound-state mass is \(m_{v}=0.775\) GeV, the constituent quark mass is \(0.430\) GeV, and the regulator mass is \(m_{R}=3.0\) GeV, the same values used in Ref. [24] in order to reproduce the \(\rho\)-meson experimental properties [31].
In summary, in the present work we use the plus and minus components of the electromagnetic current to study the covariant electromagnetic form factors for spin-1 particles, together with the related charge, magnetic and quadrupole electromagnetic form factors; we verify that, in the light-front formalism, in order not to break rotational invariance, the non-valence terms, or zero modes, have to be added, not only in the matrix elements of the electromagnetic current but also in the electromagnetic form factors. The use of the full vertex [15], as well as of other spin-one particles, such as the kaon star and the deuteron, is under consideration.
_Acknowledgements:_ This work was supported in part by the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Process No. 307131/2020-3, and the Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP), No. 2019/02923-5, and was also part of the projects Instituto Nacional de Ciencia e Tecnologia - Nuclear Physics and Applications (INCT-FNA), Brazil, Process No. 464898/2014-5, and FAPESP Tematico, Brazil.
Figure 4: Angular condition for the plus component of the electromagnetic current (left panel) and the minus component of the electromagnetic current (right panel).
Figure 3: Left panel: the covariant form factor \(F_{1}(q^{2})\), calculated with \(J^{\pm}_{yy}\) at equal time and also in the light-front approach. Right panel: the \(F_{2}(q^{2})\) covariant form factor; lower panel: the \(F_{3}(q^{2})\) covariant form factor.
Figure 5: Upper left panel: the charge electromagnetic form factor \(G_{0}(q^{2})\); upper right panel: the magnetic form factor; both calculated with the plus and minus components of the electromagnetic current, in the instant form and in the light-front approach. Lower panel: the quadrupole form factor \(G_{2}(q^{2})\), with labels as in the upper panels. |
A bound state of two quarks forming a spin-1 particle is an excellent system for understanding the electromagnetic properties of hadronic states. These systems are in general relativistic, requiring approaches based on quantum field theory. In the present work, we employ both the instant form of quantum field theory and light-front quantum field theory (LFQFT). In the light-front formalism, the electromagnetic properties of spin-1 vector particles are commonly computed using the plus component of the electromagnetic current. In this work, in addition to the plus component of the electromagnetic current, we use the minus component to extract the covariant form factors. As a result, it becomes clear that, for the two extractions to agree, non-valence terms must be added to the matrix elements of the electromagnetic current; these zero-mode contributions restore the full covariance of the form factors. |
2302.14220 | Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5
for Machine Translation | Pretrained character-level and byte-level language models have been shown to
be competitive with popular subword models across a range of Natural Language
Processing (NLP) tasks. However, there has been little research on their
effectiveness for neural machine translation (NMT), particularly within the
popular pretrain-then-finetune paradigm. This work performs an extensive
comparison across multiple languages and experimental conditions of character-
and subword-level pretrained models (ByT5 and mT5, respectively) on NMT. We
show the effectiveness of character-level modeling in translation, particularly
in cases where fine-tuning data is limited. In our analysis, we show how
character models' gains in translation quality are reflected in better
translations of orthographically similar words and rare words. While evaluating
the importance of source texts in driving model predictions, we highlight
word-level patterns within ByT5, suggesting an ability to modulate word-level
and character-level information during generation. We conclude by assessing the
efficiency tradeoff of byte models, suggesting their usage in non-time-critical
scenarios to boost translation quality. | Lukas Edman, Gabriele Sarti, Antonio Toral, Gertjan van Noord, Arianna Bisazza | 2023-02-28T00:50:19 | http://arxiv.org/abs/2302.14220v3 | Are Character-level Translations Worth the Wait? An Extensive Comparison of Character- and Subword-level Models for Machine Translation
###### Abstract
Pretrained large character-level language models have been recently revitalized and shown to be competitive with subword models across a range of NLP tasks. However, there has not been any research showing their effectiveness in neural machine translation (NMT). This work performs an extensive comparison across multiple languages and experimental conditions of state-of-the-art character- and subword-level pre-trained models (ByT5 and mT5, respectively) on NMT, and shows that the former are not only effective in translation, but frequently outperform subword models, particularly in cases where training data is limited. The only drawback of character models appears to be their inefficiency (at least 4 times slower to train and for inference). Further analysis indicates that character models are capable of implicitly translating on the word or subword level, thereby nullifying a major potential weakness of operating on the character level.
## 1 Introduction
Character-level or byte-level models1 have been a source of interest in Natural Language Processing (NLP) for many years, promising a pipeline without the need for a subword tokenizer and the ability to process and output text at a finer granularity. However, they have failed to become the dominant paradigm over subword-level models. The general consensus seems to be that character models perform similarly to subword models while requiring additional time and compute resources, due to the models requiring longer input sequences. There are some instances in which character models have been shown to outperform subword models; however, these instances may be seen as niche (e.g. tasks that specifically require character information) or unrealistic (e.g. using data corrupted with synthetic noise) (Xue et al., 2022).
Footnote 1: We will hereafter refer to byte-level models as character-level models or simply character models.
There are, however, some settings in which the latest character models have not been tested. In this work we compare the effectiveness of state-of-the-art character and subword models on one of the most researched subfields of NLP, machine translation (MT). Despite the popularity of neural MT (NMT) overall, character models have not yet been thoroughly researched for this task. Current research on character models for NMT has used models trained from scratch (Libovicky et al., 2021; Edman et al., 2022), and such models can only be reliably used to assess performance on high-resource languages, where the beneficial effects of multilingual pretraining are less impactful.
Given the diversity of NMT settings, be it low or high-resource, similar or distant language pairs, or various writing scripts, there are several potential avenues for character models to stand out, either positively or negatively. We fine-tune the character model ByT5 (Xue et al., 2022) and its subword counterpart mT5 (Xue et al., 2021) on a variety of languages and scenarios. Among our numerous findings, those that stand out are:
Figure 1: ByT5’s source vs. target attribution for the output translation: “That is super-good!” Peaks in source attribution at the beginning of each word indicate an internal word-level understanding.
* Using a standard NMT training scheme, ByT5 performs in general better than mT5.
* ByT5's edge over mT5 increases when resources are limited.
* With a simple NMT instruction tuning setup, ByT5 can learn NMT faster than mT5, with less data.
* Using many examples during instruction tuning causes ByT5 to lose cross-lingual performance in the zero-shot setting.
* ByT5 performs particularly well for languages not seen in pretraining that are similar to a high-resource language.
Our findings support the idea that in several realistic circumstances, particularly low-resource scenarios, character models are a superior choice in terms of quality of translation over the widely-used subword models.
We further analyze ByT5's performance on translation, finding that its success can be attributed to its ability to implicitly segment sentences into words or subwords (exemplified by Figure 1).
We start with an overview of prior work in Section 2. This is followed by our methodology in Section 3. Section 4 shows our results for trained language pairs, while Section 5 shows our cross-lingual results. In Section 6, we analyze our results, looking at how ByT5 is translating and where it performs well. Section 7 shows the efficiency of character models (or lack thereof), and we conclude in Section 8.
## 2 Related Work
Character-level models have long been of interest for use in machine translation, dating back to when statistical models were the dominant paradigm Tiedemann and Nakov (2013). At that time, character models were already competitive with word-level models, particularly in the cases where training data was limited to 100 thousand sentence pairs or less. We note that, at the time, subword tokenizers such as BPE Sennrich et al. (2016) were not yet commonly used.
Character-level NMT models have also been researched, first using RNNs Costa-jussa and Fonollosa (2016); Lee et al. (2017), and more recently with Transformers Libovicky et al. (2021); Edman et al. (2022), all of which compare against the more widely-used subword-based models as baselines. The overall results have been less convincing, with either moderate or no improvement over subword models, all while requiring considerably more time for both training and inference.
While these works were less convincing for character models, it should be noted that they were mostly focused on translation for high-resource languages. In the context of low-resource MT, Edman et al. (2022) showed that character-level models can perform better than subword models on the low-resource pair Xhosa-Zulu, while Carrion-Ponz and Casacuberta (2022) showed that quasi-character models (subword models with a small vocabulary of size 350) perform better than subwords with a more standard vocabulary size of 32 thousand when data is limited for a number of European languages, finding consistent improvements across several domains.
All of the previous work with character models in MT however has trained MT models from scratch, as this is a long-standing practice in the field of MT. This is not a particularly practical scenario for many languages, especially low-resource ones, where the paradigm of fine-tuning a pre-trained, multilingual model has shown to be effective Ranathunga et al. (2023).
The size of the models previously tested was also not conducive to testing the effect of cross-lingual transfer, as the models had less than 70 million parameters (based on Transformer-Base). In contrast, ByT5-small has 300 million, up to 13 billion for ByT5-XXL. With an order of magnitude difference in model size, there may be emergent properties of larger character or subword models when trained to translate.
There is evidence from other generative tasks that indicates that such could be the case. ByT5 has been shown to consistently outperform its subword counterpart mT5 for 3 generative tasks: TweetQA, GemXSUM, and DROP Xue et al. (2022). However these generative tasks are all in English, and all have a significantly shorter output sequence than their input sequence. Considering these distinctions, it remains unclear whether this superior performance of ByT5 would extend to translation tasks, necessitating this work.
## 3 Method
As our experiments are aimed to provide a fair comparison of state-of-the-art character and subword
models for translation, we first justify our model choice, followed by our training scheme, and lastly our choice of metric for evaluation.
### Models
While there are other models that operate on the character level (Boukkouri et al., 2020; Tay et al., 2021), we opt to compare ByT5 to its subword counterpart mT5. These models are, to our knowledge, the most comparable due to their similar training setup and parameter count.
We note that although the parameter counts between mT5 and ByT5 models are similar, Xue et al. elect to increase the width (i.e. hidden dimension size) of the ByT5 models to compensate for it using fewer parameters for embedding bytes. This is most noticeable in the small models, with 85% of the parameters of mT5 being used for embeddings, compared to 0.3% for ByT5 (Xue et al., 2022). As this disparity lessens with increasing model size, we consider this difference to be an explaining factor if the disparity in any of our results also correlates negatively with model size. As such, the majority of our experiments use the small, base, and large model sizes of ByT5 and mT5.
### Training
To train our translation models, we finetune mT5 and ByT5 models of sizes small, base, and large using the same prompt used in Raffel et al. (2020):
_Translate <S> to <T>: <src>_
where <S> is the source language, <T> is the target language, and <src> is the source text. We primarily use WMT's NewsCommentary v162 datasets for training. We consider 5 levels of "resourcedness" for training, using {0.4, 2, 10, 50, 250} thousand sentence pairs. We also use the WMT14 German-English dataset to test higher-resource settings of 1.25 and 4.5 million sentence pairs (i.e. the entire dataset).3 Our hyperparameter choices for training can be found in Appendix A. For development and testing, we use the FLoRes200 dataset Costa-jussa et al. (2022).
Footnote 2: [https://data.statmt.org/news-commentary/v16/](https://data.statmt.org/news-commentary/v16/)
Footnote 3: By default, we use NewsCommentary, any use of WMT14 is specified.
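To make the setup concrete, here is a minimal fine-tuning sketch, assuming the public HuggingFace checkpoints (`google/byt5-small` and `google/mt5-small`); the actual trainer, batching, and hyperparameters used in the paper are described in Appendix A and are not reproduced here:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")  # or google/mt5-small
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

src, tgt = "Das ist super-gut!", "That is super-good!"
prompt = f"Translate German to English: {src}"  # the prompt format given above

inputs = tokenizer(prompt, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()  # one gradient step of fine-tuning (optimizer omitted)
```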
As for training language pairs, we train on {Arabic, German, Russian}\(\rightarrow\)English and {Portuguese, English}\(\rightarrow\)Spanish.4 We choose these language pairs as they are all within NewsCommentary, guaranteeing a similar quality, and for the varying degrees of language similarity.
Footnote 4: We opt for mostly into-English experiments so that our results can be easily compared for any of the source languages used. Nevertheless, we also include English\(\rightarrow\)German results in Appendix B.
We additionally test the models' cross-lingual retention with the FLoRes200 dataset, whose wide variety of languages allow us to further isolate important language characteristics. To test the models' cross-lingual capabilities, we simply swap out <S> and <src> for a new language, keeping the target language the same. No further training is performed, making the setting zero-shot. We specifically target 3 factors:
1. Whether the model has seen the substituted language during **pretraining**.
2. Whether the substituted language is in the same **script** as the trained language.
3. Whether the substituted language is in the same language **family** as the trained language.
These 3 factors are easily identifiable for any given language, allowing for a simple means of potentially assessing the efficacy of character and subword models in any specific linguistic context.
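Concretely, the substitution amounts to rewriting the prompt with a new source language while keeping the fine-tuned model fixed; the sentences below are illustrative examples, not taken from FLoRes200:

```python
def make_prompt(src_lang: str, tgt_lang: str, text: str) -> str:
    return f"Translate {src_lang} to {tgt_lang}: {text}"

# fine-tuned on German->English ...
make_prompt("German", "English", "Das ist super-gut!")
# ... then evaluated zero-shot on a related source language
make_prompt("Dutch", "English", "Dat is supergoed!")
```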
| | 250k | 1.25M | 4.5M |
| --- | --- | --- | --- |
| mt5-large | 54.72 | 58.38 | 61.51 |
| byt5-large | **56.83** | **59.78** | **62.73** |

Table 1: Performance of mT5 and ByT5 trained on WMT14 German\(\rightarrow\)English, using chrF++.
Figure 2: Performance of mT5 and ByT5 on German\(\rightarrow\)English.
### Evaluation
We considered several metrics for evaluating these models, opting eventually for chrf++ [14], which is formulated as a combination of word-level and character-level F-scores.
There are several other potential metrics to choose from, and we cover in more detail the reasoning for why we did not select them in Appendix C. We additionally provide the BLEU [14] scores for all experiments in Appendix B.
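A minimal scoring sketch with the sacrebleu implementation (assuming its `CHRF` metric, where setting `word_order=2` yields chrF++):

```python
from sacrebleu.metrics import CHRF

hypotheses = ["That is super-good!"]   # system outputs (illustrative)
references = ["That is really good!"]  # one reference per hypothesis

chrf_pp = CHRF(word_order=2)  # chrF++: character F-score plus word 1-/2-grams
print(chrf_pp.corpus_score(hypotheses, [references]).score)
```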
## 4 Direct Translation Results
Our direct translation results look directly at the performance of the models on the language pairs for which they are fine-tuned. We start by varying the amount of training data. As seen in several previous works noted in Section 2, character models appear to thrive in situations where training data is scarce. We confirm this with our experiments on {Arabic, German}\(\rightarrow\)English.
We continue by varying the distance between the source and target languages to find any differences in the models' abilities to handle increasing language distance. This is motivated by the assumption that character-level models are more capable of handling similar languages with only small changes required for accurate translation, such as changes in spelling, while largely preserving word order. We test this using a similar language pair, Portuguese\(\rightarrow\)Spanish compared to the more distant English\(\rightarrow\)Spanish, as well as German\(\rightarrow\)English compared to Arabic\(\rightarrow\)English.
### Amount of Training Data
Varying the amount of training data reveals the largest difference in performance of character and subword models. Figures 2 and 3 show that ByT5 outperforms mT5 in all resource levels. However, the performance gap increases as resources are limited. We see that model size also plays a role, with the large model having the largest performance gap of 8-10 points when only 400 sentences are available. This is counter to our assumption that, given the differences in architecture being largest between the small models, we would see the largest difference in performance from them (see Section 3.1).
We test higher-resource settings in Table 1, which shows the performance of the large models on the German\(\rightarrow\)English WMT14 dataset. Here, we see that while ByT5 maintains a lead, the performance of the models does appear to be converging near the full size of the dataset. This indicates that eventually, with enough training data, the advantage of operating on the character-level diminishes. However, for many language pairs, such as those with less than 4.5 million sentence pairs, the advantage could potentially amount to an increase in translation performance.
Figure 4: Performance of models on English\(\rightarrow\)Spanish, relative to Portuguese\(\rightarrow\)Spanish.
Figure 5: Performance of models on Arabic\(\rightarrow\)English, relative to German\(\rightarrow\)English.
Figure 3: Performance of mT5 and ByT5 on Arabic\(\rightarrow\)English.
### Source-Target Language Distance
To examine the effect of language distance between the source and target, we divide the scores of our more distant language pairs (eng\(\rightarrow\)spa, ara\(\rightarrow\)eng) by the scores of our closer language pairs (por\(\rightarrow\)spa, deu\(\rightarrow\)eng). This is shown in Figures 4 and 5, respectively. Here we can see that, in most cases, ByT5 appears more robust to language distance than mT5, achieving a greater percentage of its performance on the closer pair when the distance between source and target increases.
## 5 Cross-lingual Results
Our cross-lingual results examine how well the models retain information from similar languages, or generalize to unseen languages, when trained on a specific language pair. This can have important ramifications on best practices for training a low-resource model, as cross-lingual transfer learning is a common technique for obtaining a good initialization for a language pair with few training sentences.
We first look at the general performance by selecting 10 languages from different language families with which we replace the source language. This aims to provide an overview of the performance one can expect from character and subword models.
Second, we focus on performance based on whether the substituted languages have been seen in pretraining. This tests the ability of the models to generalize to novel languages, compared to their retention of similar, often higher-resourced languages.
Third, we investigate the importance of writing script. Historically in MT, differences in script have motivated the use of transliteration Durrani et al. (2010); Nakov and Tiedemann (2012), however, as of yet, no work has examined the effect of script on neural character models. We compare the cross-lingual retention of Slavic languages written in Cyrillic and Latin using models trained on Russian\(\rightarrow\)English.
Lastly, we look at the effect of language distance. Rather than comparing the distance in source and target as examined in Section 4.2, here we look at the distance between the original and substituted source language. Specifically, we look at 4 branches in the Indo-European language family: Germanic (the original source language branch), Italic, Celtic, and Slavic.
### Overall
Figure 6 shows the average performance of every model on a selection of 10 languages from 10 of the most representative language families.5
Footnote 5: We provide details of this selection in Appendix D.
Overall, we see that ByT5 continues to outperform mT5 for lower resource scenarios, however its zero-shot performance drastically decreases in several cases above 10 thousand training examples. However, mT5 continues to perform well up to 250 thousand examples.6 This phenomenon of ByT5 being both better at learning and generalizing from fewer examples but also losing generality faster is difficult to explain and requires further investigation. However, the trend of requiring fewer examples has been seen in the growth of large language models, to the point where simply prompting without any fine-tuning can be effective Bommasani et al. (2021). This implies that the character models are behaving as though they are larger models, and in the sense of training time, they are (see Section 7). The faster loss of generality is, to our knowledge, not an attribute of larger language models, though this is also likely untested.
Footnote 6: Upon further testing with our higher resource models, we do start to see this trend with mT5 as well.
Seeing as this loss of generality appears to occur
Figure 6: Average performance of German\(\rightarrow\)English (left) and Arabic\(\rightarrow\)English (right) models where German or Arabic is replaced with a broad selection of languages.
only when at least 50 thousand sentences are used for training, the following sections only analyze results where 400-10,000 sentences are used for training.
### Presence in Pretraining
The presence of a language in pretraining could greatly affect the performance of a model. As we have seen in Section 4.1, the amount of data used for fine-tuning can have a great effect, so naturally the amount of data in pretraining should show similar trends. Here, we test Portuguese\(\rightarrow\)Spanish models on 3 related languages seen in pretraining (Catalan, French, and Italian) and 3 not seen (Asturian, Friulian, and Occitan). We also test this with our German\(\rightarrow\)English models, again using 3 seen languages (Danish, Dutch, Swedish) and 3 unseen (Faroese, Limburgish, and Norwegian Nynorsk).
In Figure 7, we see that for German\(\rightarrow\)English and Portuguese\(\rightarrow\)Spanish, ByT5 performs markedly better than mT5 on languages that are similar to the source but are not seen in pretraining, compared to similar languages seen in pretraining. The differences in the Portuguese\(\rightarrow\)Spanish model are larger than in the German\(\rightarrow\)English model. This could be due to the relative closeness of Italic languages compared to Germanic languages, which have a larger degree of diversity.7
Footnote 7: Using lang2vec’s KNN features (Littell et al., 2017), the average similarity between Portuguese and the Italic languages (0.9127) is higher than German and the Germanic languages (0.8943).
### Script
With regard to the script of the languages used, we see very little effect. Figure 8 shows the performance of our Russian\(\rightarrow\)English model on other Slavic languages, some of which (Ukrainian, Bulgarian, and Serbian) use the Cyrillic alphabet like Russian, while others (Polish, Czech, Slovenian) use the Latin alphabet.
Here we see that script has a smaller but noticeable effect, where ByT5 generally performs better on languages with the same script than those with a different script. This is intuitive, as the embeddings used for Cyrillic are almost entirely different from those used in Latin, so it is more difficult for ByT5 to detect similar words when they are spelled with a different character set.
### Source-Source Language Distance
Intuitively, it would make sense that a character model could capitalize on similar source languages, given that there are similar morphemes which differ only by a few characters. As such, we look at the zero-shot performance of a German\(\rightarrow\)English model for 4 different language families: Germanic, Italic, Celtic, and Slavic. These 4 families all contain at least 3 languages written in Latin script and
Figure 8: Average chrF++ performance of languages with the same script as the source (Cyrillic) or a different script (Latin).
Figure 7: Average chrF++ performance of languages seen in pretraining versus unseen in pretraining for Portuguese\(\rightarrow\)Spanish (left) and German\(\rightarrow\)English (right) models. S, B, or L indicate the model size (small, base, or large), and the subscript indicates the amount of training data used.
are part of the mC4 corpus (the corpus used for pre-training), making them ideal candidates to isolate the effect of language family.
As we can see in Figure 9, language family does not appear to play a large role. ByT5 performs considerably better on Celtic, though this may be more due to the resourcedness of Celtic during pretraining, where only 5.8 billion tokens of the Celtic languages were seen, compared to the 201.8 billion tokens of Slavic and 741 billion tokens of Italic.
The slight underperformance of Slavic with the smaller model sizes could be due to Slavic languages being more distant from Germanic, Celtic, and Italic languages.
## 6 Analysis
Our analysis focuses on the reasoning for why ByT5 may perform better than mT5 for many cases in translation, as shown in the previous two sections.
We start by showing that character models can and do predominantly operate on the word level when translating. We then identify two aspects in which character models stand out: translation of cognates and translation of rare words.
### Translation Granularity
Although Xue et al. (2022) showed that ByT5 works better on a few generative tasks such as summarization, translation differs from these previously tested tasks in that the information on the source and target side has roughly equivalent length and content. As such, one could argue that a subword model would be better suited for translation, given that it may be easier for the model to create an almost one-to-one mapping between source and target subwords. Meanwhile, there is rarely a one-to-one mapping between characters, save for some very closely related languages or dialects.
The one-to-one mapping of source to target subwords is easy to visualize with saliency (Simonyan et al., 2013). Saliency is an attribution method using gradients collected at prediction time, which allows us to quantify the importance of each source token and previous target tokens in predicting the target token at each generation step. Thus, we can apply this attribution method to our character models to gauge the influence of the source and target context in the model's predictions. We use InSeq (Sarti et al., 2023) for our attribution analysis.
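A minimal attribution sketch with Inseq (the checkpoint name is a placeholder; in practice this would be a ByT5 model fine-tuned for translation as described in Section 3):

```python
import inseq

# load a seq2seq model together with a gradient-based attribution method
model = inseq.load_model("google/byt5-small", "saliency")

out = model.attribute(
    input_texts="Translate German to English: Das ist super-gut!",
)
out.show()  # per-byte source vs. target contributions, as in Figure 1
```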
A similar analysis of token importance was conducted by Voita et al. (2021) on subword MT models, showing the relation between exposure bias and the distribution of source and target contributions. In particular, hallucinatory behavior in MT model was shown to be connected to a language modeling regime in which neural models disregard source tokens, assigning more importance to the previously generated target prefix.
For large character models operating at a very low input granularity, it is reasonable to assume that word-level or morpheme-level co-occurrences could be implicitly learned throughout the training process. We can verify this hypothesis by evaluating source-side and target-side contributions in relation to the generated characters.
From an input contribution perspective, this would result in higher contributions in the source text for characters marking the beginning of a new word in the translation, while all subsequent characters would then rely heavily on the previous ones in the same generated word.
In other words, in a character model, if it is translating character-by-character, we would expect the attributions to the source side for each character to be relatively uniform. However if it is translating word-by-word, we would expect the first character's attributions to be more based on the source side, and subsequent characters attributions to be more based on the previous characters in that generated word.
Referring back to Figure 1, we see an example of the source-side versus target-side attributions, where the source-side attribution is elevated at the beginning of each word. This implies that the model is translating word-by-word, as it decides on
Figure 9: Average chrF++ scores on different language families, compared to the same family (Germanic).
the appropriate translation for a given word when it generates the first character, and subsequently relies more on the target-side to complete the word.
In Figure 10, we see the average source attribution for each character, given its position in the word (with 0 being the 1st position). The average source-side attribution declines and, correspondingly, the average target-side attribution increases, indicating that there is an implicit word-by-word translation. This might explain why, in every scenario shown in Section 4, ByT5's performance is greater than or equal to mT5's performance: ByT5 is at the very least capable of mimicking word-by-word translation, while also exploiting character-by-character modeling whenever useful (e.g. for translating cognates, see Section 6.2).
We also see that the slope for English\(\rightarrow\)Spanish is steeper than Portuguese\(\rightarrow\)Spanish, which may be indicating that character models operate more on the character level when translating Portuguese into Spanish, given their linguistic similarity relative to English and Spanish. This implies that character models are capable of varying the granularity of their translation, depending on the language pair.
### Performance on Cognates
If character models primarily operate on the word-level for translation, why do we see such a large performance increase from character models, particularly when trained on small amounts of data?
One intuitive reason could be that they can operate on the character-level when desirable. Such would be the case for orthographically similar word pairs, or cognates, as their character-level similarity is innately useful and easily exploited by a character model.
To confirm that character models perform better at translating cognates, we align the source, reference, and hypothesis sentences with AWESoME (also known as awesome-align) (Dou and Neubig, 2021) in order to get a word-level accuracy. We define a cognate based on the inverse normalized-Levenshtein distance of the source and reference words, varying the threshold from 0 to 1.
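A small self-contained sketch of this cognate criterion (our own illustration of the stated definition):

```python
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cognate_similarity(src_word: str, ref_word: str) -> float:
    """Inverse normalized Levenshtein distance in [0, 1]; 1 means identical."""
    return 1.0 - levenshtein(src_word, ref_word) / max(len(src_word), len(ref_word), 1)

print(cognate_similarity("universidade", "universidad"))  # ~0.92, a clear cognate
```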
In Figure 11, we see the difference in accuracy of the large ByT5 and mT5 models trained for Portuguese\(\rightarrow\)Spanish. We see that as the similarity threshold increases (i.e. the words become more similar), the accuracy disparity also increases in favor of ByT5. Such is especially the case for the models trained on less data, indicating that character models can learn cognate translations not only more effectively, but also more quickly.
### Performance on Rare Words
As we have seen in our cross-lingual results (Section 5), character models can substantially outperform subword models on similar languages not seen in pretraining. Additionally, across all of our results, a common theme appears to be that character models perform well in low-resource scenarios.
Ostensibly, it would follow that character models can correctly translate rare words more often than subword models, however is this indeed the case? To answer this, similar to our analysis of cognates (Section 6.2), we also measure word-level translation accuracy, this time binning based on the frequency of the words in the training set. We use the large models trained on 10 thousand sentence pairs of German\(\rightarrow\)English. Figure 12 shows the results.
Here, we see that ByT5 has a higher translation accuracy on words of fewer than 100 occurrences. The accuracy disparity between the trained source language (German) and the 3 Germanic languages
Figure 11: Difference in word-level accuracy for large models trained on Portuguese\(\rightarrow\)Spanish.
Figure 10: Average source attributions for ByT5-Large, based on the position of each byte token within a word.
seen in pretraining (Danish, Dutch, and Swedish) is minimal, showing that fine-tuning on a language has a relatively equal effect on similar languages seen in pretraining. Meanwhile, for the languages unseen in pretraining (Faroese, Limburgish, and Norwegian Nynorsk), character models have a higher accuracy across the board, though the gap is still more pronounced for lower-frequency bins.
## 7 Efficiency
Although we have shown that the translation quality of character models is competitive or better than subword models, another important aspect that should not be ignored is the efficiency of character models. We report the training and inference times for our models in Table 2, using a single Nvidia V100 (32GB) GPU.
Both the training and inference speeds in samples per second are considerably lower for the character models, which are 4-5 times slower to train and 5-6 times slower at inference. The number of epochs needed is lower for the character models, but not by enough to counteract their slowness during training.
The slower speed comes largely from the increase in sequence length. While we tried to balance the size of the batches such that each model sees the same amount of text per batch, achieving this required the character models to accumulate gradients for 4 times as many iterations as the subword models.
Thus, if training or inference speed are a concern, subword models are likely the superior choice, particularly for high-resource languages. In the low-resource setting, there is a significant trade-off between accuracy and speed when choosing between character and subword models.
## 8 Conclusion
Subword-level models have been the dominant model for machine translation, however this work has shown that character models can be competitive or superior in many circumstances. First, character models outright have better performance on the trained language pair. Second, character models particularly excel when training data is scarce. Third, character models have superior cross-lingual transferability, especially with languages unseen but similar to the source language.
We attribute this superior performance, as shown in our analyses, to a character model's ability to translate both word-by-word and character-by-character, choosing the appropriate granularity for the context. This results in better word-level accuracy on cognates and rare words.
The performance increase is however not without a trade-off: speed. The character models are at least 4 times slower in both training and inference, leading them to be sub-par for many real-world situations. Nevertheless, character models can still find use in less time-sensitive settings.
So are character-level translations worth the wait? Maybe.
## Acknowledgements
We thank Gabriele Sarti for the many fruitful discussions regarding attribution analysis and for the idea of investigating the models' performances on translating cognates. We also thank the Center for Information Technology of the University of Groningen for their support and for providing
| Model | Training samples/s | Training epochs | Inference samples/s |
| --- | --- | --- | --- |
| mt5-small | **2.5** | 38.11 | **52.91** |
| byt5-small | 0.43 | **29.24** | 8.90 |
| mt5-base | **1.15** | 22.87 | **20.77** |
| byt5-base | 0.24 | **18.16** | 3.96 |
| mt5-large | **0.48** | 19.58 | **6.23** |
| byt5-large | 0.12 | **16.25** | 1.17 |

Table 2: The training and inference speeds for German\(\rightarrow\)English experiments. Epochs reported are for the models trained on 10 thousand pairs, using early stopping. Best result per model size and column shown in bold.
Figure 12: Difference in word level accuracy by their frequency in the test set for large models trained on German\(\rightarrow\)English with 10 thousand sentence pairs
access to the Peregrine high performance computing cluster. |
Pretrained character-level and byte-level language models have been shown to be competitive with popular subword models across a range of Natural Language Processing (NLP) tasks. However, there has been little research on their effectiveness for neural machine translation (NMT), particularly within the pretrain-then-finetune paradigm. This work performs an extensive comparison across multiple languages and experimental conditions of character- and subword-level pretrained models (ByT5 and mT5, respectively) on NMT. We show the effectiveness of character-level modeling in translation, particularly in cases where fine-tuning data is limited. In our analysis, we show how character models' gains in translation quality are reflected in better translations of orthographically similar words and rare words. |
2309.00127 | FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on
Federated Learning | Current backdoor attacks against federated learning (FL) strongly rely on
universal triggers or semantic patterns, which can be easily detected and
filtered by certain defense mechanisms such as norm clipping, comparing
parameter divergences among local updates. In this work, we propose a new
stealthy and robust backdoor attack with flexible triggers against FL defenses.
To achieve this, we build a generative trigger function that can learn to
manipulate the benign samples with an imperceptible flexible trigger pattern
and simultaneously make the trigger pattern include the most significant hidden
features of the attacker-chosen label. Moreover, our trigger generator can keep
learning and adapt across different rounds, allowing it to adjust to changes in
the global model. By filling the distinguishable difference (the mapping
between the trigger pattern and target label), we make our attack naturally
stealthy. Extensive experiments on real-world datasets verify the effectiveness
and stealthiness of our attack compared to prior attacks on decentralized
learning framework with eight well-studied defenses. | Yanqi Qiao, Dazhuang Liu, Congwen Chen, Rui Wang, Kaitai Liang | 2023-08-31T20:25:54 | http://arxiv.org/abs/2309.00127v2 | # FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on Federated Learning
###### Abstract
Current backdoor attacks against federated learning (FL) strongly rely on universal triggers or semantic patterns, which can be easily detected and filtered by certain defense mechanisms such as norm clipping, trigger inversion, etc. In this work, we propose a novel generator-assisted backdoor attack, FTA, against FL defenses. We consider, for the first time, the natural stealthiness of triggers during global inference. In this method, we build a generative trigger function that can learn to manipulate benign samples with naturally imperceptible trigger patterns (_stealthy_) and simultaneously make poisoned samples include similar hidden features to those of the attacker-chosen label. Moreover, our trigger generator repeatedly produces triggers for each sample (_flexibility_) in each FL iteration (_adaptivity_), allowing it to adjust to changes of hidden features between global models of different rounds. Instead of using the universal and predefined triggers of existing works, we break this wall by providing three desiderata (i.e., stealthiness, flexibility and adaptivity), which helps our attack avoid the presence of backdoor-related feature representations. Extensive experiments confirm the effectiveness (above 98% attack success rate) and stealthiness of our attack compared to prior attacks on decentralized learning frameworks with eight well-studied defenses.
## 1 Introduction
Federated learning (FL) has recently achieved practical performance in various real-world applications and tasks, such as the prediction of oxygen requirements of symptomatic patients with COVID-19 [13], autonomous driving [32], Gboard [56] and Siri [38]. It supports collaborative training of an accurate global model by allowing multiple agents to upload local updates, such as gradients or weights, to a server without compromising local datasets. However, this decentralized paradigm unfortunately exposes FL to a security threat -- backdoor attacks [4; 55; 48; 62; 25]. Existing backdoor defenses for FL are capable of scrutinizing anomalies in malicious model updates. However, prior attacks fail to achieve adequate stealthiness under such robust FL systems due to the malicious parameter perturbations introduced by the backdoor task.
We summarize the following open problems from the existing backdoor attacks against FL1:
Footnote 1: Due to the space limit, we review prior backdoor attacks and defenses on FL in Appendix A.1
**P1: The abnormality of feature extraction in convolutional layers.** Existing attacks use patch-based triggers ("squares", "stripes", etc.) [2; 55; 62; 25] at a fixed position or semantic backdoor triggers (shared attributes within the same class) [2; 48]. Consequently, the poisoned samples are misclassified by the victim model towards the target label after backdoor training. However, we find that prior attacks manipulate samples with universal patterns throughout the training iterations, which fails to provide enough "stealthiness" for the hidden features of the poisoned samples.
Backdoor training with such triggers attaches extra hidden features to the backdoor patterns or revises existing hidden features in the feature space of the benign-class domain. This makes the latent representations of poisoned samples extracted by the filters _standalone_ compared to their benign counterparts; **Figure 5 (a)** illustrates this intuitively. Therefore, unrestricted trigger patterns can cause aberrant weight changes in the filters responsible for backdoor patterns. This abnormality induces weight outliers, which makes the backdoor attack vulnerable to filter-wise adversarial pruning [52, 29, 51].
**P2: The abnormality of backdoor routing in fully connected layers.** Compared with a benign model, a malicious model needs to be trained on one more task, i.e., the backdoor task. Specifically, in fully connected (FC) layers, the backdoor task is to establish a _new_ routing [49, 10], separated from the benign ones, between the independent hidden features of the attacker's universal pattern and its corresponding target label, which yields an anomaly at the parameter level. The cause of this anomaly is natural, since the output neurons for the target label must contribute to both benign and backdoor routing, which requires significant weight/bias adjustments to the neurons involved. We note that the last FC layer in current mainstream neural networks always accounts for a large fraction of the total number of parameters (e.g., 98% for a classic CNN, 62% in ResNet18). As mentioned in [40], the final FC layer of the malicious classifier presents significantly greater abnormality than other FC layers, with backdoor routing seen as the secondary source of these abnormalities. Note that these abnormalities (**P1-2**) would arise in any existing universal trigger design under FL.
**P3: The perceptible trigger for inference.** Admittedly, it is not necessary to guarantee the natural stealthiness of triggers on training data against FL, since accessibility is limited to each client exclusively due to privacy. However, we pay attention to trigger stealthiness during the inference stage, in which a poisoned sample with a naturally stealthy trigger can mislead human inspection. A test input with perceptible perturbation in FL [2, 55, 62, 25] can be easily identified by an evaluator or a user who can tell the difference between 'just' an incorrect classification/prediction of the model and a purposeful wrong decision due to a backdoor in the test/use stage.
**P1-3** can fatally harm the stealthiness and effectiveness of prior attacks under robust FL systems. The stealthiness issue has two aspects (trigger/routing). For **P3**, visible fixed triggers contain independent hidden features, and these hidden features lead to a new backdoor routing as discussed in **P1-2**. Meanwhile, the backdoor inference stage cannot proceed properly because those triggers are not sufficiently hidden. For example, we recall that DBA [55], Neurotoxin [62] and 3DFed [25] use universal patterns that can be clearly filtered out by trigger inversion methods such as FLIP [59]. Moreover, **P1-2** can cause weight dissimilarity between benign and backdoor routing, and this dissimilarity can be easily detected by cluster-based filtering such as FLAME [33]. Efficiency is also a striking problem for **P1-2**, since an extra computational budget is required to learn the new features of the poisoned data and to form the corresponding backdoor routing. In this work, we regard problems **P1-3** as the _stealthiness_ of backdoor attacks in the context of FL.
A natural question then arises: _could we eliminate the anomalies introduced by new backdoor features and routing (i.e., tackling P1-2) while making the trigger sufficiently stealthy for inference in the decentralized scenario (i.e., addressing P3)?_
To provide a concrete answer, we propose a stealthy generator-assisted backdoor attack, FTA, that adaptively (per FL iteration) provides triggers in a flexible manner (per sample) in the decentralized setup. FTA achieves satisfactory stealthiness by producing imperceptible triggers with a generative neural network [18, 1] in a _flexible_ way for each sample and in an _adaptive_ manner over the entire FL iterations. To address **P3**, our triggers should provide natural stealthiness to avoid inspection during inference. To solve **P1**, the difference in hidden features between poisoned data and their benign counterparts should be minimized. Due to the imperceptibility between poisoned and benign data in the latent representation, the corresponding backdoor routing will not be formed and thus **P2** is naturally addressed.
Specifically, the generator is trained to produce a trigger for each sample that makes the latent features of the poisoned sample similar to those of benign samples with the target label (**P1**). This naturally reduces the abnormality of creating an extra backdoor routing in **P2**, since the latent features make poisoned data "look like" benign data of the target label. Our trigger is thus less perceptible and more flexible than the predefined patch-based ones in prior attacks (**P3**). Additionally, to make the flexible trigger robust and adaptive to changes in the global model, the generator is continuously trained across FL iterations. Compared with existing works using fixed and universal trigger patterns, we break this wall and, for the first time, make the generated triggers stealthy, flexible and adaptive in FL setups. Additionally, compared to universal trigger-based attacks, e.g., 3DFed, our generator-assisted attack can naturally evade (universal) trigger inversion defenses such as FLIP. Since our trigger generation ensures that poisoned samples have hidden features similar to benign ones, poisoned data can mostly reuse the benign routing; the backdoor task therefore does not need to be learned entirely from scratch and achieves high attack efficiency, as shown in Figure 3. Finally, we formulate the process of finding the optimal trigger generator and training the malicious model as a bi-level, non-convex and constrained optimization problem, and reach the optimum via a simple but practical optimization procedure. We illustrate learning the trigger generator, training the malicious model and testing the backdoor in Figure 1, and showcase various backdoored images in Figure 2 to demonstrate the imperceptible perturbation introduced by our generator.
Our main **contributions** are summarized as follows:
\(\bullet\) We propose a stealthy generator-assisted backdoor attack (FTA) against robust FL. Instead of utilizing a universal trigger pattern, we design a novel trigger generator that produces naturally imperceptible triggers during the inference stage. Our flexible triggers preserve the hidden-feature similarity of benign data and successfully lead poisoned data to reuse the benign routing of the target label. Thereby, FTA avoids parameter anomalies in the malicious update and improves attack effectiveness.
\(\bullet\) We design a learnable and adaptive generator that produces flexible triggers for the global model at the current FL iteration to achieve the best attack effectiveness. We propose a bi-level and constrained optimization problem to find the optimal generator in each iteration efficiently, then formulate a customized learning process for the FL scenario and solve it with reasonable complexity.
\(\bullet\) Finally, we present intensive experiments that empirically demonstrate that the proposed attack provides state-of-the-art effectiveness and stealthiness against eight well-studied defense mechanisms on four benchmark datasets.
## 2 Threat Model and Intuition
### Threat Model
**Attacker's Knowledge & Capabilities:** We consider the same threat model as in prior works [2; 4; 48; 62; 43; 36], where the attacker can have full access to malicious agent device(s), local training processes and training datasets. Furthermore, we do not require the attacker to know the FL aggregation rules applied in the server.

Figure 1: Overview of FTA. (I) Learn the optimal trigger generator \(g_{\xi}\). (II) Train malicious model \(f_{\theta}\). Inference/Backdoor Attack: The global model performs well on benign tasks while misclassifying the poisoned samples to the target label.
**Attacker's Goal**: Unlike untargeted poisoning attacks [20], which prevent the convergence of the global model, the goal of our attack is to manipulate malicious agents' local training processes to achieve high accuracy in the backdoor task without undermining the accuracy of the benign task.
### Our Intuition
Recall that prior attacks use universal predefined patterns (see Figure 2), which cannot guarantee stealthiness **(P1-3)** since the poisoned samples are visually inconsistent with natural inputs. These universal triggers (including tail data), used with noticeable modification across all FL iterations, introduce new hidden features during feature extraction and further influence the formation of backdoor routing. Consequently, prior attacks are easily detected by current robust defenses due to **P1-2**. Also, the inconsistency between benign and poisoned samples is not stealthy for the attacker during global model inference **(P3)**, and the triggers can be inverted in the decentralized setup.
Compared to prior attacks that focus on manipulating parameters, we bridge the gap and focus on designing stealthy triggers. To address **P1-3**, a well-designed trigger should provide four advantages: _i_) the poisoned sample is naturally stealthy with respect to the original benign sample; _ii_) the trigger achieves hidden-feature similarity between poisoned samples and benign samples of the target label; _iii_) the trigger eliminates the anomaly between backdoor and benign routing during learning; _iv_) the trigger design framework can evade robust FL defenses. The only solution that provides all these advantages over prior works simultaneously is _flexible_ triggers. The optimal flexible triggers are learnt to make the latent representations of poisoned samples similar to benign ones, enabling the reuse of benign routing and naturally diminishing outliers at the parameter level. Therefore, to achieve flexibility of the trigger patterns and satisfy these four requirements, we propose a learnable and adaptive trigger generator that produces flexible and stealthy triggers.
**v.s. Trigger generators in centralized settings.** One may argue that the attacker could simply apply a similar (trigger) generator from the centralized setup [15; 14; 64; 28; 65] to FL to achieve imperceptible triggers and stealthy model updates.
\(\bullet\)**Stealthiness.** For example, the attacker can use a generator to produce imperceptible triggers for poisoned samples and make their hidden features similar to those of the original benign samples, as in [64; 65]. This, however, cannot ensure an indistinguishable perturbation of model parameters (caused by backdoor routing) during malicious training, and it fails to capture the stealthiness in **P1-2**. This is so because it only constrains the distinction in the input domain and the hidden features between poisoned and benign samples, rather than the hidden features between poisoned samples and benign samples of the **target** label. In other words, a centralized generator masks triggers in the input domain and feature space of benign samples, concealing the poisoned sample visually and in its feature representation, but this cannot ensure the absence of backdoor routing for poisoned data. A stealthy backdoor attack on FL should mitigate the routing introduced by the backdoor task and guarantee the stealthiness of model parameters, instead of just the hidden features of poisoned samples compared to their original inputs.
\(\bullet\)**Learning.** The centralized learning process of existing trigger generators cannot be directly applied to decentralized setups, due to the continuous change of the global model and the time consumed in training the trigger generator. As an example, IBA [65] directly constrains the distance of feature representations between benign and poisoned samples. This approach cannot achieve satisfactory attack effectiveness because of the inaccurate hidden features of benign samples before global model convergence. In contrast, we propose a customized optimization method for FL scenarios that learns the optimal trigger generator for the global model of the current iteration, achieving the best attack effectiveness at practical computational cost, as depicted in Section 3.3 and Appendix A.10.

Figure 2: Visualization of backdoored images. **Top**: the original image; backdoored samples generated by baseline/Neurotoxin, DBA, Edge-case, and FTA; **Bottom**: the residual maps.
\(\bullet\)**Defenses.** We note that a robust FL aggregator can only access the local updates of all agents, not the local training datasets. A centralized backdoor attack does not need to consider the magnitude of the malicious parameters; in FL setups, however, the magnitude of malicious updates is usually larger than that of benign updates, so norm clipping can effectively weaken and even eliminate the impact of the backdoor [46; 43]. Thanks to the flexibility of our triggers, we advance the state of the art by enhancing the stealthiness and effectiveness of the backdoor attack even against well-studied defenses such as trigger inversion methods on FL, e.g., FLIP. FLIP is effective in removing prior backdoors with patch-based triggers, whereas our attack can naturally evade this SOTA defense.
## 3 Proposed Methodology: FTA
### Problem Formulation
Based on the federated scenario in Appendix A.1.1, the attacker \(m\) trains the malicious models to alter the behavior of the global model \(\theta\) under ERM as follows: \(\theta^{*}_{m}=\operatorname*{argmin}_{\theta}\sum_{(x,y)\in D^{cln}\cup D^{bd}}\mathcal{L}(f_{\theta}(x),y),\) where \(D^{cln}\) is the clean training set and \(D^{bd}\) is a small fraction of clean samples in \(D^{cln}\) used by the attacker to produce poisoned data. Each clean sample \((x,y)\) in the selected subset is transformed into a poisoned sample \((\mathcal{T}(x),\eta(y))\), where \(\mathcal{T}:\mathcal{X}\rightarrow\mathcal{X}\) is the trigger function and \(\eta\) is the target labeling function. The poison fraction is defined as \(|D^{bd}|/|D^{cln}|\). During inference, for a clean input \(x\) with true label \(y\), the learned \(f\) behaves as: \(f(x)=y,\ f(\mathcal{T}(x))=\eta(y)\).
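To make the objective concrete, here is a minimal PyTorch-style sketch of the mixed clean/poisoned loss; the function and argument names are ours (the paper does not specify an implementation at this point).

```python
import torch
import torch.nn.functional as F

def malicious_loss(model, trigger_fn, x_clean, y_clean, x_bd, target_label):
    """Joint objective over clean and poisoned minibatches.

    trigger_fn plays the role of T: X -> X and target_label the constant
    labeling function eta(y) = c; both names are illustrative.
    """
    loss_clean = F.cross_entropy(model(x_clean), y_clean)
    y_target = torch.full((x_bd.size(0),), target_label, dtype=torch.long,
                          device=x_bd.device)
    loss_bd = F.cross_entropy(model(trigger_fn(x_bd)), y_target)
    return loss_clean + loss_bd
```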
To generate a stealthy backdoor, our main goal is to learn a stealthy trigger function \(\mathcal{T}:\mathcal{X}\rightarrow\mathcal{X}\) to craft poisoned samples and a malicious backdoor model \(f_{\theta^{*}_{m}}\) that injects the backdoor behavior into the global model such that: 1) the poisoned sample \(\mathcal{T}(x)\) carries an imperceptible perturbation, ensuring that we do not introduce distribution divergence between the clean and backdoor datasets; 2) the injected global classifier performs indistinguishably from its vanilla version on a test input \(x\) but changes its prediction on the poisoned image \(\mathcal{T}(x)\) to the target class \(\eta(y)\); 3) the latent representation of the backdoor sample \(\mathcal{T}(x)\) is similar to that of its benign input \(x\). Inspired by recent works on learning trigger functions for backdoor attacks [11; 15; 34; 64], we propose to jointly learn \(\mathcal{T}(\cdot)\) and poison \(f_{\theta}\) via the following constrained optimization:
\[\begin{split}\min_{\theta}\ &\sum_{(x,y)\in D^{cln}}\mathcal{L}(f_{\theta}(x),y)+\sum_{(x,y)\in D^{bd}}\mathcal{L}(f_{\theta}(\mathcal{T}_{\xi^{*}(\theta)}(x)),\eta(y))\\ s.t.\ &(i)\ \ \xi^{*}=\operatorname*{argmin}_{\xi}\sum_{(x,y)\in D^{bd}}\mathcal{L}(f_{\theta}(\mathcal{T}_{\xi}(x)),\eta(y))\\ &(ii)\ \ d(\mathcal{T}_{\xi}(x),x)\leq\epsilon\end{split}\tag{1}\]
where \(d\) is a distance measurement function, \(\epsilon\) is a constant scalar threshold that enforces a small perturbation via an \(l_{2}\)-norm constraint, and \(\xi\) denotes the parameters of the trigger function \(\mathcal{T}(\cdot)\). In the above bilevel problem, we optimize a generative trigger function \(\mathcal{T}_{\xi^{*}}\) that is associated with an optimally malicious classifier. The poisoning training finds the optimal parameters \(\theta\) of the malicious classifier that minimize the linear combination of the benign and backdoor objectives. Meanwhile, the generative trigger function is trained to manipulate poisoned samples with imperceptible perturbation while finding the optimal trigger that causes misclassification to the target label. The optimization in Equation (1) is challenging in the FL scenario because the target classification model \(f_{\theta}\) varies in each iteration and the constraint is non-linear, so the learned trigger function \(\mathcal{T}_{\xi}\) is unstable under the dynamic \(f_{\theta}\). For the optimization, we consider two steps, learning the trigger generator and poisoning training, and execute these steps sequentially (not alternately) to optimize \(f_{\theta}\) and \(\mathcal{T}_{\xi}\). The details are depicted in Algorithm 1 (see Appendix A.2 for more optimization details).
### FTA Trigger Function
We train \(\mathcal{T}_{\xi}\) based on a generative model \(g_{\xi}\), i.e., our FTA trigger generator. Following the philosophy of generative trigger technology [15; 64], we design our trigger function to guarantee that: 1) the perturbation of the poisoned sample is imperceptible; 2) the trigger generator can learn the input-domain features of the target label to fool the global model. Given a benign image \(x\) and the corresponding label \(y\), we formally model \(\mathcal{T}_{\xi}\) with restricted perturbation as follows:
\[\mathcal{T}_{\xi}(x)=x+g_{\xi}(x),\quad\|g_{\xi}(x)\|_{2}\leq\epsilon\quad \forall x,\quad\eta(y)=c, \tag{2}\]
where \(\xi\) denotes the learnable parameters of the FTA trigger generator and \(\epsilon\) is the trigger norm bound constraining the generated trigger norm. We use the same neural network architecture as [15] to build our trigger generator \(g_{\xi}\), i.e., an autoencoder or a more complex U-Net structure [41]. The \(l_{2}\)-norm of the imperceptible trigger noise generated by \(g_{\xi}\) is strictly limited within \(\epsilon\) via \(\frac{g_{\xi}(x)}{\max(1,\|g_{\xi}(x)\|_{2}/\epsilon)}\). Note that, under Equation (2), the distance \(d\) in Equation (1) is the \(l_{2}\)-norm in image-pixel space between \(\mathcal{T}_{\xi}(x)\) and \(x\).
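As an illustration of the norm-bounded trigger function \(\mathcal{T}_{\xi}(x)=x+g_{\xi}(x)\), the following sketch implements the per-sample \(l_{2}\) projection; the toy convolutional generator stands in for the autoencoder/U-Net of [15] and is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FlexibleTrigger(nn.Module):
    """T_xi(x) = x + g_xi(x) with a per-sample l2 bound ||g_xi(x)||_2 <= eps."""

    def __init__(self, channels=3, eps=2.0):
        super().__init__()
        self.eps = eps
        # Toy convolutional generator standing in for the autoencoder/U-Net.
        self.g = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        noise = self.g(x)
        # Projection g(x) / max(1, ||g(x)||_2 / eps), applied per sample.
        norms = noise.flatten(1).norm(p=2, dim=1).clamp(min=1e-12)
        noise = noise / torch.clamp(norms / self.eps, min=1.0).view(-1, 1, 1, 1)
        return x + noise
```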
### FTA's Optimization
To address the non-convex and constrained optimization in Equation (1), one may consider alternately updating \(f_{\theta}\) while keeping \(\mathcal{T}_{\xi}\) unchanged, and vice versa. However, in our trials we found that such alternate updating makes the training process unstable and harms the backdoor performance. Inspired by [16; 15], we divide the local malicious training into two phases. In phase one, we fix the classification model \(f_{\theta}\) and only learn the trigger function \(\mathcal{T}_{\xi}\). In phase two, we use the pre-trained \(\mathcal{T}_{\xi^{*}}\) to generate the poisoned dataset and train the malicious classifier \(f_{\theta}\). Since the number of poisoning epochs of a malicious agent is fairly small, \(f_{\theta}\) does not vary much during the poisoning training process, and the hidden features of samples of the target label extracted from \(f_{\theta}\) also remain similar; the pre-trained \(\mathcal{T}_{\xi^{*}}\) can therefore still match the final locally trained \(f_{\theta}\).
To make the flexible triggers generated by \(g_{\xi}\) adaptive to the global models of different rounds, \(g_{\xi}\) should be continuously trained. If a malicious agent is selected in more than one round of FL iterations, it keeps training the previously pre-trained \(g_{\xi}\) under the new global model, so that the flexible triggers produced by \(g_{\xi}\) match the hidden features of benign samples of the target label extracted from the new model.
**Input**: Clean dataset \(D_{cln}\), Global model \(f_{\theta}\), Learning rate of malicious model \(\gamma_{f}\) and trigger function \(\gamma_{\mathcal{T}}\), Batch of clean dataset \(B_{cln}\) and poisoned dataset \(B_{bd}\), Epochs to train trigger function \(e_{\mathcal{T}}\) and malicious model \(e_{f}\).
**Output**: Malicious model update \(\delta^{*}\).
```
1: Initialize the trigger function parameters \(\xi\); receive the global model \(f_{\theta}\) and store its parameters \(\theta_{g}\leftarrow\theta\).
2: Sample subset \(D_{bd}\) from \(D_{cln}\).
3: // Stage I: Update flexible \(\mathcal{T}\).
4: Sample minibatch \((x,y)\in B_{bd}\) from \(D_{bd}\)
5: for \(i=1,2,\cdots,e_{\mathcal{T}}\) do
6:   Optimize \(\xi\) by SGD with fixed \(f_{\theta}\) on \(B_{bd}\): \(\xi\leftarrow\xi-\gamma_{\mathcal{T}}\nabla_{\xi}\mathcal{L}(f_{\theta}(\mathcal{T}_{\xi}(x)),\eta(y))\)
7: end for
8: \(\xi^{*}\leftarrow\xi\)
9: // Stage II: Train malicious model \(f\).
10: Sample minibatch \((x,y)\in B_{cln}\) from \(D_{cln}\) and \((x_{m},y_{m})\in B_{bd}\) from \(D_{bd}\)
11: for \(i=1,2,\cdots,e_{f}\) do
12:   Optimize \(\theta\) by SGD with fixed \(\mathcal{T}_{\xi^{*}}\) on \(B_{cln}\), \(B_{bd}\): \(\theta\leftarrow\theta-\gamma_{f}\nabla_{\theta}(\mathcal{L}(f_{\theta}(x),y)+\mathcal{L}(f_{\theta}(\mathcal{T}_{\xi^{*}}(x_{m})),\eta(y_{m})))\)
13: end for
14: \(\theta^{*}\leftarrow\theta\)
15: Compute the malicious update relative to the received global model: \(\delta^{*}\leftarrow\theta^{*}-\theta_{g}\)
```
**Algorithm 1** FTA Backdoor Attack
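A PyTorch-style rendering of Algorithm 1 is sketched below. Names, hyperparameter defaults and data-loading conventions are our assumptions; the sketch also makes the update explicit by storing the received global parameters before local training.

```python
import copy
import torch
import torch.nn.functional as F

def fta_local_update(global_model, trigger, d_cln, d_bd, target_label,
                     e_T=5, e_f=2, lr_T=1e-3, lr_f=1e-2):
    """Two-stage local training of a malicious agent (Algorithm 1 sketch).

    Stage I freezes the classifier and trains the trigger generator;
    Stage II freezes the trigger and poison-trains the classifier.
    d_cln / d_bd are iterables of minibatches (hypothetical layout).
    """
    f = copy.deepcopy(global_model)              # theta starts from the global model
    theta_g = [p.detach().clone() for p in f.parameters()]

    opt_T = torch.optim.SGD(trigger.parameters(), lr=lr_T)
    for _ in range(e_T):                         # Stage I: learn T_xi, f fixed
        for x, _ in d_bd:
            y_t = torch.full((x.size(0),), target_label, dtype=torch.long)
            loss = F.cross_entropy(f(trigger(x)), y_t)
            opt_T.zero_grad(); loss.backward(); opt_T.step()

    opt_f = torch.optim.SGD(f.parameters(), lr=lr_f)
    for _ in range(e_f):                         # Stage II: poison-train f, T fixed
        for (x, y), (xm, _) in zip(d_cln, d_bd):
            y_t = torch.full((xm.size(0),), target_label, dtype=torch.long)
            with torch.no_grad():
                xm_p = trigger(xm)               # poisoned inputs, trigger frozen
            loss = F.cross_entropy(f(x), y) + F.cross_entropy(f(xm_p), y_t)
            opt_f.zero_grad(); loss.backward(); opt_f.step()

    # Malicious update relative to the received global parameters
    return [p.detach() - p0 for p, p0 in zip(f.parameters(), theta_g)]
```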
## 4 Attack Evaluation
We show that FTA outperforms the current SOTA attacks (under robust FL defenses) by conducting experiments on different computer vision tasks.
### Experimental Setup
**Datasets and Models.** We demonstrate the effectiveness of the FTA backdoor through comprehensive experiments on four publicly available datasets, namely Fashion-MNIST, FEMNIST, CIFAR-10, and Tiny-ImageNet. The classification models used in the experiments include classic CNN models, VGG11 [45], and ResNet18 [19]. These datasets and models are representative and commonly used in existing backdoor and FL research. An overview of our models is given in Appendix A.6.
**Tasks.** There are four computer vision tasks in total, each using a different dataset, classification model, and trigger generator. The details are depicted in Appendix A.3.
**Attack Settings.** As in [62], we assume that the attacker can only compromise a limited number of agents (fewer than 1%) in practice [43] and uses them to launch the attack by uploading manipulated gradients to the server. Malicious agents can only participate in a constrained number of training rounds in FL settings. Even with these restrictions, our attack remains effective, stealthy and robust against defenses (see Figures 3 and 4). Moreover, the effect of the attack should last even after the attacker stops attacking under robust FL aggregators (see Figure 7 in Appendix A.4.1). We test the stealthiness and durability of FTA in two attack modes, fixed-frequency and few-shot, as in [62].
**Fixed-frequency mode.** The server randomly chooses 10 agents in each round. In every round in which the attacker participates, it controls exactly one of the chosen agents; in all other rounds, the 10 chosen agents are benign.
**Few-shot mode.** The attacker participates only in **Attack_num** rounds. During these rounds, we ensure that one malicious agent is selected for training. After Attack_num rounds, or once backdoor accuracy reaches 95%, the attack stops. Under this setting, the attack takes effect quickly and is then gradually weakened by benign updates after it stops. In our experiments, Attack_num is 100 for all attacks, and the total number of FL rounds is 1,000 for CIFAR-10 and 500 for the other datasets.
**Evaluation Metrics.** We evaluate performance based on backdoor accuracy (BA) and benign accuracy according to the following criteria: effectiveness and stealthiness against current SOTA defense methods under fixed-frequency mode, and durability under few-shot mode.
**Comparison.** We compare FTA with three SOTA attacks, namely DBA, Neurotoxin and Edge-case [48], and the baseline attack method described in [62], under different settings and defenses. The results demonstrate that FTA delivers the best performance compared to the others.
Due to the space limit, we put more experimental setup details in Appendix A.7.
### Attack Effectiveness
**Attack effectiveness under fixed-frequency mode.** Compared to attacks with universal triggers, FTA converges much faster and delivers the best BA in all cases; see Figure 3. It yields high backdoor accuracy on the server model within very few rounds (fewer than 50) and maintains above 97% accuracy on average. In Tiny-ImageNet especially, FTA reaches 100% accuracy extremely fast, with at least a 25% advantage over the others. In CIFAR-10, FTA achieves nearly 83% BA after 50 rounds, which is 60% higher than the other attacks on average. There is only a small (below 5%) BA gap between FTA and Edge-case on FEMNIST in the beginning, and they reach the same BA after 100 rounds. We note that the backdoor task of Edge-case on FEMNIST is relatively easy, mapping 7-like images to the target label of digit "1", which makes its convergence slightly faster than ours.

Figure 3: Fixed-frequency attack performance under FedAvg. FTA is more effective than others.
**Attack effectiveness under few-shot mode.** Of independent interest, we test the durability of the attacks in this setting. Due to the space limit, please see Appendix A.4.1 for more details.
**Influence on Benign Accuracy and Computational Cost.** We include all benign accuracy results across tasks in Appendix A.5. Like other SOTA attacks, FTA has a minor effect (no more than 1.5%) on benign accuracy. Our attack does not significantly increase the computational and time cost, thanks to our optimization procedure (see Appendix A.10 for details).
### Stealthiness against Defensive Measures
We test the stealthiness (**P1-2**) and robustness of FTA and other attacks against the eight SOTA robust FL defenses introduced in Appendix A.1.3, such as norm clipping and FLAME, under fixed-frequency scenarios. All four tasks are involved in this defense evaluation. The results (see Figure 4) show that FTA can break the listed defenses. Beyond this, we also evaluate different tasks on Multi-Krum, Trimmed-mean, RFA, SignSGD, Foolsgold and SparseFed; FTA maintains its stealthiness and robustness under these defenses. We put the results of the compared attacks under these defenses in Appendix A.4.
#### 4.3.1 Resistance to Vector-wise Scaling
We use norm clipping as the vector-wise scaling defense, which is regarded as a potent defense and has proven effective in mitigating prior attacks [43]. On the server side, norm clipping is applied to all updates before performing FedAvg. Inspired by [33], we utilize a variant of this method in our experiments. As introduced in Appendix A.3, if we begin the attack from scratch, the norms of benign updates are unstable and keep fluctuating, making it hard to set a fixed norm bound for all updates. We therefore filter out the largest and smallest updates, compute the average norm magnitude of the remaining updates, and set it as the norm bound for the current FL iteration.
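The following sketch shows our reading of this clipping variant; `adaptive_norm_clip` and its data layout (a list of per-agent parameter lists) are hypothetical.

```python
import torch

def adaptive_norm_clip(updates):
    """Clip every update to the mean l2 norm of the non-extreme updates.

    updates: list of per-agent updates, each a list of tensors (assumed
    layout). The largest and smallest updates are excluded from the average.
    """
    norms = torch.tensor([torch.cat([p.flatten() for p in u]).norm()
                          for u in updates])
    keep = torch.argsort(norms)[1:-1]          # drop the extremes
    bound = norms[keep].mean()
    clipped = []
    for u, n in zip(updates, norms):
        scale = min(1.0, (bound / n).item())
        clipped.append([p * scale for p in u])
    return clipped
```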
As shown in Figure 4 (a)-(d), this variant of norm clipping effectively undermines prior attacks on Fashion-MNIST, CIFAR-10, and Tiny-ImageNet. It fails on FEMNIST because benign updates there have a larger norm (for example, 1.2 on FEMNIST at round 10, but only 0.3 on Fashion-MNIST), which prevents effective clipping of the malicious updates and results in a higher BA for existing attacks. FTA provides the best BA and is less affected by clipping than the others: it needs a much smaller norm to effectively fool the global model. Although converging a bit more slowly on FEMNIST, FTA finally achieves similar performance (above 98%) to the others.
Figure 4: Attack stealthiness against defenses. (a)-(d): The variant of norm clipping; (e)-(h): FLAME.
#### 4.3.2 Resistance to Cluster-based Filtering
The cluster-based filtering defense we test is FLAME [33], which has demonstrated effectiveness in mitigating SOTA attacks against FL. It mainly applies the HDBSCAN clustering algorithm to the cosine similarities between all updates and filters out the updates least similar to the rest. In Figure 4 (e)-(h), we see that FLAME can effectively sieve out the malicious updates of other attacks on Fashion-MNIST and CIFAR-10, but is relatively weak on FEMNIST and Tiny-ImageNet. This is because the data distribution among agents is highly non-i.i.d.: the cosine similarity between benign updates is naturally low, so a malicious update can possibly evade the clustering filter.
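A simplified stand-in for this filtering step is sketched below; it keeps only the majority HDBSCAN cluster of \(l_{2}\)-normalized update vectors and omits FLAME's additional clipping and noising, so it is illustrative rather than a faithful FLAME implementation.

```python
import numpy as np
import hdbscan  # pip install hdbscan

def majority_cluster_filter(updates, min_cluster_size=None):
    """Keep only updates in the largest HDBSCAN cluster of l2-normalized
    update vectors (Euclidean distance on unit vectors is monotone in
    cosine distance). updates: list of 1-D numpy arrays."""
    flat = np.stack(updates)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    mcs = min_cluster_size or max(2, len(updates) // 2 + 1)
    labels = hdbscan.HDBSCAN(min_cluster_size=mcs).fit_predict(flat)
    if not (labels >= 0).any():               # no cluster found: keep all
        return updates
    majority = np.bincount(labels[labels >= 0]).argmax()
    return [u for u, l in zip(updates, labels) if l == majority]
```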
Similar to the result for Multi-Krum (see Appendix A.4.2), FTA achieves \(>\)99% BA and finishes converging within 50 rounds on CIFAR-10 and Tiny-ImageNet, while delivering an acceptable accuracy degradation of less than 20% on Fashion-MNIST. On FEMNIST, FTA converges slightly more slowly than the baseline and Neurotoxin but eventually maintains a similar accuracy with only a 2% difference. The result shows that FTA enforces malicious updates to have high cosine similarity with benign updates, for the same reason as in Appendix A.4.2, so that it can bypass defenses based on update similarity.
### Explanation via Feature Visualization by t-SNE
We use a t-SNE [47] visualization on CIFAR-10 to illustrate why FTA is stealthier than attacks without "flexible" triggers. We select 1,000 images uniformly from the different classes, choose another 100 images randomly from the dataset, and add triggers to the latter (the patch-based "square" trigger for the baseline method, flexible triggers for FTA). To analyze the hidden features of these samples, we use two global poisoned models, injected by the baseline attack and FTA respectively, and take the output of the last convolutional layer for each sample as its feature representation. We then apply dimensionality reduction and cluster the latent representations with t-SNE. From Figure 5 (a)-(b), we see that in the baseline, the clusters of images of the target label "7" and of the poisoned images are clearly separated, so the parameters responsible for backdoor routing must adjust to map the hidden representations of poisoned images to the target label. In FTA, the hidden features of poisoned data overlap with those of benign data of the target label, which eliminates the anomaly in **feature extraction (P1)**. FTA can reuse the benign routing in the FC layers for the backdoor task, resulting in much less abnormality in **backdoor routing (P2)**; the malicious updates are thus more similar to benign ones (see Figure 5 (c)-(d)), yielding natural parameter stealthiness.
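For reference, the feature-extraction step of this visualization can be sketched as follows; the forward-hook approach and function names are ours.

```python
import torch
from sklearn.manifold import TSNE

def tsne_of_layer_features(model, feature_layer, images, perplexity=30):
    """Embed the given layer's activations in 2-D with t-SNE.

    feature_layer is the module treated as the representation (e.g. the
    last convolutional layer); a forward hook records its output.
    """
    feats = []
    handle = feature_layer.register_forward_hook(
        lambda mod, inp, out: feats.append(out.flatten(1).detach().cpu()))
    with torch.no_grad():
        model(images)
    handle.remove()
    x = torch.cat(feats).numpy()
    return TSNE(n_components=2, perplexity=perplexity).fit_transform(x)
```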
### Ablation Study in FTA Attack
We here analyze several hyperparameters that are critical to FTA's performance.
**Trigger Size.** This refers to the \(l_{2}\)-norm bound of the trigger generated by the generator, corresponding to \(\epsilon\) in Algorithm 1. If the size is set too large, the poisoned image can easily be distinguished by human inspection in the test/evaluation stage (i.e., no stealthiness). If we set it too small, the trigger occupies a low proportion of the features in the input domain, so the global model has difficulty catching and learning the features of the trigger pattern, resulting in a drop in attack performance.

Figure 5: (a)-(b): t-SNE visualization of hidden features of input samples in Fashion-MNIST. The hidden features between poisoned and benign samples of the target label are indistinguishable in the FTA framework. (c)-(d): Similarity comparison between benign & malicious updates. FTA's malicious updates are more similar to benign updates than the baseline attack's.
In Figure 6, the trigger size significantly influences the attack performance in all tasks. The accuracy of FTA drops sharply and eventually approaches 0% as we keep decreasing the trigger size, as can be seen on CIFAR-10, FEMNIST, and Tiny-ImageNet.
A sample-specific trigger with an \(l_{2}\)-norm bound of 2 is indistinguishable under human inspection on CIFAR-10 and Tiny-ImageNet (see Figure 12 in Appendix A.7.1), while on Fashion-MNIST and FEMNIST (images with black-and-white backgrounds) the additional noise can still be easily detected. Thus, a balance between visual stealthiness and effectiveness should be considered before launching FTA.
**Poison Fraction.** This is the fraction of poisoned training samples in the attacker's training dataset. A low poison fraction benefits the attack's stealthiness through less parameter abnormality and less influence on benign tasks, at the cost of slower attack convergence. Fortunately, we find that FTA still takes effect under a low poison fraction. The experimental results of FTA under different poison fractions are presented in Appendix A.8.
**Dataset Size of Trigger Generator.** Theoretically, if this dataset is small, the trigger generator cannot be properly trained, resulting in poor trigger quality and degraded attack performance. From Figure 13 (e)-(h) in Appendix A.8, we see that this concern is not crucial for FTA.
### Natural Stealthiness
We evaluate the natural stealthiness of our backdoor samples by SSIM [50] and LPIPS [60] to show that **P3** is well addressed by FTA's flexible triggers (see Appendix A.9 for experimental results).
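For reproducibility, the two metrics can be computed as sketched below, assuming a recent scikit-image (for `channel_axis`) and the `lpips` package; the exact preprocessing used in the paper may differ.

```python
import torch
import lpips                                   # pip install lpips
from skimage.metrics import structural_similarity as ssim

def stealth_metrics(x, x_bd):
    """SSIM [50] and LPIPS [60] between a benign image x and its poisoned
    version x_bd, both float tensors of shape (3, H, W) with values in [0, 1].
    Higher SSIM and lower LPIPS indicate better natural stealthiness."""
    s = ssim(x.permute(1, 2, 0).numpy(), x_bd.permute(1, 2, 0).numpy(),
             channel_axis=2, data_range=1.0)
    metric = lpips.LPIPS(net="alex")           # AlexNet-based perceptual metric
    # LPIPS expects inputs scaled to [-1, 1].
    d = metric(x.unsqueeze(0) * 2 - 1, x_bd.unsqueeze(0) * 2 - 1).item()
    return s, d
```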
## 5 Conclusion
We design an effective and stealthy backdoor attack against FL, called FTA, which learns an adaptive generator to produce imperceptible and flexible triggers, making poisoned samples carry hidden features similar to benign samples of the target label. FTA provides stealthiness and robustness by: making the hidden features of poisoned samples consistent with benign samples of the target label; reducing parameter abnormality during backdoor training; manipulating triggers with imperceptible perturbation for the training/testing stage; and learning the adaptive trigger generator across FL rounds to generate flexible triggers with the best performance. The empirical experiments demonstrate that FTA achieves practical performance while evading SOTA FL defenses. Due to the space limit, we present discussions on the proposed attack and experiments in Appendix A.11. We hope this work can inspire follow-up studies on more secure and robust FL aggregation algorithms.
Figure 6: Different trigger sizes on backdoor accuracy. |
Current backdoor attacks against federated learning (FL) rely heavily on universal triggers or semantic patterns, which can easily be detected and filtered by defense mechanisms such as norm clipping or comparing parameter divergences among local updates. This paper proposes a new stealthy and robust backdoor attack with flexible triggers that resists FL defenses. To this end, we build a generative trigger function that can manipulate benign samples with an imperceptible, flexible trigger pattern while embedding the most significant hidden features of the attacker-chosen label into the trigger pattern. Furthermore, the trigger generator keeps learning across different rounds and adapts to changes in the global model. By filling in the distinguishable difference between the trigger pattern and the target label, we make the attack naturally stealthy. Extensive experiments on real-world datasets verify the effectiveness and stealthiness of our attack compared to prior attacks on decentralized learning frameworks with eight well-studied defenses.
2309.07648 | Incorporating Class-based Language Model for Named Entity Recognition in
Factorized Neural Transducer | Despite advancements of end-to-end (E2E) models in speech recognition, named
entity recognition (NER) is still challenging but critical for semantic
understanding. Previous studies mainly focus on various rule-based or
attention-based contextual biasing algorithms. However, their performance might
be sensitive to the biasing weight or degraded by excessive attention to the
named entity list, along with a risk of false triggering. Inspired by the
success of the class-based language model (LM) in NER in conventional hybrid
systems and the effective decoupling of acoustic and linguistic information in
the factorized neural Transducer (FNT), we propose C-FNT, a novel E2E model
that incorporates class-based LMs into FNT. In C-FNT, the LM score of named
entities can be associated with the name class instead of its surface form. The
experimental results show that our proposed C-FNT significantly reduces error
in named entities without hurting performance in general word recognition. | Peng Wang, Yifan Yang, Zheng Liang, Tian Tan, Shiliang Zhang, Xie Chen | 2023-09-14T12:14:49 | http://arxiv.org/abs/2309.07648v2 | Incorporating Class-Based Language Model for Named Entity Recognition in Factorized Neural Transducer
###### Abstract
In spite of the excellent strides made by end-to-end (E2E) models in speech recognition in recent years, named entity recognition is still challenging but critical for semantic understanding. To enhance the ability of E2E models to recognize named entities, previous studies mainly focus on various rule-based or attention-based contextual biasing algorithms. However, their performance might be sensitive to the biasing weight or degraded by excessive attention to the named entity list, along with a risk of false triggering. Inspired by the success of the class-based language model (LM) for named entity recognition in conventional hybrid systems and the effective decoupling of acoustic and linguistic information in the factorized neural Transducer (FNT), we propose a novel E2E model to incorporate class-based LMs into FNT, which is referred to as C-FNT. In C-FNT, the language model score of named entities can be associated with the name class instead of its surface form. The experimental results show that our proposed C-FNT presents significant error reduction in named entities without hurting performance in general word recognition.
Peng Wang\({}^{2}\), Yifan Yang\({}^{1}\), Zheng Liang\({}^{1}\), Tian Tan\({}^{1}\), Shiliang Zhang\({}^{3}\), Xie Chen\({}^{1}\)\({}^{\dagger}\)

\({}^{1}\)MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Department of Computer Science and Engineering, Shanghai Jiao Tong University

\({}^{2}\)Key Lab of Speech Acoustics and Content Understanding, Institute of Acoustics, CAS, China

\({}^{3}\)Speech Lab, Alibaba Group, China

**Index Terms:** named entity recognition, factorized neural Transducer, class-based language model, beam search
## 1 Introduction
End-to-end (E2E) models have become the de facto mainstream in automatic speech recognition (ASR) due to their simplicity and promising performance [1, 2, 3]. In ASR systems, the ability to recognize named entities, especially person names, is crucial for the semantic understanding of various downstream tasks. However, the joint optimization of acoustic and linguistic information in E2E models makes it difficult to recognize long-tail but semantically critical named entities. This issue can be roughly attributed to two causes. One is the linguistic mismatch, as it is infeasible to cover all possible named entities in the training data. E2E models perform speech recognition relying heavily on the training data and are inclined to assign low probability to these long-tail or unseen named entities [4, 5, 6]. The other is the acoustic or pronunciation mismatch. Modern E2E systems normally adopt subwords derived from spellings, such as BPE and SentencePiece [7, 8]. This is proven to work well for common words in English but is found to perform poorly for some named entities, especially foreign names [9]. In this paper, we mainly focus our discussion and research scope on the linguistic mismatch and leave the acoustic mismatch for future work.
There are consistent and active research efforts on improving named entity recognition in E2E models. Typically, an entity name list is prepared in advance and treated as contextual information. A popular practice is to apply various rule-based [10, 11, 12] or attention-based [13, 14, 15] contextual biasing to facilitate named entity recognition. Rule-based contextual biasing simply adds an extra biasing weight when a named entity is spotted in the partial recognition result. By contrast, the attention-based contextual biasing approach applies an attention mechanism over the named entity list, which implicitly boosts the probability of the named entities in the given list. According to the results reported in the literature [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], the performance of named entity recognition can be improved significantly by applying contextual biasing. However, the performance might be sensitive to the biasing weight and degraded by excessive attention to the named entity list. There are also some other attempts to enhance named entity recognition in E2E models, such as a post-processing speller [23] and adding name tags [9] to the target transcription during training.
Recalling named entity recognition in conventional hybrid systems, where the acoustic model and language model are trained separately to capture acoustic and linguistic information respectively, class-based language models [24, 25] can be applied to compute the gross LM probability of a specific class, e.g., person names. This offers an elegant and theoretically sound way to compute the LM probability of the named entity class instead of its surface form. However, in E2E models, the acoustic and linguistic information is fused to predict the next word jointly. As a result, there is no explicit language model component in standard E2E models [26, 27], which hinders the direct use of class-based LMs for named entity recognition. Recently, the factorized neural Transducer (FNT) [28] was proposed to decouple the acoustic and linguistic information by introducing a standalone LM into the E2E model. In FNT, significant and consistent WER improvements can be achieved by improving the standalone LM on text data.
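As a toy illustration of this idea, the sketch below scores a token sequence with a class-based LM in which every person name is replaced by a class token; the scoring interface and the uniform in-class distribution are our simplifications, not the paper's method.

```python
import math

def class_based_logprob(tokens, lm_logprob, name_set, class_token="<name>"):
    """Score a token sequence with a class-based LM: a person name
    contributes log P(<name> | history) + log P(name | <name>), so an
    unseen name is not penalized by its surface form. lm_logprob(history,
    token) is a stand-in for any base LM; P(name | class) is uniform here.
    """
    logp, history = 0.0, []
    in_class = -math.log(len(name_set))
    for tok in tokens:
        if tok in name_set:
            logp += lm_logprob(history, class_token) + in_class
            history.append(class_token)
        else:
            logp += lm_logprob(history, tok)
            history.append(tok)
    return logp
```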
Inspired by the success of class-based LM in hybrid systems and effective information decoupling in the factorized neural Transducer, we propose a novel E2E model, C-FNT, to integrate the factorized neural Transducer and class-based LMs. As a result, C-FNT performs named entity speech recognition in a similar way as hybrid systems with class-based LM while maintaining the key advantages of E2E models. The experimental results demonstrate that our proposed C-FNT presents significant error reduction in named entities without hurting performance in general word recognition. It is also worth noting that, in this paper, we mainly focus on the person names as a case study for named entity recognition and choose the neural Transducer as the E2E model, given its popularity and excellent per | Despite end-to-end (E2E) speech recognition model advances, named entity recognition (NER) remains a challenge but critical for semantic understanding. Previous studies mainly focused on rule-based or attention-based contextual biasing algorithms. However, their performance may be sensitive to the biasing weight or degraded by excessive attention to the named entity list, along with a risk of false triggering. Inspired by the success of the class-based language model (LM) in NER in conventional hybrid systems and the effective decoupling of acoustic and linguistic information in the factorized neural transducer (FNT), we propose C-FNT, a novel E2E model that incorporates class-based LMs into FNT. In C-FNT, the LM score of named entities can be associated with the name class instead of its surface form. The experimental results show that our proposed C-FNT significantly reduces errors in named entities without hurting performance in general word recognition. |
2309.10334 | Information geometric bound on general chemical reaction networks | We investigate the dynamics of chemical reaction networks (CRNs) with the
goal of deriving an upper bound on their reaction rates. This task is
challenging due to the nonlinear nature and discrete structure inherent in
CRNs. To address this, we employ an information geometric approach, using the
natural gradient, to develop a nonlinear system that yields an upper bound for
CRN dynamics. We validate our approach through numerical simulations,
demonstrating faster convergence in a specific class of CRNs. This class is
characterized by the number of chemicals, the maximum value of stoichiometric
coefficients of the chemical reactions, and the number of reactions. We also
compare our method to a conventional approach, showing that the latter cannot
provide an upper bound on reaction rates of CRNs. While our study focuses on
CRNs, the ubiquity of hypergraphs in fields from natural sciences to
engineering suggests that our method may find broader applications, including
in information science. | Tsuyoshi Mizohata, Tetsuya J. Kobayashi, Louis-S. Bouchard, Hideyuki Miyahara | 2023-09-19T05:37:13 | http://arxiv.org/abs/2309.10334v1 | # Information geometric bound on general chemical reaction networks
###### Abstract
We investigate the dynamics of chemical reaction networks (CRNs) with the goal of deriving an upper bound on their reaction rates. This task is challenging due to the nonlinear nature and discrete structure inherent in CRNs. To address this, we employ an information geometric approach, using the natural gradient, to develop a nonlinear system that yields an upper bound for CRN dynamics. We validate our approach through numerical simulations, demonstrating faster convergence in a specific class of CRNs. This class is characterized by the number of chemicals, the maximum value of stoichiometric coefficients of the chemical reactions, and the number of reactions. We also compare our method to a conventional approach, showing that the latter cannot provide an upper bound on reaction rates of CRNs. While our study focuses on CRNs, the ubiquity of hypergraphs in fields from natural sciences to engineering suggests that our method may find broader applications, including in information science.
## I Introduction
Over the past three decades, extensive research has been dedicated to understanding stochastic and information thermodynamics, specifically focusing on bounds related to entropy production and various physical quantities [1; 2; 3; 4]. This trajectory persists, with newer studies shedding light on thermodynamic uncertainty relations [5; 6; 7; 8; 9] and establishing thermodynamic bounds on cross-correlations [10; 11; 12]. Parallel to the work on physical systems, researchers have also explored bounds in non-physical realms such as biological systems. For example, limits concerning population growth have been studied [13; 14; 15; 16; 17; 18].
Recent studies have unveiled the geometric structure of chemical reaction networks (CRNs) and have also extended these concepts to the domain of general hypergraphs [19; 20; 21]. Concurrently, topological analyses on CRNs and hypergraphs have been performed [22; 23; 24; 25]. Despite these advancements, the intrinsic nonlinearity in CRNs presents a significant challenge for elucidating specific properties, leaving gaps in our understanding.
Information geometry offers a framework that applies differential geometry to probability distributions, facilitating the exploration of their geometric structures [26; 27]. Among its significant contributions is the concept of the natural gradient (NG) [28], which has demonstrated effectiveness in optimization problems, particularly in the realm of machine learning. Additional studies have ventured into the acceleration of information gradient flows [29] and have investigated the biological significance of gradient flows [30]. Research has also extended to constraints involving rates of statistical divergences and mutual information [31]. These diverse applications underline the versatility of information geometry, which we leverage in this study.
In the present work, we explore the upper bound on reaction rates in general CRNs using NG. Initially, we present a geometrical description of CRN dynamics [20; 21]. Subsequently, we categorize CRNs according to the number of chemicals involved, the maximum coefficients in the reactions, and the total number of reactions. Utilizing this classification, we formulate a nonlinear system that provides an upper bound on reaction rates for a given class of CRNs. Through numerical simulations, we find that our system exhibits a steeper gradient, facilitating faster convergence. Importantly, this fast convergence minimizes the Kullback-Leibler (KL) divergence to zero. In contrast, conventional CRNs often maintain a nonzero KL divergence due to nontrivial equilibrium points. We also note that conventional methods are insufficient for achieving these results, underscoring the uniqueness of our approach.
The remainder of this paper is structured as follows. In Sec. II, we furnish an overview of CRNs. Section III elucidates the challenges of establishing an upper bound on CRNs using Newton's method. Section IV is dedicated to explaining NG. In Sec. V, we introduce a dynamical system that serves as an upper bound for CRNs in a specified class. Numerical simulations are presented in Sec. VI. In Sec. VII, we provide discussions on our findings. The paper concludes with Sec. VIII.
Chemical reaction networks
In this section, the primary aim is to formulate the geometric representation of the dynamical equations governing CRNs, as delineated in [20; 21]. We commence by presenting the standard notation for hypergraphs and CRNs. Subsequently, we elucidate the dynamics intrinsic to CRNs, as well as concepts of Legendre duality and detailed balance. These elements are then combined to construct the geometric expression of CRN dynamics.
### Definition of CRNs
We begin with a hypergraph \((\mathbb{V},\mathbb{E})\), where \(\mathbb{V}\coloneqq\{\mathsf{v}_{i}\}_{i=1}^{N_{\mathsf{v}}}\) and \(\mathbb{E}\coloneqq\{\mathsf{e}_{e}\}_{e=1}^{N_{\mathsf{e}}}\), since a hypergraph provides a mathematical framework to describe a chemical reaction. Suppose that a CRN of interest involves \(N_{\mathsf{X}}\) chemicals, denoted as \(\mathbb{X}_{1},\mathbb{X}_{2},\ldots,\mathbb{X}_{N_{\mathsf{X}}}\). In the case of a CRN, each hypervertex \(\mathsf{v}_{i}\) is composed of a combination of the chemicals \(\mathbb{X}_{1},\mathbb{X}_{2},\ldots,\mathbb{X}_{N_{\mathsf{X}}}\) and given by
\[\mathsf{v}_{i}\coloneqq\gamma_{1,i}\mathbb{X}_{1}+\gamma_{2,i}\mathbb{X}_{2}+\cdots+\gamma_{N_{\mathsf{X}},i}\mathbb{X}_{N_{\mathsf{X}}}. \tag{1}\]
Each hyperedge \(\mathsf{e}_{e}\) corresponds to a chemical reaction and is defined by a directed pair of two hypervertices \(\mathsf{e}_{e}\coloneqq(\mathsf{v}_{e}^{+},\mathsf{v}_{e}^{-})\), which can be expressed as
\[\alpha_{1,e}\mathbb{X}_{1}+\alpha_{2,e}\mathbb{X}_{2}+\cdots+\alpha_{N_{\mathsf{X}},e}\mathbb{X}_{N_{\mathsf{X}}}\stackrel{\mathsf{e}_{e}}{\longrightarrow}\beta_{1,e}\mathbb{X}_{1}+\beta_{2,e}\mathbb{X}_{2}+\cdots+\beta_{N_{\mathsf{X}},e}\mathbb{X}_{N_{\mathsf{X}}}. \tag{2}\]
Here, \(\mathsf{v}_{e}^{\pm}\) are chosen from \(\{\mathsf{v}_{i}\}_{i=1}^{N_{\mathsf{v}}}\), and in Eq. (2), \(\mathsf{v}_{e}^{+}=\alpha_{1,e}\mathbb{X}_{1}+\alpha_{2,e}\mathbb{X}_{2}+\cdots+\alpha_{N_{\mathsf{X}},e}\mathbb{X}_{N_{\mathsf{X}}}\) and \(\mathsf{v}_{e}^{-}=\beta_{1,e}\mathbb{X}_{1}+\beta_{2,e}\mathbb{X}_{2}+\cdots+\beta_{N_{\mathsf{X}},e}\mathbb{X}_{N_{\mathsf{X}}}\). We also define the order of reaction as follows:
\[m\coloneqq\max_{i,e}\{\alpha_{i,e},\beta_{i,e}\}. \tag{3}\]
To characterize CRNs, \(m\) in Eq. (3) will play an important role.
When a CRN involves multiple chemical reactions, the description provided above may be inadequate. To describe a complex CRN, the stoichiometric matrix plays a crucial role. The stoichiometric matrix \(S\) is defined as an \(N_{\mathsf{X}}\times N_{\mathsf{e}}\) matrix and is given by
\[S\coloneqq[\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{N_{\mathsf{e}}}], \tag{4}\]
where, for \(e=1,2,\ldots,N_{\mathsf{e}}\),
\[\mathbf{s}_{e}\coloneqq\begin{bmatrix}\beta_{1,e}-\alpha_{1,e}\\ \beta_{2,e}-\alpha_{2,e}\\ \vdots\\ \beta_{N_{\mathsf{X}},e}-\alpha_{N_{\mathsf{X}},e}\end{bmatrix}. \tag{5}\]
That is, the \((j,e)\)-th element of \(S\) is given by \(s_{j,e}=\beta_{j,e}-\alpha_{j,e}\) for \(j=1,2,\ldots,N_{\mathsf{X}}\) and \(e=1,2,\ldots,N_{\mathsf{e}}\). In general, when a CRN involves multiple chemical reactions, the stoichiometric matrix provides a concise representation of the relationships between the reactants and products.
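As a concrete example (ours, not from the paper), the stoichiometric matrix of a two-reaction toy CRN can be assembled directly from the \(\alpha\) and \(\beta\) coefficients:

```python
import numpy as np

# Toy CRN with chemicals X1, X2, X3 and reactions
#   e1: 2 X1 -> X2,   e2: X1 + X2 -> X3.
# Rows index chemicals, columns index reactions; S follows Eq. (5).
alpha = np.array([[2, 1],   # reactant coefficients alpha_{j,e}
                  [0, 1],
                  [0, 0]])
beta = np.array([[0, 0],    # product coefficients beta_{j,e}
                 [1, 0],
                 [0, 1]])
S = beta - alpha
print(S)
# [[-2 -1]
#  [ 1 -1]
#  [ 0  1]]
```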
The stoichiometric matrix \(S\) is also expressed as \(S=-\Gamma B\). Here, \(B\in\{1,0,-1\}^{N_{\mathsf{v}}\times N_{\mathsf{e}}}\) is the incidence matrix whose \((i,e)\)-th element is given for \(i=1,2,\ldots,N_{\mathsf{v}}\) and \(e=1,2,\ldots,N_{\mathsf{e}}\) by
\[b_{i,e}\coloneqq\begin{cases}1&(\mathsf{v}_{i}\text{ is the head of hyperedge }\mathsf{e}_{e}\colon\mathsf{v}_{i}=\mathsf{v}_{e}^{+}),\\ -1&(\mathsf{v}_{i}\text{ is the tail of hyperedge }\mathsf{e}_{e}\colon\mathsf{v}_{i}= \mathsf{v}_{e}^{-}),\\ 0&(\text{otherwise}),\end{cases} \tag{6}\]
and \(\Gamma\in\mathbb{Z}_{\geq 0}^{N_{\mathsf{X}}\times N_{\mathsf{v}}}\) is given by
\[\Gamma\coloneqq[\mathbf{\gamma}_{1},\mathbf{\gamma}_{2},\ldots,\mathbf{\gamma}_{N_{ \mathsf{v}}}], \tag{7}\]
where, using \(\gamma_{1,i},\gamma_{2,i},\ldots,\gamma_{N_{\mathsf{X}},i}\) in Eq. (1), \(\mathbf{\gamma}_{i}\) is defined as
\[\mathbf{\gamma}_{i}\coloneqq[\gamma_{1,i},\gamma_{2,i},\ldots,\gamma_{N_{\mathsf{X}},i}]^{\intercal}, \tag{8}\]
for \(i=1,2,\ldots,N_{\mathsf{v}}\). Having defined the necessary variables to describe CRNs, we will now derive the equation that characterizes the dynamics of CRNs in the remainder of this section.
### Dynamics of CRNs
To analyze the dynamics of a CRN, we introduce fluxes associated with each hyperedge. Let \(j_{e}^{+}(\mathbf{x})\) and \(j_{e}^{-}(\mathbf{x})\) denote the currents from the head to the tail and from the tail to the head of hyperedge \(\mathsf{e}_{e}\), respectively, where \(\mathbf{x}\) is the chemical concentration vector. We define \(\mathbf{j}^{+}(\mathbf{x})\coloneqq[j_{1}^{+}(\mathbf{x}),j_{2}^{+}(\mathbf{x}),\ldots,j_{N_{ \mathsf{e}}}^{+}(\mathbf{x})]^{\intercal}\) and \(\mathbf{j}^{-}(\mathbf{x})\coloneqq[j_{1}^{-}(\mathbf{x}),j_{2}^{-}(\mathbf{x}),\ldots,j_{N_{ \mathsf{e}}}^{-}(\mathbf{x})]^{\intercal}\).
The law of mass action is widely observed to hold for CRNs and is considered one of the fundamental characteristics that differentiate CRNs from nonchemical hypergraphs. Based on this, we assume mass action kinetics for the forward and reverse reaction fluxes on hyperedge \(\mathsf{e}_{e}\) in Eq. (2):
\[j_{e}^{\pm}(\mathbf{x})=k_{e}^{\pm}\sum_{i=1}^{N_{\mathsf{v}}}b_{i,e}^{\pm}\prod_{j=1}^{N_{\mathsf{X}}}x_{j}^{\gamma_{j,i}}, \tag{9}\]
where, for \(i=1,2,\ldots,N_{\mathsf{v}}\) and \(e=1,2,\ldots,N_{\mathsf{e}}\),
\[b_{i,e}^{+}\coloneqq\max(b_{i,e},0), \tag{10}\] \[b_{i,e}^{-}\coloneqq-\min(b_{i,e},0), \tag{11}\]
and \(k_{e}^{\pm}\) are the reaction rate coefficients for the forward and backward currents on \(\mathsf{e}_{e}\). Expressed in vector notation, Eq. (9) can be written as
\[\mathbf{j}^{\pm}(\mathbf{x})=\mathbf{k}^{\pm}\circ(B^{\pm})^{\intercal}\mathbf{x}^{\Gamma^{\intercal}} \tag{12}\]
\[=\mathbf{k}^{\pm}\circ\mathbf{x}^{(\Gamma B^{\pm})^{\intercal}}, \tag{13}\]
where
\[B^{+}\coloneqq\max(B,\mathbb{0}), \tag{14}\]
\[B^{-}\coloneqq-\min(B,\mathbb{0}), \tag{15}\]
\[\mathbf{x}^{\Gamma^{\intercal}}\coloneqq[\mathbf{x}^{\boldsymbol{\gamma}_{1}},\mathbf{x}^{\boldsymbol{\gamma}_{2}},\ldots,\mathbf{x}^{\boldsymbol{\gamma}_{N_{\mathsf{v}}}}]^{\intercal}, \tag{16}\]
\[\mathbf{x}^{\boldsymbol{\gamma}_{i}}\coloneqq\prod_{j=1}^{N_{\mathsf{X}}}x_{j}^{\gamma_{j,i}}, \tag{17}\]
\[\mathbf{k}^{\pm}\coloneqq[k_{1}^{\pm},k_{2}^{\pm},\ldots,k_{N_{\mathsf{e}}}^{\pm}]^{\intercal}. \tag{18}\]
Here, \(\mathbb{0}\) represents the zero matrix, which has the same size as matrix \(B\). The functions \(\max(\cdot,\cdot)\) and \(\min(\cdot,\cdot)\) are applied elementwise, meaning that for each element \([A]_{i,j}\) and \([B]_{i,j}\), we have \([\max(A,B)]_{i,j}=\max([A]_{i,j},[B]_{i,j})\) and \([\min(A,B)]_{i,j}=\min([A]_{i,j},[B]_{i,j})\), respectively. The notation \([\cdot]_{i,j}\) represents the element located at the \(i\)-th row and \(j\)-th column. Moreover, the symbol \(\circ\) denotes the element-wise product, which is defined as follows:
\[\boldsymbol{x}\circ\boldsymbol{y}\coloneqq\begin{bmatrix}x_{1}y_{1}\\ x_{2}y_{2}\\ \vdots\\ x_{N_{\mathsf{x}}}y_{N_{\mathsf{x}}}\end{bmatrix}, \tag{19}\]
where \(\boldsymbol{x}\coloneqq[x_{1},x_{2},\ldots,x_{N_{\mathsf{x}}}]^{\mathsf{T}}\), \(\boldsymbol{y}\coloneqq[y_{1},y_{2},\ldots,y_{N_{\mathsf{x}}}]^{\mathsf{T}}\).
The chemical concentration vector \(\boldsymbol{x}_{t}\) at time \(t\) satisfies the chemical rate equation (CRE) given by [32; 33; 34]
\[\dot{\boldsymbol{x}}_{t}=S\boldsymbol{j}(\boldsymbol{x}_{t}), \tag{20}\]
where \(\boldsymbol{j}(\boldsymbol{x})\coloneqq\boldsymbol{j}^{+}(\boldsymbol{x})- \boldsymbol{j}^{-}(\boldsymbol{x})\).
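For concreteness, the following minimal NumPy sketch (our own illustration, not part of the referenced formulations) evaluates the mass-action fluxes of Eq. (9) and the CRE right-hand side of Eq. (20) from \(\Gamma\), \(B\), and the rate constants; the example at the bottom instantiates the reaction \(2\mathcal{X}_{1}\rightleftharpoons 3\mathcal{X}_{2}\) used later in Eq. (3.4), with assumed unit rate constants.

```python
import numpy as np

def mass_action_fluxes(x, Gamma, B, k_plus, k_minus):
    """One-sided fluxes of Eq. (9): j_e^± = k_e^± sum_i b_{i,e}^± prod_j x_j^{γ_{j,i}}."""
    B_plus = np.maximum(B, 0)                        # B^+, heads, Eq. (14)
    B_minus = -np.minimum(B, 0)                      # B^-, tails, Eq. (15)
    x_gamma = np.prod(x[:, None] ** Gamma, axis=0)   # monomial x^{γ_i} per hypervertex i
    return k_plus * (B_plus.T @ x_gamma), k_minus * (B_minus.T @ x_gamma)

def cre_rhs(x, Gamma, B, k_plus, k_minus):
    """Right-hand side of the CRE, Eq. (20), with S = -Γ B."""
    S = -Gamma @ B
    j_plus, j_minus = mass_action_fluxes(x, Gamma, B, k_plus, k_minus)
    return S @ (j_plus - j_minus)

# Example: 2X1 ⇌ 3X2, with hypervertices 2X1 (head) and 3X2 (tail).
Gamma = np.array([[2, 0], [0, 3]])
B = np.array([[1], [-1]])
print(cre_rhs(np.array([0.75, 1.375]), Gamma, B, np.ones(1), np.ones(1)))
```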
### Legendre duality of fluxes and forces
In the realm of physics, the relationship between fluxes and forces is commonly expressed through Legendre duality, a concept that describes how forces and fluxes are dual aspects of the same system. Their product results in entropy production, denoted as \(\langle\boldsymbol{j},\boldsymbol{f}\rangle\). In the context of chemical thermodynamics, we define the force on a hyperedge \(\mathsf{e}_{e}\) in a manner consistent with entropy production:
\[f_{e}(\boldsymbol{x})\coloneqq\frac{1}{2}\ln\frac{j_{e}^{+}(\boldsymbol{x})} {j_{e}^{-}(\boldsymbol{x})}, \tag{21}\]
for \(e=1,2,\ldots,N_{\mathsf{e}}\). The vector form of Eq. (21), denoted as
\[\boldsymbol{f}(\boldsymbol{x})\coloneqq[f_{1}(\boldsymbol{x}),f_{2}(\boldsymbol{x}),\ldots,f_{N_{\mathsf{e}}}(\boldsymbol{x})]^{\intercal},\]
can be expressed as
\[\boldsymbol{f}(\boldsymbol{x})=\frac{1}{2}\ln\frac{\boldsymbol{j}^{+}( \boldsymbol{x})}{\boldsymbol{j}^{-}(\boldsymbol{x})}, \tag{22}\]
where the division and the logarithmic function are computed elementwise.
We introduce a quantity called "frenetic activity" on hyperedge \(\mathsf{e}_{e}\) to describe the rate of change of the state of the system [20; 21]:
\[\omega_{e}(\boldsymbol{x})\coloneqq 2\sqrt{j_{e}^{+}(\boldsymbol{x})j_{e}^{-}( \boldsymbol{x})}, \tag{23}\]
for \(e=1,2,\ldots,N_{\mathsf{e}}\). The vector form of Eq. (23), denoted as \(\boldsymbol{\omega}(\boldsymbol{x})\coloneqq[\omega_{1}(\boldsymbol{x}), \omega_{2}(\boldsymbol{x}),\ldots,\omega_{N_{\mathsf{e}}}(\boldsymbol{x})]^{ \mathsf{T}}\), can be expressed as
\[\boldsymbol{\omega}(\boldsymbol{x})=2\sqrt{\boldsymbol{j}^{+}(\boldsymbol{x} )\circ\boldsymbol{j}^{-}(\boldsymbol{x})}. \tag{24}\]
Then, the following strictly convex smooth function \(\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}(\boldsymbol{f}(\boldsymbol{x}))\), called the dissipation function, establishes the Legendre duality between the force \(\boldsymbol{f}(\boldsymbol{x})\) in Eq. (22) and the flux \(\boldsymbol{j}(\boldsymbol{x})\):
\[\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}(\boldsymbol{f}(\boldsymbol{x}) )\coloneqq\boldsymbol{\omega}(\boldsymbol{x})^{\mathsf{T}}[\cosh( \boldsymbol{f}(\boldsymbol{x}))-\boldsymbol{1}], \tag{25}\]
where
\[\cosh(\boldsymbol{f}(\boldsymbol{x})) \coloneqq\begin{bmatrix}\cosh(f_{1}(\boldsymbol{x}))\\ \cosh(f_{2}(\boldsymbol{x}))\\ \vdots\\ \cosh(f_{N_{\mathsf{e}}}(\boldsymbol{x}))\end{bmatrix}, \tag{26}\] \[\boldsymbol{f}(\boldsymbol{x}) \coloneqq[f_{1}(\boldsymbol{x}),f_{2}(\boldsymbol{x}),\ldots,f_{N_ {\mathsf{e}}}(\boldsymbol{x})]^{\mathsf{T}},\] (27) \[\boldsymbol{1} \coloneqq[\underbrace{1,1,\ldots,1}_{N_{\mathsf{e}}}]^{\mathsf{T}}. \tag{28}\]
As a result we have 1
Footnote 1: We have used the following notation: \(\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}( \boldsymbol{f}(\boldsymbol{x}))=\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{ \omega}(\boldsymbol{x})}(\boldsymbol{f})|_{\boldsymbol{f}=\boldsymbol{f}( \boldsymbol{x})}\).
\[\boldsymbol{j}(\boldsymbol{x})=\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{ \omega}(\boldsymbol{x})}(\boldsymbol{f}(\boldsymbol{x})). \tag{29}\]
Note that
\[\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}( \boldsymbol{f}(\boldsymbol{x})) =\boldsymbol{\omega}(\boldsymbol{x})\circ\sinh(\boldsymbol{f}( \boldsymbol{x})) \tag{30}\] \[=\begin{bmatrix}\omega_{1}(\boldsymbol{x})\sinh(f_{1}(\boldsymbol{ x}))\\ \omega_{2}(\boldsymbol{x})\sinh(f_{2}(\boldsymbol{x}))\\ \vdots\\ \omega_{N_{\mathsf{e}}}(\boldsymbol{x})\sinh(f_{N_{\mathsf{e}}}(\boldsymbol{x})) \end{bmatrix}. \tag{31}\]
Combining Eqs. (20) and (29), we get
\[\dot{\boldsymbol{x}}_{t}=S\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{ \omega}(\boldsymbol{x}_{t})}(\boldsymbol{f}(\boldsymbol{x}_{t})). \tag{32}\]
While Eq. (32) is a well-defined differential equation, it lacks an explicit functional form for \(\boldsymbol{f}(\boldsymbol{x})\), thus limiting its predictive capability. The functional form of \(\boldsymbol{f}(\boldsymbol{x})\) based on thermodynamics and kinetics will be elaborated in the subsequent subsection.
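The Legendre structure above can be checked numerically. The following sketch (illustrative only, taking the one-sided fluxes as inputs) computes the force of Eq. (21) and the activity of Eq. (23), and verifies the identity \(\boldsymbol{\omega}\circ\sinh(\boldsymbol{f})=\boldsymbol{j}^{+}-\boldsymbol{j}^{-}\) implied by Eqs. (29)–(31).

```python
import numpy as np

def force_activity_flux(j_plus, j_minus):
    """Force (Eq. 21), activity (Eq. 23), and the flux recovered via Eq. (31)."""
    f = 0.5 * np.log(j_plus / j_minus)
    omega = 2.0 * np.sqrt(j_plus * j_minus)
    j = omega * np.sinh(f)          # algebraically identical to j_plus - j_minus
    return f, omega, j

# Quick check with assumed positive fluxes.
jp, jm = np.array([2.0, 0.5]), np.array([1.0, 1.5])
f, omega, j = force_activity_flux(jp, jm)
assert np.allclose(j, jp - jm)
```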
### Chemical reaction dynamics
Until this point, the discussion has centered on the general description of dynamics on hypergraphs. Going forward, the focus will be exclusively on CRNs. In the realm of chemical thermodynamics, it is a common assumption to employ mass action kinetics to describe reaction rates. Within this framework, a specific definition of force is accepted and widely used [20; 21; 32; 33]:
\[\mathbf{f}(\mathbf{x})=-\frac{1}{2}\bigg{(}S^{\intercal}\ln\mathbf{x}-\ln\frac{\mathbf{k}^{+} }{\mathbf{k}^{-}}\bigg{)}. \tag{33}\]
To clarify the geometric meaning of Eq. (33), we introduce the Bregman divergence \(\mathcal{D}_{\phi}(\mathbf{x}\|\mathbf{y})\) associated with potential \(\phi(\cdot)\)2:
Footnote 2: We have used the notation: \(\partial_{\mathbf{x}}\phi(\mathbf{y})=\partial_{\mathbf{x}}\phi(\mathbf{x})|_{\mathbf{x}=\mathbf{y}}\).
\[\mathcal{D}_{\phi}(\mathbf{x}\|\mathbf{y})\coloneqq\phi(\mathbf{x})-\phi(\mathbf{y})-\langle \mathbf{x}-\mathbf{y},\partial_{\mathbf{x}}\phi(\mathbf{y})\rangle. \tag{34}\]
The derivative of Eq. (34) is given by
\[\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}\|\mathbf{y})=\partial_{\mathbf{x}}\phi( \mathbf{x})-\partial_{\mathbf{x}}\phi(\mathbf{y}). \tag{35}\]
The KL divergence is Eq. (34) with the following potential 3:
Footnote 3: See Appendix A for detail.
\[\phi_{\text{KL}}(\mathbf{x})\coloneqq\sum_{i=1}^{N_{\text{K}}}x_{i}\ln x_{i}. \tag{36}\]
Then, the KL divergence is defined by \(\mathcal{D}_{\phi_{\text{KL}}}(\cdot\|\cdot)\coloneqq\mathcal{D}_{\text{KL}} (\cdot\|\cdot)\) and it reads
\[\mathcal{D}_{\text{KL}}(\mathbf{x}\|\mathbf{y})=\sum_{i=1}^{N_{\text{K}}}x_{i}\ln \frac{x_{i}}{y_{i}}-\sum_{i=1}^{N_{\text{K}}}x_{i}+\sum_{i=1}^{N_{\text{K}}} y_{i}, \tag{37}\]
and its derivative takes the following form:
\[\partial_{\mathbf{x}}\mathcal{D}_{\text{KL}}(\mathbf{x}\|\mathbf{y})=\begin{bmatrix}\ln x _{1}-\ln y_{1}\\ \ln x_{2}-\ln y_{2}\\ \vdots\\ \ln x_{N_{\text{K}}}-\ln y_{N_{\text{K}}}\end{bmatrix}. \tag{38}\]
Then, Eq. (33) is rewritten as
\[\mathbf{f}(\mathbf{x})=-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\text{ KL}}(\mathbf{x}\|\hat{\mathbf{x}})+\mathbf{f}_{\text{ne}}. \tag{39}\]
The definition of \(\hat{\mathbf{x}}\) will be given in the following subsection, and \(\mathbf{f}_{\text{ne}}\not\in\text{Im}[S^{\intercal}]\) represents the nonequilibrium force imposed on the system [19].
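The KL divergence of Eq. (37) and its gradient, Eq. (38), which enter the force of Eq. (39), can be written compactly as follows (a minimal sketch of ours, assuming strictly positive concentration vectors).

```python
import numpy as np

def kl_divergence(x, y):
    """Generalized KL divergence of Eq. (37) for x, y > 0."""
    return float(np.sum(x * np.log(x / y) - x + y))

def kl_gradient(x, y):
    """Elementwise gradient ∂_x D_KL(x‖y) of Eq. (38)."""
    return np.log(x) - np.log(y)
```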
Mass action kinetics also offers the following definitions of the flux and activity [20; 21; 32; 33]:
\[\mathbf{j}(\mathbf{x})=(\mathbf{k}^{+}\circ(B^{+})^{\intercal}-\mathbf{k}^{-}\circ(B^{-})^{ \intercal})\mathbf{x}^{\Gamma\intercal}. \tag{40}\]
Substituting Eq. (40) into Eq. (24), we also get the activity for CRNs:
\[\mathbf{\omega}(\mathbf{x})=2\sqrt{\mathbf{k}^{+}\circ\mathbf{k}^{-}}\circ\mathbf{x}^{R^{\intercal}/2}, \tag{41}\]
where
\[R\coloneqq\Gamma(B^{+}+B^{-}). \tag{42}\]
In the remaining part of this section, we will present the geometric expression of the equation for CRNs.
### Geometric expression of an equilibrium CRE
Up to this point, the discussion has centered on the geometric relationships that exist among the chemical concentration, potential, force, and flux in a CRN. Subsequently, the CRE specified in Eq. (20) can be reformulated into a geometric expression [32; 33; 34]. To accomplish this, the detailed balance condition (DBC) must be taken into account. The DBC, a criterion for the dynamic stability of a system at equilibrium, is given by [20; 21]
\[\ln\frac{\mathbf{k}^{+}}{\mathbf{k}^{-}}=S^{\intercal}\ln\mathbf{x}_{\text{eq}}, \tag{43}\]
Here, \(\mathbf{x}_{\text{eq}}\) represents the equilibrium chemical concentration vector, which is dependent on both the initial concentration vector \(\mathbf{x}_{\text{ini}}\) and the specific CRE under consideration. Additionally, if Eq. (43) is met, then \(\mathbf{f}_{\text{ne}}=\mathbf{0}\). Generally, at equilibrium, net fluxes cease (\(\mathbf{j}=\mathbf{0}\)), allowing us to define a set of equilibrium chemical concentration vectors as follows:
\[V_{\text{eq}}\coloneqq\{\mathbf{x}>\mathbf{0}|\mathbf{j}(\mathbf{x})=\mathbf{0}\}. \tag{44}\]
From Eq. (43), Eq. (44) is transformed into
\[V_{\text{eq}}=\{\mathbf{x}>\mathbf{0}|\exists\mathbf{\eta}\in\mathbb{R}^{\dim\ker(S^{\intercal})},\ln\mathbf{x}=\ln\mathbf{x}_{\text{eq}}+U\mathbf{\eta}\}, \tag{45}\]
where \(U\coloneqq[u_{1},u_{2},\ldots,u_{\dim\ker(S^{\intercal})}]\) and \(\{u_{i}\}_{i=1}^{\dim\ker(S^{\intercal})}\) form a basis of \(\ker(S^{\intercal})\). We have introduced \(\hat{\mathbf{x}}\) in Eq. (39). We here impose the following relation on \(\hat{\mathbf{x}}\):
\[\hat{\mathbf{x}}\in V_{\text{eq}}. \tag{46}\]
Then Eq. (39) describes gradient-flow dynamics toward \(V_{\text{eq}}\). Equation (46) is equivalently written as
\[\ln\frac{\mathbf{k}^{+}}{\mathbf{k}^{-}}=S^{\intercal}\ln\hat{\mathbf{x}}. \tag{47}\]
Note that using \(\hat{\mathbf{x}}\) instead of \(\mathbf{x}_{\text{eq}}\) provides us with a generalized expression of the dynamical system.
Finally, we have arrived at the geometric expression of a CRE. Namely, combining Eqs. (32), (39), (41), and (43), we get 4
Footnote 4: We have used the following notation: \(\partial_{\mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})=\partial_{ \mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}\|\hat{\mathbf{x}})|_{\mathbf{x}=\mathbf{x}_{t}}\).
\[\dot{\mathbf{x}}_{t}=S\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{)}, \tag{48}\]
where \(\hat{\mathbf{x}}\in V_{\mathrm{eq}}\). Note that in Eq. (48), replacing \(\hat{\mathbf{x}}\) with \(\mathbf{x}_{\mathrm{eq}}\) does not affect the dynamics of CRNs because \(SU\mathbf{\eta}=\mathbf{0}\).
## III Difficulty of constructing an upper bound on the reaction rates of CRNs
In this section, we briefly revisit Newton's method and present a counterexample illustrating its limitations in establishing an upper bound on the reaction rates of CRNs.
### Newton's method
As stated in Sec. I, the objective of this paper is to determine an upper bound on the reaction rates of CRNs. One might assume that straightforward optimization methods could achieve this. However, before discussing NG, we elucidate the challenges of using Newton's method [35] as an optimization technique for this purpose. While the gradient method is another elementary optimization technique, its indeterminate step size precludes its consideration in this study. We now turn to a specific optimization problem:
\[\min_{\mathbf{x}}f(\mathbf{x}). \tag{3.1}\]
Letting \(\mathbf{x}_{t}\) be the state at the \(t\)-th iteration for \(t\in\mathbb{Z}_{\geq 0}\), Newton's method for Eq. (3.1) is given by
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-[\partial_{\mathbf{x}}^{2}f(\mathbf{x}_{t})]^{-1}\partial_{ \mathbf{x}}f(\mathbf{x}_{t}). \tag{3.2}\]
In the case of CRNs, we have \(f(\mathbf{x})=\mathcal{D}_{\phi}(\mathbf{x}\|\hat{\mathbf{x}})\); then Eq. (3.2) reads
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-G_{\phi}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}\mathcal{D }_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}}), \tag{3.3}\]
where \(G_{\phi}\) is the Hessian of \(\phi(\cdot)\).
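With \(\phi=\phi_{\mathrm{KL}}\), the Hessian is \(G_{\phi}(\mathbf{x})=\mathrm{diag}(1/x_{1},\ldots,1/x_{N_{\mathsf{K}}})\), so the Newton step of Eq. (3.3) takes a particularly simple form; the sketch below is our illustration, not an implementation from the references.

```python
import numpy as np

def newton_step_kl(x, x_hat):
    """One iteration of Eq. (3.3) for f(x) = D_KL(x‖x̂).

    Since G_φ(x) = diag(1/x), we have G_φ^{-1} ∂_x D_KL = x ∘ (ln x − ln x̂).
    """
    return x - x * (np.log(x) - np.log(x_hat))
```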
### Counterexample
We will demonstrate a counterexample to show that Eq. (3.3) does not yield an upper bound for a CRN. We consider the following CRN with \(N_{\mathsf{K}}=2\), \(m=3\), and \(N_{\mathsf{e}}=1\):
\[2\mathcal{X}_{1}\rightleftharpoons 3\mathcal{X}_{2}. \tag{3.4}\]
For the simulations of Eq. (48), we set \(k_{e}^{\pm}=1\), \(\Delta t=1.0\times 10^{-4}\), \(\mathbf{x}_{\mathrm{ini}}=[3/4,11/8]^{\intercal}\), and \(\hat{\mathbf{x}}=[1,1]^{\intercal}\). In Fig. 1, we plot the dynamics of Eq. (3.4) as well as the dynamics obtained using Newton's method. At \(t=1\), the KL divergence of Newton's method is greater than that of the CRN, indicating that Newton's method fails to bound the dynamics. The reason for this discrepancy lies in the nonlinearity of Eq. (3.4).
## IV Natural gradient
In this section, we explore the NG method and its applicability to the problem of constraining reaction rates in CRNs. As our proposed methodology hinges on NG, understanding its theoretical underpinnings and its distinction from Newton's method is crucial.
### Derivation of NG
In this section, we outline the derivation of the NG method, which is grounded in information geometry. Specifically, we will elucidate how the dynamics of a given vector \(\mathbf{x}_{t}\) at time \(t\) are updated within the framework of NG:
\[\mathbf{x}_{t+\Delta t}=\mathbf{x}_{t}+\Delta\mathbf{x}_{t}(\epsilon), \tag{4.1}\]
where \(\Delta\mathbf{x}_{t}(\epsilon)\) is defined as 5
Footnote 5: We have used the following notation: \(\partial_{\mathbf{x}}f(\mathbf{x}_{t})=\partial_{\mathbf{x}}f(\mathbf{x})|_{\mathbf{x}=\mathbf{x}_{t}}\).
\[\Delta\mathbf{x}_{t}(\epsilon) =\operatorname*{arg\,min}_{\Delta\mathbf{x}:\mathcal{D}_{\phi^{\prime }}(\mathbf{x}_{t}+\Delta\mathbf{x}\|\mathbf{x}_{t})\leq\epsilon}[f(\mathbf{x}_{t}+\Delta\mathbf{x })-f(\mathbf{x}_{t})] \tag{4.2}\] \[\approx\operatorname*{arg\,min}_{\Delta\mathbf{x}:\frac{1}{2}\Delta \mathbf{x}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t})\Delta\mathbf{x}\leq\epsilon} \partial_{\mathbf{x}}f(\mathbf{x}_{t})^{\intercal}\Delta\mathbf{x}. \tag{4.3}\]
Here, \(G_{\phi^{\prime}}(\mathbf{x}_{t})\) is the Hessian given by
\[[G_{\phi^{\prime}}(\mathbf{x}_{t})]_{i,j}\coloneqq\frac{\partial^{2}}{ \partial x_{i}\partial x_{j}}\phi^{\prime}(\mathbf{x}_{t}), \tag{100}\]
where \([\cdot]_{i,j}\) is the \((i,j)\)-th element. In the case of Eq. (36), Eq. (100) reads
\[[G_{\phi^{\prime}}(\mathbf{x}_{t})]_{i,j}=\delta_{i,j}\frac{1}{[\mathbf{x}_{t}]_{i}}, \tag{102}\]
where \(\delta_{i,j}\) is the Kronecker delta function and \([\cdot]_{i}\) is the \(i\)-th element. To derive Eq. (4.3), we have used the following expansion of the Bregman divergence:
\[\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\Delta\mathbf{x}\|\mathbf{x}_{t})\] \[\quad=\phi^{\prime}(\mathbf{x}_{t}+\Delta\mathbf{x})-\phi^{\prime}(\mathbf{x} _{t})-\langle(\mathbf{x}_{t}+\Delta\mathbf{x})-\mathbf{x}_{t},\partial_{\mathbf{x}}\phi^{ \prime}(\mathbf{x}_{t})\rangle \tag{103}\] \[\quad\approx\phi^{\prime}(\mathbf{x}_{t})+\partial_{\mathbf{x}}\phi(\mathbf{ x}_{t})^{\intercal}\Delta\mathbf{x}+\frac{1}{2}\Delta\mathbf{x}^{\intercal}G_{\phi^{ \prime}}(\mathbf{x}_{t})\Delta\mathbf{x}\] \[\quad\quad-\phi^{\prime}(\mathbf{x}_{t})-\langle(\mathbf{x}_{t}+\Delta \mathbf{x})-\mathbf{x}_{t},\partial_{\mathbf{x}}\phi^{\prime}(\mathbf{x}_{t})\rangle\] (104) \[\quad=\frac{1}{2}\Delta\mathbf{x}^{\intercal}G_{\phi^{\prime}}(\mathbf{x }_{t})\Delta\mathbf{x}. \tag{105}\]
Note that \(\Delta t\) in Eq. (4.1) is set to unity in the conventional formulation of NG; in the following section, we will impose a specific relationship between \(\Delta t\) and \(\epsilon\) in Eq. (4.1) to connect NG and CRNs.
To find the solution of Eq. (4.3), we employ the method of Lagrange multipliers, where the Lagrange function reads
\[L(\Delta\mathbf{x},\lambda)\coloneqq\partial_{\mathbf{x}}f(\mathbf{x}_{t})^{ \intercal}\Delta\mathbf{x}-\frac{\lambda}{2}(\Delta\mathbf{x}^{\intercal}G_{\phi^{ \prime}}(\mathbf{x}_{t})\Delta\mathbf{x}-\epsilon). \tag{106}\]
The derivative of Eq. (106) with respect to \(\Delta\mathbf{x}\) takes the following form:
\[\frac{\partial}{\partial\Delta\mathbf{x}}L(\Delta\mathbf{x},\lambda)= \partial_{\mathbf{x}}f(\mathbf{x}_{t})-\lambda G_{\phi^{\prime}}(\mathbf{x}_{t})\Delta \mathbf{x}. \tag{107}\]
Then, setting Eq. (107) to zero, the solution is given by
\[\Delta\mathbf{x}=\frac{1}{\lambda}G_{\phi^{\prime}}^{-1}(\mathbf{x}_{t}) \partial_{\mathbf{x}}f(\mathbf{x}_{t}). \tag{108}\]
The derivative of Eq. (106) with respect to \(\lambda\) has the following form:
\[\frac{\partial}{\partial\lambda}L(\Delta\mathbf{x},\lambda) =-\frac{1}{2}(\Delta\mathbf{x}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t })\Delta\mathbf{x}-\epsilon) \tag{109}\] \[=0. \tag{110}\]
Taking Eq. (108) into account, the solution of Eq. (110) is written as
\[\lambda^{2}=\frac{\partial_{\mathbf{x}}f(\mathbf{x}_{t})^{\intercal}G_{ \phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t})}{\epsilon}. \tag{111}\]
Combining Eqs. (108) and (111) and taking account of the nature of the minimization problem, the solution of Eq. (4.3) takes the following form:
\[\Delta\mathbf{x}_{t}(\epsilon)=-\sqrt{\frac{\epsilon}{\partial_{\mathbf{x}}f(\mathbf{x}_{ t})^{\intercal}G_{\phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t})}}G_{ \phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t}). \tag{112}\]
Note that \(\phi^{\prime}(\cdot)\) in Eq. (112) may be different from \(\phi(\cdot)\) appearing in Sec. II. In the case of CRNs, \(f(\mathbf{x}_{t})\) in Eq. (112) represents \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\). As shown in Eq. (112), \(\epsilon\) is a key parameter in NG. From the perspective of applying NG to CRNs, the relationship between \(\epsilon\) in NG and \(\Delta t\) in CRNs, when discretized, is still missing. Therefore, NG cannot be directly applied to CRNs. In the following section, we will explain how to address this challenge and develop a general upper bound on the dynamics of CRNs.
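Under the KL constraint, i.e., \(\phi^{\prime}(\cdot)=\phi_{\mathrm{KL}}(\cdot)\) so that \(G_{\phi^{\prime}}(\mathbf{x})=\mathrm{diag}(1/x_{i})\), the update of Eq. (112) with \(f=\mathcal{D}_{\mathrm{KL}}(\cdot\|\hat{\mathbf{x}})\) can be sketched as follows (illustrative code of ours; it assumes \(\mathbf{x}\neq\hat{\mathbf{x}}\), since otherwise the gradient vanishes).

```python
import numpy as np

def ng_step(x, x_hat, eps):
    """Natural-gradient update Δx_t(ε) of Eq. (112) for f = D_KL(·‖x̂)."""
    grad = np.log(x) - np.log(x_hat)    # ∂_x D_KL(x‖x̂), Eq. (38)
    nat_grad = x * grad                 # G_{φ'}^{-1}(x) ∂_x f, with G = diag(1/x)
    denom = float(grad @ nat_grad)      # ∂f^T G^{-1} ∂f (positive for x ≠ x̂)
    return -np.sqrt(eps / denom) * nat_grad
```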
### Comparison with Newton's method
In this section, we compare NG with Newton's method. Newton's method is a special case of NG when Eq. (112) is specialized as follows: \(\phi(\cdot)=\phi^{\prime}(\cdot)\) and \(\epsilon=\partial_{\mathbf{x}}f(\mathbf{x}_{t})^{\intercal}G_{\phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t})\). Equation (112) then becomes equivalent to Eq. (3.3). This equivalence leads us to introduce a systematic NG-based method to determine the direction and step size for a gradient system that bounds CRNs of a specific class.
## V Upper bound on reaction rates
In this section, we construct a nonlinear system that gives an upper bound on reaction rates of CRNs in a given class. The class is characterized by several topological numbers of CRNs: \(N_{\text{v}}\), \(N_{\text{e}}\), and \(m\).
### Upper bound system
Comparing discretized CRE dynamics with NG dynamics, represented by Eq. (4.1), presents a challenge. The difficulty arises from the absence of an established relationship between \(\epsilon\), the constraint parameter in NG, and \(\Delta t\), the time step in the discretized CRE. To address this issue, we propose the following relationship between \(\epsilon\) and \(\Delta t\):
\[\epsilon=\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\|\dot{\mathbf{x}}_{t}\|_{\text{F}}\mathbf{e}_{t}\Delta t\|\mathbf{x}_{t}), \tag{113}\]
where \(\|\cdot\|_{\text{F}}\) is the Frobenius norm and \(\mathbf{e}_{t}\) is a vector that satisfies \(\|\mathbf{e}_{t}\|_{\text{F}}=1\). Then, we try to compute the maximum value of \(\epsilon\) in Eq. (113). Note that \(S:\mathbb{R}^{N_{\mathsf{e}}}\rightarrow\mathbb{R}^{N_{\mathsf{K}}}\), i.e., \(S\) is an \(N_{\mathsf{K}}\times N_{\mathsf{e}}\) matrix. From Eq. (48), we get
\[\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}} =\left\|S\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{)}\right\|_{\mathrm{F}} \tag{100}\] \[\leq\|S\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{)}\bigg{\|}_{\mathrm{F}}\] (101) \[\leq\|S\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}\bigg{\|}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{\|}_{\mathrm{abs}}\bigg{)}\bigg{\|}_{\mathrm{F}}\] (102) \[\leq\|S\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}\frac{1}{2}\|S^{\intercal}\|_{\mathrm{F}}\|\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}\bigg{)}\bigg{\|}_{\mathrm{F}}. \tag{103}\]
Here, for \(\mathbf{v}\coloneqq[v_{1},v_{2},\ldots,v_{N}]^{\intercal}\), \(\|\cdot\|_{\mathrm{abs}}\) and \(\|\cdot\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}\) are defined as, respectively,
\[\|\mathbf{v}\|_{\mathrm{abs}} \coloneqq[|v_{1}|,|v_{2}|,\ldots,|v_{N}|]^{\intercal}, \tag{104}\] \[\|\mathbf{v}\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}} \coloneqq\underbrace{[\|\mathbf{v}\|_{\mathrm{F}},\|\mathbf{v}\|_{\mathrm{F}},\ldots,\|\mathbf{v}\|_{\mathrm{F}}]^{\intercal}}_{N_{\mathsf{e}}}, \tag{105}\]
where the underbrace indicates that \(\|\mathbf{v}\|_{\mathrm{F}}\) is repeated \(N_{\mathsf{e}}\) times. From Eq. (31), we have
\[\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x})}(\|\mathbf{f}(\mathbf{x})\|_{\mathrm{abs}}) =\mathbf{\omega}(\mathbf{x})\circ\sinh(\|\mathbf{f}(\mathbf{x})\|_{\mathrm{abs}}), \tag{106}\] \[\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x})}(\|\mathbf{f}(\mathbf{x})\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}) =\mathbf{\omega}(\mathbf{x})\circ\sinh(\|\mathbf{f}(\mathbf{x})\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}). \tag{107}\]
Given \(S^{\intercal}:\mathbb{R}^{N_{\mathsf{K}}}\to\mathbb{R}^{N_{\mathsf{e}}}\) and \(\mathbf{v}\in\mathbb{R}^{N_{\mathsf{K}}}\), we have the following inequality for \(e=1,2,\ldots,N_{\mathsf{e}}\):
\[[\|S^{\intercal}\mathbf{v}\|_{\mathrm{abs}}]_{e} \leq\|S^{\intercal}\|_{\mathrm{F}}\|\mathbf{v}\|_{\mathrm{F}} \tag{108}\] \[=[\|S^{\intercal}\|_{\mathrm{F}}\|\mathbf{v}\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}]_{e}, \tag{109}\]
where \([\cdot]_{e}\) is the \(e\)-th element. With this, we have finished computing the bound on \(\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\) within a given class of CRNs.
Next, we compute \(\mathbf{e}_{t}\) as follows:
\[\mathbf{e}_{t} =\operatorname*{arg\,max}_{\mathbf{e}:\|\mathbf{e}\|_{\mathrm{F}}=1}\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\mathbf{e}\Delta t\|\mathbf{x}_{t}) \tag{110}\] \[\approx\operatorname*{arg\,max}_{\mathbf{e}:\|\mathbf{e}\|_{\mathrm{F}}=1}\biggl{(}\frac{1}{2}\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}^{2}(\Delta t)^{2}\mathbf{e}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t})\mathbf{e}\biggr{)}\] (111) \[=\operatorname*{arg\,max}_{\mathbf{e}:\|\mathbf{e}\|_{\mathrm{F}}=1}\mathbf{e}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t})\mathbf{e}. \tag{112}\]
Thus, \(\mathbf{e}_{t}\) is the eigenvector associated with the maximum eigenvalue of \(G_{\phi^{\prime}}(\mathbf{x}_{t})\). Substituting Eq. (103) and the solution of Eq. (112) into Eq. (113), we can calculate the maximum value of \(\epsilon\) within a given class of CRNs.
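Putting the pieces together, the computation of \(\mathbf{e}_{t}\) and \(\epsilon\) can be sketched as follows. This assumes the KL potential, for which \(G_{\phi^{\prime}}(\mathbf{x})=\mathrm{diag}(1/x_{i})\); the argument name `xdot_norm_ub` is ours and denotes the bound on \(\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\) from Eq. (103), and \(\Delta t\) is assumed small enough that the shifted state stays positive.

```python
import numpy as np

def epsilon_bound(x, xdot_norm_ub, dt):
    """ε of Eq. (113) with e_t from Eqs. (110)-(112), for φ' = φ_KL."""
    G = np.diag(1.0 / x)
    eigvals, eigvecs = np.linalg.eigh(G)
    e_t = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue;
                                         # the sign is irrelevant in the quadratic
                                         # approximation of Eq. (111)
    y = x + xdot_norm_ub * dt * e_t      # x_t + ‖ẋ_t‖_F e_t Δt
    return float(np.sum(y * np.log(y / x) - y + x))   # exact D_KL(y‖x_t)
```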
### \(S\) and \(R\) of an upper bound system
To identify the upper bound described by Eq. (103) for CRNs under certain constraints, both the stoichiometric matrix \(S\) and the matrix \(R\) in Eq. (42) must be carefully designed. We introduce a method for determining \(S_{\mathrm{ub}}\) and \(R_{\mathrm{ub}}\) specific to a class of CRNs characterized by \(N_{\mathsf{K}}\) as the number of chemicals, \(m\) as the highest stoichiometric coefficient in chemical reactions, and \(N_{\mathsf{e}}\) as the number of reactions. The \(S_{\mathrm{ub}}\) and \(R_{\mathrm{ub}}\) matrices are of dimensions \(N_{\mathsf{K}}\times N_{\mathsf{e}}\), and their elements at the \((i,e)\)-th position are defined as follows:
\[[S_{\mathrm{ub}}]_{i,e} \coloneqq m, \tag{113}\] \[[R_{\mathrm{ub}}]_{i,e} \coloneqq\mathbb{1}[x_{i}\leq 1]\min_{i}([R]_{i,e})+\mathbb{1}[x_{i}>1] \max_{i}([R]_{i,e}). \tag{114}\]
Here, \(\mathbb{1}[\cdot]\) denotes the indicator function, and \([\cdot]_{i,e}\) represents the \((i,e)\)-th element. The indicator \(\mathbb{1}[\cdot]\) may appear unnecessary; however, it reflects the fact that \(x^{n}\geq x^{m}\) for \(x\in[1,\infty)\) and \(n\geq m\), but \(x^{n}\leq x^{m}\) for \(x\in(0,1]\) and \(n\geq m\). By combining the definitions in Eqs. (113) and (114) with the bound in Eq. (103), we can compute the upper bound for a given class. In other words, we use the following inequality to construct an upper bound system:
\[\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}} \leq\|S_{\mathrm{ub}}\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}_{\mathrm{ub}}(\mathbf{x}_{t})}\bigg{(}\frac{1}{2}\|S^{\intercal}_{\mathrm{ub}}\|_{\mathrm{F}}\|\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}\bigg{)}\bigg{\|}_{\mathrm{F}}, \tag{115}\]
where
\[\mathbf{\omega}_{\mathrm{ub}}(\mathbf{x})\coloneqq 2\sqrt{\mathbf{k}^{+}\circ\mathbf{k}^{-}} \circ\mathbf{x}^{R^{\intercal}_{\mathrm{ub}}/2}. \tag{116}\]
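The construction of \(S_{\mathrm{ub}}\) and \(R_{\mathrm{ub}}\) from Eqs. (113) and (114) can be sketched as follows (illustrative code; `R` is the matrix of Eq. (42) for a representative CRN of the class and `x` is the current concentration vector).

```python
import numpy as np

def upper_bound_matrices(R, m, x):
    """S_ub and R_ub of Eqs. (113)-(114) for a class (N_K, m, N_e)."""
    N_K, N_e = R.shape
    S_ub = np.full((N_K, N_e), float(m))           # [S_ub]_{i,e} = m
    r_min, r_max = R.min(axis=0), R.max(axis=0)    # min_i / max_i of [R]_{i,e}
    small = (x <= 1.0)[:, None]                    # indicator 1[x_i <= 1]
    R_ub = np.where(small, r_min[None, :], r_max[None, :])
    return S_ub, R_ub
```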
### Upper bound system with the KL constraint
We utilize Eq. (36), i.e., \(\phi^{\prime}(\cdot)=\phi_{\mathrm{KL}}(\cdot)\), as the potential function for the Bregman divergence in the constraint of NG 6. Subsequently, by substituting \(\|\partial_{\mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\|_{\mathrm{F}}^{N_{\mathsf{K}}\to N_{\mathsf{e}}}\) into Eq. (103), we can determine the maximum value of \(\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\) as stated in Eq. (115).
Footnote 6: While there are many candidates for \(\phi^{\prime}(\cdot)\), the \(L^{2}\) constraint is also often used; we explain the case of the \(L^{2}\) constraint in Appendix B.
## VI Numerical simulations
In this section, numerical simulations are conducted to elucidate the upper-bound dynamics for a specified class of CRNs. The parameters are set as follows: \(N_{\mathbb{X}}=4\), \(m=4\), and \(N_{\mathsf{e}}=1\). The reference state is chosen as \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\) and the time step as \(\Delta t=1.0\times 10^{-5}\). The rate constants \(k_{e}^{\pm}\) are fixed at \(1\) for all \(e\).
### Case where CRNs have different equilibrium states
We introduce CRNs that satisfy the same conditions and compare them from the viewpoint of reaction rate. Here we consider the following six different reactions, which have the same topological quantities (\(N_{\mathbb{X}}=4\), \(m=4\), and \(N_{\mathsf{e}}=1\)):
\[\mathbb{X}_{1}+4\mathbb{X}_{2} \rightleftharpoons 4\mathbb{X}_{3}+4\mathbb{X}_{4}, \tag{11a}\] \[4\mathbb{X}_{1}+4\mathbb{X}_{2} \rightleftharpoons 4\mathbb{X}_{3}+4\mathbb{X}_{4},\] (11b) \[\mathbb{X}_{1}+2\mathbb{X}_{2} \rightleftharpoons \mathbb{X}_{3}+3\mathbb{X}_{4},\] (11c) \[4\mathbb{X}_{1} \rightleftharpoons 4\mathbb{X}_{2}+4\mathbb{X}_{3}+4\mathbb{X}_{4},\] (11d) \[\mathbb{X}_{1} \rightleftharpoons 2\mathbb{X}_{2}+2\mathbb{X}_{3}+3\mathbb{X}_{4},\] (11e) \[2\mathbb{X}_{1} \rightleftharpoons 3\mathbb{X}_{2}+2\mathbb{X}_{3}+3\mathbb{X}_{4}. \tag{11f}\]
We set \(\mathbf{x}_{\text{ini}}=[9/8,87/80,27/20,27/20]^{\intercal}\), \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\), and \(\Delta t=1.0\times 10^{-5}\). In Fig. 2, we plot the dynamics of Eq. (11) and that of the system constructed in Sec. V. It clearly shows that the system constructed in Sec. V gives an upper bound on CRNs. The CRNs in Eq. (11) have equilibrium states different from \(\hat{\mathbf{x}}\) because of \(\ker(S^{\intercal})\); then the gap in \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}||\hat{\mathbf{x}})\) remains for \(t\gg 0\) and the upper bound is relatively loose.
### Case where CRNs do not have different equilibrium states
Next, we consider Eq. (11b) and set \(\mathbf{x}_{\text{ini}}=[1/2,1/2,3/2,3/2]^{\intercal}\), \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\), and \(\Delta t=1.0\times 10^{-5}\). In this case, we have \(\mathbf{x}_{\text{eq}}=\hat{\mathbf{x}}\). In Fig. 3, we plot the dynamics of Eq. (11b) and that of the system constructed in Sec. V. The system constructed in Sec. V provides a tighter bound. In Fig. 4, we show the time-difference of the KL divergence \(-\Delta\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) per \(\Delta t\). We have used \(\mathbf{x}_{t}\) on the solution of Eq. (11b) with \(\mathbf{x}_{\text{ini}}=[1/2,1/2,3/2,3/2]^{\intercal}\); that is, we compare \(-\Delta\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) of the CRN in Eq. (11b) and of the system constructed in Sec. V along the orbit of the CRN in Eq. (11b). As shown in Fig. 4, the system constructed in Sec. V shows faster convergence at each \(\mathbf{x}_{t}\).
### Case of \(N_{\mathsf{e}}>1\)
We consider the fully-connected CRNs whose hypervertices are given by
\[\mathbb{V}_{1} =\{\mathbb{X}_{1}+\mathbb{X}_{2},\mathbb{X}_{2}+\mathbb{X}_{3}, \mathbb{X}_{3}+\mathbb{X}_{4},\mathbb{X}_{4}+\mathbb{X}_{1}\}, \tag{12a}\] \[\mathbb{V}_{2} =\{\mathbb{X}_{1}+3\mathbb{X}_{2}+4\mathbb{X}_{3},\mathbb{X}_{2}+ 2\mathbb{X}_{3},\] \[4\mathbb{X}_{1}+\mathbb{X}_{3}+\mathbb{X}_{4},\mathbb{X}_{1}+3 \mathbb{X}_{2}+\mathbb{X}_{4}\},\] (12b) \[\mathbb{V}_{3} =\{4\mathbb{X}_{1}+3\mathbb{X}_{2}+4\mathbb{X}_{3},4\mathbb{X}_{ 2}+2\mathbb{X}_{3}+4\mathbb{X}_{4},\] \[4\mathbb{X}_{1}+4\mathbb{X}_{3}+\mathbb{X}_{4},2\mathbb{X}_{1}+ 3\mathbb{X}_{2}+4\mathbb{X}_{4}\}. \tag{12c}\]
The CRNs in Eq. (12) belong to the class of CRNs labeled by \(N_{\mathbb{X}}=4\), \(N_{\mathsf{e}}=6\), and \(m=4\). We call the CRNs in Eq. (12) type 1, type 2, and type 3, from top to bottom.
We plot the dynamics of the CRNs in Eq. (12) and their upper bound in the case of \(\mathbf{x}_{\text{eq}}\neq\hat{\mathbf{x}}\). In Fig. 5, we set \(\mathbf{x}_{\text{ini}}=[9/8,87/80,27/20,27/20]^{\intercal}\), \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\), \(k_{e}^{\pm}=1\), and \(\Delta t=1.0\times 10^{-5}\). Figure 5 clearly demonstrates that the upper bound holds for \(N_{\mathsf{e}}>1\).
We show the dependence of \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) on time \(t\) for the CRN in Eq. (12c) and its upper bound in the case of \(\mathbf{x}_{\text{eq}}=\hat{\mathbf{x}}\). In Fig. 6, we set \(\hat{\mathbf{x}}=[1.2547,1.1021,1.1951,1.3388]^{\intercal}\). In Fig. 7, we show the same dependence of \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) on time \(t\) for the CRN in Eq. (12c) and its upper bound in this case.
### Comparison of the upper bounds
In this section, we examine the behavior of the upper bound under varying parameters. The parameters are \(N_{\mathbb{X}}=4\), \(N_{\mathsf{e}}=1\), \(\mathbf{x}_{\text{ini}}=[3/4,3/4,5/4,5/4]^{\intercal}\), and \(\mathbf{x}_{\text{eq}}=[1,1,1,1]^{\intercal}\). Figure 8 depicts the dependence of \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) on \(t\) for \(m=1,2,3,4\). Figure 9 portrays the relationship between \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) and \(-\Delta\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) for \(N_{\mathbb{X}}=4\) and \(N_{\mathsf{e}}=1\). The figures indicate that higher values of \(m\) are associated with increased rates of convergence. This behavior is consistent with the expectation that stronger nonlinearity in CRNs tends to accelerate the reaction rates.
## VII Discussion
The relationship between the KL divergence and the entropy production was pointed out in Ref. [36]. Letting \(\Sigma_{\text{tot}}(\mathbf{x}_{t})\) be the total entropy, the following relationship holds:
\[\Sigma_{\text{tot}}(\mathbf{x}_{t})-\Sigma_{\text{tot}}(\mathbf{x}_{t^{ \prime}})=-\frac{V}{T}[\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})- \mathcal{D}_{\text{KL}}(\mathbf{x}_{t^{\prime}}\|\hat{\mathbf{x}})], \tag{12}\]
where \(V\) is the volume of the system and \(T\) is the temperature of the environment. In NG, the right-hand side of Eq. (12) is maximized under \(\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\Delta\mathbf{x}\|\mathbf{x}_{t})\leq\epsilon\), as written in Eq. (4.2). Furthermore, \(\epsilon\) in the optimization problem used to find the upper bound in Sec. V is equal to or larger than the time-difference of the KL divergence of CRNs in a given class. Thus, the entropy production of the system designed in Sec. V is larger than those of CRNs in the given class, and it shows faster convergence toward \(\hat{\mathbf{x}}\).
## VIII Conclusions
In this study, we developed a framework based on NG to establish an upper bound on the dynamics of a specific subset of CRNs. The physical meaning of this bound relates to the concept of entropy production, which in turn is related to the speed of convergence of the chemical reaction. The nonlinearity commonly present in CRNs presents a challenge, which is addressed here. The optimization problem in the NG derivation was found to be related to entropy production, enriching the understanding of NG within a thermodynamic context. While the primary focus has been on CRNs, the methods and discussions are applicable to a wider range of hypergraph dynamics. The study holds implications for fields beyond chemistry and physics, including information science and machine learning.
###### Acknowledgements.
H.M. was supported by JSPS KAKENHI Grant Number 23H04489. T. J. K was supported by JST (Grants No. JPMJCR2011 and No. JPMJCR1927) and JSPS (Grant No. 19H05799). L.-S.B. was partially funded by NSF award CHE-2002313.
## Appendix A Derivation of the KL divergence from the Bregman divergence
In this section, we show that the Bregman divergence, Eq. (34), with Eq. (36) is equivalent to the KL divergence, Eq. (37). Let us define the following potential for \(\alpha\in\mathbb{R}\):
\[\phi_{\text{KL}}^{(\alpha)}(\mathbf{x})\coloneqq\sum_{i=1}^{N_{\text{K}}}x_{i}( \ln x_{i}-\alpha) \tag{100}\]
The Bregman divergence, Eq. (34), with Eq. (100) is computed as follows:
\[\mathcal{D}_{\phi_{\text{KL}}^{(\alpha)}}(\mathbf{x}\|\mathbf{y}) =\phi_{\text{KL}}^{(\alpha)}(\mathbf{x})-\phi_{\text{KL}}^{(\alpha)} (\mathbf{y})-\langle(\mathbf{x}-\mathbf{y}),\nabla\phi_{\text{KL}}^{(\alpha)}(\mathbf{y})\rangle \tag{101}\] \[=\sum_{i=1}^{N_{\text{K}}}x_{i}(\ln x_{i}-\alpha)-\sum_{i=1}^{N_ {\text{K}}}y_{i}(\ln y_{i}-\alpha)\] \[\quad-\sum_{i=1}^{N_{\text{K}}}(x_{i}-y_{i})(\ln y_{i}-\alpha+1)\] (102) \[=\sum_{i=1}^{N_{\text{K}}}x_{i}\ln x_{i}-\sum_{i=1}^{N_{\text{K} }}y_{i}\ln y_{i}\] \[\quad-\sum_{i=1}^{N_{\text{K}}}(x_{i}-y_{i})\ln y_{i}-\sum_{i=1}^ {N_{\text{K}}}(x_{i}-y_{i})\] (103) \[=\sum_{i=1}^{N_{\text{K}}}x_{i}\ln\frac{x_{i}}{y_{i}}-\sum_{i=1}^ {N_{\text{K}}}(x_{i}-y_{i})\] (104) \[=\mathcal{D}_{\text{KL}}(\mathbf{x}\|\mathbf{y}) \tag{105}\]
Thus, the Bregman divergence, Eq. (34), with Eq. (100) is equivalent to the KL divergence, Eq. (37), independently of \(\alpha\). Furthermore, Eq. (36) is the special case of Eq. (100) with \(\alpha=0\).
## Appendix B Upper bound system with the \(L^{2}\) constraint
In Sec. V, we have considered \(\phi_{\text{KL}}(\cdot)\), Eq. (36), as the potential of the Bregman divergence in the constraint term since the KL divergence is minimized in CRNs. However, we are not limited to this choice, and it is expected that a different potential in the constraint may give us a different bound. Another simple candidate for the potential of the Bregman divergence in the constraint is the \(L^{2}\) norm given by
\[\phi_{L^{2}}(\mathbf{x})\coloneqq\sum_{i=1}^{N_{\text{K}}}|x_{i}|^{2}. \tag{106}\]
In this case, \(\mathcal{D}_{\phi_{L^{2}}}(\mathbf{x}_{t}+\|\dot{\mathbf{x}}_{t}\|_{\text{F}}\mathbf{e}_{t}\Delta t\|\mathbf{x}_{t})\) does not depend on \(\mathbf{e}_{t}\), and the Hessian \(G_{\phi_{L^{2}}}(\mathbf{x}_{t})\) becomes proportional to the identity matrix: \(G_{\phi_{L^{2}}}(\mathbf{x}_{t})=2\,\mathbb{1}\).
| We investigate the dynamics of chemical reaction networks (CRNs) with the aim of obtaining an upper bound on their reaction rates. This task is challenging because of the nonlinearity and discrete structure inherent in CRNs. To address this problem, we adopt an information-geometric approach based on the natural gradient and develop a nonlinear system that determines an upper bound on the CRN dynamics. We validate the method through numerical simulations, demonstrating faster convergence within a specific class of CRNs. This class is characterized by the number of chemicals, the maximum value of the stoichiometric coefficients of the chemical reactions, and the number of reactions. We also compare the method with a conventional approach, which cannot provide an upper bound on the reaction rates of CRNs. While this study focuses on CRNs, the ubiquity of hypergraphs, from the natural sciences to engineering, suggests that the method may also find applications in information science |
2309.04341 | Design of a Single-User RIS-Aided MISO System Based on Statistical
Channel Knowledge | Reconfigurable intelligent surface (RIS) is considered a prospective
technology for beyond fifth-generation (5G) networks to improve the spectral
and energy efficiency at a low cost. Prior works on the RIS mainly rely on
perfect channel state information (CSI), which imposes a huge computational
complexity. This work considers a single-user RIS-assisted communication
system, where the second-order statistical knowledge of the channels is
exploited to reduce the training overhead. We present algorithms that do not
require estimation of the CSI and reconfiguration of the RIS in every channel
coherence interval, which constitutes one of the most critical practical issues
in an RIS-aided system. | Sadaf Syed, Dominik Semmler, Donia Ben Amor, Michael Joham, Wolfgang Utschick | 2023-09-08T14:12:02 | http://arxiv.org/abs/2309.04341v1 | # Design of a Single-User RIS-Aided MISO System Based on Statistical Channel Knowledge
###### Abstract
Reconfigurable intelligent surface (RIS) is considered a prospective technology for beyond fifth-generation (5G) networks to improve the spectral and energy efficiency at a low cost. Prior works on the RIS mainly rely on perfect channel state information (CSI), which imposes a huge computational complexity. This work considers a single-user RIS-assisted communication system, where the second-order statistical knowledge of the channels is exploited to reduce the training overhead. We present algorithms that do not require estimation of the CSI and reconfiguration of the RIS in every channel coherence interval, which constitutes one of the most critical practical issues in an RIS-aided system.
MISO, Downlink, RIS, CSI, statistical knowledge, bilinear precoders
## I Introduction
Massive multiple-input multiple-output (MIMO) systems can meet the ever-increasing demand of high throughput and low energy consumption in current wireless communication systems. However, equipping the base station (BS) with a large number of antennas may lead to high circuit energy consumption, including very high hardware costs. Recently, reconfigurable intelligent surface (RIS) has emerged as a promising low-cost solution to enhance the spectral efficiency in a wireless communication system [1]. Specifically, an RIS is a passive array composed of a large number of reconfigurable reflecting elements. Each passive element of the RIS is able to introduce a phase shift to the incident signal in a controlled manner, thereby boosting the received power for the desired user or creating a destructive interference for the non-intended users. Additionally, the passive elements of the RIS do not require any transmit radio frequency (RF) chain, and hence, their energy and hardware costs are much lower as compared to that of the traditional active antennas at the BS. Thus, they can be scaled much more easily than the antennas at the BS.
Most of the existing algorithms for RIS rely on the assumption of perfect channel state information (CSI), e.g., [1, 2, 3]. However, owing to the passive structure of the RIS as well as its massive number of reflecting elements, the acquisition of perfect CSI for the RIS-associated links is formidable. Moreover, these algorithms demand the joint optimisation of the phase shifts and the transmit filters to be performed in every channel coherence interval, which is computationally very expensive. This issue is being recently studied in the literature [4, 5, 6, 7], where the key idea is to exploit the statistical knowledge of the channels to design the phase shifts of the RIS. Since the structure of the channels varies slowly, the covariance matrices remain constant for many channel coherence intervals, and hence, it is possible to obtain accurate information of the second-order statistics of the channels through long-term observation. The phase shifts and the filters which are designed based on the covariance matrices do not need to be updated regularly, i.e., there is no need to estimate the channels and perform the joint optimisation in every channel coherence interval. This significantly reduces the channel training overhead and the design complexity of the RIS-assisted systems. The algorithms proposed in [5] and [6] consider the statistical knowledge of the channels for the phase-shift optimisation, however, they consider a hybrid online/offline approach. The phase shifts of the RIS are designed considering the long-term statistics of the channels during the offline step, whereas the filters are designed considering the perfect knowledge of the instantaneous CSI in the online step, thereby, requiring the channel to be estimated perfectly in every channel coherence interval again.
In this work, we present two low-complexity algorithms for a single-user RIS-aided multiple-input single-output (MISO) system, which are only based on the statistical knowledge of the channels. These algorithms employ the lower bound of the user's rate as the figure of merit, which is based on the worst-case noise bound [8]. We consider a more realistic setup, where the covariance matrices of the channels are known perfectly, however, the accurate knowledge of the instantaneous CSI is not available. The bilinear precoders [9] are used as the transmit filters, for which a closed-form solution of the optimal filters can be obtained for the single-user case. As such, the filters and the phase shifts can be designed jointly. The algorithm in [4] is also based on the statistical knowledge of the channels for a single-user MISO system, however, it is based on the assumption that the RIS is deployed at a favourable location and a line-of-sight (LOS) channel exists to both the BS and the user. The phase shift optimisation in [4] is only dependent on the LOS components, which are assumed to be perfectly known. In this work, we consider a general zero-mean channel model with perfectly known covariance matrices. We compare our algorithms to the one presented in [7], which assumes a similar zero-mean
channel model for a multi-antenna single-user system. The algorithm in [7] maximises the upper bound of the user's rate, which is computed using the Jensen's inequality and it is based on the alternating optimisation (AO) approach, where the filters and the phase shifts are optimised alternatingly in each subproblem. Such an AO method offers a good performance but it has convergence and complexity issues (discussed in [10]).
## II System Model
This paper investigates the downlink (DL) of an RIS-aided single-user MISO communication system. The system consists of one BS equipped with \(M\) antennas, serving one single-antenna user, and one RIS having \(N\) passive reflecting elements. The phase-shift matrix of the RIS is defined by a diagonal matrix \(\boldsymbol{\Phi}=\mathrm{diag}(\phi_{1},\cdots,\phi_{N})\), where \(\phi_{1},\cdots,\phi_{N}\) are the phase shift coefficients of the \(N\) elements of the RIS with \(|\phi_{n}|=1\;\forall\;n\), and \(\boldsymbol{\phi}=[\phi_{1},\cdots,\phi_{N}]^{\mathrm{T}}\) denotes the corresponding phase-shift vector. The direct channel from the BS to the user is denoted by \(\boldsymbol{h}_{\mathrm{d}}\in\mathbb{C}^{M\times 1}\), and it is assumed to be circularly symmetric, complex Gaussian distributed with zero mean and covariance matrix \(\boldsymbol{C}_{\mathrm{d}}\), i.e., \(\boldsymbol{h}_{\mathrm{d}}\sim\mathcal{N}_{\mathbb{C}}(\boldsymbol{0}, \boldsymbol{C}_{\mathrm{d}})\). The channel from the RIS to the user is denoted by \(\boldsymbol{r}\in\mathbb{C}^{N\times 1}\), which has a zero mean and the covariance matrix \(\boldsymbol{C}_{\boldsymbol{r}}\). The channel from the BS to the RIS is denoted by \(\boldsymbol{T}\in\mathbb{C}^{N\times M}\), and it is assumed to follow the Kronecker channel model, given by
\[\boldsymbol{T}=\sqrt{\beta}\boldsymbol{R}_{\text{RIS}}^{1/2} \boldsymbol{W}\boldsymbol{R}_{\text{Tx}}^{1/2,\text{H}}. \tag{1}\]
The entries of \(\boldsymbol{W}\in\mathbb{C}^{N\times M}\) are independent and identically distributed with unit variance and zero mean. \(\boldsymbol{R}_{\text{RIS}}\) and \(\boldsymbol{R}_{\text{Tx}}\) denote the channel correlation matrices on the side of the RIS and the BS respectively, and \(\beta\geq 0\) represents the scaling factor such that \(\mathrm{tr}(\boldsymbol{R}_{\text{Tx}})=\mathrm{tr}\left(\boldsymbol{R}_{ \text{RIS}}\right)\) is satisfied. The effective channel of the RIS-assisted system is given by
\[\boldsymbol{h}^{\text{H}}=\boldsymbol{h}_{\mathrm{d}}^{\text{H}}+ \boldsymbol{r}^{\text{H}}\boldsymbol{\Phi}^{\text{H}}\boldsymbol{T} \tag{2}\]
which has zero mean and its covariance matrix is given by \(\boldsymbol{C}\). It is assumed that the BS has only access to a noisy channel observation \(\boldsymbol{\psi}\), but not the actual CSI. The observation \(\boldsymbol{\psi}\) is the Least-Squares (LS) estimate of the channel, which is obtained by correlating the received signal with the pilot sequences during the training phase, and is given by
\[\boldsymbol{\psi}=\boldsymbol{h}\;+\;\boldsymbol{n} \tag{3}\]
where \(\boldsymbol{n}\sim\mathcal{N}_{\mathbb{C}}(\boldsymbol{0},\boldsymbol{C}_{ \mathrm{n}})\) denotes the noise in the channel observation and \(\boldsymbol{C}_{\mathrm{n}}\) is the noise covariance matrix.
The transmit filter at the BS is designed such that it only depends on the channel statistics and the noisy observation. To this end, the bilinear precoder [9] is used as the transmit filter in this work. The bilinear precoder (\(\boldsymbol{p}\)) is designed such that it linearly depends on the observation \(\boldsymbol{\psi}\), i.e., \(\boldsymbol{p}\;=\;\boldsymbol{A}\boldsymbol{\psi}\), with \(\boldsymbol{p}\;\in\mathbb{C}^{M\times 1}\) and \(\boldsymbol{A}\in\mathbb{C}^{M\times M}\) being a deterministic transformation matrix, which needs to be designed such that the user's rate is maximised. The signal received by the user reads as: \(y=\boldsymbol{h}^{\text{H}}\boldsymbol{p}\,s+v\), where \(s\sim\mathcal{N}_{\mathbb{C}}(0,1)\) denotes the data symbol and \(v\sim\mathcal{N}_{\mathbb{C}}(0,\sigma^{2})\) is the noise at the user's side.
Because of the imperfect CSI, we cannot compute the closed-form expression of the actual rate of the user. Instead of that, a lower bound on the user's rate based on the worst-case error, which is extensively used in the massive MIMO literature is employed here [8]. The lower bound of the user's rate is given by \(\log_{2}(1+\gamma^{\text{lb}})\), where \(\gamma^{\text{lb}}\) is the lower bound of the actual signal-to-noise-ratio (SNR), expressed as
\[\gamma^{\text{lb}}=\frac{|\mathbb{E}[\boldsymbol{h}^{\text{H}} \boldsymbol{p}]|^{2}}{\mathbb{E}[|\boldsymbol{h}^{\text{H}}\boldsymbol{p}- \mathbb{E}[\boldsymbol{h}^{\text{H}}\boldsymbol{p}]|^{2}]+\sigma^{2}}. \tag{4}\]
Evaluating the terms in (4) yields (cf. [9, 11])
\[\gamma^{\text{lb}}=\frac{|\mathrm{tr}(\boldsymbol{A}\boldsymbol{C})|^{2}}{ \mathrm{tr}\Big{(}\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{\text{H}} \boldsymbol{C}\Big{)}+\sigma^{2}} \tag{5}\]
where \(\boldsymbol{Q}=\mathbb{E}[\boldsymbol{\psi}\boldsymbol{\psi}^{\text{H}}]= \boldsymbol{C}+\boldsymbol{C}_{\mathrm{n}}\) is the covariance matrix of the LS estimate of the channel. Note that the above closed-form expression of the lower bound is obtained with the Gaussian assumption of \(\boldsymbol{h}\), which is indeed true for a large \(N\)[12]. The matrices \(\boldsymbol{C}\) and \(\boldsymbol{Q}\) implicitly depend on the phase-shift vector \(\boldsymbol{\phi}\) (shown in the next section). The objective is to maximise the user's rate w.r.t. \(\boldsymbol{\phi}\) and the transformation matrix \(\boldsymbol{A}\) of the bilinear precoder. Since the logarithm is a monotonically non-decreasing function, maximising the rate is equivalent to maximising the SNR. Hence, the rate maximisation can be equivalently written as
\[\begin{split}\max_{\boldsymbol{A},\boldsymbol{\phi}}& \gamma^{\text{lb}}\\ \text{s.t.}&\mathbb{E}[||\boldsymbol{p}||^{2}]= \mathrm{tr}\Big{(}\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{H}\Big{)}\leq P \\ &|\phi_{n}|=1\;\forall\;n=1,\cdots,N.\end{split}\] (P1)
## III Joint Optimisation Problem Formulation
Problem (P1) is non-convex, and hence, it is difficult to obtain a closed-form solution. We next propose theorems to simplify (P1) such that the filter and the phase shifts can be optimised jointly.
### _Simplification of the Objective Function_
**Theorem 1**: For a fixed phase-shift vector \(\boldsymbol{\phi}\) of the RIS, the optimal transformation matrix \(\boldsymbol{A}\in\mathbb{C}^{M\times M}\) maximising the SNR expression in (5) and satisfying the DL power constraint \(\mathbb{E}[||\boldsymbol{p}||^{2}]\leq P\) for a positive definite matrix \(\boldsymbol{C}\) is given by
\[\boldsymbol{A}_{\mathrm{opt}}=\eta\;\boldsymbol{Q}^{-1},\quad\text{where}\; \eta=\sqrt{\frac{P}{\mathrm{tr}(\boldsymbol{Q}^{-1})}}. \tag{6}\]
Proof.: The SNR expression in (5) is a positive real quantity, hence, Wirtinger derivatives are used to find \(\boldsymbol{A}\) maximising \(\gamma^{\text{lb}}\), which yields \(\boldsymbol{A}_{\mathrm{opt}}=\eta\;\boldsymbol{Q}^{-1}\). Further, \(\eta\) can be found from the DL power constraint \(\mathrm{tr}(\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{\text{H}})=P\).
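For illustration, a minimal NumPy sketch of Theorem 1 (our own code, assuming a generic Hermitian positive definite \(\boldsymbol{Q}\); all function names are illustrative) computes \(\boldsymbol{A}_{\mathrm{opt}}\) and the resulting precoder \(\boldsymbol{p}=\boldsymbol{A}_{\mathrm{opt}}\boldsymbol{\psi}\). Note that \(\mathrm{tr}(\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{\mathrm{H}})=\eta^{2}\,\mathrm{tr}(\boldsymbol{Q}^{-1})=P\), so the power constraint is met with equality.

```python
import numpy as np

def optimal_bilinear_precoder(Q, psi, P):
    """A_opt = eta * Q^{-1} with eta = sqrt(P / tr(Q^{-1})), cf. Eq. (6)."""
    Q_inv = np.linalg.inv(Q)
    eta = np.sqrt(P / np.real(np.trace(Q_inv)))
    A_opt = eta * Q_inv
    return A_opt, A_opt @ psi   # transformation matrix and precoder p = A ψ
```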
Now replacing \(\boldsymbol{A}\) in (5) with the optimal transformation matrix, the lower bound of the SNR expression becomes
\[\gamma^{\text{lb}}=\frac{\eta^{2}\;\mathrm{tr}^{2}\left(\boldsymbol{Q}^{-1} \boldsymbol{C}\right)}{\eta^{2}\;\mathrm{tr}\left(\boldsymbol{Q}^{-1} \boldsymbol{C}\right)+\sigma^{2}}. \tag{7}\]
**Theorem 2**: The lower bound of the SNR given in (7) increases monotonically with \(\mathrm{tr}\big{(}\mathbf{Q}^{-1}\mathbf{C}\big{)}\) for a spatially white noise covariance matrix \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\mathbf{I}_{\mathrm{M}}\) with \(\zeta^{2}>0\).
Proof.: Please refer to Appendix A.
Since \(\gamma^{\mathrm{lb}}\) is monotonically increasing with \(\mathrm{tr}\big{(}\mathbf{Q}^{-1}\mathbf{C}\big{)}\), it is sufficient to maximise \(\mathrm{tr}\big{(}\mathbf{Q}^{-1}\mathbf{C}\big{)}\). Rewriting \(\mathbf{Q}^{-1}\mathbf{C}\) as \(\mathbf{I}_{M}-\mathbf{Q}^{-1}\mathbf{C}_{\mathrm{n}}\) along with the assumption of \(\mathbf{C}_{\mathrm{n}}\) to be spatially white, i.e., \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\mathbf{I}_{\mathrm{M}}\), (P1) can be simplified to
\[\min_{\mathbf{\phi}}\qquad\mathrm{tr}\big{(}\mathbf{Q}^{-1}\big{)}\ \text{s.t.}\quad|\phi_{n}|=1\ \forall\,n=1,\cdots,N.\] (P2)
To solve (P2), we first need to express \(\mathbf{Q}\) as a function of \(\mathbf{\phi}\) explicitly.
### _Computation of the Channel Covariance Matrix_
The channel covariance matrix of the effective channel can be computed as
\[\mathbf{C}=\mathbb{E}\left[\mathbf{h}\mathbf{h}^{\mathrm{H}}\right] =\mathbb{E}\left[(\mathbf{h}_{\mathrm{d}}+\mathbf{T}^{\mathrm{H}}\mathbf{\Phi }\mathbf{r})(\mathbf{h}_{\mathrm{d}}+\mathbf{T}^{\mathrm{H}}\mathbf{\Phi}\mathbf{r})^{\mathrm{H}}\right] \tag{9}\] \[\stackrel{{(a)}}{{=}}\mathbf{C}_{\mathrm{d}}+\mathbb{E} \left[\mathbf{T}^{\mathrm{H}}\mathbf{\Phi}\mathbf{r}\mathbf{r}^{\mathrm{H}}\mathbf{\Phi}^{ \mathrm{H}}\mathbf{T}\right] \tag{10}\]
where \((a)\) follows from the fact that the random variables \(\mathbf{h}_{\mathrm{d}}\), \(\mathbf{T}\) and \(\mathbf{r}\) are mutually independent with zero mean, and \(\mathbf{h}_{\mathrm{d}}\sim\ \mathcal{N}_{\mathrm{C}}(\mathbf{0},\mathbf{C}_{\mathrm{d}})\).
Inserting the expression of \(\mathbf{T}\) from (1), the covariance matrix of the effective channel can be written as
\[\mathbf{C}\stackrel{{(b)}}{{=}}\mathbf{C}_{\mathrm{d}}+\beta\,\mathbb{E }\left[\mathbf{R}_{\mathrm{Tx}}^{1/2}\mathbf{W}^{\mathrm{H}}\mathbf{R}_{\mathrm{RIS}}^{1/ 2,\mathrm{H}}\mathbf{\Phi}\mathbf{C}_{\mathbf{r}}\mathbf{\Phi}^{\mathrm{H}}\mathbf{R}_{\mathrm{ RIS}}^{1/2}\mathbf{W}\mathbf{R}_{\mathrm{Tx}}^{1/2,\mathrm{H}}\right]\]
where \((b)\) follows from the fact that \(\mathbf{r}\) and \(\mathbf{W}\) are independent random variables, and \(\mathbf{r}\sim\mathcal{N}_{\mathrm{C}}(\mathbf{0},\mathbf{C}_{\mathbf{r}})\). Since the entries of \(\mathbf{W}\) are i.i.d. with zero mean and unit variance, and \(\mathbf{\Phi}=\mathrm{diag}(\mathbf{\phi})\), the above expression can be simplified as
\[\mathbf{C} =\mathbf{C}_{\mathrm{d}}+\beta\mathrm{tr}\big{(}\mathbf{R}_{\mathrm{RIS} }\mathbf{\Phi}\mathbf{C}_{\mathbf{r}}\mathbf{\Phi}^{\mathrm{H}}\big{)}\mathbf{R}_{\mathrm{Tx}} \tag{11}\] \[=\mathbf{C}_{\mathrm{d}}+\beta\mathrm{tr}\Big{(}\mathbf{R}_{\mathrm{RIS} }(\mathbf{C}_{\mathbf{r}}\odot\mathbf{\phi}\mathbf{\phi}^{\mathrm{H}})\Big{)}\mathbf{R}_{\mathrm{Tx}} \tag{12}\]
where \(\odot\) denotes the Hadamard product. Using Lemma 1 of Appendix B, the above expression can be rewritten as
\[\mathbf{C}=\mathbf{C}_{\mathrm{d}}+\beta\mathbf{\phi}^{\mathrm{H}}\left(\mathbf{R}_{\mathrm{ RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\right)\mathbf{\phi}\mathbf{R}_{\mathrm{Tx}}. \tag{13}\]
Thus, the covariance matrix of the LS estimate is given by
\[\mathbf{Q}=\mathbf{C}_{\mathrm{d}}+\beta\mathbf{\phi}^{\mathrm{H}}\left(\mathbf{R}_{\mathrm{ RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\right)\mathbf{\phi}\mathbf{R}_{\mathrm{Tx}}+\mathbf{C}_{ \mathrm{n}}. \tag{14}\]
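Equations (13) and (14) can be evaluated directly from the channel statistics; a minimal sketch (illustrative names, assuming Hermitian covariance and correlation matrices) is given below. Since \(\mathbf{R}_{\text{RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\) is Hermitian, the quadratic form \(\mathbf{\phi}^{\mathrm{H}}(\mathbf{R}_{\text{RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}})\mathbf{\phi}\) is real.

```python
import numpy as np

def effective_covariances(phi, C_d, C_r, R_RIS, R_Tx, C_n, beta):
    """C of Eq. (13) and Q of Eq. (14) for a given phase-shift vector phi."""
    M_had = R_RIS * C_r.T                     # Hadamard product R_RIS ⊙ C_r^T
    s = np.real(phi.conj() @ M_had @ phi)     # φ^H (R_RIS ⊙ C_r^T) φ
    C = C_d + beta * s * R_Tx
    return C, C + C_n                         # (C, Q)
```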
## IV Low-Complexity Algorithms Depending on the Channel Statistics
In this section, we propose two low-complexity algorithms to solve (P2).
### _Algorithm 1: Projected Gradient Descent Method_
The minimisation problem in (P2) can be solved by the iterative projected gradient descent method. The gradient of \(\mathrm{tr}(\mathbf{Q}^{-1})\ \mathrm{w.r.t.}\ \mathbf{\phi}^{*}\) is given by [see (14)]
\[\frac{\partial}{\partial\mathbf{\phi}^{*}}\big{(}\mathrm{tr}(\mathbf{Q}^{-1})\big{)} =-\beta\mathrm{tr}(\mathbf{Q}^{-1}\mathbf{R}_{\mathrm{Tx}}\mathbf{Q}^{-1})(\mathbf{R}_{ \mathrm{RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\big{)}\mathbf{\phi}. \tag{15}\]
The expression of the gradient in (15) depends on \(\mathbf{Q}^{-1}\), which in turn depends on \(\mathbf{\phi}\). This means that the computation of each gradient step would require the update of the matrix \(\mathbf{Q}\) and, hence, the computation of its inverse. This can become computationally very expensive if the size of the matrix \(\mathbf{Q}\) is large, e.g., as in the case of massive MIMO systems. However, this problem can be easily averted by exploiting the structure of the gradient. The matrix \(\mathbf{Q}^{-1}\) only appears in the scalar factor \(\mathrm{tr}(\mathbf{Q}^{-1}\mathbf{R}_{\mathrm{Tx}}\mathbf{Q}^{-1})\). It can be easily observed that the term \(\beta\mathrm{tr}(\mathbf{Q}^{-1}\mathbf{R}_{\mathrm{Tx}}\mathbf{Q}^{-1})\) is a real non-negative quantity which can be absorbed into the step size, and thus, we do not have to update the matrix \(\mathbf{Q}\) after each gradient step. This significantly reduces the computational complexity. The phase-shift update rule can hence be summarised as
\[\mathbf{\phi}\leftarrow\mathbf{\phi}+\kappa\left(\mathbf{R}_{\mathrm{RIS}}\odot\mathbf{C}_{ \mathbf{r}}^{T}\right)\mathbf{\phi} \tag{16}\]
where \(\kappa\) is the optimal step size, which can be computed by the Armijo rule [13]. The new phase-shift vector obtained after every gradient step in (16) should be normalised to satisfy the unit modulus constraints of (P2).
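A minimal sketch of Algorithm 1 is given below, reusing `Q_matrix` and the covariance matrices from the previous snippet. The fixed step size `kappa` is an illustrative stand-in for the Armijo line search of [13].

```python
def projected_gradient(phi, kappa=0.05, n_iter=50):
    """Algorithm 1: projected gradient descent with the update rule (16)."""
    G = R_RIS * C_r.T                  # R_RIS ⊙ C_r^T (Hadamard product)
    history = []
    for _ in range(n_iter):
        phi = phi + kappa * (G @ phi)  # step (16); positive scalar absorbed
        phi = phi / np.abs(phi)        # project onto the unit-modulus set
        history.append(np.trace(np.linalg.inv(Q_matrix(phi))).real)
    return phi, history

phi_gd, obj_gd = projected_gradient(phi0.copy())
```

Because the non-negative scalar is absorbed into the step size, each iteration only costs one matrix-vector product plus a projection; the objective is evaluated here purely for monitoring.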
### _Algorithm 2: Element-Wise Optimisation_
The objective function in (P2) can be reformulated such that it only depends on the \(n\)-th element of \(\mathbf{\phi}\), i.e., \(\phi_{n}\), and the remaining \(N-1\) elements are kept fixed in a particular iteration step. To this end, the final expression of \(\mathbf{Q}\) from (14) can be rearranged such that it explicitly depends on \(\phi_{n}\).
\[\mathbf{Q}=\mathbf{C}_{\mathrm{d}}+\beta\bigg{(}\sum_{i=1}^{N}\sum_{j=1}^{N}\phi_{i}^{*} \phi_{j}\big{[}\mathbf{R}_{\mathrm{RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\big{]}_{i,j }\bigg{)}\mathbf{R}_{\mathrm{Tx}}+\mathbf{C}_{\mathrm{n}}.\]
Rearranging the above equation, we get
\[\mathbf{Q}=\mathbf{D}+\phi_{n}\mathbf{B}_{n}+\phi_{n}^{*}\mathbf{B}_{n}^{\mathrm{H}} \tag{17}\]
where the matrices \(\mathbf{D}\) and \(\mathbf{B}_{n}\) are independent of \(\phi_{n}\), and are given by
\[\mathbf{D} =\mathbf{C}_{\mathrm{d}}+\beta\sum_{\begin{subarray}{c}i=1\\ i\neq n\end{subarray}}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq n\end{subarray}}^{N}\phi_{i}^{*}\phi_{j}\big{[}\mathbf{R}_{\mathrm{RIS}} \big{]}_{i,j}\big{[}\mathbf{C}_{\mathbf{r}}\big{]}_{j,i}\mathbf{R}_{\mathrm{Tx}}+\beta\big{[}\mathbf{R}_{\mathrm{RIS}}\big{]}_{n,n}\big{[}\mathbf{C}_{\mathbf{r}}\big{]}_{n,n}\mathbf{R}_{\mathrm{Tx}}+\mathbf{C}_{\mathrm{n}} \tag{18}\] \[\mathbf{B}_{n} =\beta\sum_{\begin{subarray}{c}i=1\\ i\neq n\end{subarray}}^{N}\phi_{i}^{*}\big{[}\mathbf{R}_{\mathrm{RIS}}\big{]}_{i,n} \big{[}\mathbf{C}_{\mathbf{r}}\big{]}_{n,i}\mathbf{R}_{\mathrm{Tx}}. \tag{19}\]
The optimisation problem in (P2) can now be reduced to
\[\min_{\phi_{n}}\qquad\mathrm{tr}\big{(}\mathbf{Q}^{-1}\big{)}\ \ \text{s.t.}\quad|\phi_{n}|=1.\] (P3)
The Lagrangian function for the above problem reads as
\[\mathcal{L}=\operatorname{tr}(\mathbf{Q}^{-1})+\mu(\phi_{n}\phi_{n}^{*}-1) \tag{20}\]
where \(\mu\in\mathbb{R}\) is the dual variable corresponding to the unit modulus constraint in (P3). Solving \(\dfrac{\partial\mathcal{L}}{\partial\phi_{n}^{*}}\doteq 0\), we get a closed-form update rule of \(\phi_{n}\) as follows
\[\phi_{n}\leftarrow\dfrac{\operatorname{tr}(\bar{\mathbf{Q}}^{-1}\mathbf{B}_{n}^{\text {H}}\bar{\mathbf{Q}}^{-1})}{|\operatorname{tr}(\bar{\mathbf{Q}}^{-1}\mathbf{B}_{n}^{\text {H}}\bar{\mathbf{Q}}^{-1})|} \tag{21}\]
where \(\bar{\mathbf{Q}}\) denotes the value of \(\mathbf{Q}\) from the previous iteration. In this approach, we do not need to find the optimal step size as in Algorithm 1. However, after each update step, the matrices \(\mathbf{Q}\) and \(\mathbf{B}_{n}\) need to be updated, which would be computationally expensive for large \(M\), as in the case of massive MIMO systems.
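The same machinery yields a sketch of Algorithm 2, again reusing the matrices defined earlier. It exploits the fact that \(\mathbf{B}_{n}\) in (19) is a scalar multiple of \(\mathbf{R}_{\mathrm{Tx}}\); the fixed number of sweeps is an illustrative stopping rule, not the convergence criterion used in the experiments.

```python
def elementwise_opt(phi, n_sweeps=5):
    """Algorithm 2: cyclic closed-form updates of each phase shift, eq. (21)."""
    phi = phi.astype(complex).copy()
    G = R_RIS * C_r.T                  # [G]_{i,n} = [R_RIS]_{i,n} [C_r]_{n,i}
    for _ in range(n_sweeps):
        for n in range(len(phi)):
            Q_inv = np.linalg.inv(Q_matrix(phi))          # \bar{Q}^{-1}
            b_n = np.delete(phi, n).conj() @ np.delete(G[:, n], n)
            B_n = beta * b_n * R_Tx                       # equation (19)
            t = np.trace(Q_inv @ B_n.conj().T @ Q_inv)
            if np.abs(t) > 1e-12:
                phi[n] = t / np.abs(t)                    # update rule (21)
    return phi

phi_ew = elementwise_opt(phi0)
```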
## V Results
In this section, numerical results are provided to validate the effectiveness of the proposed algorithms. The system consists of one BS equipped with \(M=4\) antennas, serving one single-antenna user. The RIS is equipped with \(N=40\) reflecting elements. The setup is illustrated in Fig. 1. The user is placed at a distance \(D\,\text{m}\) from the BS. Each of the channels is generated according to its distribution as defined in Section II. The covariance matrix of each channel is generated according to the urban micro channel model described in the 3GPP technical report [14]. For \(D=20\,\text{m}\), the convergence plot of the two proposed algorithms is shown in Fig. (2). The convergence analysis reveals that both algorithms converge in a few iterations. The element-wise optimisation algorithm converges in less than 4 iterations, and the gradient descent based algorithm requires slightly more iterations to converge. It is also observed that the low complexity gradient descent algorithm converges to a similar value as the element-wise optimisation method.
The user's rate is taken as the performance metric in Fig. (3), which is computed with the different algorithms and compared over the transmit power levels \(P\). The average rate of the user is given by \(\mathbb{E}\left[\log_{2}(1+|\mathbf{h}^{\text{H}}\mathbf{p}|^{2}/\sigma^{2})\right]\), where \(\sigma^{2}\) is set to 1. The estimation noise covariance matrix \(\mathbf{C}_{n}\) is assumed to be the identity matrix. The rate is averaged over 100 covariance matrices, which are generated by varying the distance \(D\) in between \(15\,\text{m}\) to \(60\,\text{m}\) and the path loss factors of the scatterers randomly. For each of the generated covariance matrices, the user's instantaneous rate is averaged over 1000 channel realisations. The performance of the proposed algorithms is compared with the following baselines: (i) a system without RIS with the bilinear precoders as the transmit filters [9], (ii) a system with RIS where the phase shifts are chosen randomly and the bilinear precoders are used as the transmit filters, (iii) the SDR approach of [3] for the genie-aided setup of perfectly known CSI, (iv) the SDR approach of [3] used for the imperfect CSI setup, (v) the algorithm in [7] based on the statistical channel knowledge, and (vi) the two-timescale (TTS) approach of [5]. Fig. (3) compares the user's rate for the different schemes with respect to the transmit power \(P\) in dB. The topmost curve represents the upper bound of the rate that can be achieved for the considered system setup when the CSI is perfectly known, and the optimisation of filters and phase shifts is performed in every channel coherence interval with the SDR method [3]. The SDR algorithm of [3] is then employed in an imperfect CSI setup and the user's rate degrades by 9 dB approximately. The simulation results reveal that the two proposed algorithms are very similar in performance. Moreover, their performance gap to the SDR approach for the imperfect CSI scenario is small, despite the fact that these algorithms are computationally much less expensive as the filters and the phase shifts do not need to be optimised in every channel coherence interval. Furthermore,
Figure 1: Simulation Setup
Figure 3: User’s Rate vs Transmit Power \(P\) in dB
Figure 2: Convergence Plot for \(D\) = \(20\,\text{m}\)
Furthermore, these algorithms based on the maximisation of the lower bound of the user's rate considering the worst-case noise bound [8] outperform the AO algorithm in [7], which maximises the upper bound of the rate obtained through Jensen's inequality. Additionally, we extend the algorithms to the TTS approach of [5]. The algorithm in [5] employs the stochastic successive convex approximation (SSCA) method [15] to compute the optimal phase shifts based on the channel statistics. In the TTS approach, the optimal phase shifts obtained by (16), (21) or the SSCA method [5] are kept fixed in the coherence interval of the covariance matrices and the filters are updated in every channel coherence interval with the matched filter (MF). It is observed that the TTS approach employing Algorithm 2 outperforms the algorithm in [5] for our system setup, i.e., the performance of the TTS optimisation is boosted by the method underlying Algorithm 2 and it offers the best performance among the approaches involving statistical channel knowledge in Fig. (3).
## VI Conclusion
In this work, we have presented algorithms for the single-user RIS-aided MISO systems based on the bilinear precoders. The simulation results illustrate that a performance gain can be achieved by optimising the phase shifts of the RIS, even when the actual CSI is not available, by exploiting the second-order statistics. This significantly reduces the training overhead as the channels do not need to be estimated in every channel coherence interval and the phase shifts of the RIS do not need to be updated frequently. The extension of the algorithms for the multi-user setup will be presented in our next work.
## VII Appendix
### _Proof of Theorem 2_
With \(\eta=\sqrt{\frac{P}{\operatorname{tr}(\mathbf{Q}^{-1})}}\), \(\gamma^{\mathrm{lb}}\) can be rewritten as
\[\gamma^{\mathrm{lb}}=\frac{\operatorname{tr}^{2}(\mathbf{Q}^{-1}\mathbf{C})}{ \operatorname{tr}(\mathbf{Q}^{-1}\mathbf{C})+\sigma^{2}\,\operatorname{tr}(\mathbf{Q}^{- 1})/P}\,. \tag{22}\]
Assuming \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\operatorname{\mathbf{I}}_{M}\), where \(\zeta^{2}>0\), the term \(\operatorname{tr}(\mathbf{Q}^{-1})\) can be written as \(\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}(\mathbf{Q}-\mathbf{C})\bigr{)}/\zeta^{2}\), which, in fact, equals to \(\Bigl{(}M-\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}\Bigr{)}/\zeta^{2}\). Plugging this into (22), and replacing the term \(\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}\) by \(x\) for the ease of notation, the lower bound of the SNR can be expressed as a function of \(x\) by
\[\gamma^{\mathrm{lb}}=f(x)=\frac{x^{2}}{\left(1-\frac{\sigma^{2}}{P\zeta^{2}} \right)x+\frac{\sigma^{2}M}{P\zeta^{2}}}. \tag{23}\]
Replacing \(\left(1-\frac{\sigma^{2}}{P\zeta^{2}}\right)\) by \(k_{1}\) and \(\frac{\sigma^{2}M}{P\zeta^{2}}\) by \(k_{2}\), we get
\[f(x)=\frac{x^{2}}{k_{1}\,x+k_{2}}\,\text{and}\,f^{\prime}(x)=\frac{k_{1}x^{2}+2 \,k_{2}\,x}{(k_{1}\,x+k_{2})^{2}}. \tag{24}\]
It can be easily observed that \(x=\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}\) is always positive because \(\mathbf{Q}\) and \(\mathbf{C}\) are positive definite matrices. Hence, we are interested in the sign of the term \(k_{1}x+2\,k_{2}\) to determine the sign of \(f^{\prime}(x)\). Also, note that \(k_{2}>0\) since \(M,\,P,\,\zeta^{2},\,\sigma^{2}>0\).
Case 1: \(P\zeta^{2}-\sigma^{2}\geq 0\), i.e., \(k_{1}\geq 0\).
It is easy to verify that \(f^{\prime}(x)>0\) for this case.
Case 2: \(P\zeta^{2}-\sigma^{2}<0\), i.e., \(k_{1}<0\).
\[k_{1}x+2\,k_{2} =\left(1-\frac{\sigma^{2}}{P\zeta^{2}}\right)\operatorname{tr} \bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}+\frac{2\,\sigma^{2}M}{P\zeta^{2}}\] \[\overset{(a)}{=}M+\frac{\sigma^{2}M}{P\zeta^{2}}-k_{1}\operatorname {tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}_{\mathrm{n}}\bigr{)}>0 \tag{25}\]
where \((a)\) follows from \(\mathbf{C}=\mathbf{Q}-\mathbf{C}_{\mathrm{n}}\). This shows that \(f^{\prime}(x)\ >\ 0\) holds for this case too. Hence, \(f(x)\) is always monotonically increasing in \(x\). This proves Theorem 2.
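As a quick numerical illustration of this argument, the sketch below checks the monotonicity of \(f(x)\) in the harder Case 2 for arbitrary illustrative constants, using the fact that \(x=\operatorname{tr}(\mathbf{Q}^{-1}\mathbf{C})\) lies in \((0,M)\) when \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\mathbf{I}_{M}\).

```python
import numpy as np

P, zeta2, sigma2, M = 1.0, 0.5, 1.0, 4       # Case 2: P*zeta2 < sigma2
k1 = 1.0 - sigma2 / (P * zeta2)              # negative in this case
k2 = sigma2 * M / (P * zeta2)
x = np.linspace(1e-3, M - 1e-3, 1000)        # admissible range of tr(Q^{-1}C)
f = x**2 / (k1 * x + k2)
assert np.all(np.diff(f) > 0)                # f is monotonically increasing
```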
### _Lemma 1_
For any three matrices \(\mathbf{A}\), \(\mathbf{B}\) and \(\mathbf{C}\) of the same dimensions, we have
\[\operatorname{tr}\Bigl{(}\mathbf{A}(\mathbf{B}\odot\mathbf{C})\Bigr{)}= \operatorname{tr}\Bigl{(}(\mathbf{A}\odot\mathbf{B}^{\mathrm{T}})\mathbf{C}\Bigr{)}. \tag{26}\]
Proof.: \[\operatorname{tr}\Bigl{(}\mathbf{A}(\mathbf{B}\odot\mathbf{C})\Bigr{)} =\sum_{i}\big{[}\mathbf{A}(\mathbf{B}\odot\mathbf{C})\big{]}_{i,i}\] \[=\sum_{i}\bigg{(}\sum_{k}\big{[}\mathbf{A}\big{]}_{i,k}\,\big{[}\mathbf{B} \big{]}_{k,i}\,\big{[}\mathbf{C}\big{]}_{k,i}\bigg{)}\] \[\operatorname{tr}\Bigl{(}(\mathbf{A}\odot\mathbf{B}^{\mathrm{T}})\mathbf{C} \Bigr{)} =\sum_{i}\Big{[}(\mathbf{A}\odot\mathbf{B}^{\mathrm{T}})\mathbf{C}\big{]}_{i,i}\] \[=\sum_{i}\bigg{(}\sum_{k}\big{[}\mathbf{A}\big{]}_{i,k}\,\big{[}\mathbf{B} \big{]}_{k,i}\,\big{[}\mathbf{C}\big{]}_{k,i}\bigg{)}\]
Hence, L.H.S. = R.H.S., and this proves Lemma 1.
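The identity is also easy to verify numerically; a minimal check with random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
           for _ in range(3))
lhs = np.trace(A @ (B * C))          # tr(A (B ⊙ C))
rhs = np.trace((A * B.T) @ C)        # tr((A ⊙ B^T) C)
assert np.isclose(lhs, rhs)
```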
| Reconfigurable intelligent surfaces (RIS) are a promising technology for beyond-fifth-generation (5G) networks, expected to improve spectral and energy efficiency at low cost. Previous work on RIS has mostly relied on perfect channel state information (CSI), which incurs a huge computational complexity. In this work, we consider a single-user RIS-aided communication system and reduce the training overhead by efficiently exploiting second-order statistical knowledge of the channels. We propose algorithms that do not require CSI estimation or RIS reconfiguration in every channel coherence interval. This achieves a reduction in computational cost, one of the most important practical challenges in RIS-aided systems. |
2309.14152 | Assessment of Brightness Mitigation Practices for Starlink Satellites | Photometric characteristics for all models of Starlink satellites launched to
date are reviewed. The Original design that lacked brightness mitigation is the
most luminous. SpaceX installed a sunshade on the VisorSat model which reduced
its luminosity by a factor of 3. The visor was omitted on Post-VisorSat
spacecraft with laser communication which followed, but the company added a
reflective layer which resulted in an intermediate brightness between Original
and VisorSat. SpaceX is applying advanced brightness mitigation techniques to
their Generation 2 Starlink satellites which are larger. The first of these,
called Minis, are dimmer than Gen 1 Starlinks despite their greater size.
Photometric observations verify that brightness mitigation efforts employed by
SpaceX reduce spacecraft luminosity substantially. However, the satellites
still have some negative impact on astronomical observations and the very large
satellites planned for later in Gen 2 may interfere more seriously. | Anthony Mallama, Andreas Hornig, Richard E. Cole, Scott Harrington, Jay Respler, Ron Lee, Aaron Worley | 2023-09-25T14:05:47 | http://arxiv.org/abs/2309.14152v3 | ###### Abstract
Photometric characteristics for all models of Starlink satellites launched to date are reviewed. The Original design that lacked brightness mitigation is the most luminous. SpaceX installed a sunshade on the VisorSat model which reduced its luminosity by a factor of 3. The visor was omitted on Post-VisorSat spacecraft with laser communication which followed, but the company added a reflective layer which resulted in an intermediate brightness between Original and VisorSat. SpaceX is applying advanced brightness mitigation techniques to their Generation 2 Starlink satellites which are larger. The first of these, called Minis, are dimmer than Gen 1 Starlinks despite their greater size. Photometric observations verify that brightness mitigation efforts employed by SpaceX reduce spacecraft luminosity substantially. However, the satellites still have some negative impact on astronomical observations and the very large satellites planned for later in Gen 2 may interfere more seriously.
**Assessment of Brightness Mitigation Practices for Starlink Satellites**
**Anthony Mallama\({}^{1,\ast}\), Andreas Hornig\({}^{2}\), Richard E. Cole,**
**Scott Harrington, Jay Respler\({}^{\ast}\), Ron Lee and Aaron Worley**
**2023 October 1**
* Correspondence: _anthony.mallama@gmail.com_
\({}^{1}\)IAU - Centre for the Protection of Dark and Quiet Skies from Satellite Constellation Interference
\({}^{2}\) University of Stuttgart, Germany
**Keywords:** starlink, brightness mitigation, photometry
## 1 Introduction
Satellite constellations are beginning to impact the work of professional astronomers as reported by Barentine et al. (2023). They point out that space objects leave streaks on images which can reduce their scientific potential. Additionally, smaller objects elevate the diffuse brightness of the sky. The authors compute the potential increase in sky brightness and address the corresponding loss of astronomical information.
Amateur astronomers and others who appreciate the aesthetics and cultural significance of the night sky are also adversely affected by satellites as discussed by Mallama and Young (2021). Spacecraft brighter than magnitude 6 are distractions visible to the unaided eye, while those brighter than 7 impact professional research.
SpaceX operates the largest satellite constellation with more than 4,000 Starlink spacecraft already in orbit and regulatory approval for many more. The initial launch of 60 satellites on one rocket in 2019 raised concerns because of their brightness. SpaceX responded by making several changes to the spacecraft's physical design and to satellite operations. This paper reviews the brightness mitigation strategies and the corresponding luminosity changes recorded by observers.
Section 2 defines the terminology used in this paper. Section 3 summarizes the brightness mitigation techniques implemented by SpaceX. Section 4 describes the methods of photometry used to record satellite magnitudes. Section 5 characterizes the luminosity of Starlink satellites as derived from observed magnitudes. Section 6 describes numerical modeling of spacecraft brightness and illustrates how the models fit photometric observations. Section 7 discusses the impact of Starlink satellites on astronomy and addresses international efforts to mitigate the negative effects of all satellite constellations. Our conclusions are given in Section 8.
## 2 Definitions and abbreviations
The terms elevation, height and range are differentiated as follows in this paper. Elevation is the angular distance of a satellite above the Earth's horizon measured in degrees. Height refers to the vertical distance of a satellite above the Earth's surface in km. Range is the distance between an observer and a spacecraft in km. The term altitude is not used here to avoid confusion.
The observed brightness of a satellite is its apparent magnitude. That luminosity may be adjusted to a
standard distance of 1000 km by applying the inverse square law of light. The distance-adjusted brightness, or 1000-km magnitude in this paper, is useful for comparing satellite luminosities measured at different ranges. Magnitudes may also be adjusted to 550 km which was the orbital height of early Starlink satellites. The 550-km values are referred to as characteristic magnitudes because they correspond to the brightness of many Starlink satellites when they are near the observer's zenith.
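A minimal sketch of this inverse-square adjustment is given below; the sample apparent magnitude and range are illustrative, while the 1.30-mag offset between the 1000-km and 550-km standards matches the value quoted later (see the Figure 5 caption).

```python
import numpy as np

def adjust_magnitude(m_apparent, range_km, standard_km=1000.0):
    """Magnitude the satellite would show if moved to the standard range."""
    return m_apparent - 5.0 * np.log10(range_km / standard_km)

m_1000 = adjust_magnitude(5.2, range_km=700.0)               # 1000-km magnitude
m_550 = adjust_magnitude(m_1000, 1000.0, standard_km=550.0)  # characteristic mag
# The two standards differ by 5*log10(1000/550) ≈ 1.30 mag.
```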
Statistical means sometimes include variational parameters. The standard deviations, abbreviated as SD, represent the scatter about the mean. The standard deviation of the mean, SDM, is its formal uncertainty.
A bidirectional reflectance distribution function (BRDF) defines how light is reflected from a surface. The BRDF is used in conjunction with the physical layout of a satellite's component parts. In the case of Starlink spacecraft, the main components are its antenna panel and solar array as shown in Figure 1. Parameters of the BRDF model may be adjusted to fit observed magnitudes.
Phase angle is the arc measured at the satellite between the directions to the Sun and to the observer. This angle is used to characterize satellite brightness and leads to the phase function, which gives brightness as a function of phase angle.
Orbit-raise is the phrase used by SpaceX in referring to satellites that are ascending from their injection heights to higher orbits. Parking orbits are where low height satellites wait for precession to change their orbital plane. On-station satellites are those which have attained their final heights. Spacecraft attitude refers to the orientation of the satellite in space especially with respect to the Sun and the observer. Lastly, SpaceX uses the term conops to mean 'concept of operations'.
## 3 Brightness mitigation practices
This Section reviews the strategies employed by SpaceX to dim Starlink satellites. The corresponding changes of observed brightness are also mentioned qualitatively. Quantitative photometry is addressed in Sections 4 and 5.
The Original model of Starlink spacecraft consisted of an antenna panel measuring 1.3 x 2.8 m and a solar array 2.8 x 8.1 m, with a total surface area of 26.32 m\({}^{2}\). These dimensions remained unchanged until the second generation of spacecraft described later in this Section.
No brightness mitigation measures were implemented for the Original satellites because their impact on astronomy was not foreseen. In 2020 SpaceX applied a low albedo coating to a test satellite named DarkSat. Tregloan-Reed et al. (2020) and Halferty et al. (2022) found it to be dimmer but Takahashi et al. (2020) reported that it was brighter. In any case, the spacecraft absorbed too much sunlight which caused thermal problems and this approach was abandoned.
The next design change was incorporated into the VisorSat model of Starlink. The 'visor' refers to a shade that prevents sunlight from reaching the underside of the antenna panel which faces observers on the ground. This modification reduced the brightness of satellites on-station substantially (Mallama 2021a and 2021b, Krantz et al. 2023 and Halferty et al. 2022). However, SpaceX stopped attaching visors on the next model of Starlink satellites which used laser communication because they interfered with the beam.
The spacecraft model that followed VisorSat is referred to herein as Post-VisorSat. While these satellites lacked the Sun shade, SpaceX applied a dielectric reflective layer to the bottom of the antenna panel, as shown in Figure 2, which directed sunlight into space rather than allowing it to scatter toward the ground. The Post-VisorSat spacecraft on-station were found to be intermediate in brightness between Original and VisorSat satellites by Mallama and Respler (2022) and by Krantz et al. (2023).
Additionally, SpaceX changed the roll angle for VisorSat and Post-VisorSat spacecraft in order to mitigate their brightness. This 'knife-edge' attitude, which was applied to satellites in orbit-raising, placed the Sun in the plane of their flat surfaces. Mallama and Respler (2023) found that knife-edge configuration reduced luminosity in the early mission phases.
Fig. 1: The horizontal component of Starlink is the antenna panel and the vertical one is the solar array. Illustration from SpaceX.
SpaceX began launching their second-generation Starlink spacecraft in 2023. The first model is called Mini because it is smaller than the full-sized Gen 2 satellites which will follow. The antenna panels of Mini satellites measure 2.7 x 4.1 m and their two solar panels are each 4.1 x 12.8 m. The total surface area of 116.0 m\({}^{2}\) is more than four times that of Gen 1 spacecraft.
Surface area usually correlates with brightness. So, astronomers were especially concerned about the luminosity of Gen 2 spacecraft. However, SpaceX made two changes to reduce the brightness of these satellites. First, they improved the mirror-like reflective layer on the antenna panel so that more sunlight is directed into space. Second, they developed a conops similar to knife-edge and implemented it for on-station satellites. This configuration points the plane of the solar arrays toward the Earth's limb when the satellites are near the terminator. Thus, observers only see their dark sides as shown in Figure 3. Mallama et al. (2023) found that the mitigation strategy is effective in reducing the brightness of Mini satellites.
This Section has addressed brightness mitigation strategies implemented by SpaceX. The next Section describes the methods used to measure Starlink satellite magnitudes. In Section 5 we examine the observational results for each spacecraft model more thoroughly.
## 4 Observation methods
Starlink brightness measurements have been acquired by several different techniques. These include visual perception by the human eye, recordings made with a digital camera used in video mode, output from a wide-field 9 channel system with sCMOS sensors, and telescopic observations recorded by solid state sensors.
Visual observers record Starlink magnitudes by comparing their brightness to nearby reference stars. Angular proximity between the spacecraft and those stellar objects accounts for variations in sky transparency and sky brightness. The perceptual method of observing is described more thoroughly by Mallama (2022).
Video observations were recorded with a Sony Alpha A7s-I camera and a Sony FE 1.8/50 lens. The Astrometry.net application was run on a Raspberry Pi 4 device for extracting information about the stars. Purpose-written Python software was executed on a Windows computer to perform the overall measurements and data processing. Magnitudes from video frames were averaged over five-second time intervals to form a mean value. This system is the prototype optical ground station (OGS) for the Distributed Ground Station Network (DGSN) being developed at the University of Stuttgart. The DGSN project was started within the SmallSat-Design-Studies at the Institute of Space Systems (IRS). It was part of several annual Google and ESA Summer of Code campaigns. The DGSN is a PhD research topic at the Institute for Photogrammetry (IFP) at the University of Stuttgart.
Observations were also gathered from the database of the MMT9 system described by Karpov et al. (2015) and Beskin et al. (2017). This robotic observatory consists of nine 71 mm diameter f/1.2 lenses and 2160 x 2560 sCMOS sensors. The detectors are sensitive to the
Figure 3: Observers only see the dark side of solar arrays. Illustration from SpaceX.
Figure 2: Reflective surfaces direct sunlight away from observers on the ground. Illustration from SpaceX.
visible spectrum from red through blue. We collected their apparent magnitudes along with date/time values and computed other quantities needed for analysis.
The methods described above were utilized by the authors of this paper to obtain observational data, and magnitudes collected from the MMT9 database have also been used in our studies. The magnitude scales for all these techniques closely agree. MMT9 values are within 0.1 magnitude of the V-band based on information in a private communication from S. Karpov as discussed by Mallama (2021). The video magnitudes match visual and V-band results closely because the camera is panchromatic in visible light. That agreement is shown empirically by Mallama et al. (2023).
Additional observations have been reported by other groups. Their instruments include the Pomenis LEO Satellite Photometric Survey Telescope at Mt. Lemmon in Arizona USA (Krantz et al., 2023), the Chakana 0.6-m telescope in Chile (Tregloan-Reed et al., 2020), the Stingray prototype consisting of a telephone lens and CMOS sensor also located in Arizona (Halferty et al., 2022), the Zwicky Transit Facility which uses the Schmidt telescope at Palomar Observatory (Mroz et al., 2022), the Plaskett 1.6 m telescope of the Dominion Astrophysical Observatory (Boley et al., 2022), the SCUDO telescope in Italy (Hossein et al., 2022) and an ensemble of eight different telescopes (Takahashi et al., 2023).
## 5 Empirical brightness characterization
This Section characterizes the brightness of all four models of Starlink satellites that have been launched to date. Mean magnitudes, phase functions and brightness surges are discussed.
### Original design is brightest
The first photometric survey of Starlink satellites was performed by McDowell (2020) using visual magnitudes from the SeeSat email archive. He found that magnitudes spanned 'from 3 to 7 with most between visual mag \(5.5\pm 0.5\)' for satellites on-station at 550 km.
A follow-up study combining visual magnitudes from SeeSat with V-band magnitudes from MMT9 was conducted by Mallama (2020). The 830 luminosities for on-station satellites were adjusted to the standard 1000-km distance. The mean of adjusted magnitudes was 5.93 +/-0.67 +/-0.02, where the first variational quantity is the SD and the second is the SDM. When the mean 1000-km magnitude is re-adjusted to 550 km, which is the height of those on-station spacecraft, the characteristic magnitude is 4.63.
Mallama also reported on brightness surges or 'flares'. Very bright flares spanning from magnitude -3 to -8 were reported on 8 occasions for orbit-raising satellites between 380 and 425 km. The Original design of Starlink satellites is still the brightest of all models in terms of their mean magnitudes and their flares.
### VisorSat is Fainter than Original
SpaceX added a visor to this model in order to prevent sunlight from reaching the underside of the antenna panel which faces observers on the ground. Several studies quantified the effectiveness of this brightness mitigation.
Takahashi et al. (2023) recorded 19 observations of the first VisorSat spacecraft and 12 of Original design satellite Starlink-1113, each in 8 filters. They found that VisorSat was generally dimmer than the other spacecraft.
Halferty et al. (2022) recorded 363 GAIA G magnitudes of Original and VisorSat spacecraft. Their results indicate that the brightness mitigation applied to VisorSats dimmed them by an average of 0.9 magnitudes or a luminosity factor of 2.3.
Mallama (2021a) analyzed 430 visual and MMT9 magnitudes for on-station VisorSats. The mean of 1000-km magnitudes was 7.22 +/- 0.85 +/- 0.04. Adjustment to the 550 km on-station height of these spacecraft indicated a characteristic mag of 5.92. The difference between these results and those for Original design (Mallama, 2020) is 1.29 magnitudes which corresponds to a factor of 3.2 in dimming.
In a large-scale study of MMT9 data Mallama (2021b) analyzed more than 60,000 VisorSat magnitudes and over 40,000 Original magnitudes for on-station spacecraft. The mean of 1000-km magnitudes was 7.21 +/- 0.89 +/- 0.01 for VisorSats and 5.89 +/- 0.46 +/- 0.01 for Originals. The characteristic magnitudes at a distance of 550-km are 5.91 and 4.59. The difference of 1.32 magnitudes implies that VisorSats were dimmer by a factor of 3.3.
This study also compared the size and frequency of flare events of these two models. The light curve of a large flare is shown in Figure 4.
The data in Table 1 indicate that VisorSats produce more flares than Originals. The mean intervals between flares exceeding 0.5 magnitude were 129 seconds for VisorSats and 622 seconds for Originals. The
percentage of the elapsed time spent above threshold amplitudes of 0.5, 1.0 and 2.0 magnitudes are also listed in the Table. They vary from 0.0% for flares of Original satellites exceeding 1.0 magnitude to 2.8% for VisorSat flares of 0.5 mag.
Finally, Hossein et al. (2022) obtained 571 RGB magnitudes for Original and VisorSat spacecraft. They analyzed the data as a function of satellite heights, ranges and other parameters. However, the results did not distinguish between Originals and VisorSats. So, no brightness comparison can be reported here.
### Post-VisorSats are intermediate in brightness
When SpaceX added lasers to Starlink satellites they stopped including visors because these structures blocked the light beams. The omission of visors would have returned the brightness of Post-VisorSat spacecraft to approximately that of Originals. However, the company added a dielectric layer (Figure 2) to the bottom of the antenna panel for brightness mitigation. This mirror-like surface directed sunlight into space rather than allowing it to scatter toward observers on the ground.
Mallama (2022b) analyzed 58 visual magnitudes for on-station Post-VisorSats and 44 for VisorSats recorded by J. Respler in 2022. After adjustment for distance the Post-VisorSat spacecraft averaged 0.5 mags brighter than VisorSat. Nevertheless, they were 0.8 mags fainter than the Original design.
### Comparison of all three models from Generation 1
Mallama and Respler (2022) analyzed a uniform set of visual magnitudes which they had recorded for on-station Original design, VisorSat and Post-VisorSat spacecraft. Figure 5 demonstrates that Original is the brightest followed by Post-VisorSat and VisorSat. A more recent set of video magnitudes for all three Gen 1 models, also shown in the figure, indicates the same ordering of brightness.
Krantz et al. (2023) reported findings similar to Mallama and Respler (2022). Their median apparent magnitudes for Original Design, VisorSat and Post-VisorSat are 5.72, 6.87 and 6.15, and the corresponding interdecile ranges span 2.58, 2.90 and 2.59 magnitudes, respectively. They point out that the spread in the brightness distribution is not simply statistical randomness.
An important aspect of the phase functions shown in Figure 5 is their concave upwards curvature. High luminosity at small phase angles is expected because the satellites are nearly opposite the Sun from the observer and so are almost fully lit. However, the brightness at large phase angles occurs when the spacecraft are between the Sun and the observer. In that case the high luminosity indicates forward scattering from back-lit components.
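As a hedged illustration, a quadratic phase function of the kind fitted in Figure 5 can be obtained with a least-squares polynomial fit; the phase angles and magnitudes below are placeholders rather than the actual observations.

```python
import numpy as np

phase_deg = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0, 140.0])
mag_1000km = np.array([6.2, 6.9, 7.3, 7.4, 7.2, 6.8, 6.1])

coeffs = np.polyfit(phase_deg, mag_1000km, deg=2)  # quadratic in phase angle
phase_fn = np.poly1d(coeffs)
# Since brighter means numerically smaller magnitude, a concave-upward
# brightness curve appears here as a negative quadratic coefficient.
print(phase_fn(90.0))   # model magnitude at 90 degrees phase angle
```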
Krantz et al. (2023) reported excess brightness for satellites 'at mid-elevations opposite the Sun with an additional hot spot at low solar elongation above the below-horizon Sun'. These areas of the sky are equivalent to low and high phase angles, respectively. The great luminosity at high phase angles is due to satellites reflecting light from the dayside of the Earth. This phenomenon is discussed more fully in the next section which describes BRDF modeling.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Mean Interval & \multicolumn{3}{c}{Time Percentage at amplitude} \\ & (seconds) & 0.5 & 1.0 & 2.0 \\ \hline Original & 622 & 0.4 & 0.0 & 0.0 \\ VisorSat & 129 & 2.8 & 1.0 & 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Flare amplitude and frequency
Figure 4: A flare of Starlink-1538 recorded by MMT9 on 2021 May 2. Illustration from Mallama (2021b).
Mallama and Respler (2023) also examined the effectiveness of roll angle adjustment in dimming different models of Gen 1 satellites. SpaceX developed this knife-edge technique which places the Sun in the plane of flat surfaces on the satellites for brightness mitigation. The company applied it during orbit-raising to VisorSat and Post-VisorSat spacecraft but not in time for Originals. Roll angle adjustment was found to reduce distance-adjusted brightness by a factor of 10 as illustrated in Figure 6.
### Gen 2 Mini satellites are largest and faintest
Mini satellites have a surface area of 116 m\({}^{2}\) which is more than 4 times that of Gen 1 spacecraft. They are called 'Mini' because regular Gen 2 spacecraft will be even larger. The increased size concerned astronomers because bigger satellites are usually brighter. However, SpaceX instituted an aggressive strategy for brightness mitigation to compensate for the larger dimensions. They improved the dielectric layer on the bottom of the antenna panel which directed more sunlight back into space (Figure 2). They also developed a new conops, similar to knife-edge, for on-station spacecraft where the planes of the solar arrays point to the Earth's limb (Figure 3) when satellites are near the terminator. Observers on the ground only see the dark sides of the arrays in this attitude.
Mallama et al. (2023) found that mitigation reduced the brightness of on-station Mini satellites when compared to spacecraft observed during early mission phases without mitigation. The means of apparent magnitudes for mitigated spacecraft along with their SDs and SDMs were 7.06 +/- 0.91 +/- 0.10 and the values for magnitudes adjusted to 1000-km distance were 7.87 +/- 0.79 +/- 0.09. The corresponding statistics for satellites recorded during early mission phases were
Fig. 5: Individual magnitudes and best-fit quadratic phase functions for the three models of Gen 1 Starlink satellites illustrate that Original design is brightest and VisorSat is faintest over most of the observed phase angles. Visual data are plotted in the panel on the top and video data are on the bottom. Subtract 1.3 magnitudes to adjust to 550-km.
Fig. 6: The knife-edge technique of brightness mitigation reduced luminosity for apparent and for distance-adjusted magnitudes. Illustration from Mallama and Respler (2023).
3.97 +/- 1.96 +/- 0.09 and 5.08 +/- 1.70 +/- 0.08. The difference of distance-adjusted means of 2.79 magnitudes indicated that mitigated satellites are more than 10 times fainter (Figure 7).
_Fig. 7. The distribution of distance-adjusted luminosity for satellites with and without brightness mitigation. Illustration from Mallama et al. (2023)._
More recently the authors have concentrated their observations on Mini satellites at small and large phase angles. These magnitudes were needed in order to fully parameterize the BRDF model discussed in Section 6. The phase function in Figure 8 demonstrates that Minis are bright at small angles and even brighter at large angles relative to mid-range angles.
On 2023 July 14 SpaceX informed our team that they were experimenting with off-pointing the solar arrays during orbit-raising for additional brightness mitigation of the Mini satellites. So, we now distinguish between on-station and orbit-raising mitigation as well as 'no mitigation'. The magnitude distribution between these three modes is shown in Figure 9. The unmitigated satellites are brightest by far, while the luminosities of on-station and orbit-raising spacecraft are much reduced.
_Fig. 8. The phase function for Mini satellites illustrates their brightness as a function of angle._
_Fig. 9. The distribution of magnitudes for on-station and orbit-raising modes as well as for no mitigation._
## 6 Brightness modeling
The physical design of a spacecraft along with the reflective properties of its surfaces account for its luminosity. Observed magnitudes or laboratory measurements may be used to parameterize a brightness model. That numerical representation can then be used to predict spacecraft luminosities for any geometry involving the satellite, the Sun and the observer. So, the spacecraft brightness model is an important tool for observation planning purposes.
Cole (2020, 2021) developed a BRDF model for VisorSat which takes account of its antenna panel and solar array. These components were in the attitude that SpaceX called shark-fin where the panel faced the Earth and the array faced the Sun as shown in Figure 11.
Cole's model considers eight angles and other factors relative to the spacecraft, the observer and the Sun. Examples are the off-base view angle measured at the spacecraft between nadir and the direction to the observer, the Sun depression angle taken between the horizontal at the satellite and the direction to the Sun, and the range measured between the spacecraft and the observer.
The model has 10 adjustable parameters such as diffuse and specular reflectivity of the antenna panel, and diffuse reflectivity of the solar array. The single output parameter is the modeled apparent magnitude.
This VisorSat model was fit to 131 magnitude records for 66 satellites at their on-station heights. Visual observations were made by the authors of the paper, and V-band measurements were obtained from the MMT9 database as well as those reported in Walker et al. (2021). The RMS residual of the model was 0.4 magnitude which Cole considered to be reasonable given the accuracy of the observations. Figure 12 illustrates the correlation between model and observed luminosity over a span of 4 magnitudes.
Several insights were gleaned from the model. For example, the solar elevation at the observer is an
Figure 11: Shark-fin configuration as illustrated by SpaceX.
Figure 12: The model is too faint above the dotted line and too bright below it. Illustration from Cole (2021).
Figure 10: The phase functions for observations recorded about six months apart. Satellites were brighter in 2021 at phase angles less than 60\({}^{\circ}\) and fainter at large angles. Illustration adapted from Mallama (2021b).
important determinant of satellite brightness with larger negative elevations leading to greater brightness. Furthermore, maps of spacecraft brightness across the sky revealed that the satellites are generally fainter when seen nearer the horizon except for those in the anti-solar direction as shown in Figure 13.
Cole also found that satellites opposite the Sun were brighter in the summer of 2021 than during the previous winter with magnitudes around 4 to 4.5 as mentioned in Section 5. The BRDF model was modified to fit these observations by changing the tilt angle of the solar array.
Fankhauser et al. (2023) modeled Starlink laser communication satellites which we refer to herein as Post-VisorSats. The physical model only required the antenna panel and the solar array. SpaceX provided BRDF functions measured in the lab for these two components. In a separate solution they constrained the BRDF parameters using magnitudes recorded by Pomenis. These models were compared to a diffuse sphere model. Both the lab BRDF and the magnitude-constrained BRDF provided a better fit to observations than the diffuse sphere model as shown in Figure 14.
The numerical model developed by Fankhauser et al. is the first to include light reflected from the Earth to the satellite. They found that this light source causes noticeable brightness at low elevations in the solar direction as shown in Figure 15. They also point out that this excess luminosity may interfere with searches for potentially hazardous asteroids conducted during evening and morning twilight.
## 7 Discussion
This Section begins by addressing Starlink brightness in the context of observational astronomy. Then several reports concerning the impact of these satellites on specific instruments and facilities including radio telescopes are discussed. Next, an approach to mitigating satellite interference by scheduling observations to avoid them is described. Finally, the international effort aimed at protecting dark skies from bright satellite constellations is summarized.
### Impact of Starlink on astronomy
Tyson et al. (2020) established that streaks from satellites of magnitude 7 and fainter could be successfully removed from images obtained at the Rubin Observatory. This is an important criterion since their Legacy Survey of Space and Time (LSST) is highly vulnerable to satellite interference. The magnitude 7 limit of Tyson et al. is for the g-band of the Sloan photometric system but that for the V-band is generally taken to be about the same. Both apply to satellites near the 550 km height of many Starlink spacecraft. Meanwhile, amateur astronomers refer to magnitude 6 as the limit for satellite interference because fainter objects cannot usually be seen with the unaided eye. SpaceX has stated that it aims to make
Figure 14: The laboratory and observation-constrained BRDF models correlate more strongly with measured magnitudes than does a diffuse sphere model. Illustration from Fankhauser et al. (2023).
Figure 13: VisorSat brightness mapped onto the sky in Cartesian coordinates. The Sun is 15\({}^{\circ}\) below azimuth 90\({}^{\circ}\). The observations are shown by small white symbols. Note the bright patch centered at azimuth 270\({}^{\circ}\) and elevation 35\({}^{\circ}\). Illustration from Cole (2021).
Figure 15: Satellite brightness derived from the laboratory BRDF model mapped onto the sky in polar coordinates. The Sun is toward the east at the elevations indicated below each map. The top row does not include light reflected from the Earth while the bottom row does. Notice the additional component of brightness in map 6 that does not appear in map 5. This extra satellite illumination comes from the Earth’s day side. Illustration from Fankhauser et al. (2023).
on-station Starlink satellites invisible to the unaided eye. Original design Starlink satellites largely fail to meet the magnitude criteria for LSST and the unaided eye, while most VisorSats, Post-VisorSats and Minis do attain them. The surface area for full-sized Gen 2 satellites will be more than 10 times that of Gen 1 spacecraft. So, they will present a greater challenge for brightness mitigation.
### The impact on specific instruments and facilities
Bassa et al. (2022) evaluate the impact of Starlink satellites on a variety of astronomical instruments including narrow and wide-field imagers along with long-slit and fiber-fed spectrographs. Their results indicate that the wide-field imagers were most seriously affected. They also examined observation scheduling as a mitigation strategy. Mroz et al. (2022) addressed the impact of Starlink satellites on survey observations at the Zwicky Transient Facility. They noted a large increase in the percentage of streaked images between 2019 and 2021 but concluded that their observations were not yet strongly affected. Williams et al. (2021) reported on the potential impact of Starlink satellites on the ESO optical telescopes located at Paranal and La Silla in Chile. They found the interference to be manageable at that time. They also addressed the effect of satellite noise on the ALMA radio astronomy facility at Llano de Chajnantor and reported that only one band was affected. Di Vruno et al. (2023) reported on radio noise from Starlink satellites recorded at the LOFAR radio telescope. They detected interference at frequencies between 110 and 188 MHz. The authors characterise this noise as 'unintended' and point out that it is not subject to existing regulations.
### The scheduling approach
Hu et al. (2022) examine the effectiveness of adjusting the scheduler algorithm for the LSST to avoid satellites at the cost of decreased efficiency in executing other science goals. They find that the need for this mitigation strategy will depend on the overall impact of satellite streaks. They further state the impact is not yet well known due to incomplete information about satellite luminosities. That knowledge is incomplete, as they said, but it is rapidly growing.
### Protecting dark skies
The observations, analyses and models described in this paper quantify satellite brightness. This research contributes to a larger effort aimed at mitigating the adverse effects of spacecraft on astronomy. We have summarized our own research and that of numerous other investigators, but the complete body of literature on satellite interference is too extensive to include here. Many more useful papers can be found in the proceedings of Dark and Quiet Skies conferences (Walker et al. 2021 and Walker and Benvenuti 2022).
The International Astronomical Union established the Centre for the Protection of Dark and Quiet Skies from Satellite Constellation Interference in 2022. This organization coordinates world-wide efforts aimed at mitigating the negative impact of satellite constellations. The CPS has 'hubs' that specialize in public policy, industry and technology, and community engagement. Their SatHub offers an astronomical data repository, an orbital solutions portal, software tools, a training curriculum and real-time collaboration.
## 8 Conclusions
The Original design of Starlink satellites concerned astronomers because their large number and great brightness were seen as a serious threat to celestial observations. SpaceX responded to these concerns by changing the physical design of their VisorSat and Post-VisorSat models and by modifying their conops for spacecraft in orbit. Meanwhile photometric observers verified that these alterations substantially mitigated brightness.
There were new concerns when SpaceX revealed that their second generation satellites would be larger. The most recent observations indicate that the first model of Gen 2 spacecraft, called Mini, is actually dimmer than those of Gen 1. The full-sized satellites to come later will present a greater challenge to the company's brightness mitigation efforts. Future observations will measure the brightness of those very large spacecraft and monitor the luminosity of earlier models.
| The photometric characteristics of all Starlink satellite models launched to date are reviewed. The Original design, which lacked brightness mitigation, is the most luminous. SpaceX installed a sunshade on the VisorSat model, reducing its luminosity by a factor of 3. The visor was omitted on the laser-communication satellites that followed, but a reflective layer was added, achieving an intermediate brightness between Original and VisorSat. SpaceX is applying advanced brightness-mitigation techniques to its larger second-generation Starlink satellites. Of these, the Minis are dimmer than Gen 1 Starlinks despite their larger size. The brightness-mitigation efforts implemented by SpaceX have substantially reduced satellite luminosity. However, the satellites still have a negative impact on astronomical observations, and the Gen |
2310.00044 | From particles to orbits: precise dark matter density profiles using
dynamical information | We introduce a new method to calculate dark matter halo density profiles from
simulations. Each particle is 'smeared' over its orbit to obtain a dynamical
profile that is averaged over a dynamical time, in contrast to the traditional
approach of binning particles based on their instantaneous positions. The
dynamical and binned profiles are in good agreement, with the dynamical
approach showing a significant reduction in Poisson noise in the innermost
regions. We find that the inner cusps of the new dynamical profiles continue
inward all the way to the softening radius, reproducing the central density
profile of higher resolution simulations within the 95$\%$ confidence
intervals, for haloes in virial equilibrium. Folding in dynamical information
thus provides a new approach to improve the precision of dark matter density
profiles at small radii, for minimal computational cost. Our technique makes
two key assumptions: that the halo is in equilibrium (phase mixed), and that
the potential is spherically symmetric. We discuss why the method is successful
despite strong violations of spherical symmetry in the centres of haloes, and
explore how substructures disturb equilibrium at large radii. | Claudia Muni, Andrew Pontzen, Jason L. Sanders, Martin P. Rey, Justin I. Read, Oscar Agertz | 2023-09-29T18:00:01 | http://arxiv.org/abs/2310.00044v2 | # From particles to orbits: precise dark matter density profiles using dynamical information
###### Abstract
We introduce a new method to calculate dark matter halo density profiles from simulations. Each particle is'smeared' over its orbit to obtain a dynamical profile that is averaged over a dynamical time, in contrast to the traditional approach of binning particles based on their instantaneous positions. The dynamical and binned profiles are in good agreement, with the dynamical approach showing a significant reduction in Poisson noise in the innermost regions. We find that the inner cusps of the new dynamical profiles continue inward all the way to the softening radius, reproducing the central density profile of higher resolution simulations within the 95% confidence intervals, for haloes in virial equilibrium. Folding in dynamical information thus provides a new approach to improve the precision of dark matter density profiles at small radii, for minimal computational cost. Our technique makes two key assumptions: that the halo is in equilibrium (phase mixed), and that the potential is spherically symmetric. We discuss why the method is successful despite strong violations of spherical symmetry in the centres of haloes, and explore how substructures disturb equilibrium at large radii.
keywords: galaxies: kinematics and dynamics - galaxies: haloes - dark matter
## 1 Introduction
The observationally inferred density distribution of dark matter in haloes around galaxies offers a crucial hint as to the nature of the elusive substance. However, the observations must be carefully compared with theoretical predictions based largely on numerical simulations (for reviews see e.g. Frenk and White, 2012; Vogelsberger et al., 2020; Angulo and Hahn, 2022). Dark-matter-only (DMO) simulations have shown that the spherically-averaged density profiles of haloes in the Cold Dark Matter (CDM) paradigm follow approximately the Navarro-Frenk-White (NFW) profile (Dubinski and Carlberg, 1991; Navarro et al., 1996, 1997; Dutton and Maccio, 2014) described by a divergent cusp (\(\rho\sim r^{-1}\)) at small radii, and by a steeper power law (\(\rho\sim r^{-3}\)) in the outer regions. The NFW profile has two free parameters which may be fitted to the density structure of simulated haloes for most of the radial extent, but the fit becomes poor in the innermost parts and in the outskirts of the haloes (e.g. Navarro et al., 2004; Diemer and Kravtsov, 2014; Fielder et al., 2020; Wang et al., 2020; Lucie-Smith et al., 2022).
Over time, a variety of fitting functions have been proposed to better represent the profile's inner slope, such as Einasto models (Einasto, 1965; Chemin et al., 2011) or other forms of double-power law (e.g. Hernquist, 1990; Burkert, 1995; Zhao, 1996; Salucci et al., 2007; Hague and Wilkinson, 2013; Oldham and Auger, 2016; Hayashi et al., 2020). However, the central regions of the profiles remain notoriously difficult to probe due to the finite number of particles and consequent need to 'soften' the potential (e.g. Power et al., 2003; Diemand et al., 2004; Dehnen and Read, 2011), causing the cusp to be numerically flattened (e.g. Navarro et al., 1996; Ghigna et al., 2000; Fukushige and Makino, 2001; Wang et al., 2020). Constraining the central asymptotic behaviour of the profile therefore remains largely dependent on the number of particles concentrated at small radii.
While the focus in the present work will be on DMO simulations, we note that when baryons are added into simulations, effects such as supernova feedback and enhanced dynamical friction can cause the central cusp to turn into a flattened density 'core' (e.g. Navarro et al., 1996; Read and Gilmore, 2005; Pontzen and Governato, 2012; Read et al., 2016; El-Zant et al., 2001; Nipoti and Binney, 2014; Del Popolo and Pace, 2016; Orkney et al., 2022). Ultimately, understanding the predicted distribution of dark matter does require such baryonic simulations, especially since there are strong indications of flattened central cores in observations; see e.g. Flores and Primack (1994); de Blok et al. (2001); Marchesini et al. (2002); Battaglia et al. (2008); Walker and Penarrubia (2011); Oh et al. (2015); Read et al. (2017, 2019); Zoutendijk et al. (2021); De Leo et al. (2023), or for contrasting views see Pineda et al. (2016); Genina et al. (2017); Oman et al. (2018). The focus in the present work is nonetheless on understanding how DMO predictions can be improved and better understood; we will consider baryonic effects in a future paper.
In the outskirts of haloes, density profiles scatter significantly due to the presence of surrounding substructures and the out-of-equilibrium dynamics of accreting material. For instance, the caustics
generated by the infalling particles on their first apocentre passage set the scale for the splashback radius, which creates an observable signature in the outer regions of halo profiles (Diemer and Kravtsov, 2014; Adhikari et al., 2014; More et al., 2015; Shin et al., 2019). Recently, Lucie-Smith et al. (2022) showed that a good fit to the diversity of halo profiles out to two virial radii can be obtained using only three free parameters (i.e., one additional parameter is sufficient to capture the diversity of these outer regions). This relatively simple behaviour may be linked to the typical orbits on which material accretes into a halo, further motivating a study of how the instantaneous profile relates to a dynamically-generated equilibrium profile (e.g. Diemer, 2022a,b).
In this work, we present and study a method to calculate dark matter density profiles from simulated haloes using dynamical information. This possibility has been discussed before, notably in appendices to Read and Gilmore (2005) and Pontzen and Governato (2013), but its possible application to reducing the noise in numerical density estimates has not been explored in detail. Specifically, the technique 'smears' particles in a snapshot along their orbits, spreading the mass of each across multiple density bins. Such a dynamical approach shares some similarities with certain classical mass modelling techniques (Schwarzschild, 1979; Syer and Tremaine, 1996) but, unlike these, it does not attempt to match observational constraints to underlying orbits and potentials; rather it constructs these from a simulation snapshot. The result is a profile which is averaged over a dynamical time, and which consequently has reduced Poisson noise compared to traditional binned estimates at the same resolution. This, in turn, makes it possible to probe further into the behaviour of the inner regions, at radii where there are very few particles present.
Calculating a density profile through this averaging process inherently assumes an equilibrium, phase-mixed distribution function. This assumption is expected to be significantly broken in the outer parts of a halo approaching the virial radius or beyond. Furthermore, for a practical calculation, we will also assume spherical symmetry (although this assumption could in principle be relaxed). The gravitational potentials of real and simulated haloes are far from being perfectly spherical. Their shapes tend to be closer to triaxial, especially towards the centre (e.g. Frenk et al., 1988; Jing and Suto, 2002; Allgood et al., 2006; Orkney et al., 2023); however it has previously been argued using Hamiltonian perturbation theory that approximating the true triaxial potential by a spherically-averaged version should make little difference to dynamical density estimates if the system is in equilibrium (Pontzen et al., 2015). We will return to this point in our discussion. Our results focus on the innermost and the outermost regions of haloes to investigate the limits of dynamical halo profiles subject to these coupled assumptions of equilibrium and spherical symmetry.
The rest of the paper is structured as follows. In Section 2 we explain the procedure used to generate the dynamical density profiles. In Section 3 we describe the simulation suites and the selection of snapshots analysed in this work. In Section 4 we present the main results for the dynamical profiles, focusing on the inner and outer regions, and comparing our dynamical technique to traditional binned methods. In Section 5, we discuss the implications of our results and outline possible further work.
## 2 Methods
We now describe the methods used to construct dynamical profiles. Section 2.1 considers the construction of a spherically-averaged gravitational potential starting from a simulation snapshot; the calculation of particle orbits within that potential; and finally the computation of the dynamical density profile. In Section 2.2, we introduce a refinement to the method which improves the accuracy of the orbit integration around apocentre and pericentre. Then, in Section 2.3, we describe an iterative process via which a self-consistent density-potential pair may be generated.
### Creating the dynamical density profiles
We start by assuming that we have a snapshot containing only dark matter particles, centred on the target halo. The spherically-averaged gravitational potential given by all the particles in the snapshot is then calculated in bins of width \(\Delta r\) according to the discretized integral
\[\Phi(r_{k})=G\sum_{j=1}^{k}\frac{M(<r_{j})}{r_{j}^{2}}\Delta r, \tag{1}\]
where \(j\) is an index over the bins, \(k\) is the bin number for which the potential is being calculated, and \(r_{j}\) is the radius in the centre of the \(j\)th bin, taking the value \(r_{j}=(j-1/2)\Delta r\). In addition, \(M(<r_{j})\) is the mass enclosed within radius \(r_{j}\), and \(G\) is the gravitational constant. Although the potential for each bin \(k\) is evaluated from quantities at the centre of the bin, the values are assigned to the right edge of the corresponding bins, since \(\Phi(r_{k})\) represents the average of the potential over the entire bin \(k\). The zero point of the potential is set at \(r=0\) (the left edge of the first bin).
Equation (1) is the simplest of several possible choices to perform numerical integration. We tested that adopting a more sophisticated method does not significantly affect the final results. Therefore, we adopted the simple approach for transparency.
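For concreteness, the discretized integral of Eq. (1) can be sketched in a few lines of Python; the array names and the unit choice for \(G\) below are our own illustrative assumptions rather than part of the published pipeline.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def binned_potential(r, m, dr, n_bins):
    """Spherically-averaged potential of Eq. (1).

    r, m : 1D arrays of particle radii (kpc) and masses (Msun).
    Returns Phi evaluated at the right edge of each bin, with the
    zero point of the potential at r = 0.
    """
    centres = (np.arange(n_bins) + 0.5) * dr               # r_j = (j - 1/2) dr
    m_enc = np.array([m[r < rc].sum() for rc in centres])  # M(< r_j)
    return np.cumsum(G * m_enc / centres**2 * dr)          # Eq. (1)
```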
The total number of bins over which \(\Phi\) is calculated is determined by the radius of a 'bounding sphere' centred around the halo. In addition to choosing the radius at which to truncate the potential, we must also decide how to treat particles whose orbits cross this boundary. In keeping with the core assumption of equilibrium, we make the boundary _reflecting_, i.e. particles bounce elastically off it. One may equivalently imagine the potential as having an infinite potential step at the truncation radius. While this is unphysical for any individual particle considered in isolation, across the population it is equivalent to the much more reasonable assumption that the outwards flux through the sphere is balanced by a matching inwards flux. This assumption can be tested by changing the truncation radius; the halo virial radius is a natural first choice, and we will explore the effects of other choices on the final density profile in Section 4.2.2.
Assuming equilibrium, the probability density \(p_{i}(r)\) of finding particle \(i\) at radius \(r\) is proportional to the time spent by the particle in the infinitesimal interval around that radius:
\[p_{i}(r)=\frac{1}{T_{i}}\int_{0}^{T_{i}}\delta(r-r_{i}(t))\,\mathrm{d}t=\frac{2}{T_{i}}\,\frac{1}{\dot{r}(r,E_{i},j_{i},\Phi)}, \tag{2}\]
where \(r_{i}(t)\) describes the radius as a function of time for the particle (on its spherical idealised orbit), \(T_{i}\) is the period of the orbit, \(E_{i}\) is its specific energy, and \(j_{i}\) is its specific angular momentum. Rather than calculate \(T_{i}\) directly we first calculate an unnormalised version of the probability, \(q_{i,k}\equiv(T_{i}/2)p_{i}(r_{k})\). Here \(i\) indexes the particles, and \(k\) indexes the spatial bins. By writing the specific energy of a particle as the sum of the potential energy, the kinetic energy due to the angular momentum, and the kinetic energy due to the radial motion, we can solve for \(\dot{r}\) and obtain
\[q_{i,k}\equiv\frac{1}{\dot{r}(r_{k},E_{i},j_{i},\Phi)}=\left(E_{i}-\frac{j_{i}^ {2}}{2r_{k}^{2}}-\Phi(r_{k})\right)^{-\frac{1}{2}}. \tag{3}\]
Note that this expression is only valid between pericentre and apocentre; outside this radial range, it becomes imaginary. However the true probability of finding the particle outside the extrema of its orbit is zero by definition, and therefore one may make Eq. (3) true for all radial bins by taking its real part. We produce a normalized probability for each bin \(k\) and particle \(i\) according to
\[p_{i,k}\equiv\frac{\operatorname{Re}q_{i,k}}{\operatorname{Re}\sum_{j}q_{i,j}}, \tag{4}\]
where \(\operatorname{Re}\) denotes the real part. If a particle \(i\) is on an almost perfectly circular trajectory, it may remain within a single radial bin \(k\) for its entire orbit; in this case, the equation above fails and instead a unit probability is assigned to the bin enclosing the original position of the particle in the snapshot, \(p_{i,k}=1\).
The density at the centre of bin \(k\) can then be estimated from the set of \(p_{i,k}\) as
\[\rho(r_{k})=\frac{3}{4\pi}\sum_{i=1}^{N}\frac{m_{i}p_{i,k}}{(r_{k}+\Delta r/2) ^{3}-(r_{k}-\Delta r/2)^{3}}, \tag{5}\]
where \(m_{i}\) is the mass of each particle \(i\), and there are \(N\) particles in total.
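A minimal vectorized sketch of Eqs. (3)-(5) is shown below. It assumes per-particle arrays of specific energy `E`, specific angular momentum `j` and mass `m`, and omits both the turning-point corrections of Section 2.2 and the reassignment of unresolved orbits to their snapshot bin.

```python
def dynamical_density(E, j, m, Phi, centres, dr):
    """Dynamical density profile of Eq. (5) from per-particle (E_i, j_i)."""
    # Eq. (3): Re[(E_i - j_i^2/(2 r_k^2) - Phi(r_k))^(-1/2)]; zero outside
    # the pericentre-apocentre range, where the argument goes negative.
    arg = E[:, None] - j[:, None]**2 / (2.0 * centres[None, :]**2) - Phi[None, :]
    q = np.where(arg > 0.0, 1.0 / np.sqrt(np.abs(arg)), 0.0)
    norm = q.sum(axis=1)
    norm[norm == 0.0] = np.inf        # unresolved orbits: handled separately
    p = q / norm[:, None]             # Eq. (4)
    shell_vol = (4.0 * np.pi / 3.0) * ((centres + dr / 2)**3
                                       - (centres - dr / 2)**3)
    return (m[:, None] * p).sum(axis=0) / shell_vol   # Eq. (5)
```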
The statistical errors in the dynamical density profile are estimated using bootstrapping. For each of 100 bootstrap samples, we create a mock set of particles by sampling (with replacement) from the actual set of particles in the halo; we then perform the full dynamical density estimate on the mock set of particles. We determined that 100 bootstrap samples were sufficient to achieve convergence on the 95% confidence interval; in Section 4, our results are shown with these uncertainties as a shaded band.
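The bootstrap itself is straightforward; a sketch, assuming the `dynamical_density` helper above:

```python
rng = np.random.default_rng(seed=0)
boot = []
for _ in range(100):
    idx = rng.integers(0, len(m), size=len(m))   # resample with replacement
    boot.append(dynamical_density(E[idx], j[idx], m[idx], Phi, centres, dr))
rho_lo, rho_hi = np.percentile(boot, [2.5, 97.5], axis=0)   # 95% band
```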
### Improving accuracy at apocentre and pericentre
The function in Eq. (3) has two integrable divergences located at the pericentre and apocentre of each orbit (Figure 1). Unless the bins are infinitesimally small, the probability \(p_{i,k}\) of finding the particle in a bin containing such a divergence may be misestimated. To correct for this, in these two bins we use an approximation scheme based on a local Taylor expansion of the potential. We define the effective potential as \(\Phi_{\rm eff}=\Phi+j^{2}/(2r^{2})\), and expand \(\Phi_{\rm eff}(r_{0}+\delta r)\) around \(r_{0}\), where \(r_{0}\) is the divergence point (pericentre or apocentre) of each orbit, i.e., a root of Eq. (3). We now consider, as an example, the case of a pericentre whose divergence \(r_{0}\) lies inside the \(k\)th bin (i.e., \((k-1)\Delta r<r_{0}<k\Delta r\)). The mean value of \(\operatorname{Re}\,\dot{r}^{-1}\) across the entire bin may be calculated as
\[\tilde{q}\equiv\frac{1}{\Delta r}\int_{r_{0}}^{k\Delta r}\left(E_{i}-\Phi_{\rm eff}(r)\right)^{-1/2}\mathrm{d}r\approx\frac{1}{\Delta r}\int_{r_{0}}^{k\Delta r}\left(-\left.\frac{\mathrm{d}\Phi_{\rm eff}}{\mathrm{d}r}\right|_{r_{0}}(r-r_{0})\right)^{-1/2}\mathrm{d}r. \tag{6}\]
Here we have also used the fact that \(\Phi_{\rm eff}(r_{0})=E_{i}\), by definition. We can furthermore approximate \(\left.\mathrm{d}\Phi_{\rm eff}/\mathrm{d}r\right|_{r_{0}}\approx\left.\mathrm{d}\Phi_{\rm eff}/\mathrm{d}r\right|_{r_{k}}\) to avoid having to calculate the exact location of the divergences; this gives a correction that is accurate to first order. The integration is then analytically tractable, giving
\[\tilde{q}\approx\frac{1}{\Delta r}\,\frac{2\left(E_{i}-\Phi_{\rm eff}(r_{k})\right)^{1/2}}{\left.\mathrm{d}\Phi_{\rm eff}/\mathrm{d}r\right|_{r_{k}}}. \tag{7}\]
This analytical estimate of the mean value is then used to represent the value of the probability density function within the pericentre bin, \(q_{i,k}\). The apocentre bin is treated in the same way, and both corrections are included before producing the normalized probability according to Eq. (4).
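In code, the first-order correction of Eq. (7) for a bin containing a turning point reduces to a one-line formula; taking the absolute value of the gradient to cover both the pericentre case (where \(\mathrm{d}\Phi_{\rm eff}/\mathrm{d}r<0\)) and the apocentre case is our assumption in this sketch.

```python
def turning_point_correction(E_i, Phi_eff_k, dPhi_eff_dr_k, dr):
    """Mean of Re[1/r_dot] over a bin containing a turning point, Eq. (7)."""
    return 2.0 * np.sqrt(max(E_i - Phi_eff_k, 0.0)) / (dr * abs(dPhi_eff_dr_k))
```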
There are two cases in which these corrections cannot be evaluated. One of them is when an orbit is unresolved (i.e. its probability function only spans one bin), since in that case pericentre and apocentre are coincident. As previously stated, when this occurs, the particle is given unit probability to be found within the single bin, and corrections are not required. The apocentre corrections are also ignored when the particle's apocentre falls outside of the radius of the 'reflecting wall' which serves as the boundary for the halo. Since the particles can be thought of as being reflected back once they hit the boundary, their radial paths are truncated at the location of the wall, and no apocentre corrections are required.
### Iterating the potential
The dynamical density profile given by Eq. (5) implies also a mass profile \(M(<r)\) and therefore a potential \(\Phi(r)\) through Eq. (1). However, the potential used in producing the density estimate was initialized directly using the particle radii from the original snapshot. The overall procedure, therefore, results in an inconsistent potential-density pair. The difference between the two mass distributions is especially evident in the inner regions because our potential is calculated without softening, and the pericentres of orbits can therefore reach radii closer to the centres of the haloes. To resolve this discrepancy, we iterate until a self-consistent density-potential pair is reached. Over the course of the iterations, the gravitational potential from the simulation is gradually transformed into the potential inferred from the dynamical density profile. This technique also removes any discontinuities in the derivatives of the potential at small radii due to the finite particle number.
Figure 1: The binned probability density implied by Eq. (3) evaluated for a typical particle (light-blue bins), with a bin size \(\Delta r=\epsilon/2\), compared with the analytic integrand (black line). The integrand is well behaved for most of the radial range of the orbit, and therefore well approximated by the binned density. However, it has two integrable divergences at pericentre and apocentre (here located at \(\sim 2.2\) kpc and \(\sim 8.7\) kpc, respectively). Even if the particle never reaches the centre of one of these extremal bins, it may still spend significant time within the bin. Capturing this effect correctly in the binned probability requires the special treatment explained in the text. The dark-blue shaded areas represent the analytical corrections added at the pericentre and apocentre for this orbit.
The iteration process involves a series of steps:
1. A dynamical density profile is first obtained as described in Sections 2.1 and 2.2.
2. The mass distribution implied by the dynamical profile is calculated according to \[M(<r_{j}+\Delta r/2)=\sum_{i=1}^{N}\sum_{k=1}^{j}m_{i}p_{i,k}.\] (8) The mass at the centre of the bin (\(r_{j}\)) is then obtained by averaging the mass at adjacent edges.
3. The new mass distribution is inserted into Eq. (1) to evaluate a new gravitational potential.
4. The angular momenta of the particles are assumed to be unchanged, and the energies are updated by keeping the radial action constant at first order (see below).
5. The cycle is repeated, starting from point (ii) and using the updated dynamical profile, until convergence in the dynamical profile is reached.
In step (iv), the updated energies are calculated by keeping the radial action of each particle constant to first order,
\[J_{r}\left(E_{\mathrm{new},i},j_{i},\Phi_{\mathrm{new}}\right)=J_{r}\left(E_ {\mathrm{old},i},j_{i},\Phi_{\mathrm{old}}\right)+\mathcal{O}(\Delta\Phi^{2}), \tag{9}\]
for each particle \(i\), where \(E_{\mathrm{new},i},\Phi_{\mathrm{new}}\) and \(E_{\mathrm{old},i},\Phi_{\mathrm{old}}\) are the specific energy and the potential after and before the iteration respectively, and \(\Delta\Phi=\Phi_{\mathrm{new}}-\Phi_{\mathrm{old}}\). We made this choice because actions stay constant in a potential which is adiabatically evolving towards a new configuration. If we interpret our system as slowly transforming into a new potential with each iteration, then this choice is justified.
The definition of the radial action is
\[J_{r}\left(E,j,\Phi\right)=\frac{2}{\pi}\int_{r_{\mathrm{peri}}}^{r_{\mathrm{apo}}}\sqrt{E-\frac{j^{2}}{2r^{2}}-\Phi(r)}\,\mathrm{d}r\,. \tag{10}\]
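On the same radial grid used throughout, the action integral can be discretized directly; bins outside the pericentre-apocentre range contribute nothing because the clipped square root vanishes there. A sketch:

```python
def radial_action(E_i, j_i, Phi, centres, dr):
    """Discretized radial action of Eq. (10)."""
    arg = E_i - j_i**2 / (2.0 * centres**2) - Phi
    return (2.0 / np.pi) * np.sqrt(np.clip(arg, 0.0, None)).sum() * dr
```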
With this in hand, we solve Eq. (9) to first order in the quantities \(\Delta\Phi\) and \(\Delta E_{i}=E_{\mathrm{new},i}-E_{\mathrm{old},i}\). By Taylor expanding, we find
\[\Delta E_{i}\approx\frac{\int_{r_{\mathrm{peri}}}^{r_{\mathrm{apo}}}\Delta\Phi(r)\left(E_{\mathrm{old},i}-\frac{j^{2}}{2r^{2}}-\Phi_{\mathrm{old}}(r)\right)^{-1/2}\mathrm{d}r}{\int_{r_{\mathrm{peri}}}^{r_{\mathrm{apo}}}\left(E_{\mathrm{old},i}-\frac{j^{2}}{2r^{2}}-\Phi_{\mathrm{old}}(r)\right)^{-1/2}\mathrm{d}r}=\langle\Delta\Phi\rangle, \tag{11}\]
i.e. the change in energy is equal to the average of the change in potential, weighted by the probability of finding the particle at a given radius. (At first order, the changes to the values of apocentre and pericentre of the orbit do not contribute to \(\Delta E\), and can therefore be neglected.)
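Since the weights in Eq. (11) are proportional to the unnormalised probabilities of Eq. (3), the energy update can reuse the same quantities; a sketch:

```python
def energy_update(E_old, j, Phi_old, dPhi, centres):
    """First-order energy shift of Eq. (11): time-weighted mean of dPhi."""
    arg = (E_old[:, None] - j[:, None]**2 / (2.0 * centres[None, :]**2)
           - Phi_old[None, :])
    w = np.where(arg > 0.0, 1.0 / np.sqrt(np.abs(arg)), 0.0)  # time per bin
    wsum = w.sum(axis=1)
    wsum[wsum == 0.0] = np.inf     # unresolved orbits: energy left unchanged
    return (w * dPhi[None, :]).sum(axis=1) / wsum
```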
The first iteration produces a significant change in the inner density distribution but after approximately 3 iterations, convergence in the dynamical profile is reached (i.e. the changes in the density profiles become significantly smaller than the bootstrap-determined uncertainties). We will discuss this further in Section 4.1.2 below.
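Putting the pieces together, one possible arrangement of steps (i)-(v) is sketched below; `occupation_probabilities` is a hypothetical helper returning the \(p_{i,k}\) of Eq. (4) (the `q`/`p` computation factored out of the earlier `dynamical_density` sketch).

```python
for _ in range(5):                                   # ~3 iterations converge
    p = occupation_probabilities(E, j, Phi, centres)           # Eqs. (3)-(4)
    M_edge = np.cumsum((m[:, None] * p).sum(axis=0))           # Eq. (8)
    M_ctr = 0.5 * (M_edge + np.concatenate(([0.0], M_edge[:-1])))
    Phi_new = np.cumsum(G * M_ctr / centres**2 * dr)           # Eq. (1)
    E = E + energy_update(E, j, Phi, Phi_new - Phi, centres)   # Eq. (11)
    Phi = Phi_new
rho = dynamical_density(E, j, m, Phi, centres, dr)
```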
## 3 The Simulation snapshots
We analyse a selection of seven snapshots drawn from cosmological zoom simulations of dark matter haloes spanning a wide range of masses, from \(\sim\)10\({}^{9}\)M\({}_{\odot}\) to \(\sim\)10\({}^{12}\)M\({}_{\odot}\) (see Table 1).
The five smallest haloes are part of the Engineering Dwarfs at Galaxy Formation's Edge (EDGE) project (Agertz et al., 2019; Rey et al., 2019, 2020; Orkney et al., 2021); the two largest haloes were taken from the wintergatan-gm project, which in turn uses the initial conditions described by Rey & Starkenburg (2021). Both suites of simulations assume a \(\Lambda\)CDM cosmology: EDGE adopts cosmological parameters based on data from Planck Collaboration et al. (2014) (\(\Omega_{m}=0.309\), \(\Omega_{\Lambda}=0.691\), \(H_{0}=67.77\) km s\({}^{-1}\)Mpc\({}^{-1}\)) with a box size of 50 Mpc, while wintergatan-gm uses cosmological parameters from Planck Collaboration et al. (2016) (\(\Omega_{m}=0.314\), \(\Omega_{\Lambda}=0.686\), \(H_{0}=67.27\) km s\({}^{-1}\)Mpc\({}^{-1}\)) with a box size of 73 Mpc. As previously stated, we consider the dark-matter-only simulations from these suites, i.e. they do not contain any baryonic components; hence steep cusps are expected in the central regions of the density profiles.
The selected haloes were re-simulated at two different resolutions; the particle mass ratio between the lower and the higher resolution runs is 64 (for EDGE) and 8 (for wintergatan-gm). Both suites of simulations are generated using the adaptive mesh refinement (AMR) code ramses (Teyssier, 2002). The mesh is refined whenever a grid cell contains more than 8 particles; consequently, the softening lengths are adaptive and we provide a softening scale estimate equal to the size of the smallest grid cell used for gravity calculations. We call _low resolution_ the simulations with a softening scale of 0.095 kpc (0.142 kpc for the wintergatan-gm haloes), and _high resolution_ the ones with a softening of 0.012 kpc (0.035 kpc for the wintergatan-gm haloes). _Ultra-high resolution_ runs with softening scale \(\sim\) 0.006 kpc are also available for some EDGE simulations. All the snapshots analysed in the current work are taken at the present day (\(z=0\)).
Simulation snapshots are loaded using pynbody (Pontzen et al., 2013). Before processing, each halo is centred using the shrinking-sphere method of Power et al. (2003); the central 1 kpc is used to calculate a centre of mass velocity, which is then subtracted from all particles. We also calculate a virial radius, \(r_{\mathrm{vir}}\), defined to be the radius at which the enclosed mean density is equal to 178 times the cosmic mean.
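A minimal loading and centring step with pynbody might look as follows; the snapshot path is a placeholder and the exact keyword arguments may differ between pynbody versions, so this should be read as an illustrative sketch rather than the pipeline actually used.

```python
import pynbody

s = pynbody.load('halo_snapshot_z0')      # placeholder path
s.physical_units()

# Shrinking-sphere centring (Power et al. 2003); pynbody's centring also
# removes a bulk velocity estimated from the central region.
pynbody.analysis.halo.center(s.dm, mode='ssc')

# Radius at which the enclosed mean density is 178x the cosmic mean.
r_vir = pynbody.analysis.halo.virial_radius(s, overden=178)
```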
All particles interior to the reflecting wall at the time of the snapshot are included in the calculations. Some of the selected haloes contain large substructures, especially in their outskirts; these are deliberately retained in our analysis in order to test the limits of the assumption of equilibrium. The reflecting boundary described in Section 2.1 was placed at 120 kpc for the haloes with mass \(\lesssim 5\times 10^{9}\)M\({}_{\odot}\). This is between 2 and 3 times the size of their virial radii, a choice which allows us to explore how the dynamical information affects the density distribution in their outer regions. The boundary for the two largest haloes was placed at 350 kpc, which is approximately the location of their virial radii, and was not extended to larger radii in this work because the 'zoomed' region of these haloes is only twice the virial radius, beyond which low resolution particles are present. For efficiency, the dynamical profiles of the two largest haloes are generated using only a randomly selected fraction (a third) of the particles.
While it is not possible to recreate precisely the in-simulation softening \(\epsilon\) with a spherical approximation, it is clear that the bin width \(\Delta r\) must be comparable to \(\epsilon\) in order for the potential to be meaningful. We found that our results were insensitive to the precise bin width chosen, provided that it is of this order, and therefore chose to fix \(\Delta r=\epsilon/2\). This choice of bin width is sufficiently small to allow investigation of the dynamically-inferred density profile close to the halo centre. We note that for \(r\lesssim 3\epsilon\equiv r_{\mathrm{conv}}\) the effect of spurious relaxation in simulations becomes important and a profile constructed through direct particle binning is poorly resolved. Detailed studies of convergence (e.g. Power et al., 2003; Gao et al., 2012; Ludlow et al., 2019) show that the value of \(r_{\mathrm{conv}}\) must be determined empirically for each simulation setup, and any relation to the softening length \(\epsilon\) is approximate; the scale is mainly dictated by the number of particles present in the innermost regions. Our comparisons of binned profiles
between high and low resolution simulations below confirm that \(r_{\rm conv}\sim 3\epsilon\) gives a sufficiently good approximation to the innermost reliable radius of the low resolution binned profiles.
## 4 Results
In this Section, we present and discuss the dynamical density profiles of our dark matter haloes. In each case, we calculate dynamical profiles from the low resolution snapshots and compare them with binned profiles from both low resolution and high resolution snapshots. The profiles are shown in Figures 2, 3 and 4 (for lowest-mass dwarf, intermediate-mass dwarf and Milky-Way-mass haloes respectively), alongside images of the haloes' dark matter density projected down the \(z\) axis. We compare our dynamical profiles (blue lines) to the traditional binned estimates from both the high and the low resolution snapshots (black and pink points respectively), which are plotted down to their estimated softening length (see Table 1). Inset panels show the inner density profile in greater detail.
Overall, the dynamical profiles (blue lines), obtained from the low resolution simulations, agree well with the low resolution binned profiles (pink points) for the majority of the radial extent of the haloes. The 95% bootstrap-determined uncertainties on the dynamical profiles are shown as shaded blue bands, and are significantly smaller than the 95% Poisson noise on direct binned estimates at the same resolution (pink error-bars). This follows from the fact that the particles in the original snapshot are now spread across multiple density bins, hence providing better statistics.
By dividing the total volume occupied by each halo into thin shells, we can also calculate the average radial velocities of the particles contained within the shells. These are shown for the low resolution simulations in the panels below the density profiles in Figures 2 - 4. These values will help us discuss below how well the assumption of equilibrium holds for each halo.
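The shell averages are simple to compute once particle radii and radial velocities are in hand; a sketch, with `edges` the shell boundaries:

```python
def shell_mean_vr(r, v_r, edges):
    """Mean radial velocity of particles in concentric shells."""
    k = np.digitize(r, edges) - 1
    vbar = np.full(len(edges) - 1, np.nan)
    for i in range(len(vbar)):
        sel = k == i
        if sel.any():
            vbar[i] = v_r[sel].mean()
    return vbar   # divided by v_vir in the figure panels
```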
We will first discuss the behaviour of the dynamical profiles in the inner regions (around or even interior to the traditional convergence radius; Section 4.1), then in the outer regions (around and beyond the virial radius; Section 4.2).
### Inner regions
The direct comparison of dynamical profiles (blue lines) with binned profiles from higher resolution simulations (black points) is of considerable interest: it addresses the question of whether our technique can partially correct for finite particle number in the innermost regions of the halo.
At radii below the approximate convergence radius of the low resolution binned profiles (\(r_{\rm conv}=3\epsilon\), indicated by the pink arrows in Figures 2, 3, and 4), our dynamical density cusps are steeper than the traditional binned profiles at the same resolution. This is particularly clear in the case of the Milky-Way-mass haloes (Figure 4). Comparing our results to the binned distribution of the high resolution simulations (black points), we see that the dynamical method is, in nearly all cases, able to predict the 'cuspier' behaviour of higher resolution simulations. Halo600 is an exception in which the dynamically predicted density is substantially lower than that in the high resolution simulation; Section 4.1.1 considers that case in some detail, and more broadly discusses caveats about making comparisons between low and high resolution simulations. Nonetheless, in the other cases studied, the dynamically predicted cusp extends below \(r_{\rm conv}\) of the low resolution simulations, where very few particles are present at the time of the snapshot. As well as being less biased than the binned profiles, our dynamical profiles also have lower numerical noise. On average across all haloes, the uncertainties at small radii (between \(\epsilon\) and \(r_{\rm conv}\)) are reduced by a factor of 12 compared to traditional binned estimates. Thus, our technique uses information about the entire phase-space of the particles to produce more precise central
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline
Halo & Figure & Resolution & \(\epsilon\) (kpc) & Particle Mass (\(\rm M_{\odot}\)) & Number of Particles & \(r_{\rm vir}\) (kpc) & Virial Mass (\(\rm M_{\odot}\)) & Structure \\ \hline
1445 & 2, top & Low & 0.095 & \(7.1\times 10^{4}\) & \(3\times 10^{4}\) & 41.7 & \(2\times 10^{9}\) & Substructures at large \(r\); \\
 & & High & 0.012 & \(1.1\times 10^{3}\) & \(2\times 10^{6}\) & 41.5 & & dynamical equilibrium \\ \hline
1459 & 2, bottom & Low & 0.095 & \(7.1\times 10^{4}\) & \(3\times 10^{4}\) & 41.4 & \(2\times 10^{9}\) & Substructures at large \(r\); \\
 & & High & 0.012 & \(1.1\times 10^{3}\) & \(2\times 10^{6}\) & 41.1 & & dynamical equilibrium \\
 & & Ultra-high & 0.006 & \(1.4\times 10^{2}\) & \(1\times 10^{7}\) & 41.1 & & \\ \hline
600 & 3, top & Low & 0.095 & \(7.1\times 10^{4}\) & \(8\times 10^{4}\) & 56.8 & \(5\times 10^{9}\) & Low res: recent merger, \\
 & & High & 0.012 & \(1.1\times 10^{3}\) & \(5\times 10^{6}\) & 56.2 & & disequilibrium (cusp). \\
 & & Ultra-high & 0.006 & \(1.4\times 10^{2}\) & \(4\times 10^{7}\) & 56.2 & & Higher res: equilibrium \\ \hline
605 & 3, middle & Low & 0.095 & \(7.1\times 10^{4}\) & \(7\times 10^{4}\) & 55.0 & \(5\times 10^{9}\) & Minimal substructure; \\
 & & High & 0.012 & \(1.1\times 10^{3}\) & \(4\times 10^{6}\) & 54.7 & & dynamical equilibrium \\ \hline
624 & 3, bottom & Low & 0.095 & \(7.1\times 10^{4}\) & \(7\times 10^{4}\) & 56.3 & \(5\times 10^{9}\) & Pre-merger; significant \\
 & & High & 0.012 & \(1.1\times 10^{3}\) & \(5\times 10^{6}\) & 56.2 & & disequilibrium \\ \hline
685 & 4, top & Low & 0.142 & \(2.3\times 10^{5}\) & \(5\times 10^{6}\) & 349.0 & \(1\times 10^{12}\) & Minimal substructure; dynamical equilibrium \\ \hline
715 & 4, bottom & Low & 0.142 & \(2.3\times 10^{5}\) & \(6\times 10^{6}\) & 358.4 & \(1\times 10^{12}\) & Minimal substructure; \\
 & & High & 0.035 & \(2.9\times 10^{4}\) & \(5\times 10^{7}\) & 357.3 & & dynamical equilibrium \\ \hline
\end{tabular}
\end{table}
Table 1: Properties (softening length, particle mass, number of particles, virial radius, virial mass, and brief comments on the density structure) of the seven haloes investigated in this work. The haloes can be grouped into 3 main categories based on their virial mass, from dwarf to Milky Way mass. The number of particles refers to the particles enclosed by each halo’s virial radius at \(z=0\).
density profiles which partially correct for the effects of softening and which are less subject to Poisson noise.
At radii just larger than \(r_{\rm conv}\), we notice a small but statistically significant density excess in both the binned and dynamical low resolution profiles when compared with the high resolution binned profiles. This excess only covers a few density bins and is more evident for some haloes (e.g. Halo605 and 624) than others; see the inset panels zoomed in on this radius in Figure 3. Since this feature is also present when using binned methods, it must be unrelated to the inclusion of dynamical information into the calculations. We therefore leave its investigation to a future study.
#### 4.1.1 The challenge of direct comparisons between differing resolutions
Overall, the improvement offered by dynamical profiles over binned profiles is significant: the uncertainties at small radii are significantly mitigated compared to binned estimates, making it a substantially more precise technique. Qualitatively, it is clear that the dynamical profiles reproduce steeper profiles which appear to be in agreement with higher resolution simulations within the 95% error bounds. However, quantifying how accurate the dynamical estimates are compared to the true density distributions (i.e. the density profiles that would be obtained from simulations of infinite resolution) is difficult for two reasons. The first is the problem of formulating a suitable comparison summary statistic; the second is the impact of small differences in halo formation and merger history on the final profile. We will describe each of these in turn.
Figure 2: Density profiles (left) and images of the dark matter density projected down the \(z\) axis (right) for our two lowest-mass dwarf haloes (\(M\sim 2\times 10^{9}\)M\({}_{\odot}\)). The dynamical density profiles obtained from the low resolution snapshots (blue lines) agree very well with both the low and high resolution binned profiles (pink and black points) for most of the radial extent of all the haloes. The largest variations between the dynamical and binned estimates are observed in the outer regions, beyond the virial radius, where large substructures in the outskirts cause spikes in the mass distribution. Any such substructures with mass greater than 3% of the mass of the main halo are shown by brown circles in the halo images, and by corresponding brown arrows in the dynamical profile plots. The panels below the density profiles show the variations in the average radial velocity of the particles contained within concentric shells as a fraction of the virial velocity, which can be used to quantify how close the low resolution halo is to equilibrium. The pink arrows indicate the radius corresponding to 3 times the value of the softening scale of the low resolution simulations (i.e. \(r_{\rm conv}\) for the low resolution binned profiles).
Figure 3: Same as Figure 2 but for the three intermediate-mass dwarf haloes (\(M\sim 5\times 10^{9}\)M\({}_{\odot}\)). Similarly to the other cases, the dynamical density profiles from the low resolution snapshots agree well with both binned profiles. Halo600 is an outlier since it recently had a merger close to the halo’s centre which disrupted the equilibrium in the inner regions; as a result the plot of \(\bar{v}_{\rm r}/v_{\rm vir}\) shows significant deviations from zero at small radii. Halo624 has a large substructure within its virial radius which will reach the centre of the main halo and merge with it in the next \(\sim 500\) Myrs. (The structure is found slightly closer to the centre in the high resolution simulation.) The significant disruption caused by this substructure to the halo’s equilibrium is also evident in the average radial velocity panel, but our dynamical method nonetheless recovers a sensible ‘smoothed’ density profile.
The most natural way to measure the accuracy of a low resolution density profile would be to construct a chi-squared test to decide whether the binned or dynamical profiles more accurately predict the high resolution result. However, the size of the statistical errors on the dynamical profile are substantially smaller than those on the binned profile, putting the dynamical profiles at an automatic disadvantage in such a test. Even if one were to artificially inflate the dynamical profile error estimates, the results would remain very sensitive to the precise radial range over which the statistic is calculated. The dynamical profiles clearly predict more accurate densities interior to \(r_{\rm conv}\), but outside this radius the situation is more nuanced. In particular, at large radii, the dynamical profiles' tendency to wash out substructure would lead to a heavy \(\chi^{2}\) penalty (as will be discussed in Section 4.2 below). There is therefore no straightforward quantitative measurement of the improvement offered by dynamical density profiles, despite the clear qualitative advantages in the cusp region.
The second challenge relates to recent events in the formation and merger history, and is most clearly seen in the case of Halo600 (shown at the top of Figure 3). As with the other examples, the gradient of the dynamical profile interior to \(r_{\rm conv}\) is steeper than the low resolution binned profile; however, unlike the other cases, the steepening in Halo600 is insufficient to reach agreement with the high resolution binned profile. The reason can be traced to the halo's recent history in the respective simulations. The low resolution version of Halo600 underwent a minor merger at \(z=0.03\) (\(\sim 70\) Myrs before present day). This merger only occurred in the low resolution version of the simulation. Although the mass of the merger is relatively small (\(\sim\)10\({}^{8}\)M\({}_{\odot}\), around 2% of the total host mass), its centre of mass before disruption is located within 1 kpc of the centre of mass of the main halo. By tracking the particles that formed the subhalo to \(z=0\), we find that they have traversed the halo from one side to the other, and remain in disequilibrium. The out-of-equilibrium behaviour is also visible as large fluctuations in the binned radial velocities as seen in the lower panel of the Halo600 plot in Figure 3. Despite this, note that the dynamical density profile still performs somewhat better than the binned profile.
Figure 4: Same as Figures 2 and 3 but for the two most massive (\(M\sim 10^{12}\)M\({}_{\odot}\)) out of all seven haloes. Similarly to the other haloes, the dynamical density profiles from the low resolution snapshots agree well with both the low and high resolution binned profiles. For efficiency, the dynamical profiles for these haloes were generated using only a randomly selected fraction (a third) of all the particles within the halo and therefore even smaller errors on the dynamical density profile are achievable in principle. In these examples, all substructures are small (less than 1% of the halo mass) and do not have a visible effect on the density profiles.
#### 4.1.2 Effect of potential iterations
Having established that dynamical profiles offer an accuracy improvement over binned profiles near the centres of haloes, albeit one that is hard to quantify, we now consider the effect of the iterative part of our algorithm (Section 2.3) in achieving this.
Figure 5 shows the effect that the iteration process outlined above has on the dynamical profile. After the iterations, the profile's central gradient becomes moderately steeper. This can be understood by considering that the particles previously located at larger radii are now allowed to extend further inwards compared to their original positions in the snapshot, hence increasing the density in the inner regions. Note that the increase in central density may appear to violate mass conservation, since the total mass of the halo should be unaltered. However we verified that the mass enclosed converges to the same value at the virial radius; the volume of the sphere inside \(r_{\rm conv}\) is just \(0.00003\%\) of the total volume inside the virial radius, and therefore a very small reduction in density across a large range of radii is able to provide the mass for an increased density cusp.
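This check amounts to comparing cumulative masses on the shared radial grid; a sketch, where `rho_dyn` and `rho_bin` are the iterated dynamical and directly binned profiles (names assumed for illustration):

```python
shell_vol = (4.0 * np.pi / 3.0) * ((centres + dr / 2)**3 - (centres - dr / 2)**3)
M_dyn = np.cumsum(rho_dyn * shell_vol)    # enclosed mass, dynamical profile
M_bin = np.cumsum(rho_bin * shell_vol)    # enclosed mass, binned profile
k_vir = np.searchsorted(centres, r_vir)
assert np.isclose(M_dyn[k_vir], M_bin[k_vir], rtol=0.01)
```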
Overall, we therefore conclude that the iterative component of the algorithm is important not just for self-consistency (as argued in Section 2.3) but also to achieve the increased densities interior to the binned profile's convergence radius. Given that we kept actions fixed (to first order) during the iterations, one can envisage them as adiabatically transforming away some numerical effects of softening.
#### 4.1.3 Comparison at ultra-high resolution
So far, we have applied our dynamical method to the _low_ resolution snapshots and compared our results against the binned profiles obtained from the high resolution versions of the simulations. In order to understand whether this improvement is independent of resolution, we now test the dynamical approach on the _high_ resolution simulations and compare the results to _ultra-high_ resolution snapshots.
Figure 6 shows the dynamical density profile calculated from the high resolution simulation of Halo1459 compared to the binned distribution from an ultra-high resolution simulation with \(\epsilon\simeq 6\) pc (half the softening length of the high resolution snapshots previously analysed). We take Halo1459 as an example, but similar results are observed for the other haloes.
All the conclusions drawn in the case of the low resolution dynamical profile are still valid when the code is applied to the high resolution snapshot: the dynamical density shows smaller uncertainties, a steeper cusp that extends further inwards and approximately follows the higher resolution binned profile, and a small density excess at \(r\)\(\sim\)\(r_{\rm conv}\) in the lower resolution profile. Overall, this confirms that the improvements obtained by adding dynamical information to the profiles continue even for increasingly precise simulations, making them resolution-independent.
In Figure 7 we show the dynamical profile obtained from the high resolution simulation of Halo600. When the dynamical code was previously applied to the low resolution simulation (top of Figure 3), we saw that the steepening in the cusp was insufficient to reach agreement with the high resolution binned profile. This is not the case when the dynamical profile is calculated from the high resolution snapshot: the cusp of the dynamical profile is entirely consistent with the ultra-high resolution binned profile. This provides further evidence that the disagreement between the dynamical and binned profiles at small radii in the low resolution case is a result of the disequilibrium caused by the merger event, which did not occur in the high resolution version.
### Outer regions
Having shown that the dynamical profile technique performs well in suppressing numerical noise at small radii (comparable to the convergence radius), we next consider its predictions at large radii (comparable to the virial radius \(r_{\rm vir}\)). At such large radii, finite particle number is unlikely to be a limiting factor in drawing physical conclusions and therefore the motivation for studying the dynamical profile is different. Specifically, we are interested in understanding the degree to which haloes may be considered equilibrium structures; departure from such equilibrium invalidates our assumptions and therefore should lead to an inaccurate profile.
Figure 5: Dynamical density profile before (yellow) and after (blue) the dynamical iteration process compared to the high resolution binned profile (black points), shown here for the example of Halo1459. The pink arrow marks the convergence radius of the low resolution simulation binned profile (which, for clarity, is not itself shown). The effect of the iterations is especially evident at small radii, where they act to make the central regions moderately denser, in better agreement with the high resolution profile.
Figure 6: Dynamical density profile (blue line) obtained from the _high_ resolution simulation of Halo1459, compared to the binned density profiles of the high (black points) and ultra-high (green points) resolution snapshots. The binned profile obtained from the low resolution snapshot is shown for reference (pink points). The black arrow indicates the approximate convergence radius of the high resolution binned profile (3\(\,\)\(\epsilon\)). The dynamical density profile from the high resolution simulation predicts the ultra-high resolution simulation well, underscoring how the method can be applied at any resolution to extract additional information.
The virial radius roughly defines the point past which most particles are no longer gravitationally bound to the halo, such that infalling particles from the halo's environment begin to dominate.
We are able to study the dynamical profiles beyond \(r_{\rm vir}\) for dwarf-scale haloes, since the zoom region extends several times further out. Beyond the virial radius we find, as expected, that the dynamical profiles are typically inaccurate; see Halo1445 and 1459 in Figure 2 for particularly clear cases.
This provides one clear signature of out-of-equilibrium dynamics. However, another way to measure departures from equilibrium is via the binned average radial velocities of the particles (\(\bar{v}_{r}\)), which should be consistent with zero in equilibrium. Measured values of \(\bar{v}_{r}\) are shown in the panels below the density profiles in Figures 2, 3, and 4. As expected, these values deviate strongly from zero outside the virial radius, confirming our interpretation above. However, more surprisingly, the mean velocity values deviate from zero even _interior_ to the virial radius, in regions where the binned and dynamical profiles fully agree (e.g. in Halo600, 605, 1459 over the radial range \(1<r<40\) kpc). The root-mean-square deviation of the radial velocities of all haloes (excluding Halo624) in the region \(r<r_{\rm vir}\) is of order \(\sim 5\%\) of the virial velocity. These deviations are statistically significant, and yet do not appear to have a significant effect on the overall density structure, which is in good agreement with the binned estimates. This suggests that the dynamical profiles are robust to even significant violations of their equilibrium assumption.
#### 4.2.1 The role of substructures
Although dynamical profiles remain robust despite the existence of smooth inflows detectable well interior to the virial radius, a more difficult challenge is posed by substructures. Most haloes have spikes in the _binned_ density distribution at certain radii: for Halo600, 1445, and 1459 (Figure 2, and top of Figure 3) these can be seen beyond the virial radius at \(r\sim 90-100\) kpc, while for Halo624 (bottom of Figure 3) we see them much closer to the centre at \(r\sim 10-20\) kpc. We refer to the locations of these features as \(r_{\rm spike}\). We verified that these local density spikes are indeed caused by substructures (see brown circles in the haloes density images in Figures 2, 3, 4), which each contain between 3% and 9% of the mass of the main halo. All the other substructures present within the reflecting boundary have masses below 0.5% of the main halo's mass.
The dynamical density profile does not reproduce spikes associated with substructure; by design, it smears them out along their orbit without taking into account the self-binding of the substructure. This leads to systematic differences between the binned and dynamical profiles, since the spike is smoothed out while conserving the total mass. This effect is especially evident outside the virial radius in Halo1445 and 1459 (Figure 2). In these cases, substructures (indicated by brown arrows at the appropriate radii on the density plots) coincide with significant disagreements between binned and dynamical halo profiles.
Halo624 contains a large substructure of mass \(\sim\)\(1.4\times 10^{9}\)M\({}_{\odot}\) within its virial radius (at \(r\)\(\sim\)20-25 kpc). This is clearly visible in the density image at the bottom of Figure 3. The substructure will reach the centre of the main halo and merge in the next \(\sim 500\) Myrs (based on its estimated infall velocity at \(z=0\)), and the disruption to the halo's equilibrium caused by the presence of substructure is also evident in the large deviations from zero in the average radial velocity panel. Despite this, the dynamical profile still faithfully represents the density distribution at radii between the centre of the halo and the location of the substructure. This shows that the effects of the dark matter spike are localised to the area around the substructure, and our method can represent the correct density distribution in other regions of the halo.
Halo605 provides an example with no large substructures present within the entire volume analysed. Despite fluctuations of the binned mean velocity, the dynamical profile agrees with the binned profile up to radii of 100 kpc which is around \(2r_{\rm vir}\). Taken with the discussion above, this counterexample strongly suggests that substructures, rather than smooth radial flows, are the dominant factor in determining whether binned and dynamical profiles differ significantly, and that the effect of substructures on the profile is always localised.
#### 4.2.2 Effect of the reflecting boundary
As described in Section 2.1, the dynamical density profile requires an outer boundary condition. We have assumed a perfectly reflecting wall, which is equivalent to assuming that the particles flowing inwards across the boundary are exactly balanced by the flux outwards, in keeping with our broader assumption of dynamical equilibrium. However, there remains the freedom to move the reflecting wall to an arbitrary location. We carried out a number of experiments to determine the effect of this choice. If, for example, the boundary is placed inside the virial radius, we found that the dynamical density profile is insensitive to the particular choice of location. However, in order to probe the outer parts of the halo, the results above were all presented with the boundary outside the virial radius. In this case, there is more sensitivity to the particular choice of location.
An example is shown in Figure 8 for Halo605. As usual, the binned profile is shown by pink points with error bars while dynamical profiles are represented as lines. Here, however, we show two alternative dynamical profiles: one with the reflecting boundary moved inwards to 100 kpc (\(\simeq 2\) times the virial radius, as previously adopted, and illustrated here with a blue line) and one with the reflecting boundary moved outwards to 200 kpc (\(\simeq 4\) times the virial radius, illustrated with a grey line). This shift causes the dynamical profile to deviate
Figure 7: Same as Figure 6 but for Halo600. The dynamical profile from the high resolution simulation of this halo shows a steep cusp consistent with the ultra-high resolution binned profile. The high resolution simulation, unlike the low resolution version, did not recently undergo a merger close to the halo’s centre. This provides further evidence that the disagreement between the dynamical and binned profiles seen at small radii in the low resolution case is due to disequilibrium caused by the merger event.
from the binned density in the range \(r_{\rm vir}<r<2r_{\rm vir}\), where there was previously agreement.
The change is caused by particles that, at the time of the snapshot, are exterior to \(2r_{\rm vir}\) but infalling such that they spread to lower radii when the equilibrium assumption is imposed. The binned profile shows a 'kink' at \(\simeq 100\,\)kpc which means that, in this particular case, there is a relatively large mass in such infalling particles. When the reflecting wall is present at \(2r_{\rm vir}\), these particles are not even considered and are therefore safely isolated from affecting the profile.
In a sense, moving the reflecting wall to increasingly large radii provides a prediction of the future profile, since it extrapolates to a time when far-out particles have been able to fall into the inner regions. However, we did not study to what extent this can actually be used to make meaningful predictions and we caution that the actual process via which infalling particles relax into virial equilibrium is unlikely to be fully captured; in effect, our algorithm assumes conservation of their adiabatic invariants which is unlikely to be correct in detail.
For practical purposes, the most conservative choice of reflecting wall boundary is at the virial radius, but our results show that it is entirely possible to obtain accurate profiles out to twice the virial radius. Beyond this, dynamical profiles with extended radial range may be of interest for understanding the accretion processes of haloes and 'splashback' features (Diemer and Kravtsov, 2014; Adhikari et al., 2014; More et al., 2015; Shin and Diemer, 2023; Lucie-Smith et al., 2022), something we will investigate in the future.
## 5 Conclusions and Discussion
We presented a new method to estimate spherically-averaged densities in cosmological dark matter haloes. Instead of binning the particles in a snapshot by radius, which is the most obvious and prevalent approach, we use the velocity information in the snapshot to 'smear' each particle along a trajectory, substantially reducing Poisson noise. Such a method has been proposed before (Read and Gilmore, 2005; Pontzen and Governato, 2013), but our work is the first systematic investigation of the approach. Additionally, we derive new corrections to take into account the integrable singularities at apocentre and pericentre, and introduce an iterative process to obtain a self-consistent potential-density pair. After iteration, we obtain central density estimates which (except in one case, Halo600, where a recent merger has occurred) follow the trend set by higher-resolution simulations. The agreement persists interior to the binned profile convergence radius, and all the way down to the simulation softening length. This highlights how our technique can squeeze extra information about the central regions of haloes from existing simulations.
In the outer regions, the dynamical profiles continue to agree with the binned profiles even out to several times the virial radius, provided that no substructures are present. If substructures are present, the assumption of equilibrium is locally broken and the profiles in the vicinity of the substructure are 'smoothed' relative to the binned profiles. Nonetheless, the overall profiles remain accurate. Eventually, at approximately \(r\sim 4r_{\rm vir}\), effects from the haloes' environments start to dominate, bringing the haloes too far out of equilibrium for the dynamical profiles to give meaningful density estimates. Including particles from these distant halo outskirts can produce changes to the dynamical profiles, sometimes even at radii below the virial radius. This is not a surprising result since the particles at large radii will eventually fall into the halo at future times in the simulation, and the dynamical approach is extrapolating the orbits of these particles accordingly. However, whether the resulting profile can be considered a 'prediction' of the growth of the dark matter distribution at later times remains to be investigated.
These effects in the outer parts of the halo relate to the departure from perfect equilibrium (or phase-mixing), which is one of two key assumptions underlying the method. The second assumption is that the potential is spherically symmetric; this assumption is, in fact, broken by all our simulated halos, since they have triaxial equipotential surfaces. The fact that the dynamical profiles are accurate despite this broken assumption warrants further discussion.
Orkney et al. (2023) estimated the shapes of the five least massive dark matter haloes studied in this work by calculating the intermediate-to-major and minor-to-major axial ratios (\(b/a\) and \(c/a\)) up to approximately 20 kpc in radius. The exact shape of each halo is not constant with radius: the \(c/a\) ratio for all the haloes varies within the interval 0.4-0.8 (ratios of exactly 1 indicate perfect sphericity). The DMO haloes are generally the least spherical near their centre, becoming increasingly spheroidal at radii beyond the cusp (\(\gtrsim 1\) kpc). Nevertheless, the dynamical density profiles are able to correctly represent the density distributions over the entire radial extent of the haloes.
The nature of the particles' orbits in an aspherical system is very different from the orbits that would be observed in a spherically-averaged version of the same potential. In the spherical case the angular momentum of individual particles is always constant; this is not the case in aspherical systems where only the total angular momentum of the entire system is conserved. This allows specific types of orbits (which would not be allowed in a spherical potential) to exist, such as box orbits which plunge through the centre of the halo. Therefore, the fact that we are able to infer reliable results about the haloes' properties using only an artificial version of the dynamics which does not correspond to the real trajectories of the particles is not a straightforward outcome.
However, such an outcome was previously predicted by relying on
Figure 8: Zoom into the outer regions of the dynamical profile of Halo605 (middle of Figure 3) when the reflecting boundary is placed at 100 kpc (blue line) and then moved to 200 kpc (grey line) compared to the low resolution binned profile (pink points). The dynamical profile agrees well with the binned one when the boundary is placed anywhere up to 100 kpc, around twice the virial radius, but differs once contributions from particles out to 200 kpc are included in the calculations. These discrepancies propagate inwards to smaller radii, even below the virial radius (55 kpc, indicated by the vertical dashed line). This behaviour reflects our algorithm’s extrapolation of how particles and substructures in the outskirts, while currently unbound, will ultimately fall into the halo at later times in the simulation, altering the density distribution.
having a distribution function of particles in equilibrium (Pontzen et al., 2015). For every particle that is on an orbit losing angular momentum, there must be another particle on an orbit gaining angular momentum. To put it another way, the net flux of particles through the spherical action space must be everywhere zero, and so in a statistical sense, averaged across all particles, the spherical orbits remain a good approximation. For a more technical discussion, see Pontzen et al. (2015). The present work provides additional evidence that this mapping from a real triaxial system onto an effective spherical system is able to give accurate insights into dark matter halo structure. That said, the dynamical density method could be readily extended beyond the assumption of spherical symmetry, similarly to other mass modelling techniques (Schwarzschild, 1979; Syer & Tremaine, 1996).
Overall, our dynamical method for the evaluation of dark matter density profiles is a powerful tool which can represent the correct mass distribution even when its fundamental assumptions are partially broken, making it applicable to a wide range of systems.
However, dark matter halos in the real universe have potentially been altered by baryonic effects, something which we have not investigated at all in the present paper. In forthcoming work, we will apply our dynamical density code to hydrodynamical simulations. Adding baryons to the simulations will likely alter the shape of the profile's inner regions, transforming the cusp into a flatter core. At a technical level, the gravitational potential can no longer be made fully self-consistent with the dark matter density distribution, and the potential will need to be evaluated directly from the snapshot for the baryonic component. The iterative procedure that we have outlined will therefore need to be refined before we can use it in such cases.
## Acknowledgements
CM would like to thank the GMGalaxies team at UCL for useful discussions. CM is supported by the Science and Technology Facilities Council. AP is supported by the Royal Society. JLS acknowledges the support of the Royal Society (URF\R1\191555). JPR is supported by the Beecroft Fellowship funded by Adrian Beecroft. OA acknowledges support from the Knut and Alice Wallenberg Foundation and the Swedish Research Council (grant 2019-04659). This study was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 818085 GMGalaxies). This work was performed in part using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. This work was partially enabled by funding from the UCL Cosmoparticle Initiative.
## Author contributions
The contributions from the authors are listed below using keywords based on the CRediT (Contributor Roles Taxonomy) system.
**CM:** investigation; methodology; software; formal analysis; visualisation; writing - original draft, review & editing.
**AP:** conceptualization; methodology; validation and interpretation; supervision; resources; writing - review & editing.
**JLS:** supervision; writing - review & editing.
**MPR:** data curation; writing - review & editing.
**JIR:** methodology; data curation; writing - review & editing.
**OA:** writing - review & editing.
## Data availability
Data is available upon reasonable request. The code used to calculate the dynamical density profiles is publicly available on GitHub (repository: dynamical_density_profiles).
| We present a new method for computing dark matter halo density profiles from simulations. Each particle is 'smeared' along its orbit, yielding a dynamical profile averaged over a dynamical time which, unlike traditional binned estimates based on instantaneous particle positions, has substantially reduced Poisson noise. |
2309.12523 | Non-locality of conjugation symmetry: characterization and examples in
quantum network sensing | Some quantum information processing protocols necessitate quantum operations
that are invariant under complex conjugation. In this study, we analyze the
non-local resources necessary for implementing conjugation-symmetric
measurements on multipartite quantum networks. We derive conditions under which
a given multipartite conjugation can have locally implementable symmetric
measurements. In particular, a family of numbers called the ``magic-basis
spectrum'' comprehensively characterizes the local measurability of a given
2-qubit conjugation, as well as any other properties that are invariant under
local unitary transformations. We also explore the non-local resources required
for optimal measurements on known quantum sensor networks by using their
conjugation symmetries as a guide. | Jisho Miyazaki, Seiseki Akibue | 2023-09-21T22:52:29 | http://arxiv.org/abs/2309.12523v2 | # Non-locality of conjugation symmetry: characterization and examples in quantum network sensing
###### Abstract
Some quantum information processing protocols necessitate quantum operations that are invariant under complex conjugation. In this study, we analyze the non-local resources necessary for implementing conjugation-symmetric measurements on multipartite quantum networks. We derive conditions under which a given multipartite conjugation can have locally implementable symmetric measurements. In particular, a family of numbers called the "magic-basis spectrum" comprehensively characterizes the local measurability of a given 2-qubit conjugation, as well as any other properties that are invariant under local unitary transformations. We also explore the non-local resources required for optimal measurements on known quantum sensor networks by using their conjugation symmetries as a guide.
_Keywords_: antiunitary symmetry, entanglement, real quantum theory, quantum parameter estimation, quantum network sensing
###### Contents
* 1 Introduction
* 2 Background
* 2.1 Antiunitary operators
* 2.2 Eigenvectors of antiunitary operators
* 2.3 Matrix representation
* 3 Local unitary equivalence classes
* 3.1 LU-equivalence classes of two-qubit conjugations
* 3.2 No bias in LU-invariant properties of two-qubit conjugations
* 4 Measurability of conjugations
* 4.1 Prod-measurable conjugations
* 4.1.1 Prod-measurable two-qubit conjugations
* 4.2 Sep-measurable conjugations
* 4.3 Sep-unmeasurable conjugations
* 4.3.1 (1,1,1,1): collective spin flips
* 4.3.2 (1,-1,-1,-1): conjugate swap
* 5 Conjugations in quantum sensor networks
* 5.1 Introduction to imaginarity-free estimation
* 5.2 Real informationally complete eigenframes
* 5.3 Example: average phase estimation
* 5.4 Example: antiparallel model
* 6 Conclusion
* A Local measurability
* A.1 Product eigenbases of Prod-measurable conjugations (theorem 3)
* A.2 Conditions for Prod-measurability
* A.2.1 Proof of corollary 3 (total-normality criterion)
* A.2.2 Proof of corollary 4
* A.3 Proof of theorem 3
* B Conjugations made of antiunitaries
* C Entanglement in eigenframes of two-qubit conjugations
* C.1 Proof of theorem 6
* C.2 Proof of theorem 5
* C.3 Eigenframes of conjugate swap with minimum entanglement
* D Local tomography of real pure states
* E Bosonic sensor networks
## 1 Introduction
Quantum theory formalized on real Hilbert spaces has been a longstanding subject of research. The original idea traces back to Stueckelberg's work in the 1960s [1]. Since then, researchers have recognized several disparities between real and complex quantum theories.
The disparity becomes particularly evident when comparing their multipartite systems. A fundamental question of interest has been whether imaginary numbers are indispensable in quantum theory. Recent advancements in local state discrimination [2, 3] and non-locality [4, 5] have shed light on this issue. Additionally, the impact of restricting systems to real multipartite scenarios has been observed in bilocal tomography [6, 7, 8, 9], non-monogamous entanglement [10, 11], and most recently, in field-dependent entanglement [12].
"Real" quantum systems are not only purely mathematical abstractions; they manifest naturally in various quantum information processing scenarios. Wootters demonstrated that constraining quantum systems to the real domain facilitates optimal information transfer [13]. It is possible to embed complex quantum states into a real system for simulating unphysical transformations [14, 15] and for calculating entanglement measures efficiently [16, 17]. Of particular relevance to this paper is [18], in which it is observed that diverse metrology schemes, including the renowned phase estimation [19, 20, 21], exclusively rely on real subspaces within the complex Hilbert space. Consequently, these schemes have been termed "imaginarity-free" estimations.
In the realm of real quantum theory, composite systems are defined as tensor products of real subsystems, thereby ensuring an equitable comparison with their complex counterparts. This perspective entails considering a specific real subspace within multipartite complex Hilbert spaces. However, it is important to note that, in general, such a real subspace cannot be represented as a mere composition of real subspaces. For example, in a two-qubit system, the real span of the magic basis [22, 23, 24] consists solely of maximally entangled vectors. Through leveraging the mathematical tools developed in the context of
real quantum theory, the investigation of novel real subspaces is expected to expand the prospects and enhance the efficacy of quantum information processing.
In particular, the non-local structure of real subspaces, as exemplified by the magic basis, has remained unexplored until now, despite its significance in practical information processing scenarios. Indeed, when adapting imaginarity-free estimation [18] to a network of distributed sensors [25], comprehending the non-local structure of the real subspace becomes pivotal in determining the consumption of communication resources.
In this article, we analyze the non-local structure of real subspaces through the lens of associated conjugations. Conjugations are specific antiunitary operators on complex Hilbert spaces, and they are in one-to-one correspondence with the real subspaces symmetric under their action. Although conjugation is not a physical operation, its mathematical structure resembles that of unitary operations. We hence explore the non-local structure of conjugations by exploiting the non-locality of unitary operations.
First, we examine the local unitary (LU) equivalence classes of multipartite conjugations. By identifying distinct operators that differ only locally, their non-local properties become more apparent. A related work is the Kraus-Cirac decomposition of two-qubit unitary gates [26], which yields clear LU-equivalence classes. We employ similar techniques to obtain a canonical decomposition of two-qubit conjugations and achieve a complete characterization of their LU-equivalence classes. However, the characterization for higher-dimensional systems remains an open question.
Second, we characterize the non-locality of a real subspace through measurements symmetric under the associated conjugation. We focus on the conjugation-symmetric rank-1 positive-operator-valued measures (POVMs). In order to describe the communication resources required for their distributed implementation, we classify the POVMs as product, separable, or entangled measurement. We identify conjugations with symmetric non-product separable measurements. Such conjugations do not exist in a two-qubit system, which implies that if a rank-1 POVM is symmetric under a conjugation \(\theta\) and implementable by local operations and classical communication, we can find a \(\theta\)-symmetric and rank-1 POVM implementable only by non-adaptive local operations. Additionally, we present several conditions for a conjugation to have symmetric product measurements. Conjugations associated with product real systems are not the only examples of this kind.
Furthermore, we show that entanglement of conjugation-symmetric measurements on two-qubit systems can be characterized by canonical decomposition of two-qubit conjugations. For given two-qubit conjugation, we identify a conjugation-symmetric rank-1 POVM that minimizes the average entanglement of POVM elements and obtain a closed formula for the minimum value. The minimum average entanglement is a lower bound of the state-entanglement required for implementing the measurement.
Conjugation-symmetric measurements play a pivotal role in imaginarity-free estimation [18], a method of quantum parameter estimation. In particular, implementing a optimal measurement with low entanglement consumption is desirable when estimating parameters encoded in a network of quantum sensors. Zhou et al. [27] demonstrated that optimal LOCC measurements always exist for any single-parameter estimation from multipartite pure states. However, finding optimal measurements for multiparameter estimations typically requires numerical calculations [28, 29], and as yet there is no algorithm to simultaneously optimize precision and entanglement consumption. For this reason, surveys on optimal measurements for multiparameter estimation on sensor networks have been restricted to particular examples [20, 30, 31, 32, 33, 25]. Our analysis on the non-locality of real subspaces makes it possible to determine the existence of an optimal product measurement for a given imaginarity-free estimation. Finally, we provide additional insights into how conjugation-symmetric measurements can be fully utilized in imaginarity-free estimations on sensor networks.
## 2 Background
Let us introduce the notation used throughout this paper. We use capital alphabetical characters \(A,\ M,\ U,\ V,\ldots\) to denote matrices. Linear operators are denoted by capitals with circumflexes \(\hat{U},\ \hat{V},\ldots\). Bases of Hilbert spaces refer to orthonormal ones and are denoted by small Greek letters \(\psi,\eta,\zeta,\mu,\ldots\). Among the bases, \(\zeta\) denotes the computational basis of any Hilbert space in question, and \(\mu=\{|\mu_{1}\rangle,\ |\mu_{2}\rangle,\ |\mu_{3}\rangle,\ |\mu_{4}\rangle\}\) denotes the magic basis of a two-qubit system [22] defined by
\[|\mu_{1}\rangle=|\Psi_{+}\rangle,\ |\mu_{2}\rangle=i|\Psi_{-}\rangle,\ |\mu_{3} \rangle=i|\Phi_{+}\rangle,\ |\mu_{4}\rangle=|\Phi_{-}\rangle, \tag{1}\]
where \(|\Psi_{\pm}\rangle\) and \(|\Phi_{\pm}\rangle\) are Bell-basis vectors,
\[|\Psi_{\pm}\rangle:=\frac{|00\rangle\pm|11\rangle}{\sqrt{2}},\ |\Phi_{\pm} \rangle:=\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}. \tag{2}\]
The matrix representation of a linear operator \(\hat{U}\) in a basis \(\psi\) is denoted by \([\hat{U}]^{\psi}\).
### Antiunitary operators
An operator \(\Theta:\mathcal{H}\rightarrow\mathcal{H}\) on a Hilbert space \(\mathcal{H}\) is said to be antilinear if it satisfies \(\Theta(c_{1}|\psi_{1}\rangle+c_{2}|\psi_{2}\rangle)=c_{1}^{*}\Theta|\psi_{1}\rangle+c_{2}^{*}\Theta|\psi_{2}\rangle\) for any pair of vectors \(|\psi_{1}\rangle\), \(|\psi_{2}\rangle\) and any pair of complex numbers \(c_{1}\), \(c_{2}\). The Hermitian adjoint of an antilinear operator is defined by \(\langle\psi_{1}|(\Theta^{\dagger}|\psi_{2}\rangle)=\langle\psi_{2}|(\Theta|\psi_{1}\rangle)\). If \(\Theta^{\dagger}\) equals the inverse \(\Theta^{-1}\), \(\Theta\) is called antiunitary. Hermitian (in the sense of \(\Theta^{\dagger}=\Theta\)) antiunitary operators are called conjugations. In this article, antilinear operators are represented by capital theta \(\Theta\), while small theta \(\theta\) is used for stressing that the operator is a conjugation. We consider finite-dimensional Hilbert spaces in this article.
Any conjugation is a complex conjugation in a non-unique reference basis. If \(\{|\psi_{j}\rangle\}_{j=1,\ldots,\dim\mathcal{H}}\) is a reference basis of \(\theta\), so is the basis \(\{R|\psi_{j}\rangle\}_{j=1,\ldots,\dim\mathcal{H}}\), where \(R\) is a real orthogonal matrix.
For example, two different reference bases of a two-qubit system,
\[\{|00\rangle,\ |01\rangle,\ |10\rangle,\ |11\rangle\}, \tag{3}\] \[\{|\Psi_{+}\rangle,\ |\Psi_{-}\rangle,\ |\Phi_{+}\rangle,\ |\Phi_{-} \rangle\}\,, \tag{4}\]
define the same conjugation.
The (direct) tensor product \(\Theta_{A}\otimes\Theta_{B}\) of antiunitaries \(\Theta_{A}\) on \(\mathcal{H}_{A}\) and \(\Theta_{B}\) on \(\mathcal{H}_{B}\) is defined by
\[\Theta_{A}\otimes\Theta_{B}\sum_{j}c_{j}|\psi_{j}^{A}\rangle\otimes|\psi_{j}^ {B}\rangle:=\sum_{j}c_{j}^{*}(\Theta_{A}|\psi_{j}^{A}\rangle)\otimes(\Theta_{ B}|\psi_{j}^{B}\rangle), \tag{5}\]
for a vector \(\sum_{j}c_{j}|\psi_{j}^{A}\rangle\otimes|\psi_{j}^{B}\rangle\in\mathcal{H}_{A }\otimes\mathcal{H}_{B}\). \(\Theta_{A}\otimes\Theta_{B}\) is again an antiunitary. This definition of the tensor product leads to \((\hat{U}_{A}\Theta_{A})\otimes(\hat{U}_{B}\Theta_{B})=(\hat{U}_{A}\otimes\hat {U}_{B})(\Theta_{A}\otimes\Theta_{B})\), where \(\hat{U}_{A}\) and \(\hat{U}_{B}\) are linear operators.
### Eigenvectors of antiunitary operators
An eigenvector of an antilinear operator \(\Theta\) is a vector \(|\eta\rangle\) such that \(\Theta|\eta\rangle=c|\eta\rangle\) holds for some complex constant \(c\). Unlike for linear operators, the eigenvalues of an antilinear operator form circles in the complex plane since if \(c\) is an eigenvalue of \(\Theta\), \(\Theta e^{i\phi}|\eta\rangle=e^{-i\phi}\Theta|\eta\rangle=e^{-2i\phi}ce^{i \phi}|\eta\rangle\).
An antilinear operator does not necessarily have an eigenvalue. Moreover, even if an antilinear operator has an eigenvalue, it does not necessarily have an eigenbasis. An antilinear operator has an eigenbasis if and only if it is Hermitian.
A representative example of antiunitaries that does not have any eigenvalue is the two-dimensional spin flip. The spin-flip operator is defined by
\[\Theta_{f}:=i\hat{\sigma}_{Y}\theta_{\zeta}, \tag{6}\]
where \(\theta_{\zeta}\) is complex conjugation in the computational basis and \(\hat{\sigma}_{Y}\) is the Pauli-Y operator. A spin flip is known to operate as a universal-NOT gate, which takes any single qubit state to its orthogonal one [34]. Therefore, no state is invariant under the spin-flip operation.
A set of vectors \(\{|f_{j}\rangle\}_{j\in J}\) on a Hilbert space \(\mathcal{H}\) satisfying
\[\sum_{j\in J}|\langle x|f_{j}\rangle|^{2}=\langle x|x\rangle\qquad(\forall|x \rangle\in\mathcal{H}), \tag{7}\]
is called a _frame_ in harmonic analysis. In the wording of quantum information theory, a vector set \(\{|f_{j}\rangle\}_{j\in J}\) is a frame if and only if the set of rank-1 operators \(\{|f_{j}\rangle\langle f_{j}|\}_{j\in J}\) is a POVM. A basis of a Hilbert space is a special kind of frame whose vectors do not overlap each other.
This article concerns the following frames related to a given conjugation.
**Definition 1** (eigenframe): _A frame is called an eigenframe of a conjugation \(\theta\) if every one of its components is an eigenvector of \(\theta\)._
A single conjugation has many different eigenframes. For example, the bases (3) and (4) are eigenframes of the same conjugation \(\theta_{\zeta}\), i.e., the complex conjugation in the two-qubit computational basis.
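As a quick numerical illustration (a minimal numpy sketch of our own, not code from the paper), one can check both claims at once: every Bell vector (2) has real components in the computational basis, so \(\theta_{\zeta}\) fixes it, and the four rank-1 projectors sum to the identity as the frame condition (7) requires.

```python
import numpy as np

# Bell vectors (2): all real in the computational basis, hence fixed by theta_zeta.
s = 1 / np.sqrt(2)
bell = np.array([[s, 0, 0, s],    # |Psi_+>
                 [s, 0, 0, -s],   # |Psi_->
                 [0, s, s, 0],    # |Phi_+>
                 [0, s, -s, 0]])  # |Phi_->
for v in bell:
    assert np.allclose(v.conj(), v)                    # theta_zeta |v> = |v>
completeness = sum(np.outer(v, v.conj()) for v in bell)
assert np.allclose(completeness, np.eye(4))            # frame condition (7)
```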
### Matrix representation
An indispensable tool for analyzing conjugations is their matrix representations. Let \(\psi:=\{|\psi_{j}\rangle\}_{j=1,\ldots,d}\) be a basis of a \(d\)-dimensional Hilbert space \(\mathcal{H}\). A \(d\times d\) matrix representation \([\Theta]^{\psi}\) of an antilinear operator \(\Theta\) on \(\mathcal{H}\) in this basis is defined by
\[[\Theta]^{\psi}_{jk}:=\langle\psi_{j}|\Theta|\psi_{k}\rangle. \tag{8}\]
When \([\Theta]^{\psi}\) is regarded as a linear operator, it satisfies
\[\Theta=[\Theta]^{\psi}\theta_{\psi}, \tag{9}\]
where \(\theta_{\psi}\) is the complex conjugation in the reference basis \(\psi\). The matrix \([\Theta]^{\psi}\) is unitary and symmetric if and only if \(\Theta\) is antiunitary and Hermitian, respectively.
Two matrix representations of an antilinear operator in different bases are related by a unitary congruence transformation. Let \(\eta:=\{|\eta_{j}\rangle\}_{j=1,\ldots,d}\) be another basis of \(\mathcal{H}\), and \(V\) represent the basis transformation,
\[|\psi_{k}\rangle=\sum_{j}V_{jk}|\eta_{j}\rangle.\qquad(k=1,\ldots,d). \tag{10}\]
Then, we obtain \([\Theta]^{\eta}\) from \([\Theta]^{\psi}\) by
\[[\Theta]^{\eta}_{jk}=\left[V[\Theta]^{\psi}V^{\top}\right]_{jk}, \tag{11}\]
where \(\top\) represents transposition.
Hermitian antilinear operators have symmetric matrix representations. A symmetric matrix \(A\) can be diagonalized by performing unitary congruence transformations,
\[A=V\ \mathrm{diag}(\lambda_{1},\ldots,\lambda_{\dim\mathcal{H}})\ V^{\top}\ \ ( \lambda_{j}\geq 0,\ \forall j) \tag{12}\]
and the result is called the _Autonne-Takagi factorization_ of \(A\). \(V\)'s columns and the diagonal elements \(\lambda_{j}\) are respectively called the Takagi vectors and Takagi values of \(A\). The Takagi vectors are unique up to sign if the Takagi values are non-degenerate. We call \(V\), a matrix whose columns are mutually orthogonal Takagi vectors, the _Takagi matrix_.
Takagi values and Takagi vectors have been used in matrix analysis for several decades. However, they are not as well known as eigenvalues and eigenvectors. Analytical techniques for computing Takagi values and vectors are detailed in [35, 36], and various numerical algorithms have been proposed for their computation [37, 38, 39].
A basis \(\eta=\{|\eta_{j}\rangle\}_{j=1,\ldots,\dim\mathcal{H}}\) of \(\mathcal{H}\) is an eigenbasis of a Hermitian antilinear operator \(\Theta\) if and only if the matrix \([\Theta]^{\eta}\) is diagonal. Therefore, an eigenbasis is obtained by diagonalizing \(\Theta\)'s matrix representation by making a unitary congruence transformation. If \(V[\Theta]^{\eta}V^{\top}\) is diagonal for some basis \(\eta\) and some unitary matrix \(V\), an eigenbasis \(\psi\) of the antilinear operator \(\Theta\) can be obtained through the relation (10). The unitary matrix \(V\) does not have to be a Takagi matrix. If \(V\) is a Takagi matrix, the eigenbasis \(\psi\) has the distinct property that all eigenvalues are non-negative real numbers.
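For the symmetric unitary matrices arising as conjugation representations, the Autonne-Takagi factorization admits a particularly simple numerical route: \(\mathrm{Re}[S]\) and \(\mathrm{Im}[S]\) commute (a fact proved in eq. (19) below), so a generic real combination of the two yields a common orthogonal eigenbasis, and absorbing half of each eigenphase gives a Takagi matrix. The sketch below is our own illustration of this route (the helper name is ours) and assumes the combination has a non-degenerate spectrum.

```python
import numpy as np

def takagi_symmetric_unitary(S, t=0.7357):
    # Re[S] and Im[S] commute for a symmetric unitary S, so a generic real
    # combination has a common real orthogonal eigenbasis (assumes the
    # combination's spectrum is non-degenerate for generic t).
    W = np.linalg.eigh(S.real + t * S.imag)[1]   # real orthogonal eigenvectors
    d = np.diag(W.T @ S @ W)                     # unimodular eigenvalues of S
    return W * np.exp(1j * np.angle(d) / 2)      # Takagi matrix V with S = V V^T

S = np.diag([1, 1, 1, -1]).astype(complex)       # CZ: symmetric and unitary
V = takagi_symmetric_unitary(S)
assert np.allclose(V @ V.T, S)                   # all Takagi values equal 1
assert np.allclose(V.conj().T @ V, np.eye(4))    # V is unitary
```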
## 3 Local unitary equivalence classes
The local unitary equivalence classes of conjugations are useful for characterizing their non-local properties.
**Definition 2** (local unitary equivalence): _Two conjugations \(\theta\) and \(\theta^{\prime}\) on the composite space \(\mathcal{H}_{1}\otimes\cdots\otimes\mathcal{H}_{N}\) are said to be local unitary (LU-) equivalent if there are unitary operators \(\hat{U}_{p}\) on subsystems \(\mathcal{H}_{p}\) for all \(p=1,\ldots,N\) such that_
\[\left(\bigotimes_{p=1}^{N}\hat{U}_{p}\right)\theta\left(\bigotimes_{p=1}^{N} \hat{U}_{p}\right)^{\dagger}=\theta^{\prime}. \tag{13}\]
The matrix representations \([\theta]^{\psi}\) and \([\theta^{\prime}]^{\psi}\) of LU-equivalent conjugations in a product basis \(\psi\) are related by a congruence transformation,
\[\left(\bigotimes_{p=1}^{N}U_{p}\right)[\theta]^{\psi}\left(\bigotimes_{p=1}^{ N}U_{p}\right)^{\top}=[\theta^{\prime}]^{\psi}, \tag{14}\]
where \(U_{p}\) are now unitary matrices.
LU-equivalence is a means to characterize the non-locality of conjugation symmetries by disregarding their local properties. To exemplify this disregard for local properties, suppose we have two conjugations \(\theta_{1}\otimes\cdots\otimes\theta_{N}\) and \(\theta^{\prime}_{1}\otimes\cdots\otimes\theta^{\prime}_{N}\) on the same multipartite system. These two conjugations are LU-equivalent for any family of local conjugations \(\theta_{p}\) and \(\theta^{\prime}_{p}\) (\(p=1,\ldots,N\)). This is analogous to the identification of all pure product states in the characterization of state entanglement.
We should also note that our version of LU-equivalence characterizes the non-locality of conjugation symmetries rather than conjugations' power of generating entanglement. Two real subspaces, \(\mathcal{H}_{\theta}\) of \(\theta\) and \(\mathcal{H}_{\theta^{\prime}}\) of \(\theta^{\prime}\), are interchangeable through a product unitary transformation if and only if \(\theta\) and \(\theta^{\prime}\) are LU-equivalent. In this sense, LU-equivalence pertains to the real subspaces of a tensor-product complex Hilbert space.
If our focus were on the power of conjugations to generate entanglement, our definition of LU-equivalence would have been formulated differently. Both antiunitary and unitary transformations possess the ability to convert product states into entangled states. A notable instance is the investigation conducted by Zanardi [40], which examined the average entanglement generated by a unitary transformation applied to product states. In order to
characterize the entangling power of conjugations, two conjugations should be considered equivalent if there exist product unitaries \(\bigotimes_{p=1}^{N}\hat{U}_{p}\) and \(\bigotimes_{p=1}^{N}\hat{U}_{p}^{\prime}\) that satisfy
\[\left(\bigotimes_{p=1}^{N}\hat{U}_{p}\right)\theta\left(\bigotimes_{p=1}^{N} \hat{U}_{p}^{\prime}\right)=\theta^{\prime}. \tag{15}\]
The corresponding symmetric unitary matrices \([\theta]^{\psi}\) and \([\theta^{\prime}]^{\psi}\) are related by
\[\left(\bigotimes_{p=1}^{N}U_{p}\right)[\theta]^{\psi}\left(\bigotimes_{p=1}^{ N}U_{p}^{\prime}\right)^{*}=[\theta^{\prime}]^{\psi}. \tag{16}\]
This particular LU-equivalence class has been solved for two-qubit unitary matrices, as demonstrated in [26].
This article focuses solely on the non-locality of conjugation symmetries and therefore employs definition 2; note that definition 2 and (15) lead to different LU-equivalence classes.
Identifying the LU-equivalence classes of conjugations is a challenging task. Here, we concentrate on a simple system consisting of two qubits. The magic basis is a convenient two-qubit basis to work with. Section 3.1 introduces a canonical form of conjugations that represents each LU-equivalence class.
### LU-equivalence classes of two-qubit conjugations
LU-equivalence classes of two-qubit conjugations are characterized by combinations of four unimodular numbers defined as follows.
**Definition 3** (magic-basis spectrum): _The magic-basis spectrum of a two-qubit conjugation \(\theta\) is defined to be an unordered set of \([\theta]^{\mu}\)'s eigenvalues in which degenerate eigenvalues appear multiple times according to their degeneracy._
Note the difference between the notion of a conjugation's spectrum and that of the magic-basis spectrum. A spectrum of a conjugation is an almost redundant concept, since any unimodular complex number is an eigenvalue of a conjugation (see section 2).
Several distinct conjugations may share the same magic-basis spectrum. This is because several distinct symmetric unitary matrices share the same spectrum. Now let us define an equivalence relation on the space of magic-basis spectra.
**Definition 4**: _Two unordered sets of four unimodular complex numbers \(\{z_{1},z_{2},z_{3},z_{4}\}\) and \(\{z_{1}^{\prime},z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime}\}\) are defined to be equivalent if there is a phase \(\phi\) satisfying_
\[\{z_{1}^{\prime},\ z_{2}^{\prime},\ z_{3}^{\prime},\ z_{4}^{\prime}\}=\{e^{i \phi}z_{1},\ e^{i\phi}z_{2},\ e^{i\phi}z_{3},\ e^{i\phi}z_{4}\}, \tag{17}\]
_and we denote it by \(\{z_{1},z_{2},z_{3},z_{4}\}\sim\{z_{1}^{\prime},z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime}\}\). The equivalence class containing \(\{z_{1},z_{2},z_{3},z_{4}\}\) is denoted by \(\{\underline{z_{1},z_{2},z_{3},z_{4}}\}\)._
Any two-qubit conjugation can be transformed by a suitable LU-transformation into a canonical form.
**Theorem 1**: _Let \(\theta\) be a conjugation on the two-qubit space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), and let \(\{z_{1},z_{2},z_{3},z_{4}\}\) be its magic-basis spectrum. Then, there is a pair of unitaries \(U\) on \(\mathcal{H}_{A}\) and \(V\) on \(\mathcal{H}_{B}\) satisfying_
\[\left[\left(\hat{U}\otimes\hat{V}\right)\theta\left(\hat{U}\otimes\hat{V} \right)^{\dagger}\right]^{\mu}=\mathrm{diag}(z_{1}^{\prime},z_{2}^{\prime},z_ {3}^{\prime},z_{4}^{\prime}), \tag{18}\]
_if and only if \(\{z_{1}^{\prime},z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime}\}\sim\{z_{1},z_ {2},z_{3},z_{4}\}\)._
_proof_) First, let us construct the unitaries \(U\) and \(V\).
Note that any complex symmetric unitary matrix is diagonalized by a real orthogonal matrix. If \(S\) is a complex symmetric unitary matrix, then
\[[\mathrm{Re}[S],\mathrm{Im}[S]]=\frac{[S+S^{*},S-S^{*}]}{4i}=\frac{[S^{\dagger}, S]-[S,S^{\dagger}]}{4i}=0, \tag{19}\]
where we have used \(S^{*}=S^{\dagger}\) and \([S,S^{\dagger}]=0\). Therefore, two real Hermitian matrices \(\mathrm{Re}[S]\) and \(\mathrm{Im}[S]\) can be diagonalized by the same real orthogonal matrix. The same orthogonal matrix diagonalizes \(S=\mathrm{Re}[S]+i\mathrm{Im}[S]\).
Since \([\theta]^{\mu}\) is a complex symmetric unitary matrix, there is a real orthogonal matrix \(O\) satisfying
\[O[\theta]^{\mu}O^{\top}=O[\theta]^{\mu}O^{\dagger}=\mathrm{diag}(z_{1},\ z_{2},\ z_{3},\ z_{4}), \tag{20}\]
where \(\{z_{1},\ z_{2},\ z_{3},\ z_{4}\}\) is the magic-basis spectrum of \(\theta\), as is introduced in the theorem.
Let \(\phi\) be a phase and \(\tau\) be a permutation on \(\{1,2,3,4\}\) such that
\[(z_{1}^{\prime},z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime})=(e^{i\phi}z_{ \tau(1)},\ e^{i\phi}z_{\tau(2)},\ e^{i\phi}z_{\tau(3)},\ e^{i\phi}z_{\tau(4)}). \tag{21}\]
Let \(O_{\tau}\) be the orthogonal matrix representing the permutation of column vectors according to \(\tau\), and define a real orthogonal matrix \(O^{\prime}\) by
\[O^{\prime}=\left\{\begin{array}{ll}O_{\tau}O&(\det[O_{\tau}O]=1)\\ Z_{1}O_{\tau}O&(\det[O_{\tau}O]=-1)\end{array}\right., \tag{22}\]
where \(Z_{1}:=\mathrm{diag}(-1,1,1,1)\). Since \(\det[Z_{1}]=-1\), \(O^{\prime}\) has a positive determinant in either case, and we have
\[O^{\prime}[\theta]^{\mu}O^{\prime\top}=e^{-i\phi}\mathrm{diag}(z_{1}^{\prime}, z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime}). \tag{23}\]
Now let us go back and forth between the two representations. There is a bijective correspondence between \(4\times 4\) orthogonal matrices with a positive determinant and pairs of single-qubit \(\mathbb{SU}(2)\) unitary matrices, given by
\[MO^{\prime}M^{\dagger}=U^{\prime}\otimes V^{\prime},\qquad M_{jk}:=\langle\zeta_{j}|\mu_{k}\rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1&i&&\\ &&i&1\\ &&i&-1\\ 1&-i&&\end{array}\right), \tag{24}\]
where \(O^{\prime}\in\mathbb{SO}(4)\) and \(U^{\prime},V^{\prime}\in\mathbb{SU}(2)\)[23, 41, 42]. Let \(\hat{U}\) and \(\hat{V}\) be unitary operators on \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) which are represented by matrices \(e^{i\phi/2}U^{\prime}\) and \(V^{\prime}\) in the computational bases, respectively. When we return to the magic basis, we find
\[[(\hat{U}\otimes\hat{V})\theta(\hat{U}\otimes\hat{V})^{\dagger}]^{\mu} =[U\otimes V]^{\mu}[\theta]^{\mu}[U\otimes V]^{\mu\top} \tag{25}\] \[=e^{i\phi}\left(M^{\dagger}(U^{\prime}\otimes V^{\prime})M\right)[\theta]^{\mu}\left(M^{\dagger}(U^{\prime}\otimes V^{\prime})M\right)^{\top}\] (26) \[=e^{i\phi}O^{\prime}[\theta]^{\mu}O^{\prime\top}=\mathrm{diag}(z_{1}^{\prime},z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime}), \tag{27}\]
as desired.
To prove the converse, assume that
\[\left[\left(\hat{U}\otimes\hat{V}\right)\theta\left(\hat{U}\otimes\hat{V} \right)^{\dagger}\right]^{\mu}=\mathrm{diag}(z_{1}^{\prime},z_{2}^{\prime},z_ {3}^{\prime},z_{4}^{\prime}), \tag{28}\]
holds for unitaries \(\hat{U}\) and \(\hat{V}\), where \(z_{j}^{\prime}\) (\(j=1,2,3,4\)) are unimodular numbers. Let \(U^{\prime}\) and \(V^{\prime}\) be \(\mathbb{SU}(2)\) unitary matrices and \(\phi\) be a phase such that \(e^{i\phi}U^{\prime}\otimes V^{\prime}=[\hat{U}\otimes\hat{V}]^{\zeta}\). From (24) and (28) follows
\[O[\theta]^{\mu}O^{\top}=e^{-2i\phi}\mathrm{diag}(z_{1}^{\prime},z_{2}^{\prime},z_{3}^{\prime},z_{4}^{\prime}), \tag{29}\]
where \(O=M^{\dagger}U^{\prime}\otimes V^{\prime}M\) is an orthogonal matrix. Since the spectrum of (29)'s left-hand-side \(O[\theta]^{\mu}O^{\top}\) is equal to the spectrum of \([\theta]^{\mu}\), \(\{e^{-2i\phi}z^{\prime}_{1},\ e^{-2i\phi}z^{\prime}_{2},e^{-2i\phi}z^{\prime}_{ 3},e^{-2i\phi}z^{\prime}_{4}\}\) must be equal to the magic-basis spectrum. Therefore, \(\{z^{\prime}_{1},z^{\prime}_{2},z^{\prime}_{3},z^{\prime}_{4}\}\) is equivalent to the magic-basis spectrum of \(\theta\). \(\blacksquare\)
Theorem 1 implies that any two-qubit conjugation has a magic basis \(\mu^{\prime}\), relative to some product basis \(\zeta^{\prime}\), as an eigenbasis.
The complete characterization of two-qubit conjugations is obtained as an immediate consequence of theorem 1.
**Corollary 1**: _Two two-qubit conjugations are LU-equivalent if and only if their magic-basis spectra are equivalent._
We can label each LU-equivalence class with the representative magic-basis spectrum, i.e., \(\{z_{1},z_{2},z_{3},z_{4}\}\). The space of LU-equivalent classes of conjugations is homeomorphic to the quotient space of magic-basis spectra divided by \(\sim\), and thus it is also homeomorphic to the configuration space of four unlabelled points on a circle.
The properties of two-qubit conjugations that are invariant under local unitary transformations can be characterized by their magic-basis spectra. One such property that we consider in section 4 is the concept of "Prod-measurability". In the following subsection, we prove that no property invariant under local unitary transformations can exhibit a certain bias between the two parties in the two-qubit system.
Table 1 presents the known two-qubit conjugations with nonequivalent magic-basis spectra. We will look more into each example in section 4, where we consider the "measurability" of conjugations.
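As a concrete check of the entries in table 1 (a numpy sketch of our own, not the authors' code), the magic-basis spectrum can be computed directly: by the congruence rule (11), \([\theta]^{\mu}=M^{\dagger}[\theta]^{\zeta}M^{*}\) with \(M\) from (24), and definition 3 asks for the eigenvalues of this matrix.

```python
import numpy as np

M = np.array([[1, 1j, 0, 0],
              [0, 0, 1j, 1],
              [0, 0, 1j, -1],
              [1, -1j, 0, 0]]) / np.sqrt(2)      # M_{jk} = <zeta_j|mu_k>, eq. (24)

def magic_spectrum(theta_zeta):
    # change the reference basis zeta -> mu via the congruence rule (11)
    return np.linalg.eigvals(M.conj().T @ theta_zeta @ M.conj())

SWAP = np.eye(4)[[0, 2, 1, 3]]
FLIP = np.kron([[0, 1], [-1, 0]], [[0, 1], [-1, 0]])  # i sigma_Y (x) i sigma_Y
print(np.round(magic_spectrum(np.eye(4)), 6))   # product conjugation: 1, -1, -1, 1
print(np.round(magic_spectrum(FLIP), 6))        # collective spin flip: 1, 1, 1, 1
print(np.round(magic_spectrum(SWAP), 6))        # conjugate swap: 1, -1, -1, -1
```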
### No bias in LU-invariant properties of two-qubit conjugations
LU-invariant properties, in general, may exhibit a preference towards one party over another. Here, by "LU-invariant property", we refer to a statement regarding conjugations whose veracity remains unchanged under LU transformations. For instance, there exists a set of quantum states involving two parties that can be differentiated through classical communication from one party to the other but not in the reverse direction. This one-way distinguishability, with designated senders and receivers, is an example of a biased LU-invariant property where the two parties are treated unequally.
More specifically, consider the following proposition "\(P_{X}(\theta)\)" (\(X=A,B\)) about conjugation \(\theta\) on \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\): an eigenframe measurement of \(\theta\) is implementable by local operations and a classical communication from the party owning the subsystem \(\mathcal{H}_{X}\). In general, there might be a bipartite conjugation such that \(P_{A}(\theta)=\) true and \(P_{B}(\theta)=\) false.
Can two-qubit conjugations exhibit a biased LU-invariant property? In other words, is there any LU-invariant property that changes its truth value for certain conjugations when the roles of the two qubit subsystems are exchanged? The following corollary provides a negative answer to this question.
**Theorem 2**: _Any two-qubit conjugation \(\theta\) is LU-equivalent to SWAP \(\theta\) SWAP\({}^{\dagger}\), where SWAP is the swap operator._
| magic-basis spectrum | conjugations | measurability |
| --- | --- | --- |
| \(1,1,1,1\) | collective spin flip \(\Theta_{f}\otimes\Theta_{f}\) (41) | Sep-unmeasurable |
| \(1,-1,-1,-1\) | conjugate swap \(\theta_{\text{SWAP}}\) (42) | Sep-unmeasurable |
| \(1,-1,-1,1\) | product conjugation \(\theta_{A}\otimes\theta_{B}\) (30) | Prod-measurable |

Table 1: Magic-basis spectra of particular two-qubit conjugations.
_proof_) The swap operator is represented by an orthogonal matrix \(O=M^{\dagger}[\mathrm{SWAP}]^{\zeta}M\) in the magic basis [42]. The spectrum of \([\mathrm{SWAP}\ \theta\ \mathrm{SWAP}^{\dagger}]^{\mu}=O[\theta]^{\mu}O^{\top}\) is equal to the spectrum of \([\theta]^{\mu}\), because the orthogonal transformation does not change the spectrum. Therefore, the magic-basis spectra of \(\mathrm{SWAP}\theta\mathrm{SWAP}^{\dagger}\) and \(\theta\) are equal, and corollary 1 implies their LU-equivalence. \(\blacksquare\)
The LU-equivalence classes of two-qubit conjugations are preserved under permutation of the two subsystems, as are the truth values of LU-invariant properties. Therefore, if an LU-invariant property is exhibited by \(\theta\), it will still be exhibited when the roles of the two qubit subsystems are exchanged.
## 4 Measurability of conjugations
In this section, we investigate an LU-invariant property of conjugations which we refer to as "measurability". A conjugation has this property if and only if all of its LU-equivalent conjugations do so as well. As a result, measurability, as well as any other LU-invariant property, defines a partition in the space of LU-equivalence classes of conjugations.
Our focus is on the communication cost required to implement eigenframe measurements of a given multipartite conjugation. Such conjugations have eigenframes with various non-local properties, as illustrated by the two eigenframes (3) and (4) of the complex conjugation in the two-qubit computational basis. The former corresponds to a local measurement, while the latter corresponds to a Bell measurement. Measurement in eigenframe (3) requires fewer communication resources. The optimized communication resource depends on the properties of the multipartite conjugation.
The following LU-invariant properties will facilitate a more precise discussion on the eigenframe measurements.
**Definition 5**: _A conjugation \(\theta\) on a multipartite system \(\mathcal{H}_{1}\otimes\ldots\otimes\mathcal{H}_{N}\) is said to be Prod-measurable if the product of local frames \(\left\{\left|f_{j_{1}}^{1},\ldots,f_{j_{N}}^{N}\right\rangle\right\}_{j_{p} \in J_{p},\ p=1,\ldots,N}\) is its eigenframe. It is said to be Sep-measurable if there is an eigenframe \(\left\{\left|f_{j}^{1},\ldots f_{j}^{N}\right\rangle\right\}_{j\in J}\) comprising only product vectors._
The concept of measurability is relevant to certain quantum network sensing protocols, as will be discussed in section 5. Essential to the process of imaginarity-free quantum estimation [18], eigenframe measurements of particular conjugations are crucial components that saturate the quantum Cramer-Rao bound (CRB), which represents the limit of precision in parameter estimation. It is plausible that these measurements can be implemented with fewer communication resources and that one can rely on the measurability of conjugations to determine whether separable or entangled measurements are necessary.
The main result of this section is the strict hierarchy of conjugations based on their measurability, which we present in figure 1. We confirmed the existence of Prod-unmeasurable Sep-measurable conjugations by introducing a computable criterion for Prod-measurable conjugations. The magic-basis spectra of two-qubit conjugations directly reveal their measurability. As is shown in figure 1 (b), all the Sep-measurable two-qubit conjugations are Prod-measurable.
Our primary focus will be on Prod-measurable conjugations, which will be shown to play a crucial role in phase estimation protocols of quantum metrology in section 5. These conjugations offer an explanation for why local measurements can achieve optimal precision bounds in such protocols.
### Prod-measurable conjugations
Let us start with a primal example of Prod-measurable conjugations. Any tensor product
\[\theta_{1}\otimes\ldots\otimes\theta_{N} \tag{30}\]
of local conjugations is Prod-measurable. Furthermore, any tensor product of local eigenframes of \(\theta_{p}\) is an eigenframe of \(\theta_{1}\otimes\ldots\otimes\theta_{N}\). Our main subject is a quest to find more general Prod-measurable conjugations and their characterization.
The following theorem characterizes general Prod-measurable conjugations:
**Theorem 3** (Prod-measurability): _A conjugation is Prod-measurable if and only if there is a product basis \(\psi=\{|\psi_{j_{1}},\ldots,\psi_{j_{N}}\rangle\}_{j_{p}=1,\ldots,\dim{ \mathcal{H}}_{p}}\) in which the matrix representation is diagonalized:_
\[[\theta]^{\psi}_{j_{1},\ldots,j_{N};j_{1}^{\prime},\ldots,j_{N}^{\prime}}=e^{i\phi_{j_{1},\ldots,j_{N}}}\delta_{j_{1},j_{1}^{\prime}}\ldots\delta_{j_{N},j_{N}^{\prime}}, \tag{31}\]
_where \(\phi_{j_{1},\ldots,j_{N}}\in\mathbb{R}\)._
The product basis \(\psi=\{|\psi_{j_{1}},\ldots,\psi_{j_{N}}\rangle\}_{j_{p}=1,\ldots,\dim{ \mathcal{H}}_{p}}\), described in the theorem, is a product eigenframe.
See appendix A.1 for the proof of the "only if" part. In the language of matrix representations, theorem 3 says that a conjugation \(\theta\) is Prod-measurable if \([\theta]^{\zeta}\) is diagonalized by a product unitary matrix through a congruence transformation.
The matrix representations of product conjugations (30) can be diagonalized in any product reference basis to the identity matrix (i.e. \(e^{i\phi_{j_{1},\ldots,j_{N}}}=1\) for any \(j_{1},\ldots,j_{N}\)). Theorem 3 indicates that product conjugations are not the only Prod-measurable conjugations. For example, the controlled-Z gate (\(\text{CZ}=\text{diag}(1,1,1,-1)\) in the computational basis) defines a Prod-measurable conjugation \(\text{CZ}\,\theta_{\zeta}\).
To determine the product eigenframe, we must address the matrix diagonalization problem through the product congruence transformation. Although this seems quite challenging, the problem can be simplified by reducing it to a standard Autonne-Takagi factorization by using the subsequent corollary. Let \(\text{Tr}_{\overline{p}}[\cdot]\) denote the partial trace of systems other than \({\mathcal{H}}_{p}\) in the multipartite system \(\otimes_{p^{\prime}=1}^{N}{\mathcal{H}}_{p^{\prime}}\).
**Corollary 2**: _Let \(\theta\) be a Prod-measurable conjugation on \(\otimes_{p=1}^{N}{\mathcal{H}}_{p}\); then, there is a combination \((V^{1},\ldots,V^{N})\) of Takagi matrices \(V^{p}\) of partial traces \(\text{Tr}_{\overline{p}}\left[[\theta]^{\zeta}\right]\)\((p=1,\ldots,N)\) such that \((\otimes_{p}V^{p})[\theta]^{\zeta}(\otimes_{p}V^{p})^{\top}\) is diagonal._
Specifically, to search for a product eigenframe of \(\theta\), one can first compute the Takagi matrices \(V^{p}\) that diagonalize \(\text{Tr}_{\overline{p}}\left[[\theta]^{\zeta}\right]\) for each \(p\) through the Autonne-Takagi factorization
Figure 1: The measurability hierarchy of multipartite conjugations. (a) The measurability hierarchy of general multipartite conjugations with examples. (b) The measurability hierarchy of two-qubit conjugations. The class of Sep-measurable conjugations reduces to that of Prod-measurable ones on a two-qubit system.
and subsequently verify whether the product \(\otimes_{p}V^{p}\) diagonalizes \([\theta]^{\zeta}\). Nevertheless, this technique has a potential drawback in that a limitless number of Takagi matrices exist if the Takagi values are degenerate. In such instances, one may fail to identify the desired product of Takagi matrices. For example, the algorithm does not halt for the conjugate swap, which is not Prod-measurable. Appendix A.2 gives a proof of corollary 2.
Corollary 2 gives computable necessary conditions for Prod-measurable conjugations.
**Corollary 3** (total-normality criterion): _If a conjugation \(\theta\) on \(\otimes_{p=1}^{N}\mathcal{H}_{p}\) is Prod-measurable, it satisfies two conditions: (i) matrices_
\[\mathrm{Tr}_{\overline{p}}\left[[\theta]^{\zeta}\right], \tag{32}\]
_are symmetric for all \(p\), and (ii) the matrix_
\[X_{\theta}:=\left([\theta]^{\zeta}\right)^{\dagger}\bigotimes_{p=1}^{N} \mathrm{Tr}_{\overline{p}}\left[[\theta]^{\zeta}\right], \tag{33}\]
_is normal (in the sense \(X_{\theta}X_{\theta}^{\dagger}=X_{\theta}^{\dagger}X_{\theta}\))._
The total-normality criterion is only a necessary condition for Prod-measurability, and we know that the condition is not sufficient. The conjugation \(\theta_{\mathrm{SWAP}}=\mathrm{SWAP}\ \theta_{\zeta}\), where SWAP is the swap operator on a \(d\times d\)-level system, is total-normal in the sense that it satisfies conditions (i) and (ii) in corollary 3. In section 4.3.2, however, we will see that \(\theta_{\mathrm{SWAP}}\) is not Prod-measurable. This pitfall of the total-normality criterion arises from the degenerate Takagi values of \(\otimes_{p=1}^{N}\mathrm{Tr}_{\overline{p}}\left[[\theta_{\mathrm{SWAP}}]^{\zeta}\right]\), for which we have the following corollary:
**Corollary 4**: _Let \(\theta\) be a conjugation satisfying the total-normality criterion. If the matrix \(\otimes_{p=1}^{N}\mathrm{Tr}_{\overline{p}}\left[[\theta]^{\zeta}\right]\) has non-degenerate Takagi values, then \(\theta\) is Prod-measurable._
Proofs of the corollaries are in appendix A.2.
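For bipartite systems, the total-normality criterion is straightforward to implement numerically. The sketch below (our illustration; the function name total_normality is ours) checks conditions (i) and (ii) of corollary 3 and reproduces the pitfall noted above: the conjugate swap passes the test even though section 4.3.2 shows it is not Prod-measurable.

```python
import numpy as np

def total_normality(theta_zeta, dims):
    # conditions (i) and (ii) of corollary 3 for a bipartite conjugation
    dA, dB = dims
    T = theta_zeta.reshape(dA, dB, dA, dB)
    redA = np.einsum('ijkj->ik', T)    # trace over subsystem B: a dA x dA matrix
    redB = np.einsum('ijik->jk', T)    # trace over subsystem A: a dB x dB matrix
    symmetric = np.allclose(redA, redA.T) and np.allclose(redB, redB.T)
    X = theta_zeta.conj().T @ np.kron(redA, redB)            # eq. (33)
    normal = np.allclose(X @ X.conj().T, X.conj().T @ X)
    return symmetric and normal

SWAP = np.eye(4)[[0, 2, 1, 3]].astype(complex)
print(total_normality(SWAP, (2, 2)))   # True, yet theta_SWAP is not Prod-measurable
```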
As a nontrivial example, consider a bipartite composition \(\mathbb{C}_{d}\otimes\mathbb{C}_{d^{\prime}}\) of a \(d\)- and a \(d^{\prime}\)-level system. Let \(\theta_{d\to d^{\prime}}\) be a bipartite conjugation with a block-diagonal \(dd^{\prime}\times dd^{\prime}\) unitary matrix representation,
\[[\theta_{d\to d^{\prime}}]^{\zeta}=\left(\begin{array}{ccc}U_{1}&&\\ &\ddots&\\ &&U_{d}\end{array}\right), \tag{34}\]
where \(U_{j}\) are now arbitrary symmetric \(d^{\prime}\)-level unitaries for \(j=1,\ldots,d\). If \(d=2\), \(\theta_{d\to d^{\prime}}\) satisfies the total-normality criterion for arbitrary pairs of \(U_{1}\) and \(U_{2}\).
If \(d\geq 3\), there is a combination of \(d^{\prime}\)-level unitaries for which \(\theta_{d\to d^{\prime}}\) violates the total-normality criterion. \(\theta_{d\to d^{\prime}}\) is not Prod-measurable for such unitaries. Section 4.2 presents an example.
#### 4.1.1 Prod-measurable two-qubit conjugations
For two-qubit conjugations, a computable exact characterization of Prod-measurability is offered by the magic-basis spectrum.
**Theorem 4**: _A two-qubit conjugation \(\theta\) is Prod-measurable if and only if the magic-basis spectrum is equivalent to \(\{1,-1,z,-z\}\), where \(z\) is any unimodular number. A two-qubit conjugation \(\theta\) is a product conjugation if and only if the magic-basis spectrum is equivalent to \(\{1,1,-1,-1\}\)._
A proof of this theorem is in appendix A.3. The proof provides a way to find product eigenframes of Prod-measurable conjugations.
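Theorem 4 also yields a simple numerical test (our own sketch; the helper name is hypothetical): a magic-basis spectrum is equivalent to \(\{1,-1,z,-z\}\) exactly when its four unimodular numbers split into two antipodal pairs, since dividing \(\{a,-a,b,-b\}\) by \(a\) gives \(\{1,-1,z,-z\}\) with \(z=b/a\).

```python
import numpy as np
from itertools import permutations

def prod_measurable(spectrum, tol=1e-8):
    # equivalent to {1, -1, z, -z} iff the numbers form two antipodal pairs
    z = np.asarray(spectrum)
    return any(abs(z[p[0]] + z[p[1]]) < tol and abs(z[p[2]] + z[p[3]]) < tol
               for p in permutations(range(4)))

print(prod_measurable([1, -1, -1, 1]))    # True: product conjugation
print(prod_measurable([1, -1, -1, -1]))   # False: conjugate swap
```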
Among the two-qubit conjugations, the Prod-measurable ones are measure-zero. Figure 2 depicts the space of magic-basis spectra as the tetrahedron region \(0\leq\phi_{1}\leq\phi_{2}\leq\phi_{3}\leq 2\pi\), where the coordinates correspond to the phases of the magic-basis spectra \(\{1,e^{i\phi_{1}},e^{i\phi_{2}},e^{i\phi_{3}}\}\). Several points, such as the four vertices, represent identical spectra. Still, the spectra for Prod-measurable conjugations occupy only a one-dimensional line in the three-dimensional space.
The computable characterization of Prod-measurable conjugations does not generalize to higher dimensional systems. The crucial relation \(\mathrm{SU}(d)\times\mathrm{SU}(d)\simeq\mathrm{SO}(d^{2})\), exhibited by the magic basis, holds only when \(d=2\)[43].
### Sep-measurable conjugations
Sep-measurable conjugations have eigenframes that comprise only product vectors. In this section we show that such an intermediate conjugation exists, but not in two-qubit systems.
A procedure to construct Prod-unmeasurable but Sep-measurable conjugations starts from locally indistinguishable product bases. Let \(\{|\psi_{j}^{A},\psi_{j}^{B}\rangle\}_{j=1,\ldots,d_{A}\times d_{B}}\) be a locally indistinguishable separable basis on a \(d_{A}\times d_{B}\)-dimensional system. Define a conjugation by
\[\left(\sum_{j=1}^{d_{A}\times d_{B}}e^{i\phi_{j}}|\psi_{j}^{A},\psi_{j}^{B} \rangle\langle\psi_{j}^{A*},\psi_{j}^{B*}|\right)\theta_{\zeta}, \tag{35}\]
where \(\theta_{\zeta}\) is the complex conjugation in the product computational basis. This conjugation is Sep-measurable for any choice of phases since the separable basis \(\{|\psi_{j}^{A},\psi_{j}^{B}\rangle\}_{j=1,\ldots,d_{A}\times d_{B}}\) is an eigenframe. The phases \(\phi_{j}\) (\(j=1,\ldots,d_{A}\times d_{B}\)) are fixed so that the conjugation becomes Prod-unmeasurable, if possible.
The procedure ends up with a Prod-unmeasurable conjugation after starting from a basis of a \(3\times 2\)-dimensional system,
\[|0_{A},+_{B}\rangle,\ |0_{A},-_{B}\rangle,\ |1_{A},0_{B}\rangle,\ |1_{A},1_{B}\rangle,\ |2_{A},0_{B}\rangle,\ |2_{A},1_{B}\rangle, \tag{36}\]
where \(|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}\). The conjugation is of the form \(\theta_{3\to 2}\), whose matrix representation is given by (34). If we choose phases \(0,\pi,0,0,0,\pi/2\), the resulting conjugation violates the total-normality criterion and thus is not Prod-measurable.
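The construction can be verified with the total_normality sketch from section 4.1 (again our own illustration, not the authors' code); for real basis vectors, the matrix of (35) in the computational basis reads \(\sum_{j}e^{i\phi_{j}}v_{j}v_{j}^{\top}\).

```python
import numpy as np

# assumes total_normality() from the earlier sketch is in scope
e, f = np.eye(3), np.eye(2)
plus, minus = (f[0] + f[1]) / np.sqrt(2), (f[0] - f[1]) / np.sqrt(2)
vecs = [np.kron(e[0], plus), np.kron(e[0], minus),   # the basis (36)
        np.kron(e[1], f[0]), np.kron(e[1], f[1]),
        np.kron(e[2], f[0]), np.kron(e[2], f[1])]
phases = [0, np.pi, 0, 0, 0, np.pi / 2]
theta = sum(np.exp(1j * p) * np.outer(v, v) for p, v in zip(phases, vecs))
print(total_normality(theta, (3, 2)))   # False: Prod-unmeasurable, yet Sep-measurable
```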
Figure 2: The magic-basis spectra of Prod-measurable conjugations. The magic-basis spectrum \(\{1,e^{i\phi_{1}},e^{i\phi_{2}},e^{i\phi_{3}}\}\) is represented by the point \((\phi_{1},\phi_{2},\phi_{3})\) in the tetrahedron region \(0\leq\phi_{1}\leq\phi_{2}\leq\phi_{3}\leq 2\pi\). The magic-basis spectra for Prod-measurable conjugations correspond to the dashed line (colored red), whose endpoints correspond to product conjugations. The spectra of the four vertices and the four dots (colored blue) are equivalent to \(\{1,1,1,1\}\) and \(\{1,-1,-1,-1\}\), respectively.
A construction from a locally indistinguishable basis does not always work. Here, let us start the construction from a basis of a qubit-qubit system,
\[|0_{A},+_{B}\rangle,\ |0_{A},-_{B}\rangle,\ |1_{A},0_{B}\rangle,\ |1_{A},1_{B}\rangle. \tag{37}\]
The four states cannot be distinguished by product measurements. However, for any choice of phases \(\phi_{j}\) (\(j=1,\ldots,4\)), the conjugation (35) is Prod-measurable.
While the Prod-measurability of above two-qubit conjugation can be shown by an explicit calculation, we also have the following theorem:
**Theorem 5**: _Any two-qubit conjugations are either Prod-measurable or Sep-unmeasurable._
The proof is presented in appendix C.2. According to this theorem, any two-qubit conjugation constructed from a separable basis by (35) must be Prod-measurable. Prod-unmeasurable Sep-measurable conjugations exist only in systems with more than \(2\times 2\) dimensions.
Note that the basis (36) differs from (37) only in the two vectors \(|2_{A},0_{B}\rangle\) and \(|2_{A},1_{B}\rangle\). These two additional vectors do not increase the classical communication cost for discriminating the states but they do make the constructed conjugation Prod-unmeasurable. The local-distinguishability of basis elements and the Prod-measurability of conjugations are not related straightforwardly.
### Sep-unmeasurable conjugations
Any eigenframe of a Sep-unmeasurable conjugation contains at least a single entangled eigenvector. The corresponding real subspace cannot be spanned by any set of product vectors.
We can tell if a given two-qubit conjugation is Sep-unmeasurable or not from its magic-basis spectrum (corollary 1 and theorem 5). It is also possible to compute the minimum entanglement of eigenframe vectors of two-qubit conjugations. We define the average concurrence of a two-qubit frame \(\mathcal{V}=\{|v_{j}\rangle\}_{j=0,\ldots,n-1}\) by
\[C_{ave}(\mathcal{V}):=\sum_{j=0}^{n-1}\langle v_{j}|v_{j}\rangle C(v_{j}), \qquad\left(C(\psi):=\frac{|\langle\psi|\Theta_{f}^{\otimes 2}|\psi\rangle|}{ \langle\psi|\psi\rangle}\right), \tag{38}\]
where \(C\) is equal to the concurrence [23, 24]. The concurrence of quantum states is a monotonic function of the entanglement entropy.
**Theorem 6**: _The minimum average concurrence for a conjugation \(\theta\)'s eigenframe is given by_
\[\min_{\mathcal{V}:\,\theta\text{'s eigenframe}}C_{ave}(\mathcal{V})=\left|\sum_{j=1}^{4}e^{i\phi_{j}}\right|=\left|\mathrm{Tr}[[\theta]^{\mu}]\right|, \tag{39}\]
_where \(\{e^{i\phi_{1}},e^{i\phi_{2}},e^{i\phi_{3}},e^{i\phi_{4}}\}\) is the magic-basis spectrum of \(\theta\). If \(\mu\) is a canonical magic basis such that \(\theta|\mu_{j}\rangle=e^{i\phi_{j}}|\mu_{j}\rangle\), then the vectors_
\[|v_{k}\rangle:=\sum_{j}\frac{H_{jk}}{2}e^{i\phi_{j}/2}|\mu_{j}\rangle,\qquad k =1,\ldots,4, \tag{40}\]
_share the same concurrence \(C(v_{k})=|\sum_{j=1}^{4}e^{i\phi_{j}}|/4\) and form an eigenframe attaining the minimum average concurrence. Here, \(H\) is a \(4\times 4\) real Hadamard matrix._
The frame (40) minimizes the entanglement entropy on average as well as the average concurrence, since the former is a convex function of the latter. (See appendix C for more details on this point and the proof of theorem 6.)
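The minimizing eigenframe (40) is easy to construct and verify numerically. The sketch below (our own, for the conjugate-swap spectrum \(\{1,-1,-1,-1\}\)) checks the frame condition (7) and that all four vectors share the concurrence \(|\sum_{j}e^{i\phi_{j}}|/4=1/2\); since \(\Theta_{f}^{\otimes 2}\) acts as plain complex conjugation in the magic basis (section 4.3.1), the definition (38) reduces there to \(C(\psi)=|\psi^{\top}\psi|/\langle\psi|\psi\rangle\).

```python
import numpy as np

phases = np.array([0, np.pi, np.pi, np.pi])      # spectrum {1, -1, -1, -1}
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]])                   # a 4x4 real Hadamard matrix
V = (H / 2) * np.exp(1j * phases / 2)[:, None]   # columns: the |v_k> of eq. (40)

assert np.allclose(V @ V.conj().T, np.eye(4))    # the |v_k> form a frame, eq. (7)
conc = [abs(V[:, k] @ V[:, k]) for k in range(4)]  # C(v_k); here <v_k|v_k> = 1
print(conc)                                      # 0.5 for every k
print(abs(np.exp(1j * phases).sum()))            # minimum C_ave = 2.0 = 4 * 0.5
```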
Figure 3 depicts the minimum average concurrence (39) of conjugations. It is worth noting that several LU-nonequivalent conjugations can have the same minimum average concurrence of eigenframes. For instance, conjugations with magic-basis spectra \(\{1,1,1,e^{i\pi}\}\) and \(\{1,1,e^{i2\pi/3},e^{i2\pi/3}\}\), despite not being LU-equivalent, have the same minimum average concurrence of 2.
The value of \(\min C_{ave}/4\), obtained by dividing the minimum average concurrence by 4, serves as a lower bound of the states' concurrence required to deterministically implement a symmetric measurement. To elaborate, the minimum average concurrence is attained by the four vectors sharing the same concurrence. To generate any of the four states as a measurement outcome, an entangled state with a concurrence of at least \(\min C_{ave}/4\) is required. In the case that the vectors of a different eigenframe do not share the same concurrence, the concurrence of one of the components exceeds \(\min C_{ave}/4\), necessitating a greater degree of entanglement for its production. Thus, it becomes necessary to have an ancillary state supply with a concurrence of at least \(\min C_{ave}/4\).
Note that \(\min C_{ave}/4\) may not be sufficient to implement an eigenframe measurement. Determining the minimum entanglement for a measurement implementation is a difficult task, even for two-qubit systems [44, 45]. The authors looked for a method to implement the elegant joint measurement [46, 47] with an ancillary input state with concurrence 1/2, but were unable to find one. The vectors of an elegant joint measurement form an eigenframe of \(\theta_{\text{SWAP}}\) (defined in section 4.3.2) saturating \(\min C_{ave}/4=1/2\). On the basis of our efforts so far, we conjecture that \(\min C_{ave}/4\) serves as a lower bound that is not saturated.
Below, we look into more detail at two representative examples of two-qubit conjugations whose magic basis spectra are \((1,1,1,1)\) and \((1,-1,-1,-1)\). Then, we consider generalizations of the two Sep-unmeasurable qubit-qubit conjugations to higher dimensional systems. The generalized conjugations remain Sep-unmeasurable and inherit the distinct properties of the original qubit-qubit conjugations.
#### 4.3.1 (1,1,1,1): collective spin flips
Our first example of a Sep-unmeasurable conjugation has the magic-basis spectrum \((1,1,1,1)\). It is the tensor product \(\Theta_{f}^{\otimes 2}\) of spin flips on a two-qubit space. Its matrix representation is
Figure 3: Minimum average concurrence of eigenframes of two-qubit conjugations with magic-basis spectra \(\{1,1,e^{i\phi_{2}},e^{i\phi_{3}}\}\) depicted in the area \(0\leq\phi_{2}\leq\phi_{3}\leq 2\pi\). The average concurrence has non-zero values except at \(\phi_{2}=\phi_{3}=\pi\), where the corresponding conjugation is Prod-measurable.
given by
\[[\Theta_{f}^{\otimes 2}]^{\zeta}=[i\sigma_{y}\otimes i\sigma_{y}]^{\zeta}=\left(\begin{array}{rrrr}0&0&0&1\\ 0&0&-1&0\\ 0&-1&0&0\\ 1&0&0&0\end{array}\right), \tag{41}\]
in the product computational basis \(\zeta\).
A two-qubit state is an eigenvector of \(\Theta_{f}^{\otimes 2}\) if and only if it is maximally entangled. The Bell measurement is an eigenframe measurement.
The magic basis (1) is a reference basis of \(\Theta_{f}^{\otimes 2}\). Any real linear combination of vectors from the magic basis is thus an eigenvector of \(\Theta_{f}^{\otimes 2}\), and is maximally entangled [22].
It is remarkable that the collective spin flip is a conjugation while the single spin flip itself is not. The collective spin flip and its higher-dimensional generalizations are the only examples of this kind. If a tensor product of two antiunitary operators is a conjugation, they are either both conjugations or unitarily equivalent to direct sums of spin-flip operators (see appendix B for details). If \(\Theta_{1}\) and \(\Theta_{2}\) are unitarily equivalent to direct sums of spin flips, then \(\Theta_{1}\otimes\Theta_{2}\) is Sep-unmeasurable, since neither \(\Theta_{1}\) nor \(\Theta_{2}\) has any eigenvector.
One may also wonder if a tensor product of three or more non-Hermitian antiunitary operators can be a conjugation. It turns out that there is no such "genuine multipartite conjugation" that can be made from local antiunitaries (see appendix B for details).
#### 4.3.2 (1,-1,-1,-1): conjugate swap
Let SWAP be the swap operator on the tensor product of two \(d\)-dimensional spaces \(\mathcal{H}\otimes\mathcal{H}\). We define a conjugation \(\theta_{\text{SWAP}}\) by
\[\theta_{\text{SWAP}}:=\text{SWAP}\ \theta_{\zeta}, \tag{42}\]
and call it the conjugate swap.
When \(d=2\), the two-qubit conjugate swap has the magic-basis spectrum \(\{1,-1,-1,-1\}\), implying Sep-unmeasurability. In figures 2 and 3, the blue dots indicate the magic-basis spectra equivalent to \(\{1,-1,-1,-1\}\). Unlike the collective spin flip, the two-qubit conjugate swap has product eigenvectors, such as \(|0,0\rangle\) and \(|1,1\rangle\). Any eigenframe of the conjugate swap, however, must contain at least a single entangled vector. Among the eigenframes of the conjugate swap, ones consisting only of vectors proportional to
\[|\Psi_{\mathbf{n}}\rangle :=\frac{1+\sqrt{3}}{2\sqrt{2}}|\mathbf{n},\mathbf{n}^{\star}\rangle+\frac{ 1-\sqrt{3}}{2\sqrt{2}}i\sigma_{Y}\otimes i\sigma_{Y}|\mathbf{n}^{\star},\mathbf{n}\rangle \tag{43}\] \[=\frac{1}{2}|\Psi_{+}\rangle+\frac{\sqrt{3}}{2}\sin\theta\cos \phi|\Psi_{-}\rangle+\frac{\sqrt{3}}{2}\cos\theta|\Phi_{+}\rangle-\frac{\sqrt {3}}{2}i\sin\theta\sin\phi|\Phi_{-}\rangle,\] (44) \[\left(|\mathbf{n}\rangle=\cos\frac{\theta}{2}|0\rangle+e^{i\phi}\sin \frac{\theta}{2}|1\rangle\right), \tag{45}\]
exhibit the minimum average concurrence of eigenframe vectors (see appendix C.3 for the proof). In (43), \(\mathbf{n}=(\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta)\) is a directional vector pointing on the unit sphere. The vector \(|\Psi_{\mathbf{n}}\rangle\) is an eigenvector of \(\theta_{\text{SWAP}}\) with eigenvalue 1 for any \(\mathbf{n}\). In the special case where \(\mathbf{n}\) points to the four vertices of a tetrahedron inscribed in the unit sphere, the corresponding eigenframe measurement is local unitary equivalent to the elegant joint measurement [48, 46].
The conjugate swap is Sep-unmeasurable for any dimension \(d\). Otherwise, the precision limit of multiparameter estimations from parallel states \(\hat{\rho}_{\mathbf{x}}\otimes\hat{\rho}_{\mathbf{x}}\) and from mutually conjugate states \(\hat{\rho}_{\mathbf{x}}\otimes\hat{\rho}_{\mathbf{x}}^{\star}\) would have to coincide for any \(\hat{\rho}_{\mathbf{x}}\), but this is not always the case [18]. A higher-dimensional generalization of the elegant joint measurement is constructed in [47] under the assumption that SIC-POVMs exist for each dimension. The generalized elegant joint measurement defines a reference basis of \(\theta_{\text{SWAP}}\).
## 5 Conjugations in quantum sensor networks
Here, we examine the conjugation symmetries of quantum sensor networks (QSNs) for imaginarity-free estimations. QSNs aim to estimate physical parameters accurately by using quantum systems shared by multiple nodes. A typical QSN protocol involves three steps: preparing the initial state, evolving the state to be estimated, and measuring the state. Our interest is in the entanglement required for the measurement step.
In several situations, entangled measurements do not enhance precision. Local operations and classical communication (LOCC) measurements saturate the optimal precision limit for single-parameter estimations with pure states or certain rank-two mixed states [27]. Note that their optimal LOCC measurement generally depends on the parameter to be estimated. Although phase estimation requires an entangled initial state, local measurements are sufficient to achieve the optimal precision [20, 21]. This holds true for estimating the average phase over a network, as studied in [49, 33, 25]. In this section, we demonstrate that these are instances of imaginarity-free estimations governed by Prod-measurable conjugations.
The discussion begins in section 5.1 below with a review of imaginarity-free estimation.
Violation of local tomography in real-vector-space quantum theory [8] prevents us from estimating multipartite states from their local components. In section 5.2, we confirm that the product eigenframes of product conjugations can be made informationally complete over real pure states sharing the same conjugation symmetries.
One of the authors previously used antiunitary symmetry to demonstrate that bipartite entangled measurements achieve the quantum CRB in 3D-magnetometry [50] of an arbitrary size [18]. We will use the same approach to estimate the average phase over a network and establish the optimality of product measurements (section 5.3). In so doing, we enable the protocol to achieve the classical CRB at all phase values.
In section 5.4, we propose an idea for rewiring entangled measurements in a general QSN. This idea involves doubled local systems at all nodes and the "antiparallel model" proposed in [18]. Measurements that are entangled between nodes of the original QSN are replaced by those entangled inside the nodes of the antiparallel model.
### Introduction to imaginarity-free estimation
Eigenframe measurements are key components of an imaginarity-free estimation [18]. Let \(\{\hat{\rho}_{\mathbf{x}}|\mathbf{x}\in X\subset\mathbb{R}^{n}\}\) be a quantum statistical model with pure states \(\hat{\rho}_{\mathbf{x}}\). The task is to estimate \(\mathbf{x}\) by making a POVM measurement \(\Pi^{\otimes N}\) (\(\Pi=\{\hat{\Pi}_{\omega}\in\mathcal{B}(\mathcal{H})\}_{\omega\in\Omega}\)) on \(\hat{\rho}_{\mathbf{x}}^{\otimes N}\) and by using an estimator \(\tilde{\mathbf{x}}:\Omega^{N}\to X\). The estimation error is given by the mean square error matrix \(\Sigma_{\mathbf{x}}(\Pi^{\otimes N},\tilde{\mathbf{x}})\). The classical Cramer-Rao bound (classical CRB),
\[N\Sigma_{\mathbf{x}}(\Pi^{\otimes N},\tilde{\mathbf{x}})\geq\mathcal{F}_{C}(\Pi,\mathbf{x })^{-1}, \tag{46}\]
where \(\mathcal{F}_{C}(\Pi,\mathbf{x})\) is the classical Fisher information matrix, applies to certain classes of estimators \(\hat{\mathbf{x}}\). For any measurement \(\Pi\) and at any point \(\mathbf{x}\), the classical Fisher information matrix satisfies the quantum CRB:
\[\mathcal{F}_{C}(\Pi,\mathbf{x})\leq\mathcal{F}_{Q}(\mathbf{x}), \tag{47}\]
where \(\mathcal{F}_{Q}(\mathbf{x})\) is the quantum Fisher information matrix [51, 52] of the symmetric logarithmic derivatives. When the model consists only of conjugation-invariant pure states, the quantum CRB is attained at all points in \(X\) by any eigenframe measurement of the conjugation [18]. Namely, any rank-1 conjugation-symmetric POVM gives the optimal classical Fisher information.
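As a toy, single-qubit illustration of this statement (our own example, not from the source), take the real-amplitude model \(|\psi_{x}\rangle=\cos x|0\rangle+\sin x|1\rangle\), which is invariant under entrywise complex conjugation. The computational basis is an eigenframe of that conjugation, and its classical Fisher information matches the quantum Fisher information \(\mathcal{F}_{Q}=4\) at every point:

```python
import numpy as np

def psi(x):                         # conjugation-symmetric (real) pure state
    return np.array([np.cos(x), np.sin(x)])

def dpsi(x):                        # derivative d|psi_x>/dx
    return np.array([-np.sin(x), np.cos(x)])

def fisher_classical(x):
    # computational-basis measurement: an eigenframe of entrywise conjugation
    p, dp = psi(x) ** 2, 2 * psi(x) * dpsi(x)
    return np.sum(dp ** 2 / p)

def fisher_quantum(x):
    # F_Q = 4(<dpsi|dpsi> - |<psi|dpsi>|^2) for a pure-state model
    return 4 * (dpsi(x) @ dpsi(x) - abs(psi(x) @ dpsi(x)) ** 2)

for x in np.linspace(0.2, 1.3, 4):
    print(f"x={x:.2f}  F_C={fisher_classical(x):.4f}  F_Q={fisher_quantum(x):.4f}")
# both columns print 4.0000: the eigenframe saturates the quantum CRB everywhere
```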
General quantum multiparameter estimation suffers from an incompatibility of parameters, preventing one from attaining the quantum CRB (47). In the case of incompatible parameters, the attainable precision is given by the so-called Holevo CRB, and finding optimal
measurements typically requires numerical calculations [28, 29]. When the parametrized state satisfies a "partial commutation relation," the parameters become compatible and one can obtain an analytical expression for quantum-CRB-saturating measurements [53]. The imaginarity-free estimation is a subclass of compatible multiparameter estimation in which the analytical expression of the optimal measurement is further simplified by the symmetry.
A measurement that uses few non-local resources while saturating the quantum CRB is desirable when estimating parameters encoded in a QSN. Zhou et al. [27] partly solved this problem by constructing quantum-CRB-saturating LOCC measurements for all single-parameter estimations from multipartite pure states. In the multiparameter scenario, optimal measurements must be entangled in certain QSNs such as those consisting of antiparallel spins [48, 54, 18].
The existence of an optimal product measurement for multipartite imaginarity-free estimation is determined by the Prod-measurability of the associated conjugation symmetry. A precise statement of this is as follows:
**Theorem 7**: _Let \(\{\hat{\rho}_{\mathbf{x}}\in\mathcal{B}(\mathcal{H}_{1}\otimes\ldots\otimes \mathcal{H}_{N})|\mathbf{x}\in X\subset\mathbb{R}^{n}\}\) be a multipartite quantum statistical model with pure state \(\hat{\rho}_{\mathbf{x}}\). If \(\theta\hat{\rho}_{\mathbf{x}}\theta=\hat{\rho}_{\mathbf{x}}\) holds for all \(\mathbf{x}\in X\) with a Prod-measurable conjugation \(\theta\), there is a product measurement \(\Pi\) that saturates the quantum CRB (47) everywhere in the parameter space \(X\)._
This theorem is an immediate consequence of theorem 3 of [18] and the definition of Prod-measurable conjugations, so we will omit the proof.
### Real informationally complete eigenframes
This section pertains to global parameter estimation on quantum statistical models, with "global" referring to the parameter space rather than composite systems. We will examine a scenario where multiple identical state copies from a quantum statistical model are provided and our objective is to estimate the parameter using quantum measurements. Generally, the optimal measurement that saturates the quantum CRB varies depending on the parameter value. Therefore, if the parameter value is unknown, a two-step method is necessary in which one performs rough state tomography using a few of the initial copies and subsequently carries out the parameter-dependent optimal measurement on the remaining copies.
However, the two-step method for global parameter estimation is redundant if the state of the model is pure and symmetric under conjugation. The eigenframe measurements of the conjugation saturate the quantum CRB for any value of the parameter. By utilizing an eigenframe measurement that is informationally complete over the model, the maximum likelihood estimator saturates both the quantum and classical CRB for any parameter value. Here, it suffices to use specific eigenframes that are informationally complete over the conjugation-symmetric pure states [18].
The situation becomes more intricate for multipartite quantum sensor networks. It is better to determine optimal measurements that necessitate less entanglement in such scenarios. Zhou et al. [27] demonstrated that the two-step method with LOCC measurements saturates the quantum CRB for specific models, including any single-parameter pure-state model. Thus, when it comes to multiparameter pure-state models, imaginarity-free estimation is a fundamental method to start with.
As we have observed, if a pure-state model is symmetric under a Prod-measurable conjugation, there are product eigenframes that saturate the quantum CRB. The product eigenframe must also be informationally complete over the model to enable a global parameter estimation. The following lemma guarantees the existence of such eigenframes for product conjugations:
**Lemma 1**: _Let \(\theta_{p}\) and \(\Psi_{p}:=\{|\psi_{p}^{j}\rangle\}_{j\in J_{p}}\)\((p=1,\ldots,N)\) be conjugations and their eigenframes on subsystems \(\mathcal{H}_{p}\) of \(\mathcal{H}=\otimes_{p=1}^{N}\mathcal{H}_{p}\). If \(\Psi_{p}\) is informationally complete over the set of \(\theta_{p}\)-symmetric pure states on \(\mathcal{H}_{p}\) for \(p=1,\ldots,N\), then \(\Psi:=\otimes_{p=1}^{N}\Psi_{p}\) is informationally complete over the set of \(\theta_{1}\otimes\cdots\otimes\theta_{N}\)-symmetric pure states._
This lemma is not trivial because local tomography is violated in quantum theory over real vector spaces [8]. Thus, removing the word "pure" from the statements of lemma 1 would render it false. Lemma 1 can be proven by invoking a theorem from harmonic analysis [55] and is detailed in appendix D.
The restriction to pure states also enables local state discrimination. In fact, there exists a pair of bipartite orthogonal real mixed states that cannot be distinguished by any local real operations and classical communications [2, 3]. In contrast, any pair of bipartite orthogonal real pure states can be perfectly distinguished in the same setting [2, 3]. This resolution of local real pure state discrimination is akin to lemma 1 of local real pure state tomography.
Finally, we are able to guarantee that global parameter estimation is possible on imaginarity-free models when the symmetry is a product conjugation \(\theta_{1}\otimes\cdots\otimes\theta_{N}\):
**Theorem 8**: _Let \(\{\hat{\rho}_{\mathbf{x}}\in\mathcal{B}(\mathcal{H}_{1}\otimes\ldots\otimes \mathcal{H}_{N})|\mathbf{x}\in X\subset\mathbb{R}^{n}\}\) be a multipartite quantum statistical model with pure state \(\hat{\rho}_{\mathbf{x}}\) such that \(\mathbf{x}\neq\mathbf{x}^{\prime}\) implies \(\hat{\rho}_{\mathbf{x}}\neq\hat{\rho}_{\mathbf{x}^{\prime}}\) for any pair of points. If \(\theta\hat{\rho}_{\mathbf{x}}\theta=\hat{\rho}_{\mathbf{x}}\) holds for all \(\mathbf{x}\in X\) with a product conjugation \(\theta\), there is a quantum-CRB-saturating product measurement \(\Pi\) with which the maximum likelihood estimator asymptotically saturates the classical CRB (46) everywhere in the parameter space \(X\)._
This theorem is an immediate consequence of corollary 1 of [18] and lemma 1, so we will omit the proof.
As an illustration of a global parameter estimation on quantum sensor networks, section 5.3 reviews and extends the average phase estimation presented in [25].
### Example: average phase estimation
Various studies [49, 25, 33] have extended the phase estimation procedure [20, 21] to averages of multiple uncorrelated phases. In this subsection, we begin by providing a summary of the generalized phase estimation scheme put forth in [25]. Then, we establish that the model is symmetric with respect to a product conjugation. The use of a product conjugation explains the known optimality of a product measurement [33] and furthermore provides insight into how to globally achieve the classical CRB.
Suppose we have a network of size \(N\) that is described by an \(N\)-tensor product \(\otimes_{p=1}^{N}\mathcal{H}_{p}\) of Hilbert spaces \(\mathcal{H}_{p}\)\((p=1,\ldots,N)\), each isomorphic to \(\mathcal{H}\). The \(p\)th system undergoes a unitary transformation \(\hat{U}_{p,\phi_{p}}\) depending on a real parameter \(\phi_{p}\) that we call a phase. Here, we will let \(\hat{U}_{p,\phi_{p}}\) be a general unitary transformation such that \(\hat{U}_{p,0}=\mathbb{I}_{p}\), but assume that the phases are small, \(\phi_{p}\sim 0\). The parameter to be estimated is a function of the phases \(\mathbf{\phi}=\{\phi_{1},\ldots,\phi_{N}\}\) ([25] considers weighted averages).
The other independent functions of the phases are nuisance parameters [56, 57] that are not to be estimated.
The state of the quantum statistical model for the generalized phase estimation is given by
\[\hat{\rho}_{\mathbf{\phi}}=\bigotimes_{p=1}^{N}\hat{U}_{p,\phi_{p}}| \text{GHZ}\rangle\langle\text{GHZ}|\bigotimes_{p=1}^{N}\hat{U}_{p,\phi_{p}}^{ \dagger}, \tag{48}\] \[|\text{GHZ}\rangle:=\frac{1}{\sqrt{2}}\left(\bigotimes_{p=1}^{N} |\lambda_{p,\max}\rangle+\bigotimes_{p=1}^{N}|\lambda_{p,\min}\rangle\right), \tag{49}\]
where \(|\lambda_{p,\max}\rangle\) and \(|\lambda_{p,\min}\rangle\) are eigenstates of the \(p\)th generator,
\[\hat{H}_{p}:=-i\left.\frac{\partial\hat{U}_{p,\phi_{p}}^{\dagger}}{\partial\phi_ {p}}\right|_{\phi_{p}=0}\hat{U}_{p,0}, \tag{50}\]
with eigenvalues \(\lambda_{p,\max}\) and \(\lambda_{p,\min}\) for each \(p\). The Greenberger-Horne-Zeilinger (GHZ) state \(|\text{GHZ}\rangle\) is an optimal initial state of a sensor network for estimating a weighted average of phases under several assumptions on the available resource [25].
The model with state (48) is imaginarity-free around \(\mathbf{\phi}\sim 0\). Define a tensor-product conjugation,
\[\theta=\bigotimes_{p=1}^{N}\theta_{p}, \tag{51}\]
where \(\theta_{p}\) can be any conjugation on \(\mathcal{H}_{p}\) satisfying
\[\theta_{p}|\lambda_{p,\max}\rangle=|\lambda_{p,\min}\rangle,\qquad\theta_{p}| \lambda_{p,\min}\rangle=|\lambda_{p,\max}\rangle, \tag{52}\]
for \(p=1,\ldots,N\). Then, \(\theta\hat{\rho}_{\mathbf{\phi}}\theta=\hat{\rho}_{\mathbf{\phi}}\) and \(\theta\partial_{\phi_{p}}\hat{\rho}_{\mathbf{\phi}}\theta=\partial_{\phi_{p}}\hat{\rho}_{\mathbf{\phi}}\) (\(p=1,\ldots,N\)) both hold at \(\mathbf{\phi}=\mathbf{0}\). Namely, \(\theta\) remains a symmetry of the state (48) under a first-order approximation around \(\mathbf{\phi}\sim\mathbf{0}\).
A conjugation of this type is called a "local antiunitary symmetry" in [18], where the locality is of the parameter space.
A measurement in any eigenframe of \(\theta\) saturates the quantum CRB in the estimating functions of the phases around \(\mathbf{\phi}=\mathbf{0}\)[18]. Product eigenframes exist for the tensor-product conjugation, and we recover an optimal product measurement derived in [33].
Our analysis of imaginarity-free estimation reveals two noteworthy features of the average phase estimation.
Firstly, since the model is pure and imaginarity-free, the multiparameter quantum CRB, encompassing nuisance parameters [58, 57, 56], can be simultaneously attained in one eigenframe measurement. Note that the quantum CRB for the other nuisance parameters may be suboptimal since the state (48) was originally designed for achieving the best precision when estimating a weighted average of phases [25]. Nevertheless, the same eigenframe measurement can attain the suboptimal bound.
Second, while it was originally proposed for small phase values \(\mathbf{\phi}\sim 0\), the average phase estimation can be extended to a global parameter estimation. In this case, we fix the unitary evolutions \(\hat{U}_{p,\phi_{p}}\) to define the model outside the neighborhood of \(\mathbf{\phi}\sim 0\). If we take the generator \(\hat{H}_{p}\) to be a Hamiltonian and define the unitary,
\[\hat{U}_{p,\phi_{p}}:=e^{-i\phi_{p}\hat{H}_{p}}, \tag{53}\]
then the parametrized state
\[|\text{GHZ}_{\mathbf{\phi}}\rangle:=\bigotimes_{p=1}^{N}\hat{U}_{p,\phi_{p}}|\text{GHZ}\rangle\propto\frac{1}{\sqrt{2}}\left(e^{-i\sum_{p=1}^{N}\phi_{p}(\lambda_{p,\max}-\lambda_{p,\min})}\bigotimes_{p=1}^{N}|\lambda_{p,\max}\rangle+\bigotimes_{p=1}^{N}|\lambda_{p,\min}\rangle\right), \tag{54}\]
coincides with the general model (48) around \(\mathbf{\phi}\sim 0\) in the first-order approximation.
This time, the state (54) is invariant under the conjugation \(\theta\) for any (not necessarily small) value of \(\mathbf{\phi}\). Since \(\theta\) is a product conjugation, one can compose specific local eigenframes to form an informationally complete eigenframe over all \(\theta\)-invariant states. This single product eigenframe and the maximum likelihood estimator asymptotically attain the quantum and classical CRBs at any parameter value.
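The exact invariance claimed here can be verified directly for a small network. In the sketch below (our own illustration), we assume qubit nodes with \(\hat{H}_{p}=\hat{\sigma}_{Z}/2\), so that \(\theta_{p}\) may be chosen as \(\hat{\sigma}_{X}\) composed with entrywise conjugation, which satisfies (52); for an antiunitary \(\theta=U\circ K\) with real \(U\), the condition \(\theta\hat{\rho}_{\mathbf{\phi}}\theta=\hat{\rho}_{\mathbf{\phi}}\) becomes \(U\hat{\rho}_{\mathbf{\phi}}^{\star}U=\hat{\rho}_{\mathbf{\phi}}\).

```python
import numpy as np
from functools import reduce

N = 3
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def ghz_phi(phis):
    # |GHZ_phi> = (x)_p exp(-i phi_p Z/2) |GHZ>, i.e. the state (54) with H_p = Z/2
    ghz = np.zeros(2 ** N, complex)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)
    U = reduce(np.kron, [np.cos(p / 2) * np.eye(2) - 1j * np.sin(p / 2) * Z for p in phis])
    return U @ ghz

rng = np.random.default_rng(1)
phis = rng.uniform(0, 2 * np.pi, N)      # arbitrary, not necessarily small, phases
state = ghz_phi(phis)
rho = np.outer(state, state.conj())
Uc = reduce(np.kron, [X] * N)            # unitary part of theta = (x)_p sigma_X K
assert np.allclose(Uc @ rho.conj() @ Uc, rho)   # theta rho theta = rho, exactly
print("conjugation symmetry holds for phases", np.round(phis, 3))
```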
Ge _et al._[59] have theoretically tailored the average phase estimation approach to suit an optical sensor network. The utilization of photon-number detection effectively saturates
the quantum CRB when estimating the average optical phase shift. This approach is advantageous because photon-number detections require only linear optical components. In appendix E, we introduce the concept of "linear" conjugations to analyze imaginarity-free estimations within bosonic systems. The linear conjugations are accompanied by specific particle-number detections that serve as symmetric measurements. The optical model presented in [59], along with its optimal photon-number detection, exhibits symmetry under a linear conjugation.
### Example: antiparallel model
In [18], one of the authors considered "antiparallel models." An antiparallel model consists of mutually conjugated state pairs and has antiunitary symmetry.
Here, let us consider antiparallel models in a QSN. Since these models have only limited applicability in practice, our interest is theoretical. As is shown below, antiparallel models rewire the entangled measurement of the network.
Let \(\mathcal{H}_{p}\) (\(p=1,\ldots,N\)) be local Hilbert spaces and \(|\psi_{\mathbf{x}}\rangle\) (\(\mathbf{x}\in X\subset\mathbb{R}^{m}\)) be a parametrized pure quantum state on \(\otimes_{p=1}^{N}\mathcal{H}_{p}\). An optimal measurement for estimating the parameter \(\mathbf{x}\) from copies of state \(|\psi_{\mathbf{x}}\rangle\) may generally be entangled over the network. For multiparameter estimation (\(m\geq 2\)), the quantum CRB may not be saturated even by an optimal measurement. The antiparallel model for \(\{|\psi_{\mathbf{x}}\rangle|\mathbf{x}\in X\}\) consists of mutually conjugated state pairs
\[|\psi_{\mathbf{x}}\rangle\otimes\theta|\psi_{\mathbf{x}}\rangle, \tag{55}\]
on \((\otimes_{p=1}^{N}\mathcal{H}_{p})^{\otimes 2}\).
The conjugation \(\theta\) suitable for network sensing is a product,
\[\theta=\bigotimes_{p=1}^{N}\theta_{p}, \tag{56}\]
of some local conjugations \(\theta_{p}\) on \(\mathcal{H}_{p}\) (\(p=1,\ldots,N\)). The antiparallel model has the antiunitary symmetry \(\theta_{\text{SWAP}}=\text{SWAP}(\theta\otimes\theta)\) presented in section 4.3.2. Measurements in any of the eigenframes of \(\theta_{\text{SWAP}}\) saturate the quantum CRB of the antiparallel model, which is equal to the bound of the original model per state copy [18].
Methods to construct the conjugated state \(\theta|\psi_{\mathbf{x}}\rangle\) in certain metrological settings are described in [18]. It is generally impossible to transform an unknown state \(|\psi\rangle\) into its conjugate \(\theta|\psi\rangle\), so the methods presented in [18] impose several symmetry conditions on the system.
The optimal measurements on the antiparallel model can totally differ from those of the original model. An eigenframe measurement for an antiparallel model is presented in figure 4. The eigenframe is entangled inside the doubled local systems \(\mathcal{H}_{p}\otimes\mathcal{H}_{p}\), but is a product of different nodes of the network. To derive this eigenframe, we observe that the
Figure 4: Optimal measurements for antiparallel models. A pair of mutually conjugated multipartite states \(|\psi_{\mathbf{x}}\rangle\otimes\theta|\psi_{\mathbf{x}}\rangle\) (with a parameter \(\mathbf{x}\)) has antiunitary symmetry. If \(\theta\) is a product conjugation on the network, optimal measurements require entanglement only between the duplicated subsystems.
antiunitary symmetry \(\theta_{\text{SWAP}}\) of the antiparallel model is in a product form,
\[\theta_{\text{SWAP}}=\bigotimes_{p=1}^{N}\text{SWAP}_{p}(\theta_{p}\otimes\theta_ {p}), \tag{57}\]
where \(\text{SWAP}_{p}:\mathcal{H}_{p}\otimes\mathcal{H}_{p}\rightarrow\mathcal{H}_{p} \otimes\mathcal{H}_{p}\) is the swap operator on \(\mathcal{H}_{p}\otimes\mathcal{H}_{p}\). The eigenframe is a product of eigenframes of \(\text{SWAP}_{p}(\theta_{p}\otimes\theta_{p})\) for \(p=1,\ldots,N\).
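The factorization (57) can be made concrete for a two-node network of qubits. Taking each \(\theta_{p}\) to be entrywise complex conjugation (an assumption for illustration), the antiunitary parts on both sides reduce to the same conjugation, and (57) becomes an identity between permutation matrices: the swap of the two halves of \((\mathcal{H}_{1}\otimes\mathcal{H}_{2})^{\otimes 2}\) equals the product of nodewise swaps after regrouping each \(\mathcal{H}_{p}\) with its duplicate. A minimal check:

```python
import numpy as np

def perm_matrix(order, n=4):
    # unitary that reorders n qubit tensor factors: qubit k of the
    # output is qubit order[k] of the input
    P = np.zeros((2 ** n, 2 ** n))
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        j = 0
        for o in order:
            j = (j << 1) | bits[o]
        P[j, i] = 1
    return P

SWAP2 = np.eye(4)[[0, 2, 1, 3]]           # swap of one qubit pair
lhs = perm_matrix([2, 3, 0, 1])           # SWAP of (H1 H2) with its duplicate (H1' H2')
P = perm_matrix([0, 2, 1, 3])             # regroup [H1, H2, H1', H2'] -> [H1, H1', H2, H2']
rhs = P.T @ np.kron(SWAP2, SWAP2) @ P     # nodewise SWAP_p, mapped back
assert np.allclose(lhs, rhs)              # eq. (57) for two qubit nodes
print("SWAP(theta x theta) factorizes into nodewise swaps")
```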
The optimal measurement for the antiparallel model does not depend on the optimal one for the original model. This highlights how antiunitary symmetries govern the entanglement of optimal measurements for parameter estimation.
## 6 Conclusion
We conducted an investigation into LU-equivalence classes and the LU-invariant property of measurability for multipartite conjugations. Two multipartite conjugations are considered to be LU-equivalent when it is possible to convert one into the other via a local basis transformation. A conjugation is considered to be Prod-measurable if it has product rank-1 symmetric measurements.
A conjugation is Prod-measurable if and only if it possesses a product eigenbasis. We have provided checkable necessary conditions for determining whether a conjugation is Prod-measurable. However, these conditions are generally insufficient.
For two-qubit conjugations, there exists a canonical decomposition that includes the "magic-basis spectrum," which is a family of four complex unimodular numbers. The magic-basis spectrum enables us to derive the following results for two-qubit conjugations:
* A complete characterization of LU-equivalence classes: the space of LU-equivalence classes is homeomorphic to the configuration space of four points on a circle.
* The LU-equivalence class of a two-qubit conjugation is invariant under subsystem permutations. No LU-invariant property can show preferences for subsystems.
* The Prod-measurability of two-qubit conjugations can be ascertained by examining their magic-basis spectrum. The determination of the product symmetric measurement associated with a Prod-measurable conjugation can be achieved through its canonical decomposition.
* A lower bound of entanglement resources required for implementing the conjugation-symmetric measurements can be calculated from the magic-basis spectrum.
The investigation of general multipartite conjugations is hindered by the absence of a canonical decomposition.
Conjugation symmetries have led us to optimal estimation procedures for several quantum sensor networks. If a pure-state model of a quantum sensor network is symmetric under a Prod-measurable conjugation, there is a product measurement that saturates the quantum CRB. This single product measurement is optimal everywhere in the multiparameter space, even in one with nuisance parameters. The quantum sensor networks designed for estimating phase functions [49, 33, 25], as well as a network version of the antiparallel model [18], are symmetric under product conjugations. Among the Prod-measurable conjugations, product conjugations have a product symmetric measurement that is informationally complete over all pure symmetric states. Therefore, it is possible to combine such a measurement and the maximum likelihood estimator in order to saturate the quantum CRB and the classical CRB at any parameter value.
Our investigation of conjugations' non-locality is only preliminary, and many problems remain. Specifically, we lack a generally applicable algorithm to search for the symmetric
product measurements of Prod-measurable conjugations. We believe that the most important problem is how to find conjugation symmetries for a given set of quantum states. If we had a systematic method for searching for symmetries at hand, the conjugations would lead us to optimal estimation procedures for a greater variety of quantum sensor networks.
## Acknowledgements
JM is grateful for financial support received through academist, an academic crowdfunding platform. SA was partially supported by JST, PRESTO (Grant no. JPMJPR2111), MEXT Q-LEAP (Grant no. JPMXS0120319794), and the JST Moonshot R&D MILLENNIA Program (Grant no. JPMJMS2061).
| Some quantum information processing protocols require quantum operations that are invariant under complex conjugation. In this work, we analyze the non-local resources necessary for conjugation-symmetric measurements on multipartite quantum networks. We derive conditions for a given conjugation to admit locally implementable symmetric measurements. In particular, a family of numbers called the "magic-basis spectrum" of a two-qubit conjugation completely characterizes its local measurability, and also captures its other properties that are invariant under local unitary transformations. We also explore the non-local resources required for optimal measurements on quantum sensor networks, based on their conjugation symmetries. |
2309.05739 | VisActs: Describing Intent in Communicative Visualization | Data visualization can be defined as the visual communication of information. One important barometer for the success of a visualization is whether the intents of the communicator(s) are faithfully conveyed. The processes of constructing and displaying visualizations have been widely studied by our community. However, due to the lack of consistency in this literature, there is a growing acknowledgment of a need for frameworks and methodologies for classifying and formalizing the communicative component of visualization. This work focuses on intent and introduces how this concept in communicative visualization mirrors concepts in linguistics. We construct a mapping between the two spaces that enables us to leverage relevant frameworks to apply to visualization. We describe this translation as using the philosophy of language as a base for explaining communication in visualization. Furthermore, we illustrate the benefits and point out several prospective research directions. | Keshav Dasu, Yun-Hsin Kuo, Kwan-Liu Ma | 2023-09-11T18:03:47 | http://arxiv.org/abs/2309.05739v1 | # VisActs: Describing Intent in Communicative Visualization
###### Abstract
Data visualization can be defined as the visual communication of information. One important barometer for the success of a visualization is whether the intents of the communicator(s) are faithfully conveyed. The processes of constructing and displaying visualizations have been widely studied by our community. However, due to the lack of consistency in this literature, there is a growing acknowledgment of a need for frameworks and methodologies for classifying and formalizing the communicative component of visualization. This work focuses on intent and introduces how this concept in communicative visualization mirrors concepts in linguistics. We construct a mapping between the two spaces that enables us to leverage relevant frameworks to apply to visualization. We describe this translation as using the philosophy of language as a base for explaining communication in visualization. Furthermore, we illustrate the benefits and point out several prospective research directions.
Speech act theory, Data visualization, Designer intent, Communicative visualization
## 1 Introduction
Data visualization is a vast and growing field. In this paper, we focus on the subspace of communicative visualization, a space that is concerned with the explanatory side of data visualization and is often what the average person is exposed to. In this space, the communicative goals can range depending on the designer's intentions and audience [7, 41, 56]. The role data visualizations play in communication varies: some designers use them to supplement their written and spoken messages, whereas others recognize them as an entirely effective mode of conveying the message [61, 74, 70]. Consequently, we are seeing wide usage of data visualization in industry and academia to communicate increasingly diverse and sophisticated messages. The complexity and diversity of interactive data visualization usage suggest that we could benefit from looking at it as a rich language. Developing frameworks from this perspective could then allow us to glean insights from naturally occurring experiments in practice and enable research that can guide future practice. Given the growing sophistication of visual communication, there is value in exploring the relevance to data visualization of frameworks developed by linguists and language philosophers. Two initial frameworks to examine are speech act theory and discourse theory: speech act theory, because it distinguishes between what is 'said', the intentions of the speaker, and the consequences of what is said in how it is processed by the listener; discourse theory, because it examines how our communication is shaped by external structures.
Here the focus is on intent. Designers are tasked with creating and evaluating visualizations for targeted audiences. These audiences have varying motivations to engage with the presentation and differing levels of prior knowledge of the subject matter. Designers have a wide array of intents. These intents range widely: journalists attempting to _inform_ readers, teachers trying to _explain_ concepts, scientists attempting to _discover_ relationships among variables, policymakers hoping to _persuade_ the public about the rationale for a decision, bloggers seeking to _evoke_ strong emotions, and activists hoping to get volunteers to _act_. How can we classify these intents in a manner that advances our ability to visualize data? Classifying intent is a prerequisite for determining if a visualization adequately satisfies the communicative intent of designers. Thus, to build and evaluate communicative visualizations we need a refined and principled language for describing communicative intent.
Recent work in communicative visualization by Adar and Lee [2] tackles the question of "how do we formally describe communicative intent in visualizations?" Their work offers an initial taxonomy for intents and enables an additional discussion on how to communicate information using visualization. Others have also identified the importance of intent. For example, Schoenlein et al. [65] note a central problem in visual communication is understanding how people infer meaning from visual features. All this points to a need to assess and understand if our communicative goals as designers are being correctly imprinted in our visual features as intended. We posit that when considering the question of "how can we formalize intents" with regards to visualization, we can draw from the philosophy of language, particularly speech act theory [5, 27, 57, 66, 67, 73].
Our work aims to link the field of visualization to the field of linguistics and demonstrates how doing so offers a broader perspective of the space and introduces new opportunities for research that can facilitate design. We illustrate the connection between these spaces by explaining the link between a sub-space of visualization, i.e., communicative visualization, and a sub-space of linguistics, i.e., speech act theory. We show how this relationship can help grow our understanding of communicative visualization. The insights and formalization developed there can guide us in developing a formal language for intent in visualization. With VisActs, we offer a framework to assist in enhancing the overall communicative precision of a data-driven visualization. Our framework complements task-based design analysis by examining the design at a granular level, providing an approach to understanding how designer intent affects low-level design decisions.
In this paper, we (a) propose VisActs, which leverages speech act theory, as a framework for studying intent in visualization and (b) delve deeper into intents by (i) identifying a set of oft-encountered vis design intents, (ii) illustrating the relationship between the intent and visualization (examples of the same content visualized differently based on the intent), (iii) showing how the mode of achievement creates a mesh of intents, and (iv) showing the impact of context and conventions on how intent is realized (or made difficult to achieve).
## 2 Data Visualization as a Language
There is an ongoing discussion on design as communication [8, 17, 24, 38] and there is a body of work [8, 17, 24] that gives credence to viewing visual design as communication. In this work, we engage with this ongoing discussion and identify the implications and research directions that emerge from viewing visualization design as a language.
Visualizations share many commonalities with language as they both express what we observe to others. The goal of visualization, like ordinary speech, often goes beyond presenting facts to achieving actions or outcomes. In any case, how data is visualized can alter how the original context is perceived, e.g., visualizing the uncertainty in data [26, 31].
Treating visualization as a language has been considered, although exploring the value of this association and what it affords is limited. Purchase et al. [60] have explicitly made these connections, and briefly describe the use of linguistic theory, namely pragmatics, to provide an
over-arching framework for information visualization. They comment on the relationship between visualization and language and discuss how information visualization should rely on a multitude of theories rather than a singular theory. Hullman and Diakopoulos [32] study the application of linguistic-based rhetoric in narrative visualizations. Several others have presented theoretical visualization frameworks [37, 47, 58, 14] and implicitly imply that visualization is a language. They elegantly demonstrate how applying frameworks from spaces such as distributed cognition, information theory, an algebraic basis, or conceptual metaphor theory can contribute to the betterment and improved understanding of the use of visualization.
A vocabulary is the body of words used in a language. If we are to claim visualization is a language, then its vocabulary would be visual encodings. This association has been observed by Wilkinson [77], who identified general rules that govern the creation and presentation of data graphics and presented a structure within which these rules might be operationalized efficiently. He supposed that if the grammar is successful, it should be possible to reduce any data visualization problem into a graphic utilizing the rules outlined. Grammar-based visual encodings, as well as declarative languages [12, 28, 29, 45, 51, 62], arose out of a need to fluidly and precisely articulate a set of intents for communicating data. These works provide formalized approaches for describing tables, charts, graphs, maps, and tables and give credence to treating visualization as a language.
In summary, researchers have recognized that visualization is a language and that it would benefit from formalizing the relationships to languages. If we are to treat the totality of visualization as a language and apply linguistic frameworks, we would have common ground for discussion and understanding of the properties of expressing visualizations, thereby facilitating the development of the field.
We present an approach for translating relevant theoretical frameworks from the space of linguistics into visualization. We develop a mapping between a subspace of visualization and linguistics _to illustrate_ the potential for more work in this direction and immediate benefits. Our focus is on the intent of the designer. We propose a theoretical structure for both describing and assessing the intents of visualization designers and the respective consequences.
The motives of the designer of a visualization - to achieve actions or outcomes - and the impact of the visualization on perceptions, whether intended or not, should be considered while developing theoretical frameworks for studying visualizations. A framework to capture how we design interactive visualizations and their effects can be informed by developments in speech act theory. Speech act theory, a sub-field of pragmatics and linguistics, studies how words are used both to present information and to carry out actions. We describe a mapping of speech act theory into visualization and offer a theory of visualization acts, _VisActs_. This framework is linguistically motivated, using the foundation of speech act theory but made relevant for the space of visualization. That is, it must fundamentally account for how visual elements are processed and interpreted, which delves into semiotic theory. Furthermore, it must take into account both the conventional content and the underlying principles of data visualizations. Finally, such a theory should also offer testable predictions about the kinds of _VisActs_ performed across the space of visualization. Particularly, it should offer the ability to assess how our intents manifest within visualization and their respective consequences.
Next, we delve into intent in visualization. Subsequently, we explain speech act theory and how it relates to communicative visualization. This is immediately followed by our introduction of _VisActs_, a translation of speech act theory contextualized for visualization researchers. We ground the relevance and application of this translation through a series of examples, followed by a discussion about our mapping.
## 3 Communicative intent in Visualization
Communicative visualizations are created for a broad audience and represent the majority of visualizations that the public encounters. As stated earlier, communicative visualization occurs in a range of settings including journalism, education, museums, and public policy discussions. The audience differs in terms of their backgrounds, familiarity with the subject, and initial level of interest in the topic. This is in sharp contrast to visualizations that are designed for analysts or domain experts, where the designer has an understanding of their audience's prior knowledge and has some assurance that the expert will use or attempt to use the visualization tools. Furthermore, the designer's intent is to provide visualizations that facilitate a set of specific tasks described by the experts. The diversity in the audience for communicative visualization makes it important to understand intent.
To start, we can consider intent as what the designer, or one who puts forth information, would like to be conveyed and communicated. The intent here would closely parallel the intent of a speaker in routine life. For example, if a child asks her mother while eating soup "is there any salt?", her intent could be to request some salt. However, what she stated may also be a query about the availability of salt. As this example illustrates, the intent of the speaker may not be perceived by the recipient and the desired outcome fails to occur.
Similarly, let us consider a visualization designer who creates a chart about war. One intent of such a visualization could be to terrorize [56] the audience into acting in protest of the war; for example, in Figure 1-(a) the designer produces a visualization that appears dramatic and possibly violent. However, the same data with another intent can be visualized differently, as seen in Figure 1-(b), where a more "neutral" design has a possible intent to inform the public about the war.
In both situations, we want the communicative intentions to be received accurately. However, the diversity of the audience may make the intent of the speaker or the designer different from what is perceived by the receiver. The similarity of the communication challenges in these domains stems from the nature of the audience. Hence, it seems fruitful to leverage the findings in speech act theory to inform design practices in communicative visualization. In the space of visualizations, intent has not been formally defined. In the following subsections, we will offer some formalism. Fundamentally, it is useful to consider intent from two perspectives, user-intents and designer-intents.
### _User Intent_
Dimara and Perin [21], when characterizing interaction in visualization, offer a definition of the term _data-oriented intent_. In their work, they describe interaction as a goal-oriented activity with a data-oriented intent. They survey the literature that describes intents from the perspective of a user [1, 25, 41, 46, 48, 61]. They find that the visualization literature classifies _intent_, from the perspective of a user, as a goal, task, or problem. User intent could be to explore high-level data, gain insights, or gain multiple perspectives of data. The intent of a user can also be to collect and correct data, identify insights, or make decisions. In this literature, intent has been described and identified at a low operational level (e.g., altering representations and collecting data) as well as at a higher level (e.g., information foraging, sense-making, and knowledge creation).
Fig. 1: (a) depicts a visualization, by Simon Scarr [71], with a possible intention to terrorize the audience with how devastating the Iraq War was. (b) is a revision of the same visualization by Andy Cotgreave [4]. He adjusts the design so it is potentially received as more "neutral."
As designers, we tend to remove ourselves and our own intentions, in a manner treating ourselves as an outside entity constructing an interface from the perspective of a user to satisfy the user's intentions. Our research field has identified a variety of ways to adapt [23] our visualizations and systems to the intentions of the user. Designers spend a lot of time describing user intentions in terms of workflows and strategies and curating systems accordingly. A goal as a designer is to create interfaces that enable users to effortlessly express their intentions through the data.
### Designer Intent
In the spaces of narrative visualization and data storytelling, there are many papers [13, 19, 42, 70, 74] that provide frameworks and methodologies for communicating narratives. Although these papers do not explicitly define or identify the designer's intent, they subsume a diffuse concept of intent.
Bako et al. [7] assessed how designers utilize and apply data visualization examples by determining example usefulness, curation practices, and design fixation. Their work gives us methods for capturing the designer's intent as they begin the process of developing their visualizations. Often designers may not be able to articulate the form of what they intend to communicate. Examples are an effective way to express and collage together this form. Another explicitly identified type of designer intent is artistic intent. Artistic intent often disregards functionality, making some works unintentionally incomprehensible. Lau and Moore [40] offer a conceptual model for incorporating these intentions formally. Intent may also have social motivations such as coordination, collaboration, or presentation to an audience.
Recently, Adar and Lee [2, 44] put forth a definition and a taxonomy for expressing intents in communicative visualization. To the best of our knowledge, their work is the only attempt to provide a formal classification of **intents** that is broadly applicable. They proposed a cognitive taxonomy in which they frame intents to be of the structure, "The viewer will [verb] [noun]." Verbs are selected from a specified set of cognitive constructs; nouns are from a set of specified knowledge dimensions. Their primary claim is that a good language for describing intents is the language of learning objectives. They assert that the advantages of using learning objectives are: "(1) being capable of describing objectives regardless of what the viewer wants; (2) allowing for a designer to create specific tests to validate if a visualization leads to a viewer achieving an objective; (3) finding out if the objectives are achieved both when a viewer is looking at a visualization and when it is taken away" [2]. A limitation of their work is that it restricts the intent of the designer to educating the audience. On the other hand, this is the only paper that provides some formalization of the designer's intent.
We seek to add to the discussion of designer intent by providing an alternative perspective for viewing intent in visualization and demonstrating how this perspective can assess and analyze designer intent at a granular level.
### Challenges with Intentions
Our intentions can manifest in many forms in data visualization, especially as our communicative goals evolve and become more nuanced. Through examination of research [2, 43, 56, 65] that addresses the various types of intentions in communicative visualization, we highlight the following set of intentions to illustrate these forms; however, similar to spoken word, they are not limited to this set.
1. **Inform**: the intention is to have the audience be _aware_ of a concept.
Table 1: Key information on speech act theory concepts, applications, and meaning in visualization.

| **Speech Act Theory Taxonomy** | **Description** | **Translation into Visualization** |
| --- | --- | --- |
| **Fundamental Concepts** | | **A theoretical framework for describing the visualization designer's intent.** |
| Locutionary Act | The utterance of a phrase. _What is heard._ | To show data. _What is shared._ (Section 5.2) |
| Phatic Act | An utterance of words which has meaning. | The selection of data. "Data Act" |
| Propositional Act | The act of expressing the proposition, the content. | Expression of data via analysis. "Analytic Act" |
| Sentence Type | The type of sentence (e.g., declarative, exclamatory, etc.) has an impact on the force of an utterance. | The visualization type (i.e., informative, instructive, narrative, explorative, and subjective) has an effect. |
| Illocutionary Act | The utterance of a phrase with an intention. _What is intended._ | The design of a visualization with an intention. _What is seen._ (Section 5.3) |
| Perlocutionary Act | The effect an utterance had on the listener. _The consequence._ | The effect a visualization has on the viewer. _What is understood._ (Section 5.4) |
| Context | A cluster of actual states of affairs or various events related to the utterance. | The objects or entities which surround a focal event and provide resources for its appropriate interpretation. |
| Convention | Societal rules and norms that govern countless behaviors. | Visualization design abides by these as well. |
| **Illocutionary Force** | The speaker's intention behind the utterance. | The designer's design rationale behind their visualization. (Section 5.3) |
| Illocutionary Point (IP) | The point or purpose of a type of illocution. | To visually state, claim, or suggest something is the case. |
| Assertive Point | Convey information. The utterance informs how things are. | To visually state, claim, or suggest something is the case. |
| Commissive Point | Make a commitment. | The guarantees of what a visualization will offer and abide by (data authenticity). |
| Directive Point | Attempts by the speaker to get the hearer to do something. | Engaging or motivating the viewer to do something via the visualization. |
| Declarative Point | Create a new state. Utterances that change the world by representing it as being changed. | Data transitions or transformations as well as predictive visualizations. |
| Expressive Point | Reveal the speaker's attitude or emotion towards a particular proposition. | Revealing personal bias or sharing personal opinions through visualization. |
| Degree of strength of IP | These points can be achieved with different degrees of strength. | Degree of the design's effort to convey an IP through the visualization. |
| Mode of achievement of IP | The various means a speaker utilizes to achieve the IP of an utterance. | The means a designer employs to communicate the IP of the visualization. |
| Propositional Content Conditions | A limitation on the nature of the state of affairs for an IP. | Each IP has conditions that need to be met for the illocution to register. |
| Preparatory Conditions | A state of affairs that is presupposed as a necessary condition for the non-defective employment of the force. | Assumptions the designer makes about a viewer when employing a particular force. |
| Sincerity Conditions | The psychological state of the speaker concerning the IP. | The designer and the viewer take the visualization and all its content as intentional. |
| Degree of strength of sincerity conditions | The strength of the psychological state the speaker commits to when employing an IP. | The designer and the viewer take the visualization and all its content as intentional. |
2. **Educate**: the intention is to have the audience _understand_ a concept.
3. **Emote**: the intention is to _illicit_ an emotional response from the audience. (enjoy, anger, sadness, etc.) from the presentation of the concept.
4. **Provoke**: the intention is to get the audience to _react_ or _take action_ to the concept presented.
5. **Discovery**: the intention is to _obscure_ information on purpose so that the audience works for it and, through that work, _gains_ some insight that can only be gained through this process.
It is known to be challenging to derive, with absolute certainty, an individual's original intentions behind an action [5], and likewise behind a data visualization. However, through other contexts, structures, and cues, it is possible to infer a close approximation of their intent. Linguistics has spent time studying both pragmatic and semantic structures in language as a means to accurately gauge intent, which has attracted interest from law practitioners as well as the NLP (Natural Language Processing) community. We hope frameworks for analyzing data visualizations, such as the one proposed here, can help the rapidly developing communicative visualization subspace and its relationship with ML4VIS.
## 4 Speech Act Theory Fundamentals and Terms
In this section, we will review our translation process and provide additional information on the terminology. Table 1 contains the terminologies that we translate and contextualize for data visualization.
The field of speech act theory examines how words are not only used to convey information but also to carry out actions. Many philosophers and linguists study speech act theory to gain insights and a better understanding of how we communicate. A speech act can be described as something that is expressed by an individual that not only offers some information but also performs an action.
The initial foundation of speech act theory was introduced by J.L. Austin [5] and the theory has since been developed and expanded by several other scholars [57, 56, 27]. Austin introduced the terms _locutionary_, _illocutionary_, and _perlocutionary_ acts. The locutionary act is the utterance of the phrase, the illocutionary act is what was meant or intended by the utterance, and the perlocutionary act is the effect the utterance has upon the listener. The terms locutionary, illocutionary, and perlocutionary can, respectively, be thought of as: what is being put forth, how is it being put forth, and what does putting it forth achieve?
### _Forces_
Classical speech act theory [5, 67, 27, 68] introduces the idea that our utterances, words with some meaning that we put forth, contain a variety of forces. Grice [27] introduced the concept of speaker meaning, a speaker attempts to get the audience to believe something by relying on the audience to take the intention of the speaker as a reason for belief. Grice finds that in order for a speaker's meaning to occur, the speaker must first intend to produce an effect on an audience and also intend that this very intention be recognized by that audience. Next, the speaker must also intend this effect on the audience to be produced at least in part by their recognition of the speaker's intention. Speech act theory recognizes [69, 15, 55] that an illocutionary force contains the intent of the speaker. Namely, illocutionary force is the intended message a speaker assigns to a sentence they utter. Searle and Vanderveken [69] assert that the force in speech is comprised of 7 parts; illocutionary point (IP), degree of strength of the IP, mode of achievement, propositional content conditions, preparatory conditions, sincerity conditions (SC), strength of SC. The illocutionary point can be of the following forms: assertive, commissive, directive, declarative, and expressive.
Neo-Gricean theories modify Grice's principles to some extent. From these modifications, we are given relevance theories [72] as well as modifications to forces allowing for more focus on the speaker's intention. In this work, we use the Neo-Gricean analysis [6, 16] as a basis for our mapping between communication in visualization and speech act theory. Mapping these forces into visualization requires careful consideration of what is consistent and what additionally needs to be factored in. Murray and Starr [55] propose that the force of an utterance is its communicative function. They examined Grice's definition of communicative intention [27] and found that it did not consider how signals are used to coordinate communications. Although Murray and Starr [55] state that the approach we adopt does not address how agents use signals to coordinate, in the context of visualization, we fill this gap using semiotic theory. As visualization designers, we make use of visual signals which is explained by semiotic theory. An important takeaway of semiotics [59, 18, 35] is how social conventions influence meaning. In other words, the force of an utterance is contextual and subject to conventions.
### _Speech Act Example: Alice & Bob_
To provide a clear example of what a speech act is and what one can do with it, let us observe a conversation between Alice and Bob (Fig. 2).
Alice: _"Would it be too much trouble for me to ask you to hand me the salt?"_
Alice utters a sentence to Bob that is asking two questions concurrently. The first question is whether Bob is capable of passing the salt, while the second is an actual request for the salt. The _locutionary act_ in this case is what was said, the literal sentence. The _illocutionary act_ is what Alice means by uttering the sentence, and the _illocutionary force_ is the intent. Specifically, the intention of Alice was a request for Bob to give her salt, so she issued an utterance with the illocutionary force of a command to Bob. If Bob successfully processes this utterance and its force and proceeds to acquire and hand Alice the salt, then he has performed the _perlocutionary act_. In order to identify the illocutionary force and corresponding act, Bob must either passively or actively factor in relevant contextual information or social conventions and practices to help determine what Alice's intents are.
## 5 VisActs
The speech act was developed for understanding and examining how we communicate, specifically with words and speech; however, we find it can be extended past words for visual communication. Visualization is becoming a dominant mode of communication. As visualization becomes more complex for expressing and communicating data, it will inherit the challenges of languages. At one level, it has the ability to express more precisely but concurrently opens itself up to more ambiguity and multiple ways to be interpreted, which can have a variety of implications.
### _Proposed Framework_
With VisActs, we borrow some concepts from linguistics to use as a foundation and then proceed to contextualize and translate how these structures apply in data visualization, specifically the communicative side of data visualization. In speech act theory, we can use three structures to frame intents and their effects: locutionary, illocutionary, and perlocutionary speech acts. With this framework, we offer varying
Fig. 2: Illustration of a speech act. Person A utters a phrase, a locution, to person B. This locution is what person B will hear. Person A simultaneously also performs an illocutionary act by applying a force with their intention. How person B responds or what they do after processing what Person A has said would be classified as a perlocution.
levels of depth for the analysis of designer intent, as seen in Table 1. Furthermore, we focus on a retrospective analysis of designer intentions in visualizations, and particularly data stories, to illustrate VisActs.
To begin our translation, we first contextualize each of these for visualization. A locutionary VisAct is the _data_ or finding we present to the targeted user or audience. An illocutionary VisAct is the _visual representation_ or _encoding_ this finding assumes when presented to the target user or audience. The illocutionary force, or VisForce, is the _design rationale_ for the representation or encoding. Lastly, the perlocutionary VisAct represents the _evaluation_ of the encoding design after the audience has viewed and processed it. Through the perlocutionary VisAct, the designer gains an understanding of whether their intended outcomes were met, that is, whether the audience decoded the encodings and understood the findings or data presented as intended by the designer.
With this framing, we have separated the stages of visualization design into several bins. In the first bin, the locutionary VisAct, we focus on isolating the specific or derived data to convey to the audience. This bin is not concerned with _how_ this data is visually represented or modified but focused on the semantic _what_ part of the data is being shared. It is in the illocutionary VisAct and its accompanying VisForce that we can begin teasing apart and understanding how the design impacts the communication of the data. There are several means through which a designer can transform the visualization to reflect their intentions. The two categories this work will focus on are encoding and interaction design. However, how we communicate, design, and interpret data-driven visual content is also affected by societal conventions and other contextual information. The goal of VisActs is to provide an alternative means to assess how our intentions visually shape the data we are communicating, as well as to better infer a designer's original intentions for producing a visualization.
### Locutionary VisAct
For the purpose of this work, we are only concerned with data that has been identified to be shared. VisActs does not consider data whose content is largely unknown and expected to be explored. The locutionary VisAct made by the designer is the process of selecting data, tasks, and initial analysis methods (i.e., data cleaning), as these choices reflect part of the designer's intentions. For example, in data storytelling, this is the process of identifying the "story pieces" to be communicated [42]. The data selection and modification affect the visualization design, as they may constrain what visualization options, if any, are available [22, 54, 61]. For example, hierarchy suggests depth, temporality may imply change, and spatiality could imply closeness or bonds. Thus, we may be visualizing data as a treemap, flows, or possibly on a map. By taking data types into account, such as nominal, categorical, numerical, and their pairings, we can begin to define a space of what representations are available to fulfill our communicative goals.
### Illocutionary VisAct
The illocutionary VisAct is the process of designing a visualization from the data to then be shared with an audience. This visualization may be interactive but must be data-driven. Similar to speech acts, we are not concerned with visualizations that have no data or "meaning" associated with them. The design of both interactive and static data visualization is heavily influenced by the designer and their choices. In this VisAct, the relationship between the designer's intent, their rationale, and the resulting visualization is captured. How the designer intends to communicate this data (e.g., to educate or persuade) may influence their design rationale.
The intention, or intended purpose of a design choice, is captured by an illocutionary point (IP). VisAct IPs fall under a set of five types: assertive, commissive, directive, declarative, and expressive. An assertive point is a visual element that either states, claims, or suggests the represented data has validity and is true. For example, in Figure 4(c) the solid red lines make an assertive point, visually _stating_ the current trend of Covid-19. A commissive point sets up a guarantee between the design and the audience that it will offer either an explanation, understanding, or action. A simple example would be a slider that sets the time window for a line chart, the guarantee being that the chart should update to reflect the selected window. A directive point would be a design choice that attempts to engage or motivate the audience to act. Declarative points are design elements that transition the visualization into a new state or show predictions of what could transpire (i.e., animated transitions, filtering, or drill-down). Lastly, an expressive point captures the designer's personal opinions as they appear in the visualization. In Figure 1a, each choice to make the chart look like blood dripping could be an expressive point: the color, inverting the y-axis, and the title of the poster.
The design rationale, in turn, is referred to as the VisForce, a force that guides and nudges design decisions into what is finally seen by the audience. A VisForce's influence appears in (1) the encoding design and (2) the interaction design.
**Encoding Design.** How we design visualizations, in terms of binding the data to the visual elements, greatly impacts how the data are perceived and understood by the audience [54, 57, 9, 22]. Certain visualization design choices may elicit emotional responses from the audience, which can also help better communicate the designer's intentions to the audience [39].
**Interaction Design.** Interaction design, as it pertains to data visualization, is heavily documented [48, 41, 46, 21, 4]. From these works we can surmise the following: (1) interaction design affects visually what is seen by the audience, (2) interaction design influences how the audience perceives the data, and (3) interaction design impacts audience engagement with the data. The designer's choice of which interactions are available to the audience can steer the audience towards their goal.
Fig. 3: VisActs consists of three core actions: Locutionary, Illocutionary, and Perlocutionary acts. Through each of these actions, the designer imbues their intent into the visualization in the hope of the audience understanding and interpreting the message as intended.
### Perlocutionary VisAct
A perlocutionary VisAct is performed by the audience. This action informs the designer whether or not their desired outcome has transpired. If the outcome was a _call to act_ over climate change by signing a linked petition and they received a signature, that would be a success. However, if the outcome was to _educate_ museum visitors on metagenomics [19] via an interactive system and the majority of visitors failed to understand the system, then it was unsuccessful. The granularity of success and failure, much like with evaluation, is up to the designer to classify. In the context of data visualization and research, this stage is evaluating what was understood by the audience and how that aligns with what the designer intended to transpire [54, 52, 22].
### Convention
Social conventions are rules and norms that govern countless behaviors we all engage in every day. Bicchieri [10] defines conventions as descriptive norms that have endured the test of time. She states that if one's main objective is to coordinate with others, and the right mutual expectations are present, people will follow whatever convention is in place. In visualization design, adhering to and following conventions is necessary for effective communication. For example, color norms differ based on culture, knowledge, and context. Rainbow-colored maps are typically associated with temperature variations [49]. If a designer applies a rainbow color map in a geospatial visualization to depict crop yields, then many viewers may not properly decode the visualization.
### Context
In communicative visualization, several works [53, 61, 64, 52] identified challenges in the interpretation of data-driven visualizations and how different contexts affect this interpretation. Mukherjee et al. [53] proposed the use of semantic discriminability theory, a general framework for understanding the conditions that determine when people can infer meaning from perceptual features. There is a large body of linguistics research [63, 67, 6, 72, 5] showing how context influences the meaning of an utterance. Sbisà [63] proposes that contexts are continuously shifting, but at each moment of interaction it is possible to evaluate the performed act against the context. This literature suggests context can be classified along the following dimensions: (1) given vs. constructed context, (2) limited vs. unlimited context, and (3) context change.
**Given vs. constructed context:** In a given context, the context is set once the event starts and is not mutable going forward. For example, many narrative visualizations [70, 19, 42] or analytical systems pre-determine or have a fixed context. Whereas in a constructed context, the context of an interactional event is created by its participants as the interaction proceeds. One form of this in visualization could be highly interactive and collaborative visualizations that function off of user inputs. These visualizations evolve and change based on these interactions. A different example of this can be seen in Figure 4, where the context of a public forum influences the designer's intent. This begins with Jeffrey Shaman creating a visualization that is shared on a public forum. The public became invested in whether the design is effective or not, how it can be improved, and what the intent of this visualization is. In response to the visualization, others were created with a different intent. As shown in Figure 4(d), Amelia Wattenberger attempted to improve on the original, Figure 4(b), with some believing she did. The constructed context in this scenario is that, initially, the context of the visualization was to forecast the Omicron variant for a period of time; however, as more individuals debated the effectiveness of the visualization, the new visualizations produced gained a constructed context of attempting to provide an improved design while conveying the original message.
**Limited vs. unlimited context:** When is acquiring information to interpret what is occurring no longer necessary? Is the context finite or something that needs to be continuously provided? Context, in speech act theory, has been considered a bounded resource that only includes 'what is needed' for interpretation or evaluation [36]. Conversely, there is an argument that context is ever-changing and that there is no end to the details one might want or need. Searle [68] views context as indefinitely extensible and potentially all-inclusive. That is, every speech act has meaning only against a set of background assumptions. These assumptions are indefinite, and the number of processes for the analysis and development of an idea is endless. Other views [73] find context always extensible but delimited. They believe that context is needed as background information (or what the speaker believes is, or intends to treat as, background information) and is delimited on every occasion by the presuppositions a speaker happens to make. Additionally, actions typically involve results, such as bringing about a new state, referencing the past, or substituting a new state for an older one. The objective or cognitive nature of context affects the action. After an action or event occurs, its content is added to the participants' presuppositions, and therefore to the cognitive context. For example, in dashboards with linked views, a user may filter on a subset of the data, altering the chart. An accompanying view may re-render to reflect this filter and reveal new insights on the subset, reflecting that initial interaction. This change is an implicit communication to the viewer that these two views are linked and the data in one is influencing the other. The discussion of limited vs. unlimited context is ongoing in speech act theory. However, the distinctions and points made, as well as points made for future works, directly apply to visualization. For example, Mantri et al. [52] examine how a variety of additional contexts impact the interpretation of communicative visualizations and the synthesis of information that comes from consuming cumulative discoveries. They discuss how science and journalism often present their content as an ongoing discussion and evolution rather than a finality (i.e., discoveries that build on, contradict, contextualize, or correct prior findings).
These considerations of context clearly arise in visualization and have been defined implicitly by this classification. Several frameworks [19, 33, 42, 57] have discussed context and its influences on visualization design. For example, they describe external context as (1) an understanding of the target audience's needs and prior knowledge, (2) the type of device or medium the visualization will be expressed through, and (3) the physical setting. Context's effect on inferring and interpreting visualizations has also been examined [57, 52, 53, 61, 64]. Padilla et al. [57] identify how, after viewing the encoding, the audience mentally searches long-term memory for knowledge and context relevant to interpreting the visualization. Furthermore, context, as it
Fig. 4: This figure illustrates a disconnect between an author's intention and the audience's reception. Jeffrey Shaman wrote an article [34] predicting when the Omicron variant of Covid would peak, with an accompanying visualization. The visualization sparked an online Twitter debate (a), with some finding the original design (b) ineffective compared to (c) a line chart. Amelia Wattenberger (d) re-imagined the design with a different intent, which garnered a more positive response, and Amanda Makulec (Executive Director of the Data Visualization Society) (e) wrote a thread [3] on how visualization design can address different needs or intents.
pertains to visualization, has many influences on the design and the designer's intent. Consequently, subtle changes in intent can be reflected and seen in the visualization, as shown in Figure 4. With VisActs, we provide a framework to facilitate studying at a granular level how intents influence the design.
## 5 VisActs: Application to Visualization
To ground the value of viewing visualization as a language and applying speech act theory, we provide a set of examples. The first example uses VisActs to assess a New York Times visualization. These examples use VisActs at a granular level to study intention from the perspective of two archetypes commonly observed in communicative visualization: the storyteller and the educator. As a disclaimer, we cannot know for certain the original designer's intent. However, we can, to an extent, determine what the intent could be based on design decisions and available documentation.
### Example: Storyteller
The storyteller is concerned with expressing data as a narrative. Here, visualization is a means to engage an audience with the data, and the visualizations are carefully sequenced to illustrate causality. We apply _VisActs_ to study a recent narrative visualization piece, _How the Virus Got Out_ [78].
This narrative piece is composed of several visualizations, five of which are shown in Figure 5. The story starts with Figure 5a, a visualization that promises that the authors will explain visually _How the Virus Got Out_ of China. It also asserts that the pandemic started in China.
Let us first focus on what is being shared, not inferred from the visualization. This is defined as the locution or locutionary act. When contextualized as a _VisAct_, the locutionary act and the locution describe what the underlying content is, namely the data. Here, the **locutionary VisAct** is the data and analysis used to understand the spread of the virus. The _data act_ is the selection and curation of the datasets that represent people, their movements, and infection data. As mentioned in the article [78], their data came from Baidu, two Chinese telecoms, the Fred Hutchinson Cancer Research Center, the University of Washington, and the Johns Hopkins Center. The _analytic act_ is the estimations and relevant methods applied to the dataset to help bring out the findings to then be visualized. Lastly, the _data types_ are spatio-temporal.
The visualizations are the **illocutionary VisAct**. The **image act** consists of the low-level visual elements. The viewer would see a web page composed of color, shapes, and text (e.g., the marks and channels). The **encoding acts** determine how marks and channels should be paired and how they are bound to the underlying data. In this example, the data contains temporal information about people's movements, location, and estimates of the percentage of the population with COVID. Individuals are represented as points, the position of a point connotes a geo-spatial location at an instance in time, and the color denotes whether an individual has COVID. The meaning provided by the encoding act is supplemented by the **semiotic act**. The semiotic act constructs the relationships between the image and other factors such as culture and society. The grouping of shapes and text together is seen as a map of China and neighboring countries. This visualization uses projections and map aesthetics that most of the populace has familiarity with from map-based applications. Therefore, the movement of points across this map is associated with transit. In Western cultures, the color red has negative connotations. In this case, red is used to symbolize those infected with the virus. Although the piece presents facts, because of its emphasis on temporal flow and implied causality, the **type of visualization** is a narrative. It is important to note that the type of visualization influences the meaning it will convey.
The **VisForce** is the intention underlying Figure 5a. One intention here is a promise to the readers _to educate_ them on how the virus spread. A _VisForce_ is comprised of the seven properties described in Section 5.3. To understand the _VisForces_ at play, let us first identify the set of **illocutionary points** at work. Figure 5a has a commissive point and an assertive point. It asserts that the virus started in China, and it promises to provide a justification for this assertion. The **degree of strength** of the promise is moderate, as we have to infer the promise. The mode of achievement of the _VisForces_ is through the sequence of steps used to build the visualization. It starts with the text "China", which slides and snaps onto a map of China. Then a set of red dots quickly transitions into streams of red dots flowing out of China. The felicity conditions for these illocutionary points are the assumptions by the designers that they have the information and understanding of the base material and that the reader will benefit from the visualization.
Throughout the story, the visualizations make several assertive points about the spread of COVID, where it originated, and facts about specific days. In Figure 5c, the designers present a set of points depicting the number of reported cases in December. The _VisForces_ consist of two assertive points and an expressive point. The illocutionary _VisAct_ of a small cluster of red points asserts that only a few dozen cases were known to the doctors. The second assertion was that the true number of infected was closer to a thousand. The corresponding illocutionary _VisAct_ is a larger cluster. The expressive point was the emphasis the authors placed on the difference between the two assertions. The illocutionary _VisAct_ to achieve the expressive point is the animation that grows the volume of the dots. Its degree of strength is high.
In Figure 5d, volumes of varying sizes of red points are shown on a map. The designer assumes that the size of the cluster will be interpreted as the size of the infected population by the viewer. We can argue that Figure 5c introduces the context necessary for this interpretation. As we have stated earlier, the _VisForce_ depends on the context, which can change or evolve. The visualization in Figure 5e opens with a declarative point. There is a state change in the _VisAct_. The overall semantic state (image, encoding, and semiotic act) of the visualization has changed. The visual narrative transitions from a geo-spatial visualization to a scale-free representation that mimics a subway map. This enables the designers to make an assertive point of Wuhan's centrality in the pandemic.
In addition to internal contexts, there are external contexts such as social conventions or norms that can passively or directly influence the _VisForce_. For a viewer to recognize the _VisForces_ in this story they must be (1) aware a pandemic occurred resulting in a global lockdown, (2) familiar with reading and interpreting maps, and (3) able to understand English. Also, as we have seen, _context change_ contributes to the intended meaning. Information necessary for interpreting visual encodings is presented and then applied in different settings. Lastly, there are some **conventions** this visualization takes advantage of. Namely, it uses the popular scrolly-telling design and assumes those viewing the page understand the convention of scrolling to reveal new content. The **sincerity** of the designers is seen on the final page of the story, where they provide notes discussing limitations of what is presented as well as sources for the data and statements made.
To recap, we have organized the mapping of this narrative into two sections. The first section, **locutionary acts**, addresses what the designers put forth and what it is we see. In the second part, we focus on **illocutionary acts**. We identify (infer) the intents and examine
| **VisActs Terminology** | **Description** |
| --- | --- |
| **Locutionary VisAct** | To show data. _What is shared_. |
| Data Act | The curation & selection of a dataset(s). |
| Analytic Act | Expression of the data through analysis. |
| Data Type | The type of data (e.g., temporal, spatial, quantitative, discrete, etc.). |
| **Illocutionary VisAct** | To visualize the data. _What is seen_. |
| Image Act | The production of an image. |
| Semiotic Act | The expression of data through signs & codes. |
| Encoding Act | Visual encodings mapped to data. |
| Visualization Type | The type of visualization (i.e., informative, instructive, narrative, explorative, and subjective). |
| **VisForce** | The designer's rationale. |
| **Perlocutionary VisAct** | The effect the visualization has on the audience. _What is understood_. |

Table 2: VisActs terminology. These terms and their mappings were derived from a breadth of linguistics and data visualization research.
how they are expressing their intentions. This is addressed by delving into the _VisForces_, the illocutionary points, modes of achievement, and the context. In this example, the designers use several assertive illocutionary _VisForces_ to convey to the viewer "How the Virus Got Out". We also identified and discussed declarative, expressive, and commissive points.
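For readers who prefer a concrete artifact, such an analysis can also be recorded as a small data structure. The following Python sketch is purely illustrative (the class and field names are our own, not part of the VisActs framework); it encodes the terminology of Table 2 together with the five illocutionary point types:

```python
from dataclasses import dataclass
from enum import Enum

class IllocutionaryPoint(Enum):
    ASSERTIVE = "assertive"
    COMMISSIVE = "commissive"
    DIRECTIVE = "directive"
    DECLARATIVE = "declarative"
    EXPRESSIVE = "expressive"

@dataclass
class VisForce:
    points: list                 # illocutionary points at work
    mode_of_achievement: str     # e.g., "sequenced build of the map"
    degree_of_strength: str      # e.g., "moderate"

@dataclass
class VisActAnnotation:
    locutionary: dict            # data act, analytic act, data type
    illocutionary: dict          # image, semiotic, encoding acts; vis type
    force: VisForce              # the designer's rationale

# Annotating the opening scene of "How the Virus Got Out":
opening = VisActAnnotation(
    locutionary={"data_act": "mobility and infection datasets",
                 "data_type": "spatio-temporal"},
    illocutionary={"visualization_type": "narrative",
                   "encoding_act": "individuals as points on a map"},
    force=VisForce([IllocutionaryPoint.COMMISSIVE, IllocutionaryPoint.ASSERTIVE],
                   "sequenced build of the map", "moderate"),
)
```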
Finally, let us look at the **perlocutionary act**. This third and final component of _VisActs_ addresses the consequences of presenting the visualization to the viewer. The perlocutionary act captures the effect presenting the visualization had and assesses if the viewer understood the designer's intent. In our field, we conduct user studies and evaluations to determine this. To ascertain if the visualization was successful in communicating the intent of asserting the factors that led to the virus spreading across the world, we would need to interview viewers.
### Example: Educator
A common use of communicative visualization is to teach or to inform. An educator uses visualization to simplify complex data to explain concepts. This can take place in a formal setting such as a classroom or in an informal setting like a museum. We examine the museum exhibit DeepTree [11, 20] using our framework.
Here, the **locutionary VisAct** is the phylogenetic tree of life. The _data act_ is the phylogenetic dataset and the corresponding timelines. The _analytic act_ could be any data formatting or association of the phylogenetic tree with the temporal information. Lastly, the _data types_ are temporal and image data.
DeepTree's **illocutionary act**, as seen in Figure 6(a), has a tree-like structure supporting a set of images. The **image act** of Figure 6(a) consists of point and line marks. DeepTree's **encoding act** maps the visual elements to the underlying data. As a result, what is seen in Figure 6(a) is a phylogenetic tree dataset that consists of images and text. The designers create a tree visualization where leaves are images of species. The visualization is composed of line marks and takes advantage of size and position channels to convey the dataset. Images on the tree depict species along with a label with their common name. The **semiotic act** provides a tree metaphor, where the trunk symbolizes the universal common ancestor and the branches the children. The "branches" of this tree represent splits in species and portray the phylogenetic classification. This **type of visualization** is primarily explorative and is supplemented with some informative elements.
As with the prior example, to understand the illocution, we identify the _VisForces_. This visualization makes use of four of the five illocutionary points described in Table 1. It has assertive, directive, commissive, and declarative points. We will focus on the directive and commissive points.
Figure 6(a) is the entry point for this visualization. The _VisForces_ here include commissive and directive points. The _VisAct_ promises to inform the visitor about the Tree of Life. The mode of achievement is a collection of images of different species and animations indicating relationships among them. The directive point is to get the viewers to drag their hands across the screen. The mode of achievement is an animation of a hand dragging downward to instruct viewers how to engage with the application, Figure 6(d). This visualization has several other directive points that are achieved through different modes such as tapping a button, downward dragging to traverse down the tree, pushing upward to move up the tree, flicking to quickly move through the tree, single-touch pan, multi-touch zoom, pushing to select, and dragging an element onto a destination. Each directive point is achieved by using techniques such as highlighting, animating, or annotating visual elements to cue the viewer to interact. The degree of strength for a directive point depends on the visual emphasis placed on that technique.
External factors and conventions also influence the directive points. The museum setting and use of an interactive touch-screen table to display the visualization add to the _VisForce_.
The side panel in Figure 6(a) has a commissive point. The promise here is to inform the viewer of the location in the tree of each species portrayed in the side panel. In DeepTree, when a user selects an image of a species from the side panel, Figure 6(a), the image jumps to its position in the tree. If an image is pressed, a graphical element, an arrow, appears showing the viewer where to slide it (Figure 6(c)). The directive point here is to get the viewer to slide the image. This directive point is weaker than the earlier directive point for getting a viewer to drag their hand onto the table.
Let us next examine the **propositional content condition** for directive points. These are the designers' beliefs that the viewer will perform an action they request. In DeepTree, the designers believe that by animating an element to grow and shrink, adding a highlight around it, and having a text annotation above it saying "learn more", the viewer will tap on it. The **preparatory conditions** for all directive points assume that the viewer is able to perform the suggested actions. The **sincerity condition** of these directive points is that the designer wants the viewer to perform the actions. The degree of strength for the sincerity condition is the importance of these actions to the designer. In DeepTree, it is very important to the designers that the viewers pan and navigate the tree. This action is crucial, as evidenced by the visual emphasis placed on this _VisAct_.
The viewer can press a button that takes them to a new simplified view of the tree. The illocutionary _VisAct_ here is a tree of life contextualized to the selected species, Figure 6(b). The _VisForces_ here have a declarative and a commissive point. The declarative point is the transition from the earlier state, Figure 6(a), to a simplified visualization, Figure 6(b). This declarative point's mode of achievement is an animated transition.
The commissive point the simplified view makes is a promise to the viewer that the designer will return them back to the original state, Figure 6(a). The **mode of achievement** for this commissive point is a button. The **propositional content condition** for this commissive point is that the designer will fulfill the commitment. That is, upon a viewer pressing the "X" button, seen in Figure 6(b), the simplified view will disappear and "close", and they will be returned to the overview, Figure 6(a). The **preparatory conditions** for the commissive point are
Fig. 5: “How the _Virus Got Out_” [78]. (a) Title page, (b) map of Wuhan, China, (c) visualization of what was assumed to be the number of COVID cases in December compared to what it actually was, (d) overview of the virus spread, (e) scale-free representation of the world.
Fig. 6: (a) DeepTree [11] exhibit overview, (b) simplified view, (c) selection interaction, (d) zoom and pan interaction. (permission pending)
that the designer is able to complete this promise within their design. The sincerity condition of this commissive point is that the designer, and therefore the visualization, intends to satisfy the promise.
Briefly, there are many assertive points made in both the simplified and overview visualizations. These assertive points state facts about species and their ancestry. The modes of achievement the designers selected to express their assertive points include dialog/pop-up boxes, annotations, and color to highlight relationships between species.
So far, we have gone over some of the _VisForces_ present in this exhibit to illustrate how to use our framework and the structure it provides. Namely, we showed that there are assertive, declarative, commissive, and directive points and thus those respective _VisForces_. We walked through the properties of some of these forces and gave examples (i.e., we described eight directive _VisForces_, degree of strength, conditions, and mode of achievement). However, we also have to account for how social conventions and external contexts influence the visualization design and its overall meaning.
DeepTree relies on external contexts and conventions present in an informal learning environment, specifically a museum and the considerations and conventions [19] that come with it. Furthermore, DeepTree relies on its viewers having familiarity with touch-based devices [11] (e.g., iPads and iPhones).
Lastly, the perlocutionary _VisAct_ addresses the reaction the viewers had upon seeing the visualization. It can be used to determine if the designer was successful in conveying their intended meaning. The designers of DeepTree documented their evaluation [11], and from it we can see that their directive and commissive _VisForces_ were understood by the viewer. For example, both the commissive force of a promise to the viewer that by tapping the find button something will occur in the future, and the directive force of a request to the user to tap and drag an image off the image reel, were successful. Viewers would tap on the button to find a species, signifying the viewer believes a promise will be fulfilled. Additionally, they were then presented with a slot to place an image. They inferred the directive point and dragged an image onto the slot. After doing so, the promise made by the designer is fulfilled as the visualization "flies" through the tree via animation to where the species in the image is located. This _VisAct_ produced an "emotional" perlocutionary response in the viewers, where the designers documented responses such as "wow, this is big" or "woah".
## 6 Discussion
With VisActs, we present a conceptual framework for inferring designer intent. This framework is not a finality but rather a foundation to be iterated and expanded upon. There is a growing need to infer and assess designer intent, as well as to grow our understanding of which types of design decisions illustrate an intent (i.e., negative emotions can be conveyed via a variety of design choices [39]). With this manuscript, we provide a translation and contextualization of frameworks from linguistics to communicative visualization and offer a framework for inferring intent in interactive data-driven visualizations. In this section, we discuss the prospective values and directions VisActs can lead to.
**Biases Assessment:** One use of VisActs is to tease out design decisions that could reflect the designer's biases. For example, in Figure 7(a) we see a bar chart communicating _"America's economic growth in the 21st century"_. At a glance, this chart shows the tallest bar to be for the year 2021. Upon closer review, however, Figure 7(b), we see that the y-axis has a mistake. This mistake extends the 2021 bar to be slightly exaggerated in comparison to the other years. This mistake was shared for several hours on Twitter before a correction was made. In that span, the community responded with many complaints and comments over the chart. There were claims that it was intentionally added as a means to persuade that the 2021 administration is very effective. The public would attempt to evidence these claims by suggesting the mistake only occurred at a point on the y-axis that would affect the 2021 bar and nothing else. Namely, the public assessed the encoding design to infer a plausible designer intent.
It is impossible to say with absolute certainty whether this case was a genuine mistake or an intentional design choice. However, there is a need to provide structures to assess and better debate the biases in charts that circulate in the wild, as these affect how society trusts data and science. One possible extension of VisActs is to apply the framework to a corpus of data visualizations and create associations between design and possible intents. Through developing richer classification models, we could develop faster or more accurate linters that could identify these discrepancies and provide community notes, Figure 7(d).
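As a toy illustration of the kind of check such a linter could run, consider flagging numeric axes whose tick increments are not uniform. The sketch below is our own, with made-up tick values rather than the actual chart's:

```python
def uniform_increments(ticks, tol=1e-9):
    """Return True if consecutive tick increments are all equal,
    the property violated by the y-axis in Figure 7(b)."""
    steps = [b - a for a, b in zip(ticks, ticks[1:])]
    return all(abs(s - steps[0]) <= tol for s in steps)

print(uniform_increments([3.0, 4.0, 5.0, 6.0, 7.0]))  # True: regular axis
print(uniform_increments([3.0, 4.0, 5.0, 5.5, 6.5]))  # False: half-step slip
```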
**ML4Vis application.** Bako et al. [7] call attention to how visualization examples can be used as inputs to author visualizations as a direction for future work. To help achieve this goal, we need to understand and expand on which aspects of these examples correlate to what the designer intends to do. This could be task-based as well; however, it is possible that task-based approaches may not be granular enough to effectively capture the needs of designers and the specific design elements they may be interested in. Expanding on classification models, VisActs has an application in the ML4Vis space. Principles and frameworks from speech act and discourse theory have been studied and leveraged in NLP. Similarly, VisActs can be utilized in future works to automatically generate visualizations and their design based on designer inputs. By offering a framework to infer the designer's intentions based on design rationale and to assess which features could contribute to particular intentions, we can better train machine learning models on data visualizations and build associations between visual features and particular intentions.
As with Midjourney and DALL·E prompts, VisActs can be a stepping stone to developing applications that allow users to provide their data to visualize and enter prompts to tailor the visualization to their needs. In order to achieve such automation, we need to develop rich associations between designer intents and corresponding design choices. Through VisActs, it is possible to develop these associations. This framework can be applied at a granular level, as seen in the examples for the Storyteller and Educator.
## 7 Conclusion
This work takes the view that visualization is a language and can therefore benefit from applying frameworks and theories from linguistics to systematically understand and analyze how we communicate through data visualization. We provide a translation of a sub-field of linguistics and offer our framework VisActs. We then use examples applying our framework to illustrate its potential application to our field. This translation affords us a means to deconstruct the language of visualization, identify low-level communicative components, and learn how these components individually and collectively accomplish the communicative goal of the visualization. Our detailed examples demonstrate how
Fig. 7: (a) Tweet by the @WhiteHouse account conveying the economic growth of the current administration [75]. (b) The y-axis label increments by a half-step rather than a whole point as it did previously. (c) Nancy Pelosi shares the figure with the error to her following [76] (8.1 million followers). (d) The Twitter community flags the discrepancy, and many replies and threads are made questioning the integrity of the data. (e) @WhiteHouse updates the figure, stating it was a _proofreading_ issue.
these concepts can be used to examine designer intent and describe the forces at play.
This is an initial mapping of the two spaces, and future work can tighten this association and build upon its structure. We believe that our work gives credence to the relevance of linguistics frameworks for the study of visualization and supports continued efforts in translating other frameworks and theories into our domain. We hope our work enables the future integration of theories and frameworks from linguistics into visualization and grows our framework for studying visualization design intent. VisActs provides a standard way, a language, for anybody to examine and describe a visualization, its interaction design, and the designer's intent down to the granular level.
Data visualization defines the visual communication of information. A key measure of a visualization's success is whether the communicator's intent is faithfully conveyed. The process of constructing and presenting visualizations has been widely studied by our community. However, given the inconsistencies across this literature, the need for frameworks and methodologies to classify and formalize the communicative components of visualization has been recognized. This work focuses on intent, contrasting this concept in communicative visualization with its counterpart in linguistics. We construct a mapping between the two spaces in order to apply the relevant frameworks. We use the philosophy of language as a foundation for describing visual communication. Furthermore, we describe the benefits of this approach and several potential research directions.
2301.00063 | The Sticky Lévy Process as a solution to a Time Change Equation | Stochastic Differential Equations (SDEs) were originally devised by Itô to
provide a pathwise construction of diffusion processes. A less explored
approach to represent them is through Time Change Equations (TCEs) as put forth
by Doeblin. TCEs are a generalization of Ordinary Differential Equations driven
by random functions. We present a simple example where TCEs have some advantage
over SDEs.
We represent sticky Lévy processes as the unique solution to a TCE driven
by a Lévy process with no negative jumps. The solution is adapted to the
time-changed filtration of the Lévy process driving the equation. This is in
contrast to the SDE describing sticky Brownian motion, which is known to have
no adapted solutions as first proved by Chitashvili. A known consequence of
such non-adaptability for SDEs is that certain natural approximations to the
solution of the corresponding SDE do not converge in probability, even though
they do converge weakly. Instead, we provide strong approximation schemes for
the solution of our TCE (by adapting Euler's method for ODEs), whenever the
driving Lévy process is strongly approximated. | Miriam Ramírez, Gerónimo Uribe Bravo | 2022-12-30T22:00:58 | http://arxiv.org/abs/2301.00063v2 | # The Sticky Lévy Process as a solution to a Time Change Equation
###### Abstract.
Stochastic Differential Equations (SDEs) were originally devised by Itô to provide a pathwise construction of diffusion processes. A less explored approach to represent them is through Time Change Equations (TCEs) as put forth by Doeblin. TCEs are a generalization of Ordinary Differential Equations driven by random functions. We present a simple example where TCEs have some advantage over SDEs.
We represent sticky Lévy processes as the unique solution to a TCE driven by a Lévy process with no negative jumps. The solution is adapted to the time-changed filtration of the Lévy process driving the equation. This is in contrast to the SDE describing sticky Brownian motion, which is known to have no adapted solutions as first proved by Chitashvili. A known consequence of such non-adaptability for SDEs is that certain natural approximations to the solution of the corresponding SDE do not converge in probability, even though they do converge weakly. Instead, we provide strong approximation schemes for the solution of our TCE (by adapting Euler's method for ODEs), whenever the driving Lévy process is strongly approximated.
2010 Mathematics Subject Classification: 60G51, 60G17, 34F05. Research supported by UNAM-DGAPA-PAPIIT grant IN114720.
## 1. Introduction and statement of the results
Feller's discovery of sticky boundary behavior for Brownian motion on \([0,\infty)\) (in [10, 11]) is, undoubtedly, a remarkable achievement. The discovery is inscribed in the problem of describing every diffusion process on \([0,\infty)\) that behaves as a Brownian motion up to the time the former first hits \(0\). See [10] for a historical account and [12] for probabilistic intuitions and constructions. We now consider a definition of sticky Lévy processes associated to Lévy processes which only jump upwards (also known as Spectrally Positive Lévy processes, abbreviated SPLP). General information on SPLPs can be consulted in [1, Ch. VII].
**Definition 1**.: _Let \(X\) be a SPLP and \(X^{0}\) stand for \(X\) killed upon reaching zero. An extension of \(X^{0}\) will be a cadlag strong Markov process \(Z\) with values in \([0,\infty)\) such that \(X\) and \(Z\) have the same law if killed upon reaching \(0\). We say that \(Z\) is a Lévy process with sticky boundary at \(0\) based on \(X\) (or a sticky Lévy process for short) if \(Z\) is an extension of \(X^{0}\) for which \(0\) is regular and instantaneous and which spends positive time at zero. In other words, if \(Z_{0}=0\) then_
\[0=\inf\{t>0:Z_{t}=0\}=\inf\{t>0:Z_{t}\neq 0\}\quad\text{and}\quad\int_{0}^{ \infty}\mathrm{I}(Z_{s}=0)\,ds>0\quad\text{almost surely.}\]
It is well known that sticky Brownian motion satisfies a stochastic differential equation (SDE) of the form
\[Z_{t}=z+\int_{0}^{t}\mathrm{I}(Z_{s}>0)\,dB_{s}+\gamma\int_{0}^{t}\mathrm{I}(Z _{s}=0)\,ds,\quad t\geq 0, \tag{1}\]
for some constant \(\gamma>0\); this SDE characterizes the so-called sticky Brownian motion with stickiness parameter \(\gamma\). However, as first proved by Chitashvili, the SDE (1) admits no solution adapted to the filtration of the driving Brownian motion. Instead, given a SPLP \(X\), \(z\geq 0\) and \(\gamma>0\), we represent the sticky Lévy process based on \(X\) as the solution to the time change equation

\[Z_{t}=(z+X-\gamma\operatorname{Id})(C_{t})+\gamma t,\qquad C_{t}=\int_{0}^{t}\mathrm{I}(Z_{s}>0)\,ds. \tag{2}\]

Our first main result, Theorem 1, states that if \(X\) has unbounded variation then the TCE (2) admits a unique solution \((Z,C)\), that \(Z\) is adapted to the time-changed filtration of \(X\), and that \(Z\) is a sticky Lévy process based on \(X\). Generalizing the Brownian case, we will compute the boundary condition for the generator of the sticky Lévy process of Theorem 1 in Section 3.3. Generator considerations are also relevant to explain the assumption that \(X\) has no negative jumps: the generator \(\mathscr{L}\) of such a Lévy process acts on functions defined on \(\mathbb{R}\), but immediately makes sense on functions only defined on \([0,\infty)\). This last assertion is not true for the generator of a Lévy process with jumps of both signs.
Our second main result exposes a positive consequence of the adaptability of the solution to the TCE (2). In [1], an equivalent system to the SDE (1) is studied. In particular, it is shown that the non-existence of strong solutions prevents the convergence in probability of certain natural approximations to the solutions of the corresponding SDE, even though they converge weakly. In contrast, we present a simple (albeit strong!) approximation scheme for the solution to the TCE (2). To establish such a convergence result, we start from an approximation to the Lévy process \(X\) which drives the TCE (2).
**Theorem 2**.: _Let \(X\) be a SPLP with unbounded variation. Let \((Z,C)\) denote the unique solution to the TCE (2). Consider \((X^{n},n\geq 1)\) a sequence of processes with cadlag paths, such that each \(X^{n}\) is the piecewise constant extension of some discrete-time process defined on \(\mathbb{N}/n\) and starts at \(0\). Suppose that \(X^{n}\to X\) in the Skorohod topology, either weakly or almost surely. Let \((z_{n},n\geq 1)\) be a sequence of non-negative real numbers converging to a point \(z\). Consider the processes \(C^{n}\) and \(Z^{n}\) defined by \(C^{n}(0)=C^{n}(0-)=0\),_
\[C^{n}(t)=C^{n}(\lfloor nt\rfloor/n-)+(t-\lfloor nt\rfloor/n)\operatorname{I} (Z^{n}(t)>0) \tag{3}\]
_and_
\[Z^{n}(t)=(z_{n}+X^{n}-\gamma\operatorname{Id})(C^{n}(\lfloor nt\rfloor/n))+ \gamma\lfloor nt\rfloor/n. \tag{4}\]
_Then \(C^{n}\to C\) uniformly on compact sets and \(Z^{n}\to Z\) in the Skorohod topology. The type of convergence will be weak or almost sure, depending on the type of convergence of \((X^{n},n\geq 1)\)._
Observe that the above procedure corresponds to an Euler-type approximation for the solution to the TCE (2). If we consider the same equation but now driven by a process for which we could not guarantee the existence of a solution, our approximation scheme might converge but the limit might not be a solution, as shown in the following simple but illustrative example. Let \(X=-\operatorname{Id}\), \(z=0\) and \(\gamma=1\). Then the approximations proposed in (3) and (4) reduce to
\[C^{n}\left(\frac{2k-1}{n}\right)=C^{n}\left(\frac{2k}{n}\right)=\frac{k}{n} \quad\text{ and }\quad Z^{n}\left(\frac{k}{n}\right)=\begin{cases}0&\text{ if k is even}\\ -\frac{1}{n}&\text{ if k is odd}\end{cases}\]
for each \(k\in\mathbb{N}\). These sequences converge to \(C^{*}(t)=t/2\) and \(Z^{*}=0\), but clearly such processes do not satisfy TCE (2). In general, TCEs are very robust under approximations; the failure to converge is related to the fact that the equation that we just considered actually admits no solutions, as commented in a previous paragraph.
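To make the scheme of Theorem 2 concrete, here is a minimal numerical sketch in Python (our illustration only: the function `sticky_euler` and the random-walk driver below are hypothetical names, not code from the paper). It implements the recursions (3) and (4) literally on the grid \(\mathbb{N}/n\), using the fact that \(C^{n}\) takes values in \(\mathbb{N}/n\) at grid points:

```python
import numpy as np

def sticky_euler(x, z, gamma, n, K):
    """Euler-type scheme (3)-(4).  x[j] is X^n(j/n) (piecewise constant
    driver), z >= 0 the initial value, gamma > 0.  Returns the grid
    values Z^n(k/n) and C^n(k/n) for k = 0, ..., K."""
    Z, C = np.empty(K + 1), np.empty(K + 1)
    c = 0  # integer with C^n(k/n) = c/n; C^n is a multiple of 1/n on the grid
    for k in range(K + 1):
        C[k] = c / n
        # (4): Z^n(k/n) = (z + X^n - gamma*Id)(C^n(k/n)) + gamma*k/n
        Z[k] = z + x[c] - gamma * c / n + gamma * k / n
        # (3): C^n grows at unit speed on [k/n, (k+1)/n) iff Z^n(k/n) > 0
        if Z[k] > 0:
            c += 1
    return Z, C

# Illustration: a random-walk approximation X^n of a Brownian motion,
# a SPLP of unbounded variation, so Theorem 2 applies.
rng = np.random.default_rng(0)
n, K = 1000, 1000  # grid of mesh 1/n on the time horizon [0, K/n]
x = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, n ** -0.5, K))))
Z, C = sticky_euler(x, z=0.0, gamma=1.0, n=n, K=K)
print("approximate time spent at 0:", K / n - C[-1])  # positive: stickiness
```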
Weak approximation results for sticky Brownian motion or for Lévy processes of the sticky type have been given in [13] and [12]. In the latter reference, reflecting Brownian motion is used, while in the former, an SDE representation is used. In [1], the reader will find an approximation of sticky Brownian motions by discrete space Markov chains and by diffusions in deep-well potentials, as well as a numerical study and many references regarding applications. In particular, we find there the following phrase, which highlights why Theorem 2 is surprising: _... there are currently no methods to simulate a sticky diffusion directly: there is no practical way to extend existing methods for discretizing SDEs based on choosing discrete time steps, such as Euler-Maruyama or its variants... to sticky processes..._ It is argued that the Markov chain approximation can be extended to multiple sticky Brownian motions. In the setting of multiple sticky Brownian motions, one can consult [1] and [14]. We are only
aware of a strong approximation of sticky Brownian motion, in terms of time-changed embedded simple and symmetric random walks, in [1].
The rest of this paper is structured as follows. We split the proof of Theorem 1 into several parts. In Section 2 we explore a deterministic version of the TCE (2), which is applied in Section 2.1 to show a monotonicity property, the essential ingredient to show uniqueness and convergence of the proposed approximation scheme (Section 2.3). In Section 2.2, we obtain conditions for the existence of the unique solution to the deterministic version of the TCE (2). The purpose of Section 3.1 is to apply the deterministic analysis to prove existence and uniqueness of the solution to the TCE (2) and the approximation Theorem 2. Then in Section 3.2, we verify that the unique process satisfying the TCE (2) is measurable with respect to the time-changed filtration and that it is a sticky Lévy process. Finally, in Section 3.3, using stochastic calculus instead of Theorem 2 from [13], we analyze the boundary behavior of the solution to the proposed TCE to describe the infinitesimal generator of a sticky Lévy process.
## 2. Deterministic analysis
Following the ideas from [11] and [11], we start by considering a deterministic version of the TCE (2).
We will prove that every solution to the corresponding equation satisfies a monotonicity property, which will be the key in the proof of uniqueness. Assume that \(Z\) almost surely solves the TCE (2). Hence, its paths satisfy an equation of the type
\[h(t)=f(c(t))+g(t),\quad c(t)=\int_{0}^{t}\mathrm{I}(h(s)>0)\,ds. \tag{5}\]
where \(f:[0,\infty)\to\mathbb{R}\) is a cadlag function without negative jumps starting at some non-negative value and \(g\) is a non-decreasing cadlag function. (Indeed, we can take as \(f\) a typical sample path of \(t\mapsto z+X_{t}-\gamma t\) and \(g(t)=\gamma t\).) Recall that, \(f\) being cadlag, we can define the jump of \(f\) at \(t\), denoted \(\Delta f(t)\), as \(f(t)-f(t-)\). By a solution to (5), we might refer either to the function \(h\) (from which \(c\) is immediately constructed), or to the pair \((h,c)\).
We first verify the non-negativity of the function \(h\).
**Proposition 1**.: _Let \(f\) and \(g\) be cadlag and assume that \(\Delta f\geq 0\), \(g\) is non-decreasing and \(f(0)+g(0)\geq 0\). Then, every solution \(h\) to the TCE (5) is non-negative. Furthermore, if \(g\) is strictly increasing, the function \(c\) given by \(c(t)=\int_{0}^{t}\mathrm{I}(h(s)>0)\,ds\) is also strictly increasing._
Proof.: Let \(h\) be a solution to (5) and suppose that it takes negative values. Note that \(h(0)=f(0)+g(0)\geq 0\) and that \(h\) is cadlag without negative jumps. Hence, \(h\) reaches \((-\infty,0)\) continuously. The right continuity of \(f\) (and then of \(h\)) ensures the existence of some non-degenerate interval on which \(h\) is negative. Fix \(\varepsilon>0\) small enough to ensure that \(\tau\) defined by
\[\tau=\inf\{t\geq 0:h<0\text{ on }(t,t+\varepsilon)\}\]
is finite. (Note that, with this definition and the fact that \(f\) decreases continuously, we have that \(h(\tau)=0\). ) Given that \(h\) is negative on a right neighborhood of \(\tau\), then
\[\int_{0}^{\tau}\mathrm{I}(h(s)>0)\,ds=\int_{0}^{\tau+\varepsilon}\mathrm{I}( h(s)>0)\,ds,\]
which leads us to a contradiction because
\[0=h(\tau)=f\left(\int_{0}^{\tau}\mathrm{I}(h(s)>0)\,ds\right)+g(\tau)\leq f \left(\int_{0}^{\tau+\varepsilon}\mathrm{I}(h(s)>0)\,ds\right)+g(\tau+ \varepsilon)=h(\tau+\varepsilon)<0.\]
Hence, \(h\) is non-negative.
Assume now that \(g\) is strictly increasing. By definition, \(c\) is non-decreasing. We prove that \(c\) is strictly increasing by contradiction: assume that \(c(t)=c(s)\) for some \(s<t\). Then, \(h=0\) on \((s,t)\) and, by working on a smaller interval, we can assume that \(h(s)=h(t)=0\). However, we then get
\[0=h(s)=f\circ c(s)+g(s)<f\circ c(s)+g(t)=f\circ c(t)+g(t)=h(t)=0.\]
The contradiction implies that \(c\) is strictly increasing.
If \(f_{-}(t)=f(t-)\), note that the above result and (a slight modification of) its proof also hold for solutions to the inequality
\[\int_{s}^{t}\mathrm{I}(h(r)>0)\,dr\leq c(t)-c(s)\leq\int_{s}^{t}\mathrm{I}(h(r )\geq 0)\,dr\]
where \(h(r)=f_{-}\circ c(r)+g_{-}(r)\) and \(f\) and \(g\) satisfy the hypotheses of Proposition 1. These inequalities are natural when studying the stability of solutions to (5) and will come up in the proof of Theorem 2.
### Monotonicity and Uniqueness
The following comparison result for the solutions to Equation (5) will be the key idea in the uniqueness proof of Theorem 1. Moreover, we pick it up again in Section 2.3, where it also plays an essential role in the approximation of sticky Lévy processes.
**Proposition 2**.: _Let \((f^{1},g^{1})\) and \((f^{2},g^{2})\) be pairs of functions satisfying that \(f^{i}\) and \(g^{i}\) are cadlag, \(\Delta f^{i}\geq 0\), \(g^{i}\) is strictly increasing and \(f^{i}(0)+g^{i}(0)\geq 0\). Suppose that \(f^{1}\leq f^{2}\) and \(g^{1}\leq g^{2}\). If \(h^{1}\) and \(h^{2}\) satisfy_
\[h^{i}(t)=f^{i}(c^{i}(t))+g^{i}(t),\qquad c^{i}(t)=\int_{0}^{t}\mathrm{I}(h^{i} (s)>0)\,ds,\]
_for \(i=1,2\), then we have the inequality \(c^{1}\leq c^{2}\). In particular, Equation (5) admits at most one solution when \(g\) is strictly increasing._
Proof.: Fix \(\epsilon>0\) and define \(c^{\epsilon}(t)=c^{2}(\epsilon+t)\). Set
\[\tau=\inf\{t>0:c^{1}(t)>c^{\epsilon}(t)\}.\]
To get a contradiction, suppose that \(\tau<\infty\). The continuity of \(c^{1}\) and \(c^{\epsilon}\) guarantees that \(c^{1}(\tau)=c^{\epsilon}(\tau)\) and \(c^{1}\) is bigger than \(c^{\epsilon}\) at some point \(t\) of every right neighborhood of \(\tau\). At such points, the inequality \(c^{\epsilon}(t)-c^{\epsilon}(\tau)<c^{1}(t)-c^{1}(\tau)\) is satisfied. Applying a change of variable, this is equivalent to
\[\int_{\tau}^{t}\mathrm{I}(h^{2}(\epsilon+s)>0)\,ds<\int_{\tau}^{t}\mathrm{I}(h ^{1}(s)>0)\,ds. \tag{6}\]
The assumptions about \(g^{1}\) and \(g^{2}\) imply that \(g^{1}(\tau)<g^{2}(\epsilon+\tau)\). Therefore
\[0\leq h^{1}(\tau)=f^{1}(c^{1}(\tau))+g^{1}(\tau)<f^{2}(c^{\epsilon}(\tau))+g^ {2}(\epsilon+\tau)=h^{2}(\epsilon+\tau).\]
Thanks to the right continuity of \(h^{2}\), we can choose \(t\) close enough to \(\tau\) such that \(h^{2}(\epsilon+s)>0\) for every \(s\in[\tau,t)\). Going back to the inequality (6), we see that
\[t-\tau=\int_{\tau}^{t}\mathrm{I}(h^{2}(\epsilon+s)>0)\,ds<\int_{\tau}^{t} \mathrm{I}(h^{1}(s)>0)\,ds\leq t-\tau,\]
which is a contradiction. Therefore \(\tau=\infty\) and we conclude the announced result by letting \(\epsilon\to 0\).
In particular, if \((h^{1},c^{1})\) and \((h^{2},c^{2})\) are two solutions to (5) (driven by the same functions \(f\) and \(g\)), then the above monotonicity result (applied twice) implies \(c^{1}=c^{2}\) and therefore \(h^{1}=f\circ c^{1}+g=f\circ c^{2}+g=h^{2}\).
### Existence
The following variant of a well-known result of Skorohod (cf. [10, Chapter VI, Lemma 2.1]) will be helpful to verify the existence of the unique solution to the TCE (5).
**Lemma 1**.: _Let \(f:[0,\infty)\to\mathbb{R}\) be a cadlag function with non-negative jumps and \(f(0)\geq 0\). Then there exists a unique pair of functions \((r,l)\) defined on \([0,\infty)\) which satisfies: \(r=f+l\), \(r\) is non-negative, \(l\) is a non-decreasing continuous function that increases only on the set \(\{s:r(s)=0\}\) and such that \(l(0)=0\). Moreover, the function \(l\) is given by_
\[l(t)=\sup_{s\leq t}(-f(s)\lor 0).\]
Note that the lack of negative jumps of \(f\) is fundamental to obtain a continuous process \(l\).
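On a discretized path, the map \(f\mapsto(r,l)\) of Lemma 1 reduces to a running maximum. A minimal sketch (our illustration in Python, not code from any reference):

```python
import numpy as np

def skorohod_reflection(f):
    """Discrete analogue of Lemma 1: f samples a cadlag path with
    f[0] >= 0 and no negative jumps.  Returns (r, l) with r = f + l,
    where l(t) = sup_{s <= t} (-f(s) v 0) is continuous and non-decreasing."""
    l = np.maximum.accumulate(np.maximum(-f, 0.0))
    return f + l, l
```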
With the above Lemma, we can give a deterministic existence result for equation (5).
**Proposition 3**.: _Assume that \(f\) is cadlag, \(\Delta f\geq 0\) and \(f(0)\geq 0\). Let \((r,l)\) be the pair of processes of Lemma 1 applied to \(f\). If \(\{t\geq 0:r(t)=0\}\) has Lebesgue measure zero, then, for every \(\gamma>0\) there exists a solution \(h\) to_
\[h=f\left(\int_{0}^{t}\mathrm{I}(h(s)>0)\,ds\right)+\gamma\int_{0}^{t}\mathrm{ I}(h(s)=0)\,ds. \tag{7}\]
Equivalently, in terms of Equation (5), the function \(h\) satisfies
\[h=f^{\gamma}\circ c+\gamma\mathrm{Id},\quad c(t)=\int_{0}^{t}\mathrm{I}(h(s)> 0)\,ds. \tag{8}\]
where \(f^{\gamma}(t)=f(t)-\gamma t\).
Proof.: Applying Lemma 1 to \(f\), we deduce the existence of a unique pair of processes \((r,l)\) satisfying \(r(t)=f(t)+l(t)\), with \(r\) a non-negative function and \(l\) a continuous function with non-decreasing paths such that \(l(0)=0\) and
\[\int_{0}^{t}\mathrm{I}(r(s)>0)\,l(ds)=0. \tag{9}\]
To construct the solution to the deterministic TCE (7), let us consider the continuous and strictly increasing function \(a\) defined by \(a(t)=t+l(t)/\gamma\) for every \(t\geq 0\). Denote its inverse by \(c\) and consider the composition \(h=r\circ c\). The hypothesis on \(f\) implies that \(\int_{0}^{t}\mathrm{I}(r(s)=0)\,ds=0\) for all \(t\). Therefore, since \(r\) is non-negative, then
\[t=\int_{0}^{t}\mathrm{I}(r(s)>0)\,ds=\int_{0}^{t}\mathrm{I}(r(s)>0)\,(ds+ \gamma^{-1}l(ds)).\]
Substituting the deterministic time \(t\) for \(c(t)\) in the previous expression and using that \(c\) is the inverse of \(a\), we have
\[c(t)=\int_{0}^{c(t)}\mathrm{I}(r(s)>0)\,a(ds)=\int_{0}^{t}\mathrm{I}(h(s)>0)\,ds.\]
Finally, the definition of \(a\) and its continuity imply \(l(t)=\gamma(a(t)-t)\), so that
\[l(c(t))=\gamma(t-c(t))=\gamma\int_{0}^{t}\mathrm{I}(h(s)=0)\,ds.\]
Hence, the identity \(h(t)=r(c(t))\) can be written as
\[h(t)=f\left(\int_{0}^{t}\mathrm{I}(h(s)>0)\,ds\right)+\gamma\int_{0}^{t} \mathrm{I}(h(s)=0)\,ds,\]
as we wanted.
### Approximation
It is our purpose now to discuss a simple method to approximate the solution to the TCE (7). Among the large number of existing discretization schemes, we choose a widely used one, an adaptation of Euler's method. Again, the key to the proof relies deeply on our monotonicity result.
**Proposition 4**.: _Let \(f\) be cadlag and satisfy \(\Delta f\geq 0\), and \(f(0)\geq 0\). Assume that Equation (7), or equivalently (8), admits a unique solution denoted by \((h,c)\). Let \(\tilde{f}^{n}\) be a sequence of cadlag functions which converge to \(f\) and let \(f^{n}=\tilde{f}^{n}-\gamma\lfloor n\cdot\rfloor/n\). Let \(c^{n}\) and \(h^{n}\) be given by \(c^{n}(0)=c^{n}(0-)=0\),_
\[c^{n}(t)=c^{n}(\lfloor nt\rfloor/n-)+(t-\lfloor nt\rfloor/n)\operatorname{I}(h ^{n}(t)>0) \tag{10}\]
_and_
\[h^{n}(t)=f^{n}(c^{n}(\lfloor nt\rfloor/n))+\gamma\lfloor nt\rfloor/n. \tag{11}\]
_Then \(h^{n}\to h\) in the Skorohod \(J_{1}\) topology and \(c^{n}\to c\) uniformly on compact sets._
Note that Propositions 2 and 3 give us conditions for the existence of a unique solution, which is the main assumption in the above proposition. Also, \(h^{n}\) is piecewise constant on \([(k-1)/n,k/n)\) and, therefore, \(c^{n}\) is piecewise linear on \([(k-1)/n,k/n]\) and, at the endpoints of this interval, \(c^{n}\) takes values in \(\mathbb{N}/n\). Hence, \(c^{n}(\lfloor nt\rfloor/n)=\lfloor nc^{n}(t)\rfloor/n\).
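As an illustration, the recursion (10)-(11) can be coded in a few lines. The following sketch assumes the prelimit functions \(\tilde{f}^{n}\) are simply \(f\) evaluated on the grid; it is not the paper's implementation, and the driving function in the example is ours.

```python
import numpy as np

def euler_tce(f, gamma, n, T):
    # Euler scheme (10)-(11): on each step h is frozen, and c advances by
    # 1/n exactly when h > 0 on that step
    steps = int(np.ceil(T * n))
    c = np.zeros(steps + 1)
    h = np.zeros(steps + 1)
    for k in range(steps + 1):
        t_k = k / n
        # h^n(t_k) = f^n(c^n(t_k)) + gamma*t_k, with f^n = f - gamma*floor(n.)/n
        # and c^n(t_k) a multiple of 1/n, so floor(n c)/n = c there
        h[k] = f(c[k]) - gamma * c[k] + gamma * t_k
        if k < steps:
            c[k + 1] = c[k] + (h[k] > 0) / n
    return c, h

c, h = euler_tce(lambda u: 1.0 + np.sin(8.0 * u), gamma=1.0, n=1000, T=2.0)
```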
The proof of Proposition 4 is structured as follows: we prove that the sequence \((c^{n},n\geq 1)\) is relatively compact. Given a subsequence \((c^{n_{j}},j\geq 1)\) that converges to a certain limit \(c^{*}\), we see that \(((c^{n_{j}},h^{n_{j}}),j\geq 1)\) also converges and its limit is given by \((c^{*},h^{*})\), where \(h^{*}=f^{\gamma}\circ c^{*}+\gamma\operatorname{Id}\) and we recall that \(f^{\gamma}=f-\gamma\operatorname{Id}\). A slight modification of the proof of Proposition 2 implies that the limit \((c^{*},h^{*})\) does not depend on the choice of the subsequence \((n_{j},j\geq 1)\), and consequently the whole sequence \(((c^{n},h^{n}),n\geq 1)\) converges.
Proof of Proposition 4.: Since \(\gamma\operatorname{Id}\) is continuous, then our hypothesis \(\tilde{f}^{n}\to f\) implies that \(f^{n}\to f-\gamma\operatorname{Id}\). (Since addition is not a continuous operation on Skorohod space as in [1, Ex. 12.2], we need to use Theorem 4.1 in [15] or Theorem 12.7.3 in [15].)
Fix \(t_{0}>0\). Note that Equation (10) can be written as
\[c^{n}(t)=\int_{0}^{t}\operatorname{I}(h^{n}(s)>0)\,ds.\]
This guarantees that the functions \(c^{n}\) are Lipschitz continuous with Lipschitz constant equal to \(1\). Hence they are non-decreasing, equicontinuous and uniformly bounded on \([0,t_{0}]\). It follows from the Arzela-Ascoli Theorem that \((c^{n},n\geq 1)\) is relatively compact. Let \((c^{n_{j}},j\geq 1)\) be a subsequence which converges uniformly in the space of continuous functions on \([0,t_{0}]\), and let us call the limit \(c^{*}\), which is non-decreasing and continuous. Actually, \(c^{*}\) is \(1\)-Lipschitz continuous, so that \(c^{*}(t)-c^{*}(s)\leq t-s\) for \(s\leq t\). This is a fundamental fact which will be relevant to proving that \(c=c^{*}\). Since \(c^{n_{j}}(\lfloor n_{j}t\rfloor/n_{j})=\lfloor n_{j}c^{n_{j}}(t)\rfloor/n_{j}\) for every \(t\geq 0\), we can write \(h^{n_{j}}=f^{n_{j}}\circ c^{n_{j}}+\gamma\lfloor n_{j}\cdot\rfloor/n_{j}\). We now prove that, as \(j\to\infty\), \((c^{n_{j}},f^{n_{j}}\circ c^{n_{j}})\to(c^{*},f^{\gamma}\circ c^{*})\). Indeed, the convergence \(f^{n}\to f^{\gamma}\) implies that \(\liminf_{n\to\infty}f^{n}(t_{n})\geq f^{\gamma}_{-}(t)\) whenever \(t_{n}\to t\). (If a proof is needed, note that Proposition 3.6.5 in [1] tells us that the accumulation points of \(f^{n}(t_{n})\) belong to \(\{f^{\gamma}_{-}(t),f^{\gamma}(t)\}\).) Then,
\[I(f^{\gamma}_{-}\circ c^{*}(s)+\gamma s>0)\leq\liminf_{j}\operatorname{I}(f^{ n_{j}}\circ c^{n_{j}}(s)+\gamma\lfloor ns\rfloor/n>0),\]
so that, by Fatou's lemma,
\[\int_{s}^{t}\operatorname{I}(f^{\gamma}_{-}\circ c^{*}(r)+\gamma r>0)\,dr\leq c ^{*}(t)-c^{*}(s).\]
But now, arguing as in Proposition 1, we see that \(f^{\gamma}_{-}\circ c^{*}+\gamma\operatorname{Id}\) is non-negative and that \(c^{*}\) is strictly increasing. Since \(c^{*}\) is continuous and strictly increasing, Theorem 13.2.2 in [15, p. 430] implies that
the composition operation is continuous at \((f^{\gamma},c^{*})\), so that \(f^{n_{j}}\circ c^{n_{j}}\to f^{\gamma}\circ c^{*}\). Since \(\gamma\mathrm{Id}\) is continuous, we see that \(h^{n_{j}}\to h^{*}:=f^{\gamma}\circ c^{*}+\gamma\mathrm{Id}\), as asserted.
Another application of Fatou's lemma gives
\[\int_{s}^{t}\mathrm{I}(f^{\gamma}\circ c^{*}(r)+\gamma r>0)\,dr\leq c^{*}(t)-c ^{*}(s).\]
Now, arguing as in the monotonicity result of Proposition 2, we get \(c\leq c^{*}\).
Let us obtain the converse inequality \(c^{*}\leq c\) by a small adaptation of the proof of the aforementioned proposition, which then finishes the proof of Proposition 4. Let \(\varepsilon>0\), define \(\tilde{c}(t)=c(\varepsilon+t)\) and let \(\tau=\inf\{t\geq 0:c^{*}(t)>\tilde{c}(t)\}\). If \(\tau<\infty\), note that \(c^{*}(\tau)=\tilde{c}(\tau)\) and, in every right neighborhood of \(\tau\), there exists \(t\) such that \(c^{*}(t)>\tilde{c}(t)\). At \(\tau\), observe that
\[0\leq h^{*}(\tau)=f^{\gamma}\circ c^{*}(\tau)+\gamma\tau<f^{\gamma}\circ\tilde {c}(\tau)+\gamma(\tau+\varepsilon)=h(\tau+\varepsilon).\]
Thanks to the right continuity of the right hand side, there exists a right neighborhood of \(\tau\) on which \(h(\cdot+\varepsilon)\) is strictly positive and on which, by definition of \(c\), \(\tilde{c}\) grows linearly. Let \(t\) belong to that right-neighborhood and satisfy \(c^{*}(t)>\tilde{c}(t)\). Since \(c^{*}\) is \(1\)-Lipschitz continuous, we then obtain the contradiction:
\[(t-\tau)=\int_{\tau}^{t}\mathrm{I}(h(\varepsilon+r)>0)\,dr=\tilde{c}(t)- \tilde{c}(\tau)<c^{*}(t)-c^{*}(\tau)\leq t-\tau.\]
Hence, \(\tau=\infty\) and therefore \(c^{*}\leq\tilde{c}\). Since this inequality holds for any \(\varepsilon>0\), we deduce that \(c^{*}\leq c\).
The above implies that \(c^{*}=c\) and consequently \(h^{*}=h\). In other words, the limits \(c^{*}\) and \(h^{*}\) do not depend on the subsequence \((n_{j},j\geq 1)\) and then we conclude the convergence of the whole sequence \(((c^{n},h^{n}),n\geq 1)\) to the unique solution to the TCE (8).
## 3. Application to sticky Levy processes
The aim of this section is to apply the deterministic analysis of the preceding section to prove Theorems 1 and 2. The easy part is to obtain existence, uniqueness and approximation, while the Markov property and the fact that the solution \(Z\) to Equation (2) is a sticky Levy process require some extra (probabilistic) work. We tackle the existence and uniqueness assertions in Theorem 1 and prove Theorem 2 in Subsection 3.1. Then, we prove the strong Markov property of solutions to Equation (2) in Subsection 3.2. This allows us to prove that solutions are sticky Levy processes, thus finishing the proof of Theorem 1, but leaves open the precise computation of the stickiness parameter (or, equivalently, the boundary condition for its infinitesimal generator). We finally obtain the boundary condition in Subsection 3.3. We could use the excursion analysis of [10] to obtain the boundary condition but decided to also include a different proof via stochastic analysis to make the two works independent.
### Existence, Uniqueness and Approximation
We now turn to the proof of the existence and uniqueness assertions in Theorem 1.
Proof of Theorem 1, Existence and Uniqueness.: Note that uniqueness of Equation (2) is immediate from Proposition 2 by replacing the cadlag function \(f\) by the paths of \(z+X-\gamma\mathrm{Id}\) and taking \(g=\gamma\mathrm{Id}\).
To get existence, note that applying Lemma 1 to the paths of \(X\), we deduce the existence of a unique pair of processes \((R,L)\) satisfying \(R_{t}=z+X_{t}+L_{t}\) with \(R\) a non-negative process and \(L\) a continuous process with non-decreasing paths such that \(L_{0}=0\) and \(\int_{0}^{t}\mathrm{I}(R_{s}>0)\,dL_{s}=0.\) In fact, we have an explicit representation of \(L\) as
\[L_{t}=\sup_{s\leq t}((-z-X_{s})\lor 0)=-\inf_{s\leq t}((z+X_{s})\wedge 0). \tag{12}\]
Note that \(R\) corresponds to the process \(X\) reflected at its infimum which has been widely studied as a part of the fluctuation theory of Levy processes (cf. [1, Ch. VI, VII], [10] and [13]).
From the explicit description of the process \(L\) given in (12), it follows that \(\mathbb{P}(R_{t}=0)=\mathbb{P}(X_{t}=\underline{X}_{t})\), where \(\underline{X}_{t}=\inf_{s\leq t}(X_{s}\wedge 0)\). Similarly, we denote \(\overline{X}_{t}=\sup_{s\leq t}(X_{s}\lor 0)\). Proposition 3 from [14, Ch. VI] ensures that the pairs of variables \((X_{t}-\underline{X}_{t},-\underline{X}_{t})\) and \((\overline{X}_{t},\overline{X}_{t}-X_{t})\) have the same distribution under \(\mathbb{P}\). Consequently
\[\mathbb{P}(X_{t}=\underline{X}_{t})=\mathbb{P}((X_{t}-\underline{X}_{t},- \underline{X}_{t})\in\{0\}\times[0,\infty))=\mathbb{P}((\overline{X}_{t}, \overline{X}_{t}-X_{t})\in\{0\}\times[0,\infty))\leq\mathbb{P}(\overline{X}_{ t}=0).\]
The unbounded variation of \(X\) guarantees that \(0\) is regular for \((-\infty,0)\) and for \((0,\infty)\) (as mentioned, this result can be found in [13] and has been extended in [1]). Hence, for any \(t>0\), \(\overline{X}_{t}>0\). We deduce that \(\mathbb{P}(\overline{X}_{t}=0)=1-\mathbb{P}(X_{s}>0\) for some \(s\leq t)=0\). Thus,
\[\mathbb{E}\left[\int_{0}^{\infty}\mathrm{I}(R_{t}=0)\,dt\right]=\int_{0}^{ \infty}\mathbb{P}(X_{t}=\underline{X}_{t})\,dt=0.\]
Therefore, we can apply Proposition 3 to deduce the existence of solutions to Equation (2).
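The proof above is constructive, so a sticky Levy path can be simulated directly from it. A rough sketch, assuming the driving spectrally positive process is a Brownian motion plus positive compound-Poisson jumps (all numerical parameters and the crude per-step jump draw are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sticky_levy_path(z, gamma, T, n):
    dt = T / n
    # crude Euler draw of X: Gaussian part plus positive jumps
    dX = np.sqrt(dt) * rng.standard_normal(n)
    dX += rng.poisson(0.5 * dt, n) * rng.exponential(0.3, n)
    X = np.concatenate([[0.0], np.cumsum(dX)])
    t = np.linspace(0.0, T, n + 1)
    L = np.maximum.accumulate(np.maximum(-(z + X), 0.0))  # formula (12)
    R = z + X + L                  # X reflected at its infimum
    a = t + L / gamma              # a = Id + L / gamma, strictly increasing
    C = np.interp(t, a, t)         # C = inverse of a
    Z = np.interp(C, t, R)         # Z = R o C, the sticky path
    return t, Z

t, Z = sticky_levy_path(z=1.0, gamma=0.5, T=5.0, n=5000)
```

Linear interpolation of \(R\) is a discretization shortcut; at a jump time it smears the jump over one grid step.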
Let us now pass to the proof of Theorem 2.
Proof of Theorem 2.: As we have stated in Theorem 2, we allow the convergence \(X^{n}\to X\) to be weak or almost sure. Using Skorohod's representation theorem, we may assume that it holds almost surely on some suitable probability space. The desired result follows immediately from Proposition 4 by considering the paths of \(f=z+X-\gamma\mathrm{Id}\) and \(f^{n}=z_{n}+X^{n}-\gamma\lfloor n\cdot\rfloor/n\).
### Measurability details and the strong Markov property
In order to complete the proof of Theorem 1, it remains to verify the adaptedness of the unique solution to the TCE (2) to the time-changed filtration \((\widehat{\mathcal{F}_{t}},t\geq 0)\) and that such a solution is, in fact, a sticky Levy process based on \(X\). This is the objective of the current section, which ends the proof of Theorem 1.
By construction the mapping \(t\mapsto C_{t}\) is continuous and strictly increasing. Furthermore, given that \(C\) is the inverse of the map \(t\mapsto t+L_{t}/\gamma\), we can write
\[\{C_{t}\leq s\}=\{\gamma(t-s)\leq L_{s}\}\in\mathcal{F}_{s},\]
for every \(t\geq 0\). In other words, the random time \(C_{t}\) is a \((\mathcal{F}_{s})\)-stopping time, since the filtration is right-continuous. Therefore the process \(C\) is a \((\mathcal{F}_{s})\)-time change and \(Z\) is adapted to the time-changed filtration \((\widehat{\mathcal{F}_{t}},t\geq 0)\). In this sense we say that \(Z\) exhibits no extra randomness beyond that of the original Levy process. This contrasts with the SDE describing sticky Brownian motion (cf. [10, Theorem 1]).
Let us verify that the unique solution \(Z\) to (2) is an extension of the killed process \(X^{0}\). By construction, we see that if \(Z_{0}=z>0\), then \(Z\) equals \(X\) until they both reach zero. Hence \(Z\) and \(X\) have the same law if killed upon reaching zero. Let now \(Z\) be the unique solution of (2) with \(Z_{0}=z=0\). The concrete construction proving existence for (2) in Section 2.2 shows that
\[\gamma\int_{0}^{t}\mathrm{I}(Z_{s}=0)\,ds=L\circ C\]
where \(C_{t}=\int_{0}^{t}\mathrm{I}(Z_{s}>0)\,ds\), \(L_{t}=-\inf_{s\leq t}X_{s}\). We have already argued that the unbounded variation hypothesis implies that \(L_{t}>0\) for any \(t>0\) and therefore \(L_{\infty}>0\) almost surely. As above, recalling that \(C\) is the inverse of \(\mathrm{Id}+L/\gamma\), we see that \(C_{\infty}=\infty\). We conclude that \(L\circ C_{\infty}>0\) almost surely, so that \(Z\) spends positive time at zero. We will now use the unbounded variation of \(X\) to guarantee the regular and instantaneous character of \(0\) for \(Z\). By construction, the unique solution \(Z\) to the TCE (2) is the process \(X\) reflected at its infimum by applying a continuous strictly increasing time change \(C\) to it, that is \(Z=R\circ C\) where \(R=X-\underline{X}\). Consequently
\[\mathbb{P}(\inf\{s>0:Z_{s}=0\}=0)=\mathbb{P}(\inf\{s>0:X\circ C_{s}=\underline {X}\circ C_{s}\}=0)=\mathbb{P}(\inf\{s>0:X_{s}=\underline{X}_{s}\}=0)\,.\]
Since \(0\) is regular for \((-\infty,0)\) thanks to the unbounded variation hypothesis (meaning that \(X\) visits \((-\infty,0)\) immediately upon reaching \(0\)), we conclude the regularity of \(0\). Similarly, given the regularity of \(0\) for \((0,\infty)\) for \(X\), we have
\[\mathbb{P}(\inf\{s>0:Z_{s}>0\}=0)=\mathbb{P}(\inf\{s>0:X_{s}>\underline{X}_{s} \}=0)\geq\mathbb{P}(\inf\{s>0:X_{s}>0\}=0)=1.\]
Thus, \(0\) is an instantaneous point.
To conclude the proof of Theorem 1, it now remains to prove the strong Markov property. From the construction of the unique solution to the TCE (2), we deduce the existence of a measurable mapping \(F_{s}\) that maps the paths of the Levy process \(X\) and the initial condition \(z\) to the unique solution to the TCE (2) evaluated at time \(s\), that is, \(Z_{s}=F_{s}(X,z)\) for \(s\geq 0\). Let \(T\) be a \((\widehat{\mathcal{F}_{t}})\)-stopping time. Approximating \(T\) by a decreasing sequence of \((\widehat{\mathcal{F}_{t}})\)-stopping times \((T^{n},n\geq 1)\) taking only finitely many values, we see that \(C_{T}\) is an \((\mathcal{F}_{t})\)-stopping time. From the TCE (2), we deduce that
\[Z_{T+s}=Z_{T}+(X_{C(T+s)}-X_{C(T)})+\gamma\int_{0}^{s}\mathrm{I}(Z_{T+r}=0)\,dr.\]
Consider the processes \(\tilde{C},\tilde{X}\) and \(\tilde{Z}\) given by \(\tilde{C}_{s}=C(T+s)-C(T)\), \(\tilde{X}_{s}=X_{C(T)+s}-X_{C(T)}\) and \(\tilde{Z}_{s}=Z_{T+s}\) respectively. We can write the last equation as
\[\tilde{Z}_{s}=Z_{T}+\tilde{X}_{\tilde{C}(s)}+\gamma\int_{0}^{s}\mathrm{I}(\tilde{Z}_{r}=0)\,dr, \tag{13}\]
and \(\tilde{C}\) satisfies \(\tilde{C}_{s}=\int_{0}^{s}\mathrm{I}(\tilde{Z}_{r}>0)\,dr\) for \(s\geq 0\). In other words, \(\tilde{Z}\) is solution to the TCE (2) driven by \(\tilde{X}\) with initial condition \(Z_{T}\). Consequently \(\tilde{Z}_{s}=F_{s}(\tilde{X},Z_{T})\). Note that \(\tilde{X}\) has the same distribution as \(X\) and it is independent of \(\widehat{\mathcal{F}_{T}}\). Hence, the conditional law of \(\tilde{Z}\) given \(\widehat{\mathcal{F}_{T}}\) is that of \(F(\cdot,Z_{T})\). (One could make appeal to Lemma 8.7 in [10, p. 169] if needed.) This allows us to conclude that \(Z\) is a strong Markov process and concludes the proof of Theorem 1.
### Stickiness and martingales
In this section we aim at describing the boundary condition of the infinitesimal generator of the sticky Levy process \(Z\) of Theorem 1 by proving the following result.
**Proposition 5**.: _Let \(X\) be a Levy process of unbounded variation and no negative jumps and let \(\mathcal{L}\) be its infinitesimal generator. For a given \(z\geq 0\), let \(Z\) be the unique (strong Markov) process satisfying the time-change equation (2):_
\[Z_{t}=z+X_{\int_{0}^{t}\mathrm{I}(Z_{s}>0)\,ds}+\gamma\int_{0}^{t}\mathrm{I}(Z _{s}=0)\,ds.\]
_Then, for every \(f:[0,\infty)\to\mathbb{R}\) which is of class \(\mathcal{C}_{2,b}\) and which satisfies the boundary condition \(\gamma f^{\prime}(0+)=\mathcal{L}f(0+)\), the process \(M\) defined by_
\[M_{t}=f(Z_{t})-\int_{0}^{t}\mathcal{L}f(Z_{s})\,ds\]
_is a martingale and_
\[\frac{\partial}{\partial t}\bigg{|}_{t=0}\mathbb{E}(f(Z_{t}))=\mathcal{L}f(z).\]
Theorem 2 from [14] describes the domain of the infinitesimal generator of any recurrent extension of \(X^{0}\) (which is proved to be a Feller process) by means of three non-negative constants \(p_{c},p_{d},p_{\kappa}\) and a measure \(\mu\) on \((0,\infty)\). To describe such parameters we note a couple of important facts about the unique solution to (2). By construction we can see that it leaves \(0\) continuously. Indeed, if we consider the left endpoint \(g\) of some excursion interval of \(Z\), then \(C_{g}\) is the left endpoint of some excursion interval of the process reflected at its infimum \(R\). Thanks to Proposition 2 from [14], such excursions start at \(0\), so \(Z\) leaves \(0\) continuously. Thus, from [14], \(p_{c}>0\) and \(\mu=0\). Note also that \(Z\) has infinite lifetime because \(R\) has it and \(C\) is bounded by the identity function, so \(p_{\kappa}=0\). Finally, since \(Z\) spends positive time at \(0\), then \(p_{d}>0\). Theorem 2 from [14] ensures that every function \(f\) in the domain of the infinitesimal generator of \(Z\) satisfies
\[f^{\prime}(0+)=\frac{p_{d}}{p_{c}}\mathcal{L}f(0+).\]
Our proof of Proposition 5 does not require the results from [14]. The main intention is to give an application of stochastic calculus, since we recall that a classical computation of the infinitesimal generator for Levy processes is based on Fourier analysis (cf. [1]). Regarding the generator \(\mathcal{L}\), recall that it can be applied to \(C_{2,b}\) functions such as \(f\) and that \(\mathcal{L}f\) is continuous (an explicit expression is forthcoming). The lack of negative jumps implies that \(\mathcal{L}f\) is defined even if \(f\) is only defined and \(C_{2,b}\) on an open set containing \([0,\infty)\).
Proof of Proposition 5.: Let \(Z\) be the unique solution to the TCE (2) driven by the SPLP \(X\). Ito's formula for semimartingales [13, Chapter II, Theorem 32] guarantees that for every function \(f\in C_{0}^{2}[0,\infty)\):
\[f(Z_{t})= f(z)+\int_{0}^{t}f^{\prime}(Z_{s}^{-})\,dX_{C_{s}}+\int_{0}^{t} \gamma f^{\prime}(Z_{s}^{-})\,\mathrm{I}(Z_{s}^{-}=0)\,ds+\frac{1}{2}\int_{0} ^{t}f^{\prime\prime}(Z_{s}^{-})\,d[Z,Z]_{s}^{c} \tag{14}\] \[+\sum_{s\leq t}(\Delta f(Z_{s})-f^{\prime}(Z_{s}^{-})\Delta Z_{s }).\]
In order to analyze this expression, we recall the so-called Levy-Ito decomposition, which describes the structure of any Levy process in terms of three independent auxiliary Levy processes, each with a different type of path behaviour. Consider the Poisson point process \(N\) of the jumps of \(X\) given by
\[N_{t}=\sum_{s\leq t}\delta_{(s,\Delta X_{s})}.\]
Denote by \(\nu\) the characteristic measure of \(N\), which is called the Levy measure of \(X\) and fulfills the integrability condition \(\int_{(0,\infty)}(1\wedge x^{2})\,\nu(dx)<\infty\). Then, we write the Levy-Ito decomposition as \(X=X^{(1)}+X^{(2)}+X^{(3)}\), where \(X^{(1)}_{t}=bt+\sigma B_{t}\) with \(B\) a Brownian motion independent of \(N\), diffusion coefficient \(\sigma^{2}\geq 0\) and drift \(b=\mathbb{E}[X_{1}-\int_{(0,1]}\int_{[1,\infty)}xN(ds,dx)]\),
\[X^{(2)}_{t}=\int_{(0,t]}\int_{[1,\infty)}xN(ds,dx)\]
is a compound Poisson process consisting of the sum of the large jumps of \(X\) and finally
\[X^{(3)}_{t}=\int_{(0,t]}\int_{(0,1)}x\left(N(ds,dx)-\nu(dx)ds\right)\]
is a square-integrable martingale.
Assuming the Levy-Ito decomposition of \(X\) and using the next result, whose proof is postponed, we will see that \(\int_{0}^{t}f^{\prime}(Z_{s}^{-})\,dX_{C_{s}}\) is a semimartingale of the form
\[M_{t}+\int_{0}^{t}bf^{\prime}(Z_{s}^{-})(1-\mathrm{I}(Z_{s}=0))\,ds+\int_{0}^ {t}f^{\prime}(Z_{s}^{-})\,dX_{C_{s}}^{(2)}, \tag{15}\]
for some square-integrable martingale \(M\).
**Lemma 2**.: _Let \(C\) be a \((\mathcal{F}_{t})\)-time change whose paths are continuous and locally bounded. Let \(X\) be a right-continuous local martingale with respect to \((\mathcal{F}_{t},t\geq 0)\). Then the time-changed process \(X_{C}\) is a right-continuous local martingale with respect to the time-changed filtration \((\widehat{\mathcal{F}_{t}},t\geq 0)\)._
Lemma 2 ensures that the time-changed process \((\sigma B+X^{(3)})\circ C\) remains a local martingale. According to Theorem 20 from [13, Chapter II], square-integrable local martingales are preserved
under stochastic integration provided that the integrand process is adapted and has cadlag paths. Consequently the stochastic integral1\(M=f^{\prime}(Z^{-})\cdot(\sigma B_{C}+X_{C}^{(3)})\) is a \((\widehat{\mathcal{F}_{t}})\)-local martingale. Thanks to Corollary 27.3 from [20, Chapter II], we know that a necessary and sufficient condition for a local martingale to be a square-integrable martingale is that its quadratic variation is integrable. Let us verify that \(\mathbb{E}[[M,M]_{t}]<\infty\) for every \(t\geq 0\). Theorem 10.17 from [1] implies that the quadratic variation of the time-changed process coincides with the time change of the quadratic variation
Footnote 1: We use both notations \(\int H_{s}\,dX_{s}\) and \(H\cdot X\) to refer to the stochastic integral.
\[\left[\sigma B_{C}+X_{C}^{(3)},\sigma B_{C}+X_{C}^{(3)}\right]_{t}=\left[ \sigma B+X^{(3)},\sigma B+X^{(3)}\right]_{C_{t}},\quad t\geq 0.\]
Given that the Brownian motion \(B\) is independent of \(X^{(3)}\), the quadratic variation is \(\sigma^{2}C_{t}+\left[X^{(3)},X^{(3)}\right]_{C_{t}}\), which is bounded by \(\sigma^{2}t+\left[X^{(3)},X^{(3)}\right]_{t}\). Thus
\[\mathbb{E}[[M,M]_{t}]\leq\|f^{\prime}\|_{\infty}^{2}\mathbb{E}\left[\left[ \sigma B+X^{(3)},\sigma B+X^{(3)}\right]_{C_{t}}\right]\leq\|f^{\prime}\|_{ \infty}^{2}\left(\sigma^{2}t+t\int_{(0,1)}x^{2}\,\nu(dx)\right)<\infty.\]
This verifies the decomposition (15). Later we will deal with the last term of this decomposition.
Coming back to Ito's formula (14), we need to calculate the term corresponding to the integral with respect to the continuous part of the quadratic variation of \(Z\). First, we decompose the variation as
\[[Z,Z]_{s}=[X_{C},X_{C}]_{s}+2[X_{C},\gamma(\operatorname{Id}-C)]_{s}+\gamma^{ 2}[\operatorname{Id}-C,\operatorname{Id}-C]_{s},\]
for every \(s\geq 0\). The first term is \([X,X]_{C_{s}}\). Given the finite variation of \(\gamma(\operatorname{Id}-C)\) and the continuity of \(C\), Theorem 26.6 from [17] implies that almost surely the other two terms are zero. Thereby \([Z,Z]_{s}=[X,X]_{C_{s}}\) for every \(s\geq 0\) and
\[\frac{1}{2}\int_{0}^{t}f^{\prime\prime}(Z_{s}^{-})\,d[Z,Z]_{s}^{c}=\frac{1}{2 }\int_{0}^{t}\sigma^{2}f^{\prime\prime}(Z_{s}^{-})(1-\operatorname{I}(Z_{s}=0 ))\,ds.\]
Now we analyze the last term on the right-hand side from (14), which corresponds to the jump part. Let us note that the discontinuities of \(f\circ Z\) derive from the discontinuities of \(Z\), which are caused by the jumps of \(X\circ C\), in other words
\[\{s\leq t:|\Delta f(Z_{s})|>0\}\subseteq\{s\leq t:\Delta Z_{s}>0\}=\{s\leq t: \Delta(X\circ C)_{s}>0\}.\]
Making the change of variable \(r=C_{s}\), the sum of the jumps in (14) can be written as
\[\sum_{r\leq C_{t}}(\Delta f(Z\circ A_{r})-f^{\prime}(Z^{-}\circ A_{r})\Delta( Z\circ A_{r})), \tag{16}\]
where \(A\) denotes the inverse of \(C\). We claim that \(A\) is a \((\widehat{\mathcal{F}_{t}})\)-time change. Indeed, splitting in the cases \(r<t\) and \(r\geq t\), we see that \(\{A_{t}\leq s\}\cap\{C_{s}\leq r\}=\{t\leq C_{s}\leq r\}\in\mathcal{F}_{r}\) for any \(r\geq 0\). Exercise 1.12 from [19, Chapter V] ensures that the time-changed filtration \((\widehat{\mathcal{F}_{A_{t}}},t\geq 0)\) is in fact \((\mathcal{F}_{t},t\geq 0)\). Thus, for any continuous function \(g\), the process \((g(Z_{A_{t}}^{-}),t\geq 0)\) is \((\mathcal{F}_{t})\)-predictable.
We return to (15) to put together the sum of the jumps in (16) and the stochastic integral \((f^{\prime}\circ Z^{-})\cdot(X^{(2)}\circ C)\). For this purpose, it is convenient to rewrite the last integral as \((f^{\prime}\circ Z^{-}\circ A\circ C)\cdot(X^{(2)}\circ C)\) and apply Lemma 10.18 from [1] to deduce that \((f^{\prime}\circ Z^{-})\cdot(X^{(2)}\circ C)=((f^{\prime}\circ Z^{-}\circ A) \cdot X^{(2)})\circ C\).
Consequently
\[\int_{0}^{t}f^{\prime}(Z_{s}^{-})\,dX_{C_{s}}^{(2)}+\sum_{s\leq C_{t} }\left(\Delta f(Z\circ A_{s})-f^{\prime}(Z^{-}\circ A_{s})\Delta(Z\circ A_{s})\right)\] \[\qquad\qquad=\int_{0}^{C_{t}}\int_{(0,\infty)}\left(f(Z_{A_{s}}^{ -}+x)-f(Z_{A_{s}}^{-})-f^{\prime}(Z_{A_{s}}^{-})x\mathrm{I}(x\in(0,1))\right) \,\left(N(ds,dx)-\nu(dx)\,ds\right) \tag{17}\] \[\qquad\qquad\qquad+\int_{0}^{C_{t}}\int_{(0,\infty)}\left(f(Z_{A_{ s}}^{-}+x)-f(Z_{A_{s}}^{-})-f^{\prime}(Z_{A_{s}}^{-})x\mathrm{I}(x\in(0,1)) \right)\,\nu(dx)\,ds.\]
Define the process \(\overline{M}\) by
\[\overline{M}_{t}= -\int_{0}^{t}\int_{[1,\infty)}\left(f(Z_{A_{s}}^{-}+x)-f(Z_{A_{s} }^{-})\right)\,\nu(dx)\,ds\] \[+\int_{0}^{t}\int_{[1,\infty)}\left(f(Z_{A_{s}}^{-}+x)-f(Z_{A_{s} }^{-})\right)\,N(ds,dx)\] \[+\int_{0}^{t}\int_{(0,1)}\left(f(Z_{A_{s}}^{-}+x)-f(Z_{A_{s}}^{- })-f^{\prime}(Z_{A_{s}}^{-})x\right)\,(N(ds,dx)-\nu(dx)\,ds).\]
Since \(\nu\) is a Levy measure, then
\[\mathbb{E}\left[\int_{0}^{t}\int_{[1,\infty)}\left|f(Z_{A_{s}}^{-}+x)-f(Z_{A_ {s}}^{-})\right|\,\nu(dx)\,ds\right]\leq 2\|f\|_{\infty}t\nu([1,\infty))<\infty.\]
We develop the first degree Taylor polynomial of \(f(Z_{A_{s}}^{-}+x)\) to obtain
\[f^{\prime}(Z_{A_{s}}^{-})x=f(Z_{A_{s}}^{-}+x)-f(Z_{A_{s}}^{-})-R(x),\quad x\in (0,1),\]
where the remainder \(R\) satisfies \(|R(x)|\leq\frac{1}{2}\|f^{\prime\prime}\|_{\infty}x^{2}\). Therefore
\[\mathbb{E}\left[\int_{0}^{t}\int_{(0,1)}\left(f(Z_{A_{s}}^{-}+x)-f(Z_{A_{s}}^{ -})-f^{\prime}(Z_{A_{s}}^{-})x\right)\,\nu(dx)\,ds\right]\leq\frac{1}{2}\|f^ {\prime\prime}\|_{\infty}t\mathbb{E}\left[\int_{(0,1)}x^{2}\,\nu(dx)\right]<\infty.\]
Theorem 5.2.1 from [10] ensures that \(\overline{M}\) is a \((\mathcal{F}_{t})\)-local martingale and Lemma 2 implies that \(\overline{M}_{C}\) is a \((\widehat{\mathcal{F}_{t}})\)-local martingale. Furthermore, for \(t\geq 0\) it holds that
\[\mathbb{E}\left[\sup_{s\leq t}|\overline{M}_{C_{s}}|\right]\leq\mathbb{E} \left[\sup_{s\leq t}|\overline{M}_{s}|\right]\leq\left(2\|f\|_{\infty}+\frac{ 1}{2}\|f^{\prime\prime}\|_{\infty}\right)t\int_{(0,\infty)}\left(1\wedge x ^{2}\right)\nu(dx)<\infty.\]
It follows from Theorem 51 from [10, Chapter I] that \(\overline{M}_{C}\) is a true martingale.
Gathering all the expressions involved in Ito's formula (14), we get the semimartingale decomposition
\[f(Z_{t})-f(z)= M_{t}+\int_{0}^{t}bf^{\prime}(Z_{s}^{-})(1-\mathrm{I}(Z_{s}=0))ds +\int_{0}^{t}\gamma f^{\prime}(0+)\,\mathrm{I}(Z_{s}=0)\,ds\] \[+\frac{1}{2}\int_{0}^{t}\sigma^{2}f^{\prime\prime}(Z_{s}^{-})(1- \mathrm{I}(Z_{s}=0))\,ds+\overline{M}_{C_{t}}\] \[+\int_{0}^{C_{t}}\int_{(0,\infty)}\left(f(Z_{A_{s}}^{-}+x)-f(Z_{A_{ s}}^{-})-f^{\prime}(Z_{A_{s}}^{-})x\mathrm{I}(x\in(0,1))\right)\,\nu(dx)\,ds.\]
Recall that the extended generator of \(X\) (as in [11, Ch. VII]) is given by
\[\mathcal{L}f(z)=bf^{\prime}(z)+\frac{\sigma^{2}}{2}f^{\prime\prime}(z)+\int_ {\mathbb{R}_{+}}\left(f(z+x)-f(z)-f^{\prime}(z)x\mathrm{I}(x\in(0,1))\right)\, \nu(dx)\]
on \(C_{2,b}\) functions and that the extended generator of \(X^{0}\) is given by \(\mathcal{L}f\) on \(C_{2,b}\) functions \(f\) on \([0,\infty)\) which vanish (together with their derivatives) at \(0\) and \(\infty\). Note that \(\mathcal{L}f(z)\) is bounded. Define \(\tilde{\mathcal{L}}f(0)\) by
\[\tilde{\mathcal{L}}f(0)=(b-\gamma)f^{\prime}(0+)+\frac{\sigma^{2}}{2}f^{\prime \prime}(0+)+\int_{\mathbb{R}_{+}}\left(f(x)-f(0+)-f^{\prime}(0+)x\mathrm{I}(x \in(0,1))\right)\,\nu(dx).\]
Given that \(\tilde{\mathcal{L}}f(0)=\mathcal{L}f(0+)-\gamma f^{\prime}(0+)\), we can write the martingale \(M+\overline{M}_{C}\) as
\[M_{t}+\overline{M}_{C_{t}}=f(Z_{t})-f(z)-\int_{0}^{t}\mathcal{L}f(Z_{s}^{-})\,ds+ \int_{0}^{t}\tilde{\mathcal{L}}f(0)\,\mathrm{I}(Z_{s}=0)\,ds.\]
We deduce that if a function \(f\in C^{2}[0,\infty)\) satisfies the boundary condition \(\tilde{\mathcal{L}}f(0)=0\) or equivalently \(\gamma f^{\prime}(0+)=\mathcal{L}f(0+)\), then \(f(Z_{t})-f(z)-\int_{0}^{t}\mathcal{L}f(Z_{s})\,ds\) is a martingale. By hypothesis, the last term is bounded by a linear function of \(t\), so that \(\mathbb{E}[f(Z_{t})]\) is differentiable at zero and the derivative equals \(\mathcal{L}f(z)\).
We conclude this section with the proof of Lemma 2.
Proof.: _(Lemma 2)_ Let \((\beta_{n},n\geq 1)\) be a localizing sequence for \(X\); then \(\beta_{n}\to\infty\) as \(n\to\infty\) and, for each \(n\geq 1\), the process \(X^{\beta_{n}}I(\beta_{n}>0)\) is a uniformly integrable martingale. Keeping the notation \(A\) for the inverse of \(C\), we will prove that \((A(\beta_{n}),n\geq 1)\) is a sequence of \((\widehat{\mathcal{F}_{t}})\)-stopping times that localizes \(X_{C}\). The property of being \((\widehat{\mathcal{F}_{t}})\)-stopping times is deduced by observing that \(\{\beta_{n}\leq C_{t}\}\in\mathcal{F}_{\beta_{n}}\cap\mathcal{F}_{C_{t}}\subset \widehat{\mathcal{F}_{t}}\), which implies that
\[\{A(\beta_{n})\leq t\}\cap\{C_{t}\leq s\}=\{\beta_{n}\leq C_{t}\}\cap\{C_{t} \leq s\}\in\mathcal{F}_{s}.\]
Since \(C\circ A=\mathrm{Id}\), then
\[(X\circ C)_{t}^{A(\beta_{n})}=X_{C_{t}\wedge\beta_{n}}=X_{C_{t}}^{\beta_{n}}.\]
Given that \(X^{\beta_{n}}\) is a \((\mathcal{F}_{t})\)-martingale, the Optional Stopping Theorem guarantees that
\[\mathbb{E}\left[X_{C_{t}}^{\beta_{n}}\,\Big{|}\,\mathcal{F}_{C_{s}}\right]=X_{C_ {s}}^{\beta_{n}},\quad 0\leq s\leq t.\]
Hence \((X\circ C)^{A(\beta_{n})}\) is a \((\widehat{\mathcal{F}_{t}})\)-martingale. Moreover \(A(\beta_{n})\to\infty\) as \(n\to\infty\) since \(C\leq\mathrm{Id}\).
Stochastic Differential Equations (SDEs) were created by Itô with the aim of computing the paths of diffusion processes. Time Change Equations (TCEs), a method proposed by Doeblin, express such processes as generalizations of ordinary differential equations driven by random functions. We give an example in which TCEs are superior to SDEs: through a filtration time-changed by the driving Lévy process, we represent sticky Lévy processes as the unique solutions to TCEs driven by Lévy processes.
2309.14634 | Synchronizing Full-Body Avatar Transforms with WebRTC DataChannel on
Educational Metaverse | Full-body avatars are suggested to be beneficial for communication in virtual
environments, and consistency between users' voices and gestures is considered
essential to ensure communication quality. This paper proposes extending the
functionality of a web-based VR platform to support the use of full-body
avatars and delegating avatar transforms synchronization to WebRTC DataChannel
to enhance the consistency between voices and gestures. Finally, we conducted a
preliminary validation to confirm the consistency. | Yong-Hao Hu, Kenichiro Ito, Ayumi Igarashi | 2023-09-26T03:28:09 | http://arxiv.org/abs/2309.14634v1 | # Synchronizing Full-Body Avatar Transforms with WebRTC DataChannel on Educational Metaverse
###### Abstract
Full-body avatars are suggested to be beneficial for communication in virtual environments, and consistency between users' voices and gestures is considered essential to ensure communication quality. This paper proposes extending the functionality of a web-based VR platform to support the use of full-body avatars and delegating avatar transforms synchronization to WebRTC DataChannel to enhance the consistency between voices and gestures. Finally, we conducted a preliminary validation to confirm the consistency.
Metaverse, Real-time Communication, Web User Interface
## I Introduction
'Metaverse' is commonly defined as a 3D virtual environment where interactions among users occur, accessible via computers or Virtual Reality (VR) devices, and it has found utility across diverse areas, including education. We have started developing an educational platform in the metaverse based on Mozilla Hubs, an open-source web-based VR platform [1].
'Avatar' refers to a character that represents and is controlled by a user in a virtual environment. Currently, Mozilla Hubs only supports avatars having a simplified upper body (Fig. 3a), which may reduce computation costs on the client side, ensuring Mozilla Hubs' high accessibility even on low-end devices. On the other hand, it limits the conveyance of non-verbal information through body gestures. We believe such gestures are also essential for effective communication, and the use of full-body avatars that possess complete body parts like a real human (Fig. 3b) can compensate for this limitation.
In fact, previous studies have shown how full-body avatars benefit communication in virtual environments [2, 3]. Similarly, full-body avatars can improve sense of presence [4], which in turn play an important role in learning [5]. Following these findings, we decided to integrate full-body avatars to the Mozilla-Hubs-based platform we are developing.
In addition, Mozilla Hubs currently synchronizes avatar transforms (changes in the position, orientation, or size of an avatar's bones) through WebSocket on a mesh network, while we considered WebRTC DataChannel more suitable to synchronize avatar transforms due to security concerns and consistency between users' voices and gestures. In this study, we aimed to expand Mozilla Hubs' implementation to enable the use of full-body avatars and to have the full-body avatar transforms synchronized by WebRTC DataChannel.
## II Proof of Concept
### _Implementation_
#### Ii-A1 Accommodate Full-body Avatars in Mozilla Hubs
In the original implementation of Mozilla Hubs, avatars are hard-coded to contain a limited set of bone names and a fixed hierarchy 1, and other avatars with different skeletons, including full-body avatars, are usually neither operable nor rendered properly. To address this, we prepared a bone-mapping function that maps the bones to their corresponding body parts by checking the similarity of their names (e.g. LowerArm.R is mapped to the right elbow). We then implemented Cyclic Coordinate Descent Inverse Kinematics [6] so that a full-body avatar can still reflect its user's poses naturally with limited inputs.
Footnote 1: [https://github.com/MozillaReality/hubs-avatar-pipelines](https://github.com/MozillaReality/hubs-avatar-pipelines)
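As a rough illustration of the pose-solving step, the following is a minimal 2D Cyclic Coordinate Descent sketch; the actual implementation runs in the browser over the mapped avatar bones, and all names and parameters below are ours, not the paper's code.

```python
import numpy as np

def ccd_ik(joints, target, iters=20, tol=1e-3):
    # joints: (n, 2) positions from root to end effector; each pass rotates
    # every joint so the end effector swings toward the target
    joints = np.asarray(joints, dtype=float).copy()
    for _ in range(iters):
        for i in range(len(joints) - 2, -1, -1):  # last joint back to root
            to_eff = joints[-1] - joints[i]
            to_tgt = target - joints[i]
            ang = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_eff[1], to_eff[0])
            c, s = np.cos(ang), np.sin(ang)
            rot = np.array([[c, -s], [s, c]])
            # rotate every joint downstream of joint i about joint i
            joints[i + 1:] = (joints[i + 1:] - joints[i]) @ rot.T + joints[i]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

arm = ccd_ik([[0, 0], [1, 0], [2, 0]], target=np.array([1.2, 1.2]))
```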
#### Ii-A2 Synchronization of Full-body Avatar Transforms by WebRTC DataChannel
WebRTC (Web Real-Time Communication) 2 enables real-time data transmission between web browsers without passing through a central server. WebRTC DataChannel is capable of transmitting text or binary data, implemented on top of UDP while retaining reliability similar to TCP, and incorporating DTLS to encrypt the data transmission.
Footnote 2: [https://webrtc.org/](https://webrtc.org/)
In our previous study [1], we introduced an alternative WebRTC SFU solution into Mozilla Hubs to enhance its audio transmission, and we also delegated the transmission of avatar transforms to DataChannel. In this current study, we continued to employ this delegation and extended it to encompass full-body avatars for the following reasons.
Firstly, latency on the synchronization of voices and gestures may cause inconsistency between verbal and non-verbal cues, impacting the communication quality and effectiveness. We argued that real-time alignment between verbal and non-verbal cues with minimal latency should be prioritized, and DataChannel holds the potential to address this concern.
Moreover, within the metaverse, avatar transforms can be regarded as sensitive personal data, especially when using full-body avatars, which reproduce a user's real body poses and thereby reveal more about the user's identity [7]. Leveraging WebRTC DataChannel allows us to encrypt the avatar transforms transmission and avoid routing through central servers, thereby enhancing security.
### _Preliminary Validation in Data Transmission Latency_
A preliminary validation was conducted to measure the transmission latency between audio and avatar transforms within both the original Mozilla Hubs architecture and our proposed one. We hypothesized that our implementation yields lower latency between avatar transforms and audio, resulting in higher consistency between the two.
With one Apple MacBook serving as the Sender and one Windows 10 desktop computer serving as the Observer, the two devices were connected to the same room on self-hosted Mozilla Hubs (Fig. 1 and 2). For 5 minutes, the Sender repeatedly played a 3-second audio clip and synchronized circular movements of its avatar's left hand. The avatar's left-hand positions were transmitted to the Observer, while the audio clip was captured by the Sender's microphone and also transmitted. Timestamps were recorded whenever the Observer received the audio or observed changes in the Sender's avatar.
Subsequently, the transmission latency between avatar transforms and audio was calculated using the recorded timestamps, and avatar transforms were found to be transmitted faster than audio in both conditions (Fig. 4). In our proposed implementation, the average latency was 257.64 ms (SD = 16.10 ms), while in the original implementation, it was 184.04 ms (SD = 49.81 ms). This suggests that, compared to the original implementation, our approach results in higher dispersion between audio and avatar transforms, due to either slower audio transmission or faster synchronization of avatar transforms. Further investigation is needed to clarify this.
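For reproducibility, the latency statistics can be computed from the paired Observer-side timestamps as below; the synthetic numbers here merely mimic the reported means and are not the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
# one timestamp pair (ms) per repetition of the 3 s clip over 5 minutes
avatar_ts = np.cumsum(rng.uniform(2900, 3100, 100))
audio_ts = avatar_ts + rng.normal(257.64, 16.10, 100)  # audio arrives later

lat = audio_ts - avatar_ts  # > 0 means avatar transforms arrived first
print(f"mean latency {lat.mean():.2f} ms, SD {lat.std(ddof=1):.2f} ms")
```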
Higher variation was also observed in latency in the original implementation, as evident from the graph and the larger standard deviation, indicating greater stability in our proposed implementation, which can also contribute to higher consistency between audio and avatar transforms.
Regarding the limitations of this validation, it is worth noting that the avatar's movement was triggered when the audio clip was played, not precisely when the Sender started the audio data transmission. This discrepancy might have contributed to slower audio data arrivals at the Observer side.
Lastly, it is essential to acknowledge that this preliminary validation involved only two devices in the same room. In rooms accommodating more users, latency in both audio transmission and avatar transforms synchronization may become more obvious, allowing for more meaningful comparisons.
## III Conclusion
We extended the functionality of Mozilla Hubs to support the use of full-body avatars and delegated full-body avatar transforms synchronization to WebRTC DataChannel. The result of a preliminary validation failed to demonstrate a more accurate synchronization but indicated more consistent time differentials between audio and avatar transforms in our implementation. To gain a clearer understanding of the latency improvement, further investigation is required under higher client load, and we are also planning a usability assessment of our implementation within an educational context.
## Acknowledgment
This work was partially supported by the following grants: JST Grant Number JPMJPF2202.
Full-body avatars are suggested to be beneficial for communication in virtual environments, and consistency between users' voices and gestures is considered essential for ensuring communication quality. This paper proposes extending the functionality of a web-based VR platform to support the use of full-body avatars and delegating avatar transform synchronization to the WebRTC DataChannel so that voices and gestures remain consistent. Finally, we conducted a preliminary validation to confirm this consistency.
2309.04427 | Robust Representation Learning for Privacy-Preserving Machine Learning:
A Multi-Objective Autoencoder Approach | Several domains increasingly rely on machine learning in their applications.
The resulting heavy dependence on data has led to the emergence of various laws
and regulations around data ethics and privacy and growing awareness of the
need for privacy-preserving machine learning (ppML). Current ppML techniques
utilize methods that are either purely based on cryptography, such as
homomorphic encryption, or that introduce noise into the input, such as
differential privacy. The main criticism given to those techniques is the fact
that they either are too slow or they trade off a model s performance for
improved confidentiality. To address this performance reduction, we aim to
leverage robust representation learning as a way of encoding our data while
optimizing the privacy-utility trade-off. Our method centers on training
autoencoders in a multi-objective manner and then concatenating the latent and
learned features from the encoding part as the encoded form of our data. Such a
deep learning-powered encoding can then safely be sent to a third party for
intensive training and hyperparameter tuning. With our proposed framework, we
can share our data and use third party tools without being under the threat of
revealing its original form. We empirically validate our results on unimodal
and multimodal settings, the latter following a vertical splitting system and
show improved performance over state-of-the-art. | Sofiane Ouaari, Ali Burak Ünal, Mete Akgün, Nico Pfeifer | 2023-09-08T16:41:25 | http://arxiv.org/abs/2309.04427v1 | Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach
###### Abstract
Several domains increasingly rely on machine learning in their applications. The resulting heavy dependence on data has led to the emergence of various laws and regulations around data ethics and privacy and growing awareness of the need for privacy-preserving machine learning (ppML). Current ppML techniques utilize methods that are either purely based on cryptography, such as homomorphic encryption, or that introduce noise into the input, such as differential privacy. The main criticism given to those techniques is the fact that they either are too slow or they trade off a model's performance for improved confidentiality. To address this performance reduction, we aim to leverage robust representation learning as a way of encoding our data while optimising the privacy-utility trade-off. Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoding part as the encoded form of our data. Such a deep learning-powered encoding can then safely be sent to a third party for intensive training and hyperparameter tuning. With our proposed framework, we can share our data and use third party tools without being under the threat of revealing its original form. We empirically validate our results on unimodal and multimodal settings, the latter following a vertical splitting system and show improved performance over state-of-the-art.
## I Introduction
A wide range of application sectors are rapidly integrating machine learning (ML) into diverse products. A successful ML model often requires a huge amount of training data and powerful computational resources. However, the need for such enormous volumes of data to develop performing models raises serious privacy concerns. Such ML models might face multiple types of adversarial attacks depending on the type of access an adversary might have to the model (white-box or black-box). A membership inference attack (Shokri et al., 2017) allows an attacker to query a trained machine learning model to predict whether a given example is in the model's training data set. On the other hand, an inversion attack (Fredrikson et al., 2015; K.-C. Wang et al., 2021; Ye et al., 2022) aims to recreate an input data point given a confidence score obtained from a black-box inference of the model. In order to make researchers and engineers take such privacy threats into consideration, many regulations and ethical data policies, such as GDPR, CCPA, and CPRA (Hijmans & Raab, 2018; Rochel, 2021), were established to raise awareness around this topic and restrict any data violations that might occur in a given ML pipeline.
Previous works have been done to reduce the effectiveness of different privacy attacks. Among those, differential privacy (DP) is the most commonly used approach, which operates by incorporating predetermined randomization into a machine learning algorithm's computation. The perturbation introduced by DP might be applied to the user's input, parameters, prediction output, and even loss functions (Abadi et al., 2016; Phan et al., 2016). However, many studies have shown that such noise reduces the performance of the model for the sake of privacy (Truex et al., 2019). Furthermore, setting a correct value for \(\epsilon\) is complex by nature and requires some trial and error.
Homomorphic encryption (HE) is a cryptographic method applied in the domain of ppML. It is defined as a type of encryption that allows computations to be performed on encrypted data without first decrypting it with a secret key. Yet HE has some limitations. It was originally designed to allow only algebraic operations such as addition and multiplication, which excludes the various non-linear activation functions leveraged in neural networks. Numerous studies have been conducted to approximate such functions using polynomials (Hesamifard et al., 2017; J.-W. Lee et al., 2022; J. Lee et al., 2021); however, such approximations incur a high computational burden and limit the depth of the deep learning models that can be applied, since evaluating deeper networks homomorphically significantly increases computation time.
The aim behind this paper is to create a deep learning-oriented encoding strategy by training a supervised residual autoencoder and concatenating the features learned from the encoder part as the representation to be shared with third parties for further training and an extensive hyperparameter search. A framework which trains an autoencoder and shares its latent space embedding with other parties for data sharing purposes was presented by Maria Quintero-Ossa et al., 2022. We consider such a framework as a baseline in our experiments and empirically demonstrate (see Section V-B) that our suggested architecture considerably improves the
performance. In contrast to Maria Quintero-Ossa et al., 2022, we also provide a threat analysis to discuss how secure our framework is and the different types of access that an adversary might have, considering different actors directly involved and interacting with our framework. The autoencoder proposed in our framework is trained in a multi-objective fashion by simultaneously considering the data reconstruction and the supervised learning problem, and ensuring an informative and discriminative representation. This ppML encoding is applied to the data part of the machine learning pipeline and we empirically demonstrate the efficiency of this method by first experimenting on unimodal settings using the MNIST, FashionMNIST, Leukemia, and Retinal OCT datasets. In addition, we further explored the capabilities offered by our ppML framework on multimodal data distributed in a vertical setting, where each modality is provided by a given data party. For this purpose, a TCGA multi-omics breast cancer dataset was leveraged. We summarize our main contributions as follows:
* We developed a considerably improved version of the data sharing strategy through latent space embedding proposed by Maria Quintero-Ossa et al., 2022 by increasing the performance on different prediction tasks and providing a detailed threat analysis.
* We demonstrated the application flow of our proposed encoding framework for both unimodal and multimodal (vertically distributed) settings.
* We empirically validated our approach to be utility-privacy efficient by comparing models trained on the original data against models trained on the generated encoded data and show that both perform equally.
## II Background & Related Work
In this section, we present previous works that have been performed in the sphere of autoencoders, representation learning and ppML methods enhanced with deep learning.
### _Autoencoders_
Autoencoders are a type of neural network originally implemented to solve the unsupervised task of data reconstruction. Formally, an autoencoder is defined by three main parts: an encoder _E(.)_, a latent space representation \(s\) and a decoder _D(.)_. Given \(x,\hat{x}\in\mathbb{R}^{d}\) and \(s\in\mathbb{R}^{m}\), we have \(s=E(x)\) and \(\hat{x}=D(s)\), with \(m<<d\) and \(\hat{x}\) defined as the reconstructed output of the original input \(x\). Beyond their original purpose, autoencoders have been extended to other applications such as data generation with variational autoencoders (Kingma & Welling, 2013), anomaly detection (Sakurada & Yairi, 2014) and recommendation systems (Ferreira et al., 2020; Pan et al., 2020). They were also leveraged for supervised learning purposes: Le et al., 2018 implemented a supervised autoencoder (SAE) where the latent space \(s\) is linked to a classifier \(f_{c}\) trained in parallel with the original data reconstruction problem, with the overall loss function defined as follows:
\[L(x,y,\theta_{e},\theta_{d},\theta_{c})=\frac{1}{t}\sum_{i=1}^{t}\left[L_{r}(x_{i},D(E(x_{i},\theta_{e}),\theta_{d}))+L_{c}(y_{i},f_{c}(E(x_{i},\theta_{e}),\theta_{c}))\right] \tag{1}\]
with \(\theta_{e}\), \(\theta_{d}\) and \(\theta_{c}\) being the parameters of the encoder, decoder and classifier respectively, and \(L_{r}(.,.)\), \(L_{c}(.,.)\) defined as the reconstruction and categorical cross-entropy losses.
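A minimal PyTorch sketch of this multi-objective training follows; the layer sizes are illustrative choices of ours, not those of the cited work.

```python
import torch
import torch.nn as nn

class SAE(nn.Module):
    # supervised autoencoder: the latent code s feeds both decoder and classifier
    def __init__(self, d=784, m=32, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, m))
        self.dec = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, d))
        self.clf = nn.Linear(m, n_classes)

    def forward(self, x):
        s = self.enc(x)
        return self.dec(s), self.clf(s)

model = SAE()
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
x, y = torch.randn(16, 784), torch.randint(0, 10, (16,))
x_hat, logits = model(x)
loss = mse(x_hat, x) + ce(logits, y)  # Eq. (1): L_r + L_c over the batch
loss.backward()
```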
### _Representation learning_
A good encoding demands an informative representation of the original data: it reduces the dimension of the input without losing the inter-dependencies and the important relations needed for a given ML model to perform efficiently on a given task. A reasonably sized learnt representation might encompass a vast array of potential input configurations, because good representations are expressive. Bengio et al. (2013) surveyed what constitutes a good representation, which we took into consideration while developing our framework.
* _Smoothness_: Given \(x,y\in\mathbb{R}^{d}\) and a representation function \(f(.)\) defined as \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), where \(m<d\), a smooth representation implies that if \(x\approx y\) then \(f(x)\approx f(y)\). From a geometric point of view, using distances this translates to: if \(dist(x_{1},x_{2})<dist(x_{1},x_{3})\) then \(dist(f(x_{1}),f(x_{2}))<dist(f(x_{1}),f(x_{3}))\). This is a reason why we introduced a center loss in our framework, explained in Section III-C1, to keep similar distance semantics when encoding the data and mapping it to a space of reduced dimension (a minimal sketch follows this list).
* _Sparsity \(\&\) Invariance_: for any given observation \(x\), only a small fraction of the possible factors is relevant. In terms of representation, this could be reflected by features that are often zero or by the fact that most of the extracted features are insensitive to small variations of \(x\). In our case, this can be achieved through sparse autoencoders (Rangamani et al., 2018), a sort of autoencoder that uses sparsity to create a bottleneck in the flow of information. In particular, the loss function is designed to penalize the activations generated by the layer; \(L1\) regularization is usually used to apply the sparsity constraint.
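The center loss mentioned in the first item can be sketched as follows; this is a hedged toy version, not necessarily the exact formulation of Section III-C1. Each class keeps a learnable center in latent space, and codes are pulled toward their class center.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    # pulls each latent code toward a learnable center of its class,
    # encouraging the encoded space to keep class-wise distance semantics
    def __init__(self, n_classes=10, m=32):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, m))

    def forward(self, s, y):
        return 0.5 * ((s - self.centers[y]) ** 2).sum(dim=1).mean()

center = CenterLoss()
s = torch.randn(16, 32, requires_grad=True)
y = torch.randint(0, 10, (16,))
center(s, y).backward()  # typically weighted and added to Eq. (1)
```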
### _Privacy enhancement with deep learning_
Linking deep learning training concepts to the purpose of ppML has been addressed before. Adversarial learning is the most frequent way to handle the privacy-utility trade-off, where the privacy variable is explicitly introduced in the adversarial objective. Mandal et al. (2022) presented UAE-PUPET, where an autoencoder takes an input \(x\) and generates \(\hat{x}\), a distorted version of \(x\) that contains minimal information about a given private attribute \(x_{p}\) while always keeping the most important information about the targeted utility variable \(x_{u}\). UAE-PUPET works by linking two classifiers \(\gamma_{u}\), \(\gamma_{p}\) to the autoencoder, responsible for predicting the utility and private attributes respectively, and then adversarially minimizing \(L_{u}\) and maximizing \(L_{p}\) while adding both to the original reconstruction loss. However, such a method always requires choosing only one privacy variable at a time, which is not always the case; in addition, the authors did not run a clear threat analysis of how a harmful attack such as model inversion might be applied against the generated \(\hat{x}\).
T. Xiao et al. (2020) proposed an adversarial reconstruction
learning framework that prevents the latent representations from being decoded into the original input data. In other words, this time the reconstruction loss is directly maximized while minimizing the utility prediction error. Even though such a method empirically reduced the effectiveness of a model inversion attack, the attack remains theoretically possible and cannot be ruled out. In addition, the authors also noted that adversaries may still be able to exploit private meta-information, such as determining the ethnicity of an individual based on skin color present in reconstructed images, despite the fact that the images are not identical to the input.
Training a model \(\mathcal{M}\) with generated synthetic data is another deep-learning-based ppML approach. Previous works have trained Generative Adversarial Networks (GANs) with differential privacy, yielding DP-GANs (Cao et al., 2021; Harder et al., 2021; Y. Wang et al., 2021). Yet, DP leads to the injection of noise into the generated data, which might reduce the model's performance, especially on complex data like genomics, where the data suffer from the curse of dimensionality and one single gene can decide the main task's outcome; there, DP might result in a further performance loss (Chen et al., 2020). Dong et al. (2022) were the first to introduce the dataset condensation technique into the domain of ppML. Dataset condensation (Zhao et al., 2021) works by generating synthetic data through condensing the larger original data points into a smaller subset. It allows data-efficient learning: the synthetic data are first randomly initialized; afterwards, two neural networks are trained, one linked to the original data \(\mathcal{T}\) and the other to the synthetic data \(\mathcal{S}\), and the latter is updated iteratively by back-propagating the loss in a way that both networks share the same weights. Dong et al. (2022) theoretically proved that dataset condensation is similar to differential privacy from the perspective that one sample has limited effect on the parameter distribution of a network. They then empirically showed that, in addition to keeping good performance, a model trained with the synthetic data was also more robust against membership inference attacks. However, such a method does present some limitations. First, the membership inference risk was still present even though it was decreased; in fact, the authors even found that in the case of FashionMNIST the membership attack was more effective on the model trained with synthetic data \(\mathcal{S}\) than on its original counterpart, due to the grey-scale nature of the images, suggesting that the synthetic data might contain more features prone to being memorized. Furthermore, dataset condensation initially requires a large amount of data, which excludes databases with few samples.
## III Framework Architecture & Methodology
### _Residual Autoencoder_
Ensuring training stability in a neural network raises several questions, especially on how to correctly set the number of hidden layers to be used. Wickramasinghe et al. (2021) showed that residual autoencoders (RAE) performed well and in a stable manner while ranging the number of repeated layers from 2 to 90 on MNIST, FashionMNIST and CIFAR. For this stability purpose, we decided to use a residual autoencoder in which each hidden layer of the encoder \(X^{(i)}_{enc}\in\mathbb{R}^{e_{i}}\), with \(e_{l}<e_{i+1}<e_{i}<d\) and \(d,e_{l}\) being the respective dimensions of the input data and the latent space (\(e_{l}<d\)), is separately fed into a neural network classifier \(f_{i}(\cdot)\) performing the original classification task assigned to it: \(\hat{y}_{i}=f_{i}(X^{(i)}_{enc},\theta_{i})\), \(\hat{y}_{i}\in C\), where \(i\) indexes the layers of the encoder \(E(.)\), the latent space included (a condensed sketch follows).
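The sketch below shows such an encoder with per-layer classifier heads; the residual skip connections are omitted for brevity and all dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class SRAEEncoder(nn.Module):
    # every hidden layer, latent space included, feeds its own classifier f_i
    def __init__(self, dims=(784, 256, 64, 16), n_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.heads = nn.ModuleList(nn.Linear(b, n_classes) for b in dims[1:])

    def forward(self, x):
        feats, logits = [], []
        for block, head in zip(self.blocks, self.heads):
            x = torch.relu(block(x))
            feats.append(x)         # X_enc^(i)
            logits.append(head(x))  # y_hat_i = f_i(X_enc^(i), theta_i)
        return feats, logits
```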
### _Application Flow_
In this section, we explain the application pipeline of our encoding framework in both unimodal and multimodal schemes.
#### Iii-B1 Unimodal Setting
After training the supervised residual autoencoder (SRAE) of our framework on a dataset \(\{x_{j},y_{j}\}\), we use its encoder part \(E(.)\) to compute the different representations of a given sample \(x_{j}\) up to and including the layer producing the latent space, as defined in the previous section. The final encoding \(\bar{\Psi}\) to be shared is defined simply as the concatenation of those representations (Eq. 2).
\[\bar{\Psi}=(X^{(1)}_{enc},X^{(2)}_{enc},...,X^{(l)}_{enc}) \tag{2}\]
with \(X^{(i)}_{enc}\in\mathbb{R}^{e_{i}}\) and \(\bar{\Psi}\in\mathbb{R}^{\sum_{i=1}^{l}e_{i}}\). In Fig 2, we illustrate how the encoding pipeline is generated by our proposed framework for a unimodal scenario.
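To make the pipeline concrete, the following is a minimal PyTorch sketch of the supervised residual autoencoder; the layer widths, the number of encoder layers, and the plain (non-residual) linear blocks are illustrative simplifications, not the exact architecture of Appendix A.

```python
import torch
import torch.nn as nn

class SupervisedRAE(nn.Module):
    """Sketch: encoder with per-layer classifier heads f_i and the
    concatenated encoding Psi of Eq. 2. Residual blocks are elided."""
    def __init__(self, d=784, dims=(256, 64, 16), n_classes=10):
        super().__init__()
        sizes = (d,) + tuple(dims)
        self.enc = nn.ModuleList(nn.Linear(sizes[i], sizes[i + 1])
                                 for i in range(len(dims)))
        self.heads = nn.ModuleList(nn.Linear(e, n_classes) for e in dims)
        self.dec = nn.Linear(dims[-1], d)  # reconstruction branch (simplified)

    def forward(self, x):
        encs, logits, h = [], [], x
        for layer, head in zip(self.enc, self.heads):
            h = torch.relu(layer(h))   # X_enc^(i)
            encs.append(h)
            logits.append(head(h))     # y_hat_i = f_i(X_enc^(i); theta_i)
        psi = torch.cat(encs, dim=-1)  # Psi: the encoding shared externally
        return psi, logits, self.dec(h)
```

Only `psi` ever leaves the data owner; the module itself, and hence the mapping back to \(x\), stays in-house.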
We would like to highlight that in the unimodal setting our encoding framework is helpful when further computational power from external resources is required, for instance to evaluate different hyperparameter settings and combinations and/or to use complex ensembles and mixture-of-experts models. In other words, the trained residual autoencoder is kept in-house and only the encoded data is shared with the server providing the compute resources, allowing us to benefit from cloud services without the threat of revealing our original data \(x\). Our framework can also be leveraged to permit other institutions, when requested, to benefit from the inference of our model by sharing with them both the encoder \(E(.)\) and the final model trained on the cloud \(\mathcal{M}_{\Psi}\).

Figure 1: Building block of the Convolution Residual Autoencoder (C-RAE) in the encoder part \(E(.)\)

Figure 2: Execution flow of our ppML encoding framework on unimodal data: in step 1, our supervised residual autoencoder is trained in-house on the original data; in step 2, the generated concatenated encoding \(\bar{\Psi}\) is sent to another party (for example, a cloud) to perform heavy hyper-parameter tuning and further computationally expensive experiments.
#### Iii-B2 Multimodal Setting
In this section we explain how to extend the utility of our framework to a multimodal use case. Consider a multimodal dataset consisting of \(m\) modalities \(\{x_{j}^{(m)},y_{j}\}\), distributed in a vertical setting over \(m\) data providers, so that each modality is held by a different provider.
For this scenario, the adopted strategy trains a supervised residual autoencoder on every modality separately at its data provider. Each provider then sends its final encoding \(\bar{\Psi}^{(m)}\) in parallel to a third-party system to perform cooperative training (Fig 3). Such a workflow permits mutual work between the various providers by sending good representations of the input, thus not hurting performance while guaranteeing total privacy of the data: no information about the original data format or properties is revealed, including the original shape, the distribution, the type of modality (image, sound, tabular, ...), and whether the data type is homogeneous (only numerical, only categorical) or heterogeneous.
For a better understanding of how the proposed encoding framework can be applied in a multimodal scenario, consider the following illustrative example. In the healthcare domain, three clinics work collaboratively to diagnose a given patient \(P_{j}\) with a certain pathology \(H_{k}\). Each clinic is responsible for delivering specific information about the patient \(P_{j}\): one provides the X-ray image, the second delivers the MRI image, and the last one shares the electronic health record (EHR). Our framework allows the three institutions to collectively train a model for detecting the pathology \(H_{k}\) and to infer whether patient \(P_{j}\) suffers from it, all without revealing the true format of the data to the other parties, so that clinics 2 and 3 cannot learn the original content of the X-ray image stored at clinic 1.
### _Multi-Objective Paradigm_
Our model, as previously mentioned, is trained to solve multiple tasks simultaneously. In addition to considering both the data reconstruction and classification problems in parallel, two other tasks within the sphere of representation learning are taken into account by the model.
#### Iii-C1 Center Loss
To ensure that the layers' concatenation shared by our framework has a widely class-separated structure, maximizing the distance from decision boundaries, we introduce a center loss on the concatenation layer. The center loss was presented by Wen et al. (2016) and is defined in Eq. 3, where \(c_{y_{i}}\in\mathbb{R}^{d}\) denotes the \(y_{i}\)-th class center. It aims to minimize the intra-class distance.
\[L_{c}=\sum_{i=1}^{n}||x_{i}-c_{y_{i}}||_{2}^{2} \tag{3}\]
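A minimal PyTorch sketch of this loss, with learnable class centers; averaging over the batch instead of summing is our normalization choice, not the paper's.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        # c_y: one learnable center per class, as in Eq. 3
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, psi, labels):
        # mean over the batch of ||psi_i - c_{y_i}||_2^2
        return ((psi - self.centers[labels]) ** 2).sum(dim=1).mean()
```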
#### Iii-C2 Cosine Similarity with PCA
Explicitly introducing an interpretation mechanism into our representation is of crucial importance. Since the "black-box" barrier is always present when training neural network models such as autoencoders, we explicitly align the learned representation with the PCA of the original data, as PCA is a well-accepted technique for dimensionality reduction.
For this aim, we decided to use a cosine similarity loss function. In the same network, we connect the concatenation layer, where the center loss is already applied (Section III-C1), to a dense layer of 2 units and minimize the following function:
\[L_{pca}=1-\frac{f_{2}(\bar{\Psi})\cdot x_{pca}}{\|f_{2}(\bar{\Psi})\|\,\|x_{pca}\|} \tag{4}\]
and \(f_{2}(.)\) defined as \(f_{2}:\mathbb{R}^{\sum_{i=1}^{l}e_{i}}\rightarrow\mathbb{R}^{2}\) and \(x_{pca}\) being the 2-d PCA of the original input \(x\). All parts of our framework are summarized in Fig 4.
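A sketch of Eq. 4 in the same style; precomputing \(x_{pca}\) offline (e.g., with scikit-learn) and feeding it alongside each batch is our assumption about the pipeline.

```python
import torch.nn.functional as F

def pca_alignment_loss(f2_psi, x_pca):
    # L_pca = 1 - cos(f2(Psi), x_pca), averaged over the batch;
    # f2_psi is the output of the 2-unit dense head on the concatenation.
    return (1.0 - F.cosine_similarity(f2_psi, x_pca, dim=-1)).mean()
```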
## IV Datasets
We now describe the datasets used to evaluate the efficiency of our encoding framework in terms of optimizing the performance-privacy trade-off. We went beyond standard benchmark datasets by also including data characterized by real-life constraints such as data imbalance and a limited number of data points, with features exceeding the number of samples.

Figure 4: Suggested Framework vs Baseline

Figure 3: Execution flow when applying our ppML encoding framework on a multimodal use case with vertically distributed data
### _Unimodal Dataset_
#### Iv-A1 Image Data
**MNIST** (LeCun et al., 1998) is considered a benchmark for image classification and includes grey-scale images of digits. **Fashion MNIST** (H. Xiao et al., 2017) is a benchmark dataset for machine learning similar to MNIST, but by nature more complex, since the task is to identify 10 types of Zalando articles. **Retinal Optical Coherence Tomography** (OCT) is an imaging technique used to capture high-resolution cross sections of retinas. The dataset presented by Kermany et al. (2018) comprises 84,495 retina images with 4 classes: Normal, Diabetic Macular Edema, Drusen and Choroidal Neovascularization.
#### Iv-A2 Tabular Data
**Leukemia**: this tabular dataset comes from the Curated Microarray Database (Feltes et al., 2019), a repository of 78 hand-picked cancer microarray datasets, extensively curated from 30,000 studies in the Gene Expression Omnibus (GEO), solely for machine learning. For our study, we experimented on the leukemia dataset, which contains 281 samples with 22,284 gene expression values, with the task of classifying 7 types of leukemic cancer.
### _Multimodal Dataset_
**TCGA Breast Information Core** is a multi-omics dataset for breast cancer. It gathers measurements from different types of high-throughput sequencing: DNA genome sequence, RNA expression and DNA methylation. Each data type is labeled with the term "omics" (genomics, transcriptomics and methylomics, respectively, in our case). This dataset was leveraged by Rappoport and Shamir (2018), Roder et al. (2019) and Cantini et al. (2021) to experiment with multi-omics and multi-view clustering methods. However, since the current version of our representation-oriented encoding framework is specific to supervised learning use cases, we applied it to predict the survival status of patients.
## V Experiments & Evaluation
In this section, we present experiments conducted to compare the performance of training models on the encoded data against models directly trained on the original data or trained only on the latent space embedding (Maria Quintero-Ossa et al., 2022).
### _Model Parameters_
Details about the type, number, and hyperparameters of the layers composing the residual autoencoder in our framework are presented in Appendix A. Hardware specifications are presented in Appendix B.
### _Experimental Setup_
For every dataset, we trained the residual autoencoder of our encoding framework on the train set, then obtained the concatenated encodings \(\Psi^{train}\) and \(\Psi^{test}\) for the train and test sets respectively. We then trained a set of machine learning models (KNN, SVM, Decision Trees, Random Forests, Multi-Layer Perceptron) in a randomized grid search manner (Appendix C) on the original data \(x^{train}\), on the latent-space embedding only (baseline), and on our encoded data \(\Psi^{train}\), and compared the performance metrics on the respective test points. For MNIST, FashionMNIST, and Retinal OCT, however, which are image datasets, we trained the original versions using a ResNet-50. From Table I we empirically observe that the average macro F\({}_{1}\)-score of models trained on the encoded data is most of the time better than that of the other approaches. We further explored the impact of the introduced center loss by comparing the silhouette score of the concatenated encoding \(\Psi\) with and without center loss. Prior to calculating the silhouette score, we first reduce the concatenated encoding to 2 dimensions using t-SNE. As a reminder, the silhouette score is a quantitative metric that measures how well clusters are grouped.
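A sketch of this evaluation protocol with scikit-learn; the hyperparameter grids and the dummy stand-in data are illustrative, not the settings of Appendix C.

```python
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)  # stand-in for Psi_train / Psi_test
psi_tr, y_tr = rng.normal(size=(200, 32)), rng.integers(0, 7, 200)
psi_te, y_te = rng.normal(size=(50, 32)), rng.integers(0, 7, 50)

candidates = [
    (SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
    (RandomForestClassifier(), {"n_estimators": [100, 300], "max_depth": [None, 10]}),
]
for model, grid in candidates:
    search = RandomizedSearchCV(model, grid, n_iter=4, scoring="f1_macro", cv=3)
    search.fit(psi_tr, y_tr)
    score = f1_score(y_te, search.predict(psi_te), average="macro")
    print(type(model).__name__, round(score, 3))
```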
Table II summarizes the silhouette score comparison between the two versions of our framework (with/without center loss). We observe an increase in the silhouette score of 478%, 43%, and 57% for FashionMNIST, Leukemia, and TCGA, respectively, after adding the center loss, and a small decrease of 3% and 0.5% for MNIST and OCT. We clearly notice the positive impact of the center loss in making the encoded representation well grouped.
We also performed an ablation study to check the impact of the introduced cosine similarity loss (Eq. 4). Even though the main reason for introducing this loss was to properly guide the alignment of the input and the shared representation, we also evaluated its impact on the overall classification performance. For this case, we only took the best type of ML model per dataset when using the encoded data in Table I. The results of this ablation study are presented in Table III.
## VI Threat Analysis
In our solution, we consider four different actors: the data owner(s), the cloud server, users having access to inference services, and institutions with which the encoder \(E(.)\) and final model \(\mathcal{M}_{\Psi}\) are shared. We assume that the cloud server, users, and institutions are honest but curious, meaning they follow the protocol but may try to learn additional information from what they observe. The main goal of an adversary corrupting these actors is to infer sensitive information about the original training samples from its observations of the protocol execution.
The view of an adversary \(\mathcal{A}\) corrupting the cloud consists of the encoded samples of the data owner(s) and the model trained on these samples. Since the adversary \(\mathcal{A}\) has no knowledge of the encoder utilized by the data owner(s), it cannot recover the original samples from the encoded ones. For instance, the adversary \(\mathcal{A}\) knows neither the dimensionality nor the type of the original data. In addition to protecting the privacy of the samples, our framework implicitly preserves the privacy of the model as well. Although the adversary \(\mathcal{A}\) has access to the trained classifier model in plaintext, it is of no use unless the adversary also has access to the encoder. Therefore, we can conclude that our proposed framework securely allows outsourcing the computation to a third party, or enables
the third party to benefit from the output of the model without compromising the privacy of the data or the model.
The adversary \(\mathcal{A}\) corrupting at least one user can perform predictions on the model \(\mathcal{M}_{\Psi}\) for the encodings of given data. Using the prediction service as an API, \(\mathcal{A}\) has access only to the final predictions \(y_{new}\). Without knowledge of the encodings of the training samples and the encoder \(E(.)\), it is not possible to extract the training samples. \(\mathcal{A}\) might still perform membership inference attacks, which is a common problem for all machine learning models trained without differential privacy; our solution does not aim to address this weakness. The adversary \(\mathcal{A}\) who has compromised at least one institution can access both the encoder \(E(.)\) and the model \(\mathcal{M}_{\Psi}\). \(\mathcal{A}\) can use \(E(.)\) to encode new data, and then use the encodings to make predictions with \(\mathcal{M}_{\Psi}\), resulting in \((\Psi,y_{\Psi})\). Without the encodings of the original training samples, it is impossible to extract the training data. \(\mathcal{A}\) can also perform attacks like model inversion and membership inference; however, as mentioned, we do not present countermeasures to these types of attacks in this article.
The adversary \(\mathcal{A}\), who has compromised at least one institution and the cloud server, can access the encoder \(E(.)\) and the encodings of the original training samples. In this scenario, \(\mathcal{A}\) can train a decoder \(D_{inv}(.)\) using the data of the compromised institution and the encoder \(E(.)\) to reconstruct the original training samples. Therefore, in our security model, we assume that the cloud server and institutions do not collude. This assumption is practical in real-world scenarios: data owners can conceal their identities from the cloud server, making it difficult to determine with whom the model trained by the cloud server is shared. The four scenarios described in our threat analysis are illustrated in Fig 5.
Table II: Silhouette score comparison of the concatenated encoding with (\(L_{c}\)) and without the center loss on MNIST, FashionMNIST, Leukemia, OCT and TCGA.
## VII Limitations of the proposed framework
Even though our encoding framework shows strong performance-privacy results, it is for now still constrained to specific scenarios. Our framework is task-oriented, which means that it requires the presence of a supervision task to guide our autoencoder toward a meaningful mapping. However, it is well known within the ML community that most datasets lack annotation (Humbert-Droz et al., 2022; Nguyen & Le, 2021; Xu et al., 2020), which currently excludes our proposed ppML encoding technique from unsupervised use cases. In addition, our model is not generic towards all types of training data distributions: the current framework is applicable only if the data is vertically split among input parties that share the same sample IDs but different feature spaces, as in the multimodal example used in our experiments, which excludes (for now) horizontal federated learning (Yang et al., 2019).
## VIII Conclusion & Future Works
In this study, we introduced an encoding strategy powered by representation learning and leveraged for privacy purposes. The main motivation behind such a framework is to take advantage of the discriminative representations learned in the hidden layers of the encoder and use their concatenation as the encoding to be shared with other parties. To achieve this, we implemented a supervised residual autoencoder trained on both data reconstruction and an assigned classification task. To ensure a good representation, we strengthened the training with two additional losses: a center loss applied to the concatenated encoding, and a cosine similarity loss used to force the concatenation to have the same direction as the original input. We further presented the application workflow of our framework in unimodal and multimodal settings. Our framework allows us to benefit from external computational resources by training on the encoded data in a secure fashion, since no information about the data is revealed. As future work, we look forward to expanding the application domain of our encoding framework to unlabeled datasets and to horizontally distributed data for federated learning.
| Several domains increasingly rely on machine learning in their applications. This heavy dependence on data has led to the emergence of laws and regulations on data ethics and privacy, and to the recognition of the need for privacy-preserving machine learning (ppML). Current ppML techniques are either based entirely on cryptography (e.g., homomorphic encryption) or introduce noise into the input (e.g., differential privacy). The main criticism of these techniques is that they can be slow or that they trade off model performance for improved privacy. To address this performance loss, we aim to use reliable representation learning, which strengthens the encoding of the data, as a means of optimizing the balance between privacy and utility. Our method consists of training an autoencoder for multi-objective learning, and the latent and learned features of the encoding part |
2301.13779 | FLAME: A small language model for spreadsheet formulas | Spreadsheets are a vital tool for end-user data management. Using large
language models for formula authoring assistance in these environments can be
difficult, as these models are expensive to train and challenging to deploy due
to their size (up to billions of parameters). We present FLAME, a
transformer-based model trained exclusively on Excel formulas that leverages
domain insights to achieve competitive performance while being substantially
smaller (60M parameters) and training on two orders of magnitude less data. We
curate a training dataset using sketch deduplication, introduce an
Excel-specific formula tokenizer, and use domain-specific versions of masked
span prediction and noisy auto-encoding as pre-training objectives. We evaluate
FLAME on formula repair, formula completion, and similarity-based formula
retrieval. FLAME can outperform much larger models, such as the Davinci (175B)
and Cushman (12B) variants of Codex and CodeT5 (220M), in 10 of 14 evaluation
settings for the repair and completion tasks. For formula retrieval, FLAME
outperforms CodeT5, CodeBERT, and GraphCodeBERT. | Harshit Joshi, Abishai Ebenezer, José Cambronero, Sumit Gulwani, Aditya Kanade, Vu Le, Ivan Radiček, Gust Verbruggen | 2023-01-31T17:29:43 | http://arxiv.org/abs/2301.13779v2 | # FLAME: A small language model for spreadsheet formulas
###### Abstract
The widespread use of spreadsheet environments by billions of users presents a unique opportunity for formula-authoring assistance. Although large language models, such as Codex, can assist in general-purpose languages, they are expensive to train and challenging to deploy due to their large model sizes (up to billions of parameters). Moreover, they require hundreds of gigabytes of training data. We present FLAME, a T5-based model trained on Excel formulas that leverages domain insights to achieve competitive performance with a substantially smaller model (60M parameters) and two orders of magnitude less training data. We curate a training dataset using sketch deduplication, introduce an Excel-specific formula tokenizer for our model, and use domain-specific versions of masked span prediction and noisy auto-encoding as pretraining objectives. We evaluate FLAME on formula repair, formula auto-completion, and a novel task called syntax reconstruction. FLAME (60M) can outperform much larger models, such as Codex-Davinci (175B), Codex-Cushman (12B), and CodeT5 (220M), in 6 out of 10 settings.
## 1 Introduction
Despite a much larger user base, spreadsheet environments do not have access to nearly the same range of productivity tools as available for general programming environments. The latter typically have code completion, refactoring, linting, and a wide range of extensions for additional functionality, like generating tests, inserting code snippets, and summarizing code. Many of these advanced programming assistance tools are driven by advances in large language models trained on code (LLMCs). Codex [2] is used for code completion [16] and repair [14], AlphaCode [11] solves competitive programming problems, [12] built a code review system, and many other models show great performance in code related tasks [23, 15, 16].
To capture the complexity and variety of code and comments in different languages, these models need billions of parameters--the _smallest_ variant of Codex, used by GitHub Copilot, has 12 billion parameters. As a result, these models are trained for long periods on corpora containing millions of programs. For example, Incoder 6.7B used 159GB of code over a period of 24 days on 248 V100 GPUs. In addition to training costs, inference on large models is expensive due to extensive hardware requirements. For example, using Codex-Davinci to process 1000 tokens, including the prompt, costs $0.02 USD [17]. In a spreadsheet environment used by billions, these costs quickly add up.
In this paper, we present FLAME, a Formula LAnguage Model for Excel trained exclusively on Excel formulas. FLAME is based on T5-small [16] and has only 60 million parameters, yet it can compete with much larger models (up to 175B parameters) on three formula authoring tasks: last-mile repair, formula auto-completion and syntax reconstruction. Syntax reconstruction is a novel task where all delimiters are removed from a formula, resulting in a flat stream of tokens, and the model must recover the original formula.

Figure 1: A summary of model comparisons in the fine-tuned setting for different formula assistance tasks. We show the results under a top-5 cutoff on a public Excel benchmark. Note that all Codex-Davinci results are few-shot, and Autocompletion is zeroshot for all systems except CodeT5. For Autocompletion, results represent the fraction of benchmarks successfully completed (based on a sketch match metric) given 90% of the prefix.
Figure 1 shows a high-level summary of results as a function of model size on a public dataset, where FLAME can outperform larger models in all three tasks. Figure 2 provides real examples, solved by FLAME, for each of these tasks.
There are three main challenges involved in training a model for Excel formulas: obtaining diverse training data, tokenizing their unique structure, and pretraining with objectives that teach the model about this distinctive structure. Spreadsheets contain many duplicate formulas because users copy formula cells down rows or columns. We reduced our corpus from 927M formulas down to 6.1M by comparing formulas based on syntax, creating 540MB of training data. We combine formula insights with byte pair encoding (BPE) to train an Excel-specific tokenizer. In addition to two generic objectives (tail-masking and denoising auto-encoding), we introduce two new pretraining objectives designed for formulas: language-aware masked span prediction and user-inspired denoising.
We extensively evaluate FLAME on three downstream tasks, showing that our proposed solutions to the modeling challenges significantly improve the performance of FLAME over T5-based models and can compete with much larger models. Specifically, we find that FLAME can outperform other models in 6 out of 10 settings in our evaluation.
We make the following contributions:
* We present FLAME, the first language model designed exclusively for Excel formulas (**§3**). To this end, we introduce domain-specific dataset curation (**§3.2**), tokenization (**§3.3**), and pretraining objectives (**§3.4**).
* We extensively evaluate FLAME on three formula assistance tasks: last-mile repair, formula autocompletion, and syntax reconstruction (**§4.3**).
* We compare our performance to two variants of Codex (the latest versions of Cushman and Davinci) and CodeT5, and finetune Cushman for downstream tasks (**§4.1**). We show that FLAME can outperform larger models in 6 out of 10 settings (**§5.1**).
* We analyze the contribution of different design choices for FLAME (**§5.2**, **§5.3**).
## 2 Related Work
**Language models for code.** Multiple popular language model architectures have been successfully adapted to code. CodeBERT [12] trained BERT (encoder) on natural language and code. CodeT5 [23] trained T5 (encoder-decoder) on a similar corpus. Codex [2], PolyCoder [25], and CodeGen [21] are all trained variants of GPT (decoder). These models are trained on multiple programming languages and use pretraining objectives to understand or generate code and natural language, but do not adapt them to specific languages. In contrast, FLAME exploits a single domain and uses domain-specific objectives, such as span masking that respects programming language tokens, to learn a better representation.

**Evaluating code models.** Many tasks have been presented to evaluate code models, and CodeXGLUE [15] bundles most of these. These tasks are categorized by the modality (text/code) of their input and output. FLAME is trained on formulas exclusively and is focused on formula tasks. We now describe related work for these tasks.

**Formula repair.** A popular code authoring task is repairing small mistakes. DeepFix [13], BIFI [26], Dr.Repair [27], and TFix [1] use deep learning to perform syntax, compilation, or diagnostics repair in general-purpose programming languages. LaMirage [1] generates repair engines for low-code languages and coins the term _last-mile repair_ for these types of fixes. RING [14] uses Codex to fix last-mile errors across multiple languages, but it requires additional information, such as examples of repairs and compiler messages.

**Formula autocompletion.** The generative nature of LLMCs makes them serve as code-completion engines. This feature has shipped in commercial products, such as GitHub Copilot in Visual Studio Code [13] and IntelliCode in Visual Studio [20]. SpreadsheetCoder [2] is a model designed for predicting simple formulas from context in the spreadsheet.

**Syntax reconstruction.** Syntax reconstruction, where all delimiters in a formula are removed, resembles component-based program synthesis, where partial programs are combined into a program that satisfies a specification. Components are provided by a user [15], generated by a model [16], or defined by an API [12].
## 3 FLAME: Approach
We now describe the FLAME architecture and how it overcomes the three key challenges (data, tokenization, and training) in pretraining a general language model for formulas.
### Architecture
To facilitate both formula understanding and generation, FLAME follows an encoder-decoder architecture based on T5 [19]. Encoder models like CodeBERT [12] show remarkable code understanding capabilities. Decoder models like CodeGen [21] and Codex [3] perform well on code generation. Encoder-decoder models seek to blend these strengths.

Figure 2: We consider three downstream tasks: Last Mile Repair, Formula Autocompletion, and Syntax Reconstruction. Red and green colors denote the input and the expected output, respectively. Yellow text denotes the buggy part of the formula in the repair task, where the user has swapped the correct order of arguments resulting in a type error. Each task shows a case that FLAME successfully solves.
### Training Data
We start from a dataset of 927M formulas drawn from a corpus of 1.8M publicly available Excel workbooks.1 Each workbook contains one or more _worksheets_, and each worksheet contains zero or more formulas. Formulas in spreadsheets are often repeated with minor cell reference changes across rows or columns. For example, a user can drag a formula to another cell to repeat a computation on neighboring cell values.
Footnote 1: These workbooks were collected as part of a large Excel corpus planned for public release by a separate group of authors.
We compute formula sketches to preserve a single instance of each unique formula per _workbook_. In a formula sketch, numeric constants, string constants and cell references are replaced by their token type. For example, the sketch of =SUM(A1:A10) is =SUM(cell:cell). After applying sketch deduplication, we are left with 6.1M formulas. Note that applying this globally to the corpus, rather than per workbook, results in only 591K formulas. We found this globally deduplicated corpus to be insufficient for training as it skews the distribution of formulas; see the evaluation (**§5.2**) for details.
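A minimal sketch-computation function; this is our reconstruction of the idea with simple regexes, not the Excel lexer the authors use, and it does not handle every reference form.

```python
import re

def formula_sketch(formula: str) -> str:
    s = re.sub(r'"[^"]*"', "str", formula)          # string constants
    s = re.sub(r"\b[A-Za-z]{1,3}\d+\b", "cell", s)  # cell refs such as A1
    s = re.sub(r"\b\d+(\.\d+)?\b", "num", s)        # numeric constants
    return s

assert formula_sketch("=SUM(A1:A10)") == "=SUM(cell:cell)"
```

Deduplication then keeps one formula per distinct sketch within each workbook.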
### Tokenizing Formulas
Tokenization is an essential part of language models [13]. A popular method for tokenization is byte pair encoding (BPE) [14]. BPE iteratively joins consecutive tokens that appear together most frequently until a target vocabulary size is reached. However, this procedure can have adverse effects on formulas. For example, SUM and ( are combined to get SUM(, which can reduce expressiveness and hurt performance for tasks like repair.
Our tokenizer considers punctuation, whitespace, built-in function names, and digits as individual tokens [15] and applies BPE [1] to the remaining parts of formulas, like string constants. Excel is case insensitive (with the exception of string contents) so we convert all input tokens to lowercase to map differently capitalized tokens to a single token. For example, without lowercasing, the same function SUM and sum will map to different tokens.
**Example 1**.: _A formula_
=SUMIF(B1:B5, "Not available", A1:A5)_
_is tokenized as_
_= sumif ( b 1 : b 5 , "not \(\cup\) available " , \(\cup\) a 1 : a 5 )_
_with space tokens denoted by \(\cup\)._
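A simplified sketch of the pre-tokenization step; the real tokenizer additionally knows built-in function names and runs BPE on the residual pieces such as string text.

```python
import re

def pre_tokenize(formula: str):
    # lowercase, then emit letter runs, single digits, whitespace,
    # and punctuation as separate tokens
    return re.findall(r"[a-z]+|\d|\s|[^\w\s]", formula.lower())

print(pre_tokenize('=SUMIF(B1:B5, "Not available", A1:A5)'))
# ['=', 'sumif', '(', 'b', '1', ':', 'b', '5', ',', ' ', '"', 'not', ...]
```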
### Pretraining Objectives for Training
In this section, we describe the combination of generic and Excel-specific pretraining objectives, as summarized in Figure 3, that we use to train FLAME.
#### Masking objectives
We use two forms of masking to pre-train FLAME, an Excel-specific variant of masked span prediction (MSP), and a generic tail masking objective.
**Language-aware masked span prediction.** In contrast to traditional MSP, spans _must_ respect Excel lexer token bounds. For example, when an Excel cell reference BC18 is divided into four tokens B C 1 8, we ensure that either all or none of its constituent tokens are masked. Consecutive masked tokens are represented with a single <mask> token. Inspired by Mixture-of-Denoisers [15], we mask spans of tokens using combinations of high (35%) and low (15%) masking rates, and big (6 tokens) and small (2 tokens) average span lengths.
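A sketch of the boundary-respecting step: BPE pieces are grouped per lexer token and masking decisions are made on whole groups. The grouping and span sampling here are illustrative, not the exact procedure.

```python
import random

def mask_spans(lexer_groups, mask_rate=0.35, avg_span=2):
    # lexer_groups: one list of BPE pieces per lexer token,
    # e.g. [["="], ["sum"], ["("], ["b", "c", "1", "8"], [")"]]
    out, i = [], 0
    while i < len(lexer_groups):
        if random.random() < mask_rate / avg_span:
            span = max(1, round(random.gauss(avg_span, 1)))
            out.append("<mask>")  # one sentinel for the whole span
            i += span             # whole lexer tokens are dropped together
        else:
            out.extend(lexer_groups[i])
            i += 1
    return out
```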
**Generic tail masking.** We perform tail masking at the character level and allow partial masks of complete tokens. We keep the leading \(\{30\%,40\%,\cdots,70\%\}\) tokens of the input sequence and append a <mask> token.
#### Noisy Auto-encoding
Previous work in natural language processing has used denoising auto-encoding during pretraining [11]. We incorporate two such objectives in FLAME.
**Random noise.** We introduce generic noise by randomly inserting, deleting, or updating tokens in the input sequence. The insertion and update operators randomly sample a token from the vocabulary.
Figure 3: Four pretraining objectives used by FLAME. For each batch, we randomly (with weighted probability) choose one of the four objectives. Generic objectives (tail masking and random noise) are shown with a yellow header, while formula-specific variants (language-aware span masking and user-inspired noise) are shown with a green header. We depict inserted tokens with red and deleted tokens with blue.

**Excel-specific user-inspired noise.** We introduce noise operators that mirror mistakes real users make when writing Excel formulas. For example, users often write formulas with the incorrect function arity for built-in functions such as SUMIF. We implement 17 noise operators (Appendix A) based on a combination of help-forum and code analysis. We randomly choose one of these noise operators when introducing noise into an input sequence.
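As one concrete, hypothetical instance of such an operator, a sketch that deletes a closing parenthesis from the token stream; the actual 17 operators are listed in Appendix A.

```python
import random

def drop_closing_paren(tokens):
    # Illustrative "user-inspired" noise operator: delete one random
    # closing parenthesis, mimicking a common last-mile mistake.
    idxs = [i for i, t in enumerate(tokens) if t == ")"]
    if not idxs:
        return tokens
    i = random.choice(idxs)
    return tokens[:i] + tokens[i + 1:]

print(drop_closing_paren(["=", "sum", "(", "a", "1", ":", "a", "10", ")"]))
```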
Note that for all pretraining objectives, FLAME needs to generate a _complete_ formula (rather than just mask values).
**Combining pretraining objectives**
Rather than applying all pretraining objectives on every batch and then combining losses, we pick a single objective for each batch. We use the following probabilities {MSP: 50%, tail masking: 20%, user-inspired denoising: 20%, random denoising: 5%} for choosing the objective to be applied, and with a 5% probability, we leave the sequence intact.
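Concretely, the per-batch choice can be a single weighted draw, for instance:

```python
import random

OBJECTIVES = ["span_masking", "tail_masking", "user_noise",
              "random_noise", "identity"]
WEIGHTS = [0.50, 0.20, 0.20, 0.05, 0.05]

def pick_objective() -> str:
    # one weighted draw per batch
    return random.choices(OBJECTIVES, weights=WEIGHTS, k=1)[0]
```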
## 4 Experimental Setup
We now describe our experimental setup. We start with the baseline models we compare against **(§4.1)**, the training setup **(§4.2)**, and then detail each downstream task in our evaluation, along with their corresponding datasets **(§4.3)**.
### Baselines and Configurations
We compare FLAME to the following much larger language models, summarized in Table 1:
* CodeT5: a **220 million** parameter T5-based encoder-decoder model trained on both natural language and code. We present fine-tuned results.
* Codex-Cushman: a **12 billion** parameter autoregressive, decoder-only, GPT-3-based model trained on both natural language and code. We present both zeroshot and fine-tuned results.
* Codex-Davinci: a **175 billion** parameter autoregressive, decoder-only, GPT-3-based model trained on both natural language and code. We present zeroshot and few-shot results. We did not have the resources to fine-tune Davinci.
For Codex-based baselines, we use nucleus sampling [1] (temperature=0.7) and sample 50 sequences per task. We sort these sequences based on their average token log probabilities following [14]. We detail the prompts in Appendix B. For CodeT5, we use beam search with a beam width of 50, and we consider the top 50 sequences.
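The re-ranking step amounts to sorting the sampled sequences by their mean token log-probability; a minimal sketch:

```python
def rank_candidates(candidates):
    # candidates: list of (text, per_token_logprobs) pairs from sampling
    def avg_logprob(cand):
        _, lps = cand
        return sum(lps) / max(len(lps), 1)
    return sorted(candidates, key=avg_logprob, reverse=True)
```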
### Training Details
We pretrain FLAME for 10 epochs and finetune CodeT5 and FLAME on a cluster with 16 AMD MI200s, 96 cores and 900 GB RAM. We finetune FLAME for 2 epochs for each downstream task and finetune CodeT5 for 25 epochs with a patience of 5 epochs. We carry out all Codex experiments on a cluster with 8 V100s, 40 cores, and 672 GB RAM. For Codex finetuning we use low-rank adaptation (LoRA) [13]. Refer to Appendix C for more details.
### Downstream Tasks
We consider three different downstream tasks.
**Last-mile Repair**
Last-mile repair refers to repairs that require few edits and fix syntax and simple semantic errors, such as wrong function call arity. In this setting, FLAME is given the buggy formula as the input sequence, and the task is to generate the user's intended (and syntactically correct) formula without any last-mile error.
**Example 2**.: _The user has used the wrong call arity for ISERROR. Red highlights the error in the buggy formula, and green denotes the required edit to match the groundtruth._
_Buggy Formula: =IF(ISERROR(GG*1.2,""))_
_Groundtruth Formula: =IF(ISERROR(GG*1.2),"")_

**Fine Tuning**
We create a finetuning dataset for all systems by taking 200K well-formed formulas from Excel help forums. We then randomly apply our user-inspired noise operators to generate broken versions.
**Evaluation Metric**
We compute an exact match with respect to the ground truth repair. We consider the top 1 and top 5 candidates produced by each system per formula and report the exact match fraction.
**Benchmarks**
We evaluate all systems on two benchmarks. We use the collection of 273 labeled Excel formulas used in recent last-mile repair literature [15]. The authors sourced these formulas from Excel help forums. We refer to this benchmark set as **Forum**.
We also reserve a split of randomly sampled 500 formulas derived using the same procedure as our finetuning dataset to create a **Test** benchmark set.
**Autocompletion**
Code completion is a popular task for language models trained on code, both due to its autoregressive nature and the practical value of code completion as a feature in developers' workflows. In this setting, FLAME is given a formula prefix, and the task is to generate the complete formula.
**Example 3**.: _Formula Autocompletion_
_Formula Prefix: =B2<EDATE(_
_Formula Completion: =B2<EDATE(TODAY(),-33)_
**Fine Tuning**
We curated a finetuning dataset for autocompletion by splitting 189k formulas and sampling a prefix length of \(\{0.2,\cdots,0.7,0.8\}\) fraction of tokens.
**Evaluation Metric**
When completing formulas, some parts can be hard to predict due to lack of context [11], such as cell references, sheet names, string literals, and numerics. Therefore, in addition to **exact match**, we also consider **sketch match** for autocompletion with respect to the ground truth. Precisely, for sketch match, we use the same sketch procedure described in **§3**. This uses the Excel lexer to tokenize a formula and preserves built-in function names but replaces all other tokens with their token type. We then compare the sketches of the formulas for a match. For instance, in Example 3, predicting the numeric \(-33\) is highly contextual, so in a sketch we match it with its token type, Numeric.

\begin{table}
\begin{tabular}{l l r} \hline \hline System & Architecture & Number of parameters \\ \hline Codex-Cushman & Decoder & 12 billion \\ Codex-Davinci & Decoder & 175 billion \\ CodeT5 (base) & Encoder-Decoder & 220 million \\ FLAME (ours) & Encoder-Decoder & 60 million \\ \hline \hline \end{tabular}
\end{table}
Table 1: Architecture and size comparison of baselines and FLAME
**Benchmarks**
We evaluate autocompletion on a single benchmark, consisting of the 273 ground truth formulas from the Forum last-mile repair benchmark. For each formula, given the exact match or sketch match metric, we predict completions at 0.25, 0.5, 0.75, 0.90 and 0.99 fractions of the formula prefix.
**Syntax Reconstruction**
We introduce a new task that we term syntax reconstruction. The input to this task consists of Excel formulas which we have processed to remove all delimiters, resulting in a flat stream of lexer tokens. Excel delimiters are defined to be the following set of tokens: ( ) ! , ; { }. The model is then tasked with generating the original formula with appropriate delimiters.
Example 4: _Syntax Reconstruction given the excel tokens._
_Tokens: MAX 0 MOD C10 - B10 1 - D10_
_Reconstruction: MAX(0,MOD(C10-B10,1)-D10)_
Since, by definition, syntax reconstruction cannot introduce tokens into the output that are not delimiters or not in the original input token stream, FLAME employs constrained decoding to greedily remove invalid candidates from the search space. Our tokenizer design, particularly splitting on punctuation, makes this decoding strategy easier to implement.
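A sketch of such a per-step constraint; the assumption that non-delimiter tokens must be emitted in their original input order is ours.

```python
DELIMITERS = {"(", ")", "!", ",", ";", "{", "}"}

def allowed_next(input_tokens, consumed):
    # `consumed`: number of input tokens already emitted by a hypothesis;
    # any delimiter may be inserted, otherwise only the next input token.
    allowed = set(DELIMITERS)
    if consumed < len(input_tokens):
        allowed.add(input_tokens[consumed])
    return allowed
```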
**Fine Tuning**
We curate a finetuning dataset by sampling 200k formulas from the publicly available Excel corpus that we used for FLAME's pretraining. We keep the subset that contains at least one delimiter (139k) and remove all delimiters.
**Evaluation Metric**
We compute an exact match with respect to the ground truth and consider the top 1 and top 5 candidates produced by each system per formula.
**Benchmarks**
We derive a benchmark set from the last-mile repair benchmarks by removing the delimiters from every groundtruth formula. We refer to this benchmark as **Forum**.
Finally, we also consider a **Test** split that reflects the same preparation as the fine tuning dataset.
## 5 Evaluation
We explore the following research questions in our evaluation:
* **RQ1:** How does FLAME perform on formula intelligence tasks compared to substantially larger language models?
* **RQ2:** How do pretraining design decisions such as data curation, model size, pretraining objectives, and tokenizer affect FLAME's downstream performance?
* **RQ3:** How do various decoding strategies affect different downstream-task performances for FLAME?
### RQ1: Larger Language Models
We now compare FLAME to substantially larger language models on our three formula intelligence tasks.
**Last Mile Repair and Syntax Reconstruction**
We finetune FLAME, CodeT5, and Codex-Cushman for last-mile repair and syntax reconstruction, and use few-shot prompts with three shots for Codex-Davinci. Although one of our pretraining objectives (noisy auto-encoding) closely resembles last-mile repair, we find that finetuning FLAME helps direct it towards a particular task.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multicolumn{4}{c}{Last Mile Repair} & \multicolumn{4}{c}{Syntax Reconstruction} \\ \cline{2-9} & \multicolumn{2}{c}{Forum} & \multicolumn{2}{c}{Test} & \multicolumn{2}{c}{Forum} & \multicolumn{2}{c}{Test} \\ \cline{2-9} & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 \\ \hline Cushman & **0.79** & 0.88 & **0.87** & **0.93** & 0.70 & 0.80 & 0.84 & **0.91** \\ Davinci (FS) & 0.76 & 0.89 & 0.54 & 0.77 & 0.62 & 0.77 & 0.61 & 0.73 \\ CodeT5 (220M) & 0.70 & 0.84 & 0.84 & 0.90 & 0.70 & 0.84 & 0.82 & 0.89 \\ CodeT5 (60M) & 0.72 & 0.83 & 0.82 & 0.89 & 0.65 & 0.81 & 0.83 & 0.89 \\ FLAME & 0.76 & **0.89** & 0.83 & 0.91 & **0.75** & **0.89** & **0.84** & 0.89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fine-tuned performance for the Last Mile Repair and Syntax Reconstruction tasks. Codex-Davinci uses few-shot prompting (denoted by the FS suffix). FLAME outperforms much larger models at last-mile repair in the Forum benchmark at top-5, and comes in second at top-1. In syntax reconstruction, FLAME outperforms all models at both cutoffs in the Forum benchmark. **Bold** denotes the best performing model and Underline represents the second best.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c}{**Exact Match**} & \multicolumn{4}{c}{**Sketch Match**} \\ \cline{2-11} & **0.25** & **0.50** & **0.75** & **0.90** & **0.99** & **0.25** & **0.50** & **0.75** & **0.90** & **0.99** \\ \hline Cushman & 0.0 & 0.04 & 0.27 & 0.61 & 0.86 & **0.12** & **0.26** & 0.47 & 0.71 & 0.86 \\ Davinci & 0.0 & 0.03 & 0.31 & 0.64 & 0.85 & 0.10 & 0.25 & 0.53 & 0.76 & 0.85 \\ CodeT5 & 0.0 & 0.02 & 0.10 & 0.27 & 0.21 & 0.03 & 0.09 & 0.20 & 0.39 & 0.22 \\ FLAME & **0.01** & **0.06** & **0.34** & **0.70** & **0.93** & 0.10 & 0.24 & **0.55** & **0.84** & **0.94** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zeroshot autocompletion performance of FLAME, Codex-Cushman and Codex-Davinci, and fine-tuned CodeT5 (as denoted by FT suffix). Given \(\{0.25,0.50,0.75,0.90,0.99\}\) fraction of formula prefix, we report the proportion of formulas completed in the top 5. We observe that FLAME outperforms all the large language models in the exact match setting and most (3/5) of the sketch match settings. **Bold** denotes best performing model and Underline represents second best.
We summarize the results in Table 2 and observe that on the Forum last-mile repair benchmark FLAME outperforms all models at top-5, and is second best to Codex-Cushman at top-1. In the Test benchmark, we find that FLAME is second-best to Codex-Cushman at top-5 and is close to CodeT5's second-best performance at top-1. Also on Test, Davinci's performance is substantially worse than that of the fine-tuned models.
On further analysis, we found that all models solve 73% of the Forum benchmark. FLAME solves 4% of the benchmarks that no other model solves and fails on 1% of the benchmarks that all other models fix. FLAME also generates syntactically correct formulas for 98% of the benchmarks in top 5. In Figure 4, we show examples where FLAME gets the correct fix, and other models do not, and vice versa. We note that in some cases, FLAME's fixes appear to be more natural, but fail to match the user's ground truth repair.
For syntax reconstruction on Forum, we find that FLAME outperforms the other models at both top-1 and top-5. Interestingly, CodeT5 also solves more syntax reconstruction tasks than both Codex models. We hypothesize that, since syntax reconstruction is a new task compared to the more traditional repair problem, encoder-decoder models perform better after fine-tuning than decoder-only models, as shown by [14]. On Test, we find that FLAME performs similarly to Codex-Cushman (the same at top-1 and 2 points lower at top-5).
We find that 54% of the Forum syntax reconstruction benchmarks are solved by all the models, 1% is solved only by FLAME, and there are no benchmarks that all other models solve but FLAME doesn't. We attribute this performance to our pretraining design choices. First, FLAME learns to generate syntactically correct code as a result of its noisy auto-encoding pretraining objective. Second, FLAME learns the natural distribution of formulas by generating complete sequences during pretraining, rather than just mask values and sentinel tokens.
**Zeroshot Performance**
FLAME's pretraining objectives allow us to consider zeroshot performance for both last-mile repair and syntax reconstruction. In Table 4, we observe that FLAME outperforms Codex models for last-mile repair across all benchmarks. We attribute this to the closeness of our noisy auto-encoding pretraining objectives and the last-mile repair task. We find that in the syntax reconstruction task, FLAME outperforms Codex-Cushman. We believe this is because syntax reconstruction can be considered an extreme case of repair.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multicolumn{6}{c}{Last Mile Repair} & \multicolumn{6}{c}{Syntax Reconstruction} \\ \cline{2-10} & \multicolumn{3}{c}{Forum} & \multicolumn{3}{c}{Test} & \multicolumn{3}{c}{Forum} & \multicolumn{3}{c}{Test} \\ \cline{2-10} & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 \\ \hline Cushman & 0.55 & 0.85 & 0.41 & 0.63 & 0.27 & 0.53 & 0.23 & 0.46 \\ Davinci & 0.60 & 0.82 & 0.51 & 0.75 & **0.51** & **0.65** & 0.31 & 0.45 \\ FLAME & **0.71** & **0.88** & **0.74** & **0.85** & 0.41 & 0.53 & **0.50** & **0.58** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zeroshot last-mile repair and syntax reconstruction performance of FLAME and Codex models. FLAME outperforms all the larger models in Last Mile Repair task and solves more benchmarks than Codex-Cushman for the Syntax Reconstruction task. **Bold** denotes best performing model and Underline represents second best.
Figure 4: Repair tasks with diverging performance. In Example 1, the user did not use the AND function and missed double quotes around string literals yes and no. FLAME fixes this (in top-5), while other models fail. In Example 2 FLAME’s top candidate is syntactically valid but does not match the user’s fix, while other models’ predictions do.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multicolumn{6}{c}{Zeroshot} & \multicolumn{6}{c}{Finetuned} \\ \cline{2-10} & LMR & SR & AC (EM) & AC (SM) & \multicolumn{3}{c}{LMR} & SR \\ \cline{2-10} & Forum & Test & Forum & Test & 0.75 & 0.90 & 0.75 & 0.90 & Forum & Test & Forum & Test \\ \hline FLAME (60M) & **0.71** & **0.74** & **0.41** & **0.50** & **0.34** & **0.70** & **0.55** & **0.84** & **0.76** & **0.83** & **0.75** & **0.84** \\ \hline FLAME (16M) & 0.68 & 0.64 & 0.23 & 0.42 & 0.24 & 0.59 & 0.54 & 0.76 & 0.73 & 0.78 & 0.73 & 0.78 \\ Global Deduplication & 0.57 & 0.56 & 0.16 & 0.2 & 0.15 & 0.45 & 0.41 & 0.59 & 0.68 & 0.76 & 0.73 & 0.81 \\ T5 (Generic objectives and tokenizer) & 0.11 & 0.12 & 0.02 & 0.05 & 0.07 & 0.22 & 0.25 & 0.37 & 0.62 & 0.82 & 0.49 & 0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 5: We compare multiple pretraining design decisions: model size, pretraining data curation, domain-specific pretraining objectives and tokenizer. We consider at top-1 for Last-Mile Repair (LMR) and Syntax Reconstruction (SR) and top-5 for Autocompletion (AC) with Exact Match (EM) and Sketch Match (SM). For details refer to Appendix D. Smaller model performs worse across the board. Curating data with global deduplication reduces performance by up to 30 points. Removing domain-specific objectives and tokenizer impacts performance most.
### Formula Autocompletion
The autoregressive nature of Codex models and FLAME's pretraining objectives allows us to evaluate their zeroshot performance2 for formula auto-completion. Note that we fine-tune CodeT5 for this task as it is pretrained on smaller span lengths (1 to 5 tokens) and generates special mask tokens (e.g., <MASK1>) in a zeroshot setting. We compute exact match and sketch match metrics with top-5 results.
Footnote 2: We finetuned Codex-Cushman and FLAME but observed worse performance, possibly from over-fitting.
In Table 3, we observe that FLAME performs better than all the larger models on the exact match metric and 3 out of 5 prefix lengths for sketch match. We note that Codex-Cushman and Codex-Davinci fail to complete 14% and 15% of the benchmarks with 0.99 fraction of the prefix, respectively, whereas FLAME fails to complete 6% of the benchmarks. We observe significantly lower performance by CodeT5, likely due to the lack of longer masks spans during pretraining. Surprisingly, Codex-Davinci performs slightly worse than the smaller Codex-Cushman for 3 out of 5 prefix lengths. Inspection of completions shows that Codex-Davinci tends to generate more tokens than required when completing these benchmark tasks. We also observe cases where models succeed with a shorter prefix but fail given a longer prefix.
### RQ2: Pretraining design decisions
We investigate FLAME's data curation, model size, the use of domain-specific pretraining objectives, and domain-specific tokenizer, and present results in Table 5.
#### 5.2.1 Training data curation
Previous work [10, 11] has shown that deduplication can improve the performance of language models and reduce the memorization of training data. Therefore, we curate a pretraining dataset by performing workbook-level sketch-based formula deduplication. Alternatively, one might consider performing global (pooled across all workbooks) sketch-based deduplication. This alternative results in a pretraining set of 591K formulas. Table 5 shows that training on this smaller corpus results in a lower-performing model. We find that FLAME's zeroshot performance falls by 14 points and its finetuned performance falls by 18 points for last-mile repair on the Forum benchmarks.
#### 5.2.2 Model size
We trained two variants of FLAME with 16M and 60M parameters. Table 5 compares FLAME-16M and FLAME-60M. We find that performance declines slightly across tasks/benchmarks when we reduce the model size to 16M. However, note that FLAME-16M can still outperform larger models such as Codex in 5 out of 10 zeroshot and finetuned settings, highlighting the efficacy of our design choices for FLAME.
#### 5.2.3 Pretraining objectives and Tokenizer
To evaluate the effectiveness of our domain-specific pretraining objectives and tokenizer, we pretrained a 60M-parameter T5 model with generic pretraining objectives and a generic tokenizer. Specifically, this model uses tail masking, masked span prediction without accounting for lexer token boundaries, and random denoising objectives. Additionally, it uses the CodeT5 tokenizer trained on our pretraining data. Table 5 shows that this variant performs worse across all tasks and benchmarks, both in zeroshot and finetuned settings. We attribute the huge drop, up to 62 points, in last-mile repair tasks in the zeroshot setting to our user-inspired denoising pretraining objective. Moreover, we hypothesize that FLAME's good syntax reconstruction performance can be attributed to the domain-specific tokenizer. Figure 5 illustrates how the generic tokenizer treats tokens with different capitalizations, resulting in incorrect generation.
### RQ3: Decoding strategy
In Table 6, we evaluate FLAME using four different decoding strategies: Beam Search, Group Beam Search [13], Nucleus Sampling [15] and Top-K Sampling [14]. We find that FLAME performs best with group beam search decoding (group size of 2) for all the formula intelligence tasks. However, for autocompletion with sketch match, nucleus sampling showed superior performance. We believe this is because autocompletion requires more diverse results, particularly at shorter prefixes. Refer to Appendix E for the autocompletion table.
## 6 Conclusions and Future Work
We present FLAME, a small (60M parameter) language model for spreadsheet formulas, which captures domain-specific properties in its data curation, tokenization, and pretraining objectives. We implemented FLAME for Excel formulas and evaluated it on three downstream tasks: last-mile repair, autocompletion, and a novel task that we term syntax reconstruction. We compare with the much larger models CodeT5, Codex-Cushman, and Codex-Davinci. When fine-tuned, FLAME can achieve top performance in 6 of our 10 experimental settings, despite having two orders of magnitude fewer parameters.
Future work will explore downstream tasks that require additional spreadsheet context (e.g. tables). To tackle such tasks we will explore extending our pretraining objectives to incorporate context and the extent to which FLAME can integrate with existing table encoder models.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Decoding Method**} & \multicolumn{2}{c}{**LMR (Forum)**} & \multicolumn{2}{c}{**SR (Forum)**} \\ \cline{2-5} & **T@1** & **T@5** & **T@1** & **T@5** \\ \hline Beam Search & 0.76 & 0.88 & **0.75** & **0.89** \\ Group Beam & **0.76** & **0.89** & 0.75 & 0.89 \\ Nucleus Sampling & 0.72 & 0.85 & 0.7 & 0.84 \\ Top K & 0.67 & 0.86 & 0.67 & 0.84 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance by decoding strategy for last-mile repair (LMR) and syntax reconstruction (SR). Beam and Group Beam Search have similar performance and outperform Nucleus and Top-K Sampling.
Figure 5: Failing case of syntax reconstruction. Due to the different capitalization of Sum and SUM, the model treats them as different tokens, converting them to an identifier and a function, respectively.
## Acknowledgments
We thank Microsoft Research Cambridge for sharing the Excel corpus used for pretraining FLAME. We thank OCTO at Microsoft (in particular Gopi Kumar and the AMD vTeam) for providing us with compute resources. We also thank the Excel team for their feedback and encouragement in pursuing this work.
| Spreadsheets are a vital tool for end-user data management. Using large language models for formula-authoring assistance in these environments can be difficult, because these models are expensive to train and hard to deploy due to their size (up to billions of parameters). We present FLAME, a Transformer-based model trained exclusively on Excel formulas that leverages domain insights to achieve competitive performance while being substantially smaller (60M parameters) and trained on two orders of magnitude less data. We curate a training dataset using sketch deduplication, introduce an Excel-specific formula tokenizer, and use domain-specific versions of masked span prediction and noisy auto-encoding as pre-training objectives. We evaluate FLAME on formula repair, formula completion, and similarity-based formula retrieval. FL |
2309.10452 | Essential cohomology modules | In this article, we give a generalization to injective modules by using
$e$-exact sequences introduced by Akray in [1] and name it $e$-injective
modules and investigate their properties. We reprove both Baer criterion and
comparison theorem of homology using $e$-injective modules and $e$-injective
resolutions. Furthermore, we apply the notion $e$-injective modules into local
cohomology to construct a new form of the cohomology modules call it essential
cohomology modules (briefly $e$-cohomology modules). We show that the torsion
functor $\Gamma_a ( - )$ is an $e$-exact functor on torsion-free modules. We
seek about the relationship of $e$-cohomology within the classical cohomology.
Finally, we conclude that they are different on the vanishing of their $i_{th}$
cohomology modules. | Runak H. Mustafa, Ismael Akray | 2023-09-19T09:12:22 | http://arxiv.org/abs/2309.10452v1 | # Essential cohomology modules
###### Abstract.
In this article, we give a generalization of injective modules by using \(e\)-exact sequences introduced by Akray in [1], name it \(e\)-injective modules, and investigate their properties. We reprove both the Baer criterion and the comparison theorem of homology using \(e\)-injective modules and \(e\)-injective resolutions. Furthermore, we apply the notion of \(e\)-injective modules to local cohomology to construct a new form of cohomology modules, called essential cohomology modules (briefly, \(e\)-cohomology modules). We show that the torsion functor \(\Gamma_{a}(-)\) is an \(e\)-exact functor on torsion-free modules. We investigate the relationship of \(e\)-cohomology with the classical cohomology. Finally, we conclude that they differ on the vanishing of their \(i\)-th cohomology modules.
Key words and phrases:essential exact sequence, homology module, essential injective module, local cohomology module 2020 Mathematics Subject Classification: 13C11, 46M18, 13D45
## 1. Introduction
Exact sequences play an important role in module theory and homological algebra. Notions such as injectivity, projectivity, flatness and derived functors have been defined and analyzed through exact sequences, so generalizing exact sequences makes it possible to generalize these related notions. An exact sequence \(0\to A\xrightarrow{i}B\xrightarrow{p}C\to 0\) is split if there exists a morphism \(j:C\to B\) (or \(f:B\to A\)) such that \(pj=I_{C}\) (or \(fi=I_{A}\)). In 1972, R. S. Mishra introduced a generalization of split sequences, where a semi-sequence \(M_{i-1}\xrightarrow{f_{i-1}}M_{i}\xrightarrow{f_{i}}M_{i+1}\) is called semi-split if \(Ker(f_{i})\) is a direct summand of \(M_{i}\) [7]. So a semi-split sequence is split if and only if it is exact. In 1999, Davvaz and Parnian-Goramaleky introduced a generalization of exact sequences called \(U\)-exact sequences, where a sequence of \(R\)-modules and \(R\)-homomorphisms \(\cdots\to M_{i-1}\xrightarrow{f_{i-1}}M_{i}\xrightarrow{f_{i}}M_{i+1}\to\dots\) is \(U_{i+1}\)-exact at \(M_{i}\) if \(Im(f_{i})=f_{i+1}^{-1}(U_{i+1})\), where \(U_{i+1}\) is a submodule of \(M_{i+1}\) [5]. Also, a short sequence \(0\to A\xrightarrow{f}B\xrightarrow{g}C\to 0\) of \(R\)-modules is \(U\)-exact if it is \(0\)-exact at \(A\), \(U\)-exact at \(B\) and \(0\)-exact at \(C\); equivalently, if \(f\) is monic, \(g\) is epic and \(Im(f)=g^{-1}(U)\) for a submodule \(U\) of \(C\).
Section three is devoted to discussing the application of \(e\)-exact sequences to local cohomology modules. We construct cohomology modules by using \(e\)-injective resolutions and denote the \(i_{th}\) local cohomology of an \(R\)-module \(M\) by \({}_{e}H^{i}_{a}(M)\). We prove that \({}_{e}H^{0}_{a}(\,)\) is naturally equivalent to \(r\Gamma_{a}(\,)\) on torsion-free \(R\)-modules (Theorem 3.7). We study some cases of vanishing of \(e\)-cohomology modules (Proposition 3.4 and Corollary 3.5). Furthermore, we show by an example that an \(a\)-torsion \(R\)-module \(M\) need not have zero \(e\)-cohomology modules \({}_{e}H^{i}_{a}(M)\) for \(i>0\), as is the case in local cohomology (Example 3.11).
## 2. Essential injective modules
In this section we introduce essential injective modules and investigate some of their properties and related results. We begin with the definition.
**Definition 2.1**.: We call an \(R\)-module \(E\) essential injective (briefly, e-injective) if it satisfies the following condition: for any monic \(f_{1}:A_{1}\to A_{2}\) and any map \(f_{2}:A_{1}\to E\) of \(R\)-modules, there exist \(0\neq r\in R\) and \(f_{3}:A_{2}\to E\) such that \(f_{3}f_{1}=rf_{2}\).
[Diagram: \(0\to A_{1}\xrightarrow{f_{1}}A_{2}\), with \(f_{2}:A_{1}\to E\) and the essential extension \(f_{3}:A_{2}\to E\).]
In this case, we say that the map \(f_{3}\) essentially extends the map \(f_{2}\).
Now, we give one of our main results in this section which is the Baer criterion for \(e\)-injectives.
**Theorem 2.2**.: _[e-Baer Criterion] An \(R\)-module \(E\) is e-injective if and only if, for every nonzero ideal \(\boldsymbol{a}\) of \(R\), every \(R\)-map \(f:\boldsymbol{a}\to E\) can be extended essentially to \(g:R\to E\), in the sense that there exists \(0\neq r\in R\) with \(gi=rf\), where \(i:\boldsymbol{a}\to R\) is the inclusion map._
Proof.: Since any ideal of \(R\) can be considered as a submodule of the \(R\)-module \(R\), the existence of an extension \(g\) of \(f\) is just a special case of the definition of e-injectivity of \(E\). Suppose we have the following diagram, where \(A\) is a submodule of \(B\) and \(i\) is the inclusion map:
Let \(X\) be the set of all ordered pairs \((A^{\prime},g^{\prime})\), where \(A\subseteq A^{\prime}\subseteq B\) and \(g^{\prime}:A^{\prime}\to E\) essentially extends \(f\); that is, \(g^{\prime}|_{A}=rf\) for some nonzero \(r\) in \(R\). Partially order \(X\) by defining \((A^{\prime},g^{\prime})\preceq(A^{\prime\prime},g^{\prime\prime})\) to mean that \(A^{\prime}\subseteq A^{\prime\prime}\) and \(g^{\prime\prime}\) essentially extends \(g^{\prime}\). Chains in \(X\) have upper bounds in \(X\), hence by Zorn's Lemma there exists a maximal element \((A_{0},g_{0})\) in \(X\). If \(A_{0}=B\), we are done; if not, there is \(b\in B\) with \(b\notin A_{0}\). Define \(\mathbf{a}=\{r\in R:rb\in A_{0}\}\); it is clear that \(\mathbf{a}\) is an ideal in \(R\). Define \(h:\mathbf{a}\to E\) by \(h(r)=g_{0}(rb)\). By hypothesis, there is a map \(h^{*}:R\to E\) that essentially extends \(h\). Finally, define \(A_{1}=A_{0}+<b>\) and \(g_{1}:A_{1}\to E\) by \(g_{1}(a_{0}+rb)=g_{0}(a_{0})+rh^{*}(1)\), where \(a_{0}\in A_{0}\) and \(r\in R\). Let us show that \(g_{1}\) is well-defined. If \(a_{0}+rb=a_{0}^{\prime}+r^{\prime}b\), then \((r-r^{\prime})b\in A_{0}\), so \(r-r^{\prime}\in\mathbf{a}\). Therefore \(g_{0}((r-r^{\prime})b)\) and \(h(r-r^{\prime})\) are defined, and we have \(g_{0}(a_{0}^{\prime}-a_{0})=g_{0}((r-r^{\prime})b)=h(r-r^{\prime})=h^{*}(r-r^{\prime})=(r-r^{\prime})h^{*}(1)\). Thus \(g_{0}(a_{0}^{\prime})-g_{0}(a_{0})=rh^{*}(1)-r^{\prime}h^{*}(1)\) and \(g_{0}(a_{0}^{\prime})+r^{\prime}h^{*}(1)=g_{0}(a_{0})+rh^{*}(1)\), as desired. Clearly, \(g_{1}(a_{0})=g_{0}(a_{0})\) for all \(a_{0}\in A_{0}\), so the map \(g_{1}\) essentially extends \(g_{0}\). We conclude that \((A_{0},g_{0})\prec(A_{1},g_{1})\), contradicting the maximality of \((A_{0},g_{0})\). Therefore \(A_{0}=B\), the map \(g_{0}\) essentially extends \(f\), and \(E\) is e-injective.
The following example shows that an \(e\)-injective module may not be injective.
**Example 2.3**.: The \(\mathbb{Z}\)-module \(\mathbb{Z}\) is an \(e\)-injective module, but it is not injective. We can show that \(\mathbb{Z}\) is \(e\)-injective by using the \(e\)-Baer Criterion. Let \(f_{1}:n\mathbb{Z}\rightarrow\mathbb{Z}\) be the inclusion map given by \(f_{1}(x)=sx\), \(s\in\mathbb{Z}\), and let \(f_{2}:n\mathbb{Z}\rightarrow\mathbb{Z}\) be defined by \(f_{2}(x)=mx\), \(m\in\mathbb{Z}\), where \((m,s)=1\). Then, by taking \(f_{3}=f_{2}\), we have \(f_{3}\circ f_{1}(x)=f_{3}(f_{1}(x))=f_{3}(sx)=msx=sf_{2}(x)\).
It is easy to see that every submodule of the \(\mathbb{Z}\)-module \(\mathbb{Z}\) is \(e\)-injective. Over an integral domain every injective module is divisible, but this is not the case for e-injectives, since \(\mathbb{Z}\) as a \(\mathbb{Z}\)-module is \(e\)-injective while not divisible.
**Definition 2.4**.: [1] An \(e\)-exact sequence \(0\to A\overset{i}{\rightarrow}B\overset{p}{\rightarrow}C\to 0\) is \(e\)-split if there exist \(0\neq s\in R\) and a morphism \(j:C\to B\) (or \(f:B\to A\)) such that \(pj=sI_{C}\) (or \(fi=sI_{A}\)).
**Proposition 2.5**.: _If an e-exact sequence \(0\to A\overset{i}{\rightarrow}B\overset{p}{\rightarrow}C\to 0\) is \(e\)-split, then there exists \(0\neq r\in R\) such that \(rB\cong A\bigoplus C\)._
Proof.: Since the sequence \(e\)-splits, there exist \(0\neq r\in R\) and \(j:C\to B\) with \(pj=rI_{C}\); we show that \(rB=Imi\bigoplus Imj\). For any \(b\in B\), \(rb-jp(b)\in Kerp\), and by \(e\)-exactness there exist \(a\in A\) and \(0\neq s\in R\) such that \(i(a)=s(rb-jp(b))\), that is, \(srb=i(a)+sjp(b)\). Hence \(srB=Imi+Imj\). Now, if \(i(x)=rz=j(y)\) for \(x\in A\) and \(y\in C\), then \(0=pi(x)=p(rz)=pj(y)=ry\), and so \(rz=j(y)=0\), which implies that \(Imi\cap Imj=0\). Therefore \(rB=Imi\bigoplus Imj\cong A\bigoplus C\).
In the following proposition we generalize [8, Proposition 3.38] to \(e\)-injective \(R\)-modules.
**Proposition 2.6**.: _A direct product \(E=\prod E_{i}\) of \(R\)-modules is e-injective if and only if each \(E_{i}\) is an e-injective \(R\)-module._
Proof.: Suppose that \(E=\prod E_{i}\), and let \(k_{i}:E_{i}\to E\) be the injection and \(p_{i}:E\to E_{i}\) the projection, so that \(p_{i}k_{i}=I_{E_{i}}\). Since \(E\) is e-injective, there exist a homomorphism \(h:C\to E\) and \(0\neq r\in R\) such that \(hj=r(k_{i}\circ f_{i})\). Now, \(p_{i}(hj)=p_{i}(r(k_{i}\circ f_{i}))=r((p_{i}\circ k_{i})\circ f_{i})=rf_{i}\), so \(p_{i}h\) essentially extends \(f_{i}\) and each \(E_{i}\) is e-injective.
Recall from [1] that a triangle diagram of \(R\)-modules and \(R\)-morphisms as below \(e\)-commutes if and only if there exists \(0\neq r\in R\) such that \(g\circ i=rf\). Similarly, a square diagram as below \(e\)-commutes if and only if there exists \(0\neq r\in R\) such that \(qf=rgt\).
**Proposition 2.7**.: _Let \(E\) be a torsion-free \(R\)-module. Then \(E\) is an e-injective if and only if \(Hom(\quad,E)\) is a contravariant e-exact functor._
Proof.: Suppose that \(0\to A\stackrel{{ i}}{{\to}}B\stackrel{{ p}}{{\to}}C\to 0\) is a short e-exact sequence. Since \(E\) is torsion-free, \(Hom(\quad,E)\) is a left e-exact functor by [1, Theorem 2.7]. It remains to show that \(Hom(\quad,E)\) is right e-exact, which means that \(Hom(B,E)\stackrel{{ i^{*}}}{{\to}}Hom(A,E)\to 0\) is e-exact. For this purpose, let \(0\neq f\in Hom(A,E)\). The \(e\)-injectivity of \(E\) implies that there exist \(g:B\to E\) and \(0\neq r\in R\) such that \(i^{*}(g)=gi=rf\). Thus we have \(Imi^{*}\leqslant_{e}Hom(A,E)\). Therefore \(Hom(\quad,E)\) is an e-exact functor. Conversely, if \(Hom(\quad,E)\) is an e-exact contravariant functor, then the sequence \(0\to Hom(C,E)\stackrel{{ p^{*}}}{{\to}}Hom(B,E)\stackrel{{ i^{*}}}{{\to}}Hom(A,E)\to 0\) is e-exact. Then for every \(0\neq f\in Hom(A,E)\) there exist \(g\in Hom(B,E)\) and \(0\neq r\in R\) such that \(i^{*}g=rf\), which implies that \(gi=rf\); that is, the diagram below
e-commutes and \(E\) is e-injective.
**Proposition 2.8**.: _Let \(E\) be a torsion-free \(e\)-injective \(R\)-module. Then any e-exact sequence \(0\to E\to B\to C\to 0\) \(e\)-splits._
Proof.: Let \(E\) be an \(e\)-injective \(R\)-module and let the sequence \(0\to E\xrightarrow{i}B\xrightarrow{p}C\to 0\) be e-exact. Then, by Proposition 2.7, \(0\to Hom(C,E)\xrightarrow{p^{*}}Hom(B,E)\xrightarrow{i^{*}}Hom(E,E)\to 0\) is an e-exact sequence. Since \(I_{E}\in Hom(E,E)\), there exist \(g\in Hom(B,E)\) and \(0\neq r\in R\) such that \(i^{*}(g)=gi=rI_{E}\), and so the given sequence \(e\)-splits.
**Theorem 2.9**.: _Given an e-commutative diagram of \(R\)-modules having \(e\)-exact rows and torsion-free module \(C^{\prime\prime}\):_
_there exists a unique map \(h:A^{\prime\prime}\to C^{\prime\prime}\) making the augmented diagram \(e\)-commute._
Proof.: If \(a^{\prime\prime}\in A^{\prime\prime}\), then there exist \(a\in A\) and \(0\neq r\in R\) such that \(p(a)=ra^{\prime\prime}\). Define \(h(a^{\prime\prime})=rqg(a)\). We must show that \(h\) is well defined; that is, if \(u\in A\) also satisfies \(p(u)=ra^{\prime\prime}\), then \(h(a^{\prime\prime})\) is unchanged. Now, \(p(a)=p(u)\) implies \(p(a-u)=0\), so \(a-u\in Kerp\); by \(e\)-exactness, there exist \(0\neq s\in R\) and \(a^{\prime}\in A^{\prime}\) such that \(i(a^{\prime})=s(a-u)\). Thus \(rsqg(a-u)=rqg(i(a^{\prime}))=qjf=0\), because \(qj=0\). Therefore \(h\) is well-defined. Suppose that \(h^{\prime}:A^{\prime\prime}\to C^{\prime\prime}\) is another map satisfying \(rh^{\prime}p=qg\), and let \(a\in A\). Then \(rh^{\prime}p(a)=qg(a)=rhp(a)\), and so \(h\) is unique.
**Theorem 2.10**.: _Given an e-commutative diagram of \(R\)-modules having \(e\)-exact rows_
_there exists a unique map \(f:A^{\prime}\to C^{\prime}\) making the augmented diagram \(e\)-commute. Moreover, if \(g\) and \(h\) are isomorphisms and \(A\) and \(A^{\prime\prime}\) are torsion-free, then \(A^{\prime}\cong rC^{\prime}\) for some non-zero element \(r\in R\)._
Proof.: Let \(a^{\prime},u^{\prime}\in A^{\prime}\), and define \(f(a^{\prime})=c^{\prime}\), where \(0\neq r\in R\) and \(c^{\prime}\in C^{\prime}\) satisfy \(j(c^{\prime})=rg(i(a^{\prime}))\). We must show that \(f\) is well defined: \(i(a^{\prime})=i(u^{\prime})\) implies \(qg(i(a^{\prime}-u^{\prime}))=0\), so \(g(i(a^{\prime}-u^{\prime}))\in Ker\;q\); by \(e\)-exactness, there exist \(0\neq r\in R\) and \(c^{\prime}\in C^{\prime}\) such that \(j(c^{\prime})=rg(i(a^{\prime}-u^{\prime}))\). Thus \(rqg(i(a^{\prime}-u^{\prime}))=qj(c^{\prime})=0\), because \(qj=0\). Therefore \(f\) is well-defined. Suppose that \(f^{\prime}:A^{\prime}\to C^{\prime}\) is
another map satisfying \(jf^{\prime}=rgi\). Then \(jf^{\prime}(a^{\prime})=rgi(a^{\prime})=jf(a^{\prime})\); this implies that \((f^{\prime}(a^{\prime})-f(a^{\prime}))\in Kerj=0\), and so \(f\) is unique.
Let \(a^{\prime}\in Ker\:f\). Then \(f(a^{\prime})=0\), and by \(e\)-commutativity there exists \(0\neq r\in R\) such that \(jf(a^{\prime})=0=rgi(a^{\prime})\). Thus \(ri(a^{\prime})\in Ker\:g=0\), and since \(A\) is torsion-free, \(i(a^{\prime})=0\), which implies \(a^{\prime}\in Ker\:i=0\). Suppose that \(c^{\prime}\) is a non-zero element of \(C^{\prime}\). Since \(j(c^{\prime})\in C\) and \(g\) is onto, there exists \(a\in A\) such that \(g(a)=j(c^{\prime})\), and \(e\)-commutativity gives us \(qg(a)=rhp(a)\) for some \(0\neq r\in R\). Now, \(qg(a)=qj(c^{\prime})=0=rhp(a)\), so \(rp(a)\in Ker\:h=0\); torsion-freeness of \(A^{\prime\prime}\) gives \(a\in Ker\:p\), so there exist \(0\neq s\in R\) and \(a^{\prime}\in A^{\prime}\) such that \(i(a^{\prime})=sa\). By \(e\)-commutativity there exists \(0\neq t\in R\) such that \(jf(a^{\prime})=tgi(a^{\prime})\), so \(jf(a^{\prime})=tg(i(a^{\prime}))=tsg(a)=tsj(c^{\prime})\). We obtain \((f(a^{\prime})-tsc^{\prime})\in Ker\:j=0\), and so \(f(a^{\prime})=tsc^{\prime}\). Therefore, \(A^{\prime}\cong rC^{\prime}\) for some non-zero element \(r\in R\).
**Lemma 2.11**.: _Consider the commutative diagram of \(R\)-modules and \(R\)-morphisms, where \(R\) is a domain, \(B\) and \(B^{\prime\prime}\) are torsion-free \(R\)-modules_
_If the columns and the first and the third rows are \(e\)-exact, then the middle row is also \(e\)-exact._
Proof.: To prove that the middle row is \(e\)-exact, we check the following three conditions:
1) \(Ker(g)=0\). Take \(b\in Ker(g)\). Then \(j^{\prime}g(b)=0=hi^{\prime}(b)\), so \(i^{\prime}(b)\in Ker(h)\). Since \(Ker(h)=0\), \(i^{\prime}(b)=0\) and \(b\in Ker(i^{\prime})\); as \(Im(i)\leqslant_{e}Ker(i^{\prime})\), there exist a non-zero element \(r\) of \(R\) and \(a\in A\) such that \(i(a)=rb\), and so \(gi(a)=g(rb)=rg(b)=0=jf(a)\), so \(f(a)\in Ker(j)\). Since \(Ker(j)=0\), \(f(a)=0\) and \(a\in Ker(f)=0\), which means \(a=0\); this implies \(rb=0\), and \(b=0\), since \(B\) is torsion-free. Therefore \(g\) is monic.
2) \(Im(g)\leqslant_{e}Ker(g^{\prime})\). First we prove \(Im(g)\subseteq Ker(g^{\prime})\). Let \(b^{\prime}\in Im(g)\). Then there exists \(b\in B\) such that \(g(b)=b^{\prime}\) and \(j^{\prime}g(b)=hi^{\prime}(b)=j^{\prime}(b^{\prime})\), which means that \(j^{\prime}(b^{\prime})\in Im(h)\subseteq Ker(h^{\prime})\), and so \(p^{\prime}g^{\prime}(b^{\prime})=h^{\prime}j^{\prime}(b^{\prime})=0\). Hence \(g^{\prime}(b^{\prime})\in Ker(p^{\prime})\), and as \(Im(p)\leqslant_{e}Ker(p^{\prime})\), there exist \(a^{\prime\prime}\in A^{\prime\prime}\) and a non-zero element \(s\) of \(R\) such that \(p(a^{\prime\prime})=sg^{\prime}(b^{\prime})=s(g^{\prime}g(b))=0\), so \(sg^{\prime}(b^{\prime})=0\) and \(g^{\prime}(b^{\prime})=0\), since \(B^{\prime\prime}\) is torsion-free. Thus \(b^{\prime}\in Ker(g^{\prime})\), and so \(Im(g)\subseteq Ker(g^{\prime})\). Now, for essentiality, take \(b^{\prime}\) to be a non-zero element of \(Ker(g^{\prime})\), say \(b^{\prime}=j(a^{\prime})\). Then \(0=g^{\prime}(b^{\prime})=g^{\prime}j(a^{\prime})=pf^{\prime}(a^{\prime})\), and \(f^{\prime}(a^{\prime})\in Ker(p)\). Since \(Ker(p)=0\), \(f^{\prime}(a^{\prime})=0\), so \(a^{\prime}\in Ker(f^{\prime})\); as \(Im(f)\leqslant_{e}Ker(f^{\prime})\), there exist a non-zero element \(r\) of \(R\) and \(a\in A\) such that \(f(a)=ra^{\prime}\). Now, \(jf(a)=j(ra^{\prime})=rj(a^{\prime})=rb^{\prime}=gi(a)=g(b)\). Therefore, \(Im(g)\leqslant_{e}Ker(g^{\prime})\).
3) \(Im(g^{\prime})\leqslant_{e}B^{\prime\prime}\). Let \(b^{\prime\prime}\) be a non-zero element of \(B^{\prime\prime}\), and consider \(p^{\prime}(b^{\prime\prime})\in C^{\prime\prime}\). Then there exist \(c^{\prime}\in C^{\prime}\) and a non-zero element \(r\) of \(R\) such that \(h^{\prime}(c^{\prime})=rp^{\prime}(b^{\prime\prime})\), and as \(Im(j^{\prime})\leqslant_{e}C^{\prime}\), there exist a non-zero element \(s\) of \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(j^{\prime}(b^{\prime})=sc^{\prime}\). Now, we have \(p^{\prime}g^{\prime}(b^{\prime})=h^{\prime}j^{\prime}(b^{\prime})=h^{\prime}(sc^{\prime})=sh^{\prime}(c^{\prime})=srp^{\prime}(b^{\prime\prime})\), and \(p^{\prime}(g^{\prime}(b^{\prime})-srb^{\prime\prime})=0\), which means \((g^{\prime}(b^{\prime})-srb^{\prime\prime})\in Ker(p^{\prime})\); as \(Im(p)\leqslant_{e}Ker(p^{\prime})\), there exist a non-zero element \(k\in R\) and \(a^{\prime\prime}\in A^{\prime\prime}\) such that \(p(a^{\prime\prime})=k(g^{\prime}(b^{\prime})-srb^{\prime\prime})\). Also, we have \(f^{\prime}(a^{\prime})=ta^{\prime\prime}\) for a non-zero element \(t\) of \(R\) and some \(a^{\prime}\in A^{\prime}\), because \(Im(f^{\prime})\leqslant_{e}A^{\prime\prime}\). Thus \(pf^{\prime}(a^{\prime})=p(ta^{\prime\prime})=tp(a^{\prime\prime})=tk(g^{\prime}(b^{\prime})-srb^{\prime\prime})=g^{\prime}j(a^{\prime})\), which means \(g^{\prime}(tkb^{\prime}-j(a^{\prime}))=tksrb^{\prime\prime}\). Therefore, \(Im(g^{\prime})\leqslant_{e}B^{\prime\prime}\).
**Lemma 2.12**.: _Consider the commutative diagram of \(R\)-modules and \(R\)-morphisms, where \(R\) is a domain and \(C\), \(C^{\prime}\) and \(C^{\prime\prime}\) are torsion-free._
_If the columns and the first two rows are \(e\)-exact, then the last row is also \(e\)-exact._
Proof.: To prove that the third row is \(e\)-exact, we check the following three conditions:
1) \(Ker(h)=0\). Take \(c\in Ker(h)\); then \(h(c)=0\). As \(Im(i^{\prime})\leqslant_{e}C\), there exist \(b\in B\) and a non-zero element \(r\) of \(R\) such that \(i^{\prime}(b)=rc\). Then \(hi^{\prime}(b)=h(rc)=rh(c)=0=j^{\prime}g(b)\), so \(g(b)\in Ker(j^{\prime})\); as \(Im(j)\leqslant_{e}Ker(j^{\prime})\), there exist a non-zero element \(s\) of \(R\) and \(a^{\prime}\in A^{\prime}\) such that \(j(a^{\prime})=sg(b)\), and so \(g^{\prime}j(a^{\prime})=sg^{\prime}g(b)=0=pf^{\prime}(a^{\prime})\), so \(f^{\prime}(a^{\prime})\in Ker(p)\). Since \(Ker(p)=0\), \(f^{\prime}(a^{\prime})=0\) and \(a^{\prime}\in Ker(f^{\prime})\); by essentiality there exist \(a\in A\) and a non-zero element \(t\) of \(R\) such that \(f(a)=ta^{\prime}\). Then \(jf(a)=j(ta^{\prime})=tj(a^{\prime})=tsg(b)=gi(a)\), and so \(g(tsb-i(a))=0\), which means \(tsb-i(a)\in Ker(g)=0\); this implies \(tsb=i(a)\), and so \(tsi^{\prime}(b)=i^{\prime}i(a)=0\). Then \(i^{\prime}(b)=0\), since \(C\) is torsion-free. Thus \(rc=0\), and so \(c=0\). Therefore \(h\) is monic.
2) \(Im(h)\leqslant_{e}Ker(h^{\prime})\). First we prove \(Im(h)\subseteq Ker(h^{\prime})\). Let \(c^{\prime}\in Im(h)\). Then there exists \(c\in C\) such that \(h(c)=c^{\prime}\), and by essentiality there exist \(b^{\prime}\in B^{\prime}\) and a non-zero element \(r\) of \(R\) such that \(j^{\prime}(b^{\prime})=rc^{\prime}\), and so \(h^{\prime}j^{\prime}(b^{\prime})=h^{\prime}(rc^{\prime})=rh^{\prime}(c^{\prime})=rh^{\prime}(h(c))=p^{\prime}g^{\prime}(b^{\prime})\). Also, by commutativity and essentiality, \(g^{\prime}j(a^{\prime})=g^{\prime}(tb^{\prime})=pf^{\prime}(a^{\prime})\) for a non-zero \(t\) of \(R\) and \(b^{\prime}\in Ker(j^{\prime})\), which means \(Im(g^{\prime})\subseteq Im(p)\subseteq Ker(p^{\prime})\), and so \(p^{\prime}g^{\prime}(tb^{\prime})=0\). Therefore \(h^{\prime}(rc^{\prime})=rh^{\prime}(h(c))=tp^{\prime}g^{\prime}(b^{\prime})=0\), so \(rh^{\prime}(h(c))=0\) and \(h^{\prime}(h(c))=0\), since \(C^{\prime\prime}\) is torsion-free. Hence \(c^{\prime}=h(c)\in Ker(h^{\prime})\). Now, for essentiality, take \(c^{\prime}\) to be a non-zero element of \(Ker(h^{\prime})\). Then \(h^{\prime}j^{\prime}(b^{\prime})=p^{\prime}g^{\prime}(b^{\prime})\),
and as \(Im(j^{\prime})\leqslant_{e}C^{\prime}\), there exist a non-zero element \(r\) of \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(j^{\prime}(b^{\prime})=rc^{\prime}\). Hence \(hi^{\prime}(b)=j^{\prime}g(b)\); since \(Im(i^{\prime})\leqslant_{e}C\) and \(Im(g)\leqslant_{e}Ker(g^{\prime})\subseteq B^{\prime}\), there are non-zero elements \(c\in C\), \(b\in B\) and non-zero elements \(s\) and \(k\) of \(R\) such that \(hi^{\prime}(b)=h(sc)=sh(c)=j^{\prime}(kb^{\prime})=krc^{\prime}\). Therefore \(sh(c)=kr(c^{\prime})\); putting \(k=s\), we get \(s(h(c)-rc^{\prime})=0\), and so \(h(c)=rc^{\prime}\), since \(C^{\prime}\) is torsion-free. Then \(Im(h)\leqslant_{e}Ker(h^{\prime})\).
3) \(Im(h^{\prime})\leqslant_{e}C^{\prime\prime}\). Let \(c^{\prime\prime}\) be a non-zero element of \(C^{\prime\prime}\). As \(Im(p^{\prime})\leqslant_{e}C^{\prime\prime}\), there exist \(b^{\prime\prime}\in B^{\prime\prime}\) and a non-zero element \(r\) of \(R\) such that \(p^{\prime}(b^{\prime\prime})=rc^{\prime\prime}\); as \(Im(g^{\prime})\leqslant_{e}B^{\prime\prime}\), there exist a non-zero element \(s\) of \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(g^{\prime}(b^{\prime})=sb^{\prime\prime}\); and since \(Im(j^{\prime})\leqslant_{e}C^{\prime}\), there exist a non-zero element \(t\) of \(R\) and \(c^{\prime}\in C^{\prime}\) such that \(j^{\prime}(b^{\prime})=tc^{\prime}\). Hence \(h^{\prime}j^{\prime}(b^{\prime})=h^{\prime}(tc^{\prime})=th^{\prime}(c^{\prime})=p^{\prime}g^{\prime}(b^{\prime})=p^{\prime}(sb^{\prime\prime})=sp^{\prime}(b^{\prime\prime})=src^{\prime\prime}\); putting \(s=t\), we get \(t(h^{\prime}(c^{\prime})-rc^{\prime\prime})=0\), which implies \(h^{\prime}(c^{\prime})-rc^{\prime\prime}=0\), since \(C^{\prime\prime}\) is torsion-free. Thus \(h^{\prime}(c^{\prime})=rc^{\prime\prime}\) and \(Im(h^{\prime})\leqslant_{e}C^{\prime\prime}\).
Recall from [1] that an e-injective resolution of an \(R\)-module \(A\) is an e-exact sequence \(0\to A\stackrel{{\eta}}{{\rightarrow}}E^{0}\stackrel{{ d^{0}}}{{\rightarrow}}E^{1}\stackrel{{ d^{1}}}{{\rightarrow}}...\to E^{n}\stackrel{{ d^{n}}}{{\rightarrow}}E^{n+1}\rightarrow...\) where each \(E^{i}\) is an e-injective \(R\)-module. Let \(f,g:X\to E\) be two chain maps. Then \(f\) is \(e\)-homotopic to \(g\) if there are maps \(s^{n+1}:X^{n+1}\to E^{n}\) and non-zero elements \(s\) and \(r\) in \(R\) such that \(r(g^{n}-f^{n})=s^{n+1}d^{n}+sd^{n-1}s^{n}\) for all \(n\). Now we are in a position to prove the new form of the comparison theorem using \(e\)-injectivity and \(e\)-exact sequences.
**Theorem 2.13**.: _[Comparison Theorem for e-injectives] Suppose that we have the following diagram:_
_where the rows are complexes. If each \(E^{n}\) in the top row is e-injective and the bottom row is e-exact, then there exists a chain map \(f:X^{A^{\prime}}\to E^{A}\) (the dashed arrows) making the completed diagram e-commute. Furthermore, any two such chain maps are e-homotopic._
Proof.: We prove the existence of \(f^{n}\) by induction on \(n\geqslant 0\). For the base step \(n=0\), consider the following diagram:
Since \(\varepsilon^{\prime}\) is monic and \(E^{0}\) is e-injective, there exist \(f^{0}:X^{0}\to E^{0}\) and \(0\neq r\in R\) such that \(f^{0}\varepsilon^{\prime}=r(\varepsilon\circ f)\). For the inductive step, suppose we have \(f^{n-1}\) and \(f^{n}\) and the following diagram:
Since \(d^{n}f^{n}(d^{\prime\,n-1}X^{n-1})=r(d^{n}d^{n-1}f^{n-1}(X^{n-1}))=0\), we get \(Imd^{\prime\,n-1}\subseteq Kerd^{n}f^{n}\), and as \(Imd^{\prime\,n-1}\leqslant_{e}Kerd^{\prime\,n}\), we have the following diagram
in which \(d^{\prime n}\) is monic and \(E^{n+1}\) is e-injective, so there exist \(r_{n}\in R\) and \(f^{n+1}:X^{n+1}\to E^{n+1}\) such that \(f^{n+1}d^{\prime\,n}=r_{n}d^{n}f^{n}\). Now we show the uniqueness of \(f\) up to e-homotopy. If \(h:X^{A^{\prime}}\to E^{A^{\prime}}\) is another chain map with \(h^{0}\varepsilon^{\prime}=r\varepsilon f\), we construct the terms \(s^{n}:X^{n+1}\to E^{n}\) of an e-homotopy \(s=(s^{n})\) by induction on \(n\geq 0\), showing that \(s(h^{n}-f^{n})=s^{n+1}d^{\prime n}+r^{\prime}d^{n-1}s^{n}\) for suitable \(s,r^{\prime},p,q\in R\). We define \(s^{0}:X^{0}\to A\) to be the zero map. To show that \(Im(r(h^{n}-f^{n})-r^{\prime}d^{n-1}s^{n})\subseteq Kerd^{n}\), we compute
\[\begin{split} d^{n}\bigl(r(h^{n}-f^{n})-r^{\prime}d^{n-1}s^{n}\bigr)&=d^{n}(r(h^{n}-f^{n}))-d^{n}(r^{\prime}d^{n-1}s^{n})\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}d^{n}(d^{n-1}s^{n})\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}d^{n}\bigl(p(h^{n}-f^{n})-qs^{n+1}d^{\prime n}\bigr)\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}pd^{n}(h^{n}-f^{n})-r^{\prime}qd^{n}(s^{n+1}d^{\prime n})\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}pd^{n}(h^{n}-f^{n})-r^{\prime}qd^{n}(md^{n-1}s^{n})=0,\end{split}\]
putting \(r^{\prime}p=r\).
Therefore, we have the following diagram:
Since \(E^{n}\) is e-injective and \(d^{\prime n}\) is monic, there exist \(s^{n+1}\) and \(0\neq s\in R\) such that \(s^{n+1}d^{\prime n}=s\bigl(r(h^{n}-f^{n})-r^{\prime}d^{n-1}s^{n}\bigr)\). Therefore \(sr(h^{n}-f^{n})=s^{n+1}d^{\prime n}+sr^{\prime}d^{n-1}s^{n}\). Hence \(f\) and \(h\) are e-homotopic.
**Proposition 2.14**.: _Let \(I^{\prime\bullet}\) and \(I^{\prime\prime\bullet}\) be two e-injective resolutions of \(M^{\prime}\) and \(M^{\prime\prime}\) respectively. Suppose that \(0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\) is an e-exact sequence. Then there exists an e-injective resolution \(I^{\bullet}\) such that the following diagram_
_is e-commutative in which the bottom row is an e-exact sequence of complexes._
Proof.: Consider the following diagram in which the rows are e-exact
By the comparison theorem for e-injectives (Theorem 2.13), there exist maps (dashed arrows) that make all squares e-commute. Now, we define \(I^{n}=I^{\prime n}\bigoplus I^{\prime\prime n},\delta^{-1}:M\to I^{0}\) by \(\delta^{-1}(m)=(-f^{0}(m),\delta^{\prime\prime-1}\circ p(m))\) and \(\delta^{n}:I^{n}\to I^{n+1}\) by \(\delta^{n}(a,b)=(\delta^{\prime n}(a)+(-1)^{n}f^{n+1}(b),\delta^{\prime\prime n }(b))\),
Since \(Ker\delta^{\prime\prime-1}=0\) and \(Ker\delta^{\prime-1}=0\), we get \(Ker\delta^{-1}=0\). Thus \(\delta^{-1}\) is monic, and from Proposition 2.6 we get that \(I^{n}\) is e-injective for each \(n\geqslant 0\). To prove that \(Im\delta^{-1}\leqslant_{e}Ker\delta^{0}\), we must show first that \(Im\delta^{-1}\subseteq Ker\delta^{0}\). Indeed, \(\delta^{0}(\delta^{-1}(m))=\delta^{0}(-f^{0}(m),\delta^{\prime\prime-1}\circ p(m))=(\delta^{\prime 0}(-f^{0}(m))+(-1)^{0}f^{1}(\delta^{\prime\prime-1}\circ p(m)),\delta^{\prime\prime 0}(\delta^{\prime\prime-1}\circ p(m)))=0\), which implies that \(Im\delta^{-1}\subseteq Ker\delta^{0}\). Now, for essentiality, let \(0\neq(a,b)\in Ker\delta^{0}\). Then \(0=\delta^{0}(a,b)=(\delta^{\prime 0}(a)+(-1)^{0}f^{1}(b),\delta^{\prime\prime 0}(b))\), so \(\delta^{\prime 0}(a)+f^{1}(b)=0\) and \(\delta^{\prime\prime 0}(b)=0\). Since \(Im\delta^{\prime\prime-1}\leqslant_{e}Ker\delta^{\prime\prime 0}\), there exists \(0\neq r\in R\) such that \(\delta^{\prime\prime-1}(m^{\prime\prime})=rb\) for some \(m^{\prime\prime}\in M^{\prime\prime}\), and \(\delta^{\prime\prime-1}(p(m))=rb\) for some \(m\in M\). So \(f^{1}(\delta^{\prime\prime-1}(p(m)))=f^{1}(rb)=rf^{1}(b)=-r\delta^{\prime 0}(a)\), and by e-commutativity there exists \(0\neq q\in R\) such that \(-r\delta^{\prime 0}(a)=q\delta^{\prime 0}f^{0}(m)\); this implies that \(r\delta^{\prime 0}(a)+q\delta^{\prime 0}f^{0}(m)=0=\delta^{\prime 0}(ra+qf^{0}(m))\). Thus we get \(ra+qf^{0}(m)\in Ker\delta^{\prime 0}\), and since \(Im\delta^{\prime-1}\leqslant_{e}Ker\delta^{\prime 0}\), there exists \(0\neq t\in R\) such that \(\delta^{\prime-1}(m^{\prime})=tra+tqf^{0}(m)\) for some \(m^{\prime}\in M^{\prime}\). Now, \(\delta^{-1}(tqm-i(m^{\prime}))=(-f^{0}(tqm-i(m^{\prime})),\delta^{\prime\prime-1}\circ p(tqm-i(m^{\prime})))=(-f^{0}(tqm)+f^{0}(i(m^{\prime})),\delta^{\prime\prime-1}p(tqm)-\delta^{\prime\prime-1}p(i(m^{\prime})))=(tra-\delta^{\prime-1}(m^{\prime})+p\delta^{\prime-1}(m^{\prime}),tqrb)=s(a,b)\), putting \(q=p=1\), which proves that \(Im\delta^{-1}\leqslant_{e}Ker\delta^{0}\). For \(n\geqslant 0\), \(\delta^{n+1}(\delta^{n}(a,b))=\delta^{n+1}(\delta^{\prime n}(a)+(-1)^{n}f^{n+1}(b),\delta^{\prime\prime n}(b))=(\delta^{\prime n+1}(\delta^{\prime n}(a))+(-1)^{n}\delta^{\prime n+1}f^{n+1}(b)+(-1)^{n+1}f^{n+2}(\delta^{\prime\prime n}(b)),\delta^{\prime\prime n+1}(\delta^{\prime\prime n}(b)))=0\). This proves that \(Im\delta^{n}\subseteq Ker\delta^{n+1}\). Now let \((a,b)\in Ker\delta^{n+1}\). Then \(\delta^{\prime\prime n+1}(b)=0\) and \(\delta^{\prime n+1}(a)+(-1)^{n+1}f^{n+2}(b)=0\). It follows that \(\delta^{\prime\prime n}(c)=rb\) for some \(0\neq r\in R\) and \(c\in I^{\prime\prime n}\). Then \((-1)^{n}y\delta^{\prime n+1}f^{n+1}(c)=(-1)^{n}f^{n+2}(\delta^{\prime\prime n}(c))=r\delta^{\prime n+1}(a)\), so that \(\delta^{\prime n+1}(ra-(-1)^{n}yf^{n+1}(c))=0\) and \(ra-(-1)^{n}yf^{n+1}(c)\in Ker\delta^{\prime n+1}\). Since \(Im\delta^{\prime n}\leqslant_{e}Ker\delta^{\prime n+1}\), there exists \(0\neq q\in R\) such that \(\delta^{\prime n}(d)=q(ra-(-1)^{n}yf^{n+1}(c))\) for some \(d\in I^{\prime n}\). Thus \(\delta^{n}(d,c)=(\delta^{\prime n}(d)+(-1)^{n}f^{n+1}(c),\delta^{\prime\prime n}(c))=r(a,b)\), putting \(q=y=1\), which proves that \(Im\delta^{n}\leqslant_{e}Ker\delta^{n+1}\).
## 3. The cohomology regarding to \(e\)-exact sequences
In this section all rings are Noetherian domains and all modules are unitary \(R\)-modules. We describe the right derived functors of the additive covariant (torsion) functor \(\Gamma_{a}\) of local cohomology using e-injective resolutions, and then present new forms of some theorems of cohomology with \(e\)-exact sequences. Let \(a\) be an ideal of \(R\). The following useful result shows that the functor \(\Gamma_{a}(\ )\) is \(e\)-exact under a suitable condition.
**Lemma 3.1**.: _Let \(0\to L\xrightarrow{f}M\xrightarrow{g}N\to 0\) be an e-exact sequence of \(R\)-modules and \(R\)-homomorphisms. Then \(0\to\Gamma_{a}(L)\xrightarrow{\Gamma_{a}(f)}\Gamma_{a}(M)\xrightarrow{\Gamma_{ a}(g)}\Gamma_{a}(N)\) is an e-exact sequence. Furthermore, if \(N\) is a torsion-free module, then the functor \(\Gamma_{a}(\ )\) is an \(e\)-exact._
Proof.: It is clear that \(Ker(\Gamma_{a}(f))=0\). To show that \(Im(\Gamma_{a}(f))\leqslant_{e}Ker(\Gamma_{a}(g))\), let \(0\neq y\in Ker(\Gamma_{a}(g))\). Then \(y\in\Gamma_{a}(M)\) and \(g(y)=0\), and there exists \(n\in\mathbb{N}\) such that \(a^{n}y=0\). Now, since \(Imf\leqslant_{e}Kerg\), there exists \(0\neq r\in R\) such that \(f(l)=ry\) for some \(l\in L\). Since \(f(a^{n}l)=a^{n}f(l)=a^{n}(ry)=0\) and \(f\) is monic, \(a^{n}l=0\) and \(l\in\Gamma_{a}(L)\). Now, to show that \(Im(\Gamma_{a}(g))\leqslant_{e}\Gamma_{a}(N)\), let \(y\in Im(\Gamma_{a}(g))\cap Rx\) for \(0\neq x\in\Gamma_{a}(N)\). Then \(y=g(m)=rx\) for some \(m\in\Gamma_{a}(M)\) and \(r\in R\). By hypothesis, \(rx\neq 0\) and so \(y\neq 0\), which guarantees the \(e\)-exactness of the sequence \(0\rightarrow\Gamma_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\rightarrow}}\Gamma_{a}(M)\stackrel{{\Gamma_{a}(g)}}{{\rightarrow}}\Gamma_{a}(N)\to 0\).
**Definition 3.2**.: The right e-derived functors \(R^{n}T\) of a covariant functor \(T\) are defined on an \(R\)-module \(A\) by \((R^{n}T)(A)=H^{n}(TE^{A})=\frac{ker(Td^{n})}{Im(Td^{n-1})}\), where \(E:0\to A\to E^{0}\stackrel{{ d^{0}}}{{\rightarrow}}E^{1} \stackrel{{ d^{1}}}{{\rightarrow}}\dots\) is an e-injective resolution of \(A\) and \(E^{A}\) is its deleted \(e\)-injective resolution.
Define the \(n_{th}\) \(e\)-cohomology module \({}_{e}H^{n}_{a}(M)\) of \(M\) with respect to an ideal \(\mathbf{a}\) as the \(n_{th}\) right e-derived functor of the torsion functor \(\Gamma_{a}(\,)\), that is, \({}_{e}H^{n}_{a}(M)=R^{n}\Gamma_{a}(M)=\frac{ker(\Gamma_{a}(d^{n}))}{Im(\Gamma_{a}(d^{n-1}))}\). To calculate \({}_{e}H^{n}_{a}(M)\) with an e-injective resolution \(0\to M\stackrel{{\alpha}}{{\rightarrow}}I^{0}\stackrel{{ d^{0}}}{{\rightarrow}}I^{1}\rightarrow\dots\to I^{n}\stackrel{{ d^{n}}}{{\rightarrow}}I^{n+1}\rightarrow\dots\) of an \(R\)-module \(M\), we do the following: apply the functor \(\Gamma_{a}\) to the deleted complex of \(M\), \(I^{M}:0\stackrel{{ d^{-1}}}{{\rightarrow}}I^{0}\stackrel{{ d^{0}}}{{\rightarrow}}I^{1}\rightarrow\dots\to I^{n}\stackrel{{ d^{n}}}{{\rightarrow}}\dots\), obtaining \(0\rightarrow\Gamma_{a}(I^{0})\stackrel{{\Gamma_{a}(d^{0})}}{{\rightarrow}}\Gamma_{a}(I^{1})\rightarrow\dots\rightarrow\Gamma_{a}(I^{n})\rightarrow\dots\); then take the \(n_{th}\) cohomology module of this complex. The result is \({}_{e}H^{n}_{a}(M)=\frac{Ker(\Gamma_{a}(d^{n}))}{Im(\Gamma_{a}(d^{n-1}))}=(R^{n}T)(M)=R^{n}(\Gamma_{a}(M))=H^{n}(TE^{M})\).
**Theorem 3.3**.: _The right e-derived functors for \(\Gamma_{a}\) are additive functors for every integer \(n\)._
Proof.: Let \(f:M\to M^{\prime}\) be a morphism. Then, by Theorem 2.13, there is a chain map \(\breve{f}:E^{M}\to E^{M^{\prime}}\) over \(f\), where \(E^{M}\) and \(E^{M^{\prime}}\) are deleted e-injective resolutions of \(M\) and \(M^{\prime}\) respectively. Then \(\Gamma_{a}\breve{f}:\Gamma_{a}E^{M}\rightarrow\Gamma_{a}E^{M^{\prime}}\) is also a chain map, and so there is a well-defined map \({}_{e}H^{n}_{a}(f)=(R^{n}\Gamma_{a})f:H^{n}(\Gamma_{a}E^{M})\to H^{n}(\Gamma_{a}E^{M^{\prime}})\) given by \({}_{e}H^{n}_{a}(f)=(R^{n}\Gamma_{a})f=H^{n}(\Gamma_{a}\breve{f})\). Moreover, \({}_{e}H^{n}_{a}=R^{n}\Gamma_{a}\) is an additive covariant functor, because \({}_{e}H^{n}_{a}(f+g)=(R^{n}\Gamma_{a})(f+g)=H^{n}(\Gamma_{a}(f+g))=H^{n}(\Gamma_{a}(f)+\Gamma_{a}(g))=H^{n}(\Gamma_{a}(f))+H^{n}(\Gamma_{a}(g))={}_{e}H^{n}_{a}(f)+{}_{e}H^{n}_{a}(g)\). Therefore the right e-derived functors are additive functors for every integer \(n\).
**Proposition 3.4**.: _Let \(A\) be any \(R\)- module. Then \({}_{e}H^{n}_{a}(A)=(R^{n}\Gamma_{a})A=0\), for all negative integers \(n\)._
Proof.: Let \(E:0\to A\to E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\to\dots\) be an e-injective resolution of \(A\). Then the deleted complex of \(A\) is \(E^{A}:0\to E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\to\dots\). After applying \(\Gamma_{a}\) to the deleted complex, we get \(\Gamma_{a}E^{n}=0\) for all negative integers \(n\), because the \(n_{th}\) term of \(E^{A}\) is zero when \(n\) is negative. Hence \({}_{e}H_{a}^{n}(A)=R^{n}\Gamma_{a}(A)=0\) for all negative integers \(n\).
**Corollary 3.5**.: _Let \(E\) be an e-injective \(R\)-module. Then \({}_{e}H_{a}^{n}(E)=(R^{n}\Gamma_{a})(E)=0\), for all \(n\geq 1\)._
Proof.: Since \(E\) is an e-injective module, the e-injective resolution of \(E\) is \(0\to E\stackrel{{ 1_{E}}}{{\to}}E\to 0\). The corresponding deleted e-injective resolution \(E^{E}\) is the complex \(0\to E\to 0\). Hence, the \(n_{th}\) term of \(\Gamma_{a}(E^{E})\) is \(0\) for all \(n\geq 1\) and so \({}_{e}H_{a}^{n}(E)=(R^{n}\Gamma_{a})(E)=H^{n}(\Gamma_{a}E^{E})=0\) for all \(n\geq 1\).
**Theorem 3.6**.: _Let \(0\to L\stackrel{{ f}}{{\to}}M\stackrel{{ g}}{{\to}}N\to 0\) be an e-exact sequence of \(R\)-modules and \(R\)-homomorphisms with \(N\) torsion-free. Then, for each \(i\in\mathbb{N}_{0}\), there is a connecting homomorphism \({}_{e}H_{a}^{i}(N)\stackrel{{\sigma}}{{\to}}{}_{e}H_{a}^{i+1}(L)\), and these connecting homomorphisms make the resulting long sequence \(0\to{}_{e}H_{a}^{0}(L)\stackrel{{{}_{e}H_{a}^{0}(f)}}{{\to}}{}_{e}H_{a}^{0}(M)\stackrel{{{}_{e}H_{a}^{0}(g)}}{{\to}}{}_{e}H_{a}^{0}(N)\to{}_{e}H_{a}^{1}(L)\to\dots\to{}_{e}H_{a}^{i}(L)\stackrel{{{}_{e}H_{a}^{i}(f)}}{{\to}}{}_{e}H_{a}^{i}(M)\to{}_{e}H_{a}^{i}(N)\stackrel{{\sigma^{*}}}{{\to}}{}_{e}H_{a}^{i+1}(L)\to\dots\) \(e\)-exact._
Proof.: By applying \(\Gamma_{a}\) to an e-exact sequence \(0\to L\stackrel{{ f}}{{\to}}M\stackrel{{ g}}{{\to}}N\to 0\) we obtain an \(e\)-exact sequence \(0\to\Gamma_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\to}}\Gamma_{a}(M) \stackrel{{\Gamma_{a}(g)}}{{\to}}\Gamma_{a}(N)\to 0\) by Lemma 3.1 and by [2, Theorem 3.1] there is a connecting homomorphism \(\sigma_{n}:H^{n}(\Gamma_{a}(N))\to H^{n+1}(\Gamma_{a}(L))\) and by [2, Theorem 3.2] there is a long \(e\)-exact sequence of \(R\)-modules and \(R\)-morphisms \(0\to{}_{e}H_{a}^{0}(L)\stackrel{{\epsilon H_{a}^{0}(f)}}{{\to}}{ {}_{e}}H_{a}^{0}(M)\stackrel{{\epsilon H_{a}^{0}(g)}}{{\to}}{{}_{ e}}H_{a}^{0}(N)\to{}_{e}H_{a}^{1}(L)\to\dots\to{}_{e}H_{a}^{i}(L)\stackrel{{ \epsilon H_{a}^{i}(f)}}{{\to}}{{}_{e}}H_{a}^{i}(M)\to{}_{e}H_{a}^{i}(N) \stackrel{{\sigma^{*}}}{{\to}}{{}_{e}}H_{a}^{i+1}(L)\to\dots\)
**Theorem 3.7**.: _For any torsion-free \(R\)-module \(M\), \({}_{e}H_{a}^{0}(M)\) is naturally equivalent to \(r\Gamma_{a}(M)\) for some \(r\neq 0\in R\)._
Proof.: Let \(E:0\to A\stackrel{{\sigma}}{{\to}}E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\stackrel{{ d^{1}}}{{\to}}E^{2}\to\dots\) be an e-injective resolution of an \(R\)-module \(A\) and \(E^{A}:0\to E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\stackrel{{ d^{1}}}{{\to}}E^{2}\to\dots\) the deleted \(e\)-injective resolution of \(A\). Applying \(\Gamma_{a}(\,)\) to the deleted \(e\)-injective resolution we get \(0\to\Gamma_{a}(E^{0})\stackrel{{ d^{0^{*}}}}{{\to}}\Gamma_{a}(E^{1})\stackrel{{ d^{1^{*}}}}{{\to}}\Gamma_{a}(E^{2})\to\dots\). Then, by the definition of e-cohomology, we have \({}_{e}H_{a}^{0}(A)=H^{0}(\Gamma_{a}(E^{A}))=Kerd^{0*}\). On the other hand, the left e-exactness of \(\Gamma_{a}(\,)\) gives an e-exact sequence \(0\to\Gamma_{a}(A)\stackrel{{\sigma^{*}}}{{\to}}\Gamma_{a}(E^{0})\stackrel{{ d^{0^{*}}}}{{\to}}\Gamma_{a}(E^{1})\to\dots\).
We define \(\sigma^{*}:\Gamma_{a}(A)\to Kerd^{0*}\). Since \(Im\sigma\leqslant_{e}Kerd^{0}\), \(\sigma^{*}\) is well-defined, and since \(\Gamma_{a}(\,)\) is a left e-exact functor, \(\sigma^{*}\) is monic. Now we want to prove that \(\sigma^{*}\) is epic. Let \(x\in Kerd^{0*}\). Then \(d^{0*}(x)=d^{0}(x)=0\), so \(x\in Kerd^{0}\). By the e-exactness of the e-injective resolution we have \(Im\sigma\leqslant_{e}Kerd^{0}\), so there exist \(a^{\prime}\in A\) and \(0\neq r\in R\) such that \(\sigma(a^{\prime})=rx\neq 0\). Now define \(f:r\Gamma_{a}(A)\to A\) by \(f(ry)=a^{\prime}\). Let \(y_{1},y_{2}\in\Gamma_{a}(A)\) with \(ry_{1}=ry_{2}\). Then \(\sigma(a^{\prime}_{1})=\sigma(a^{\prime}_{2})\), and since \(\sigma\) is monic we have \(a^{\prime}_{1}=a^{\prime}_{2}\), so \(f\) is well-defined. Now we have \(rx=\sigma(a^{\prime})=\sigma(rf(y))=r\sigma(f(y))=r\sigma^{*}(f(y))\), which is equivalent to \(\sigma^{*}(f(y))=x\). Hence \(\sigma^{*}\) is an isomorphism, and since \({}_{e}H^{0}_{a}(A)=Kerd^{0*}\), \({}_{e}H^{0}_{a}(\,)\) is isomorphic to \(r\Gamma_{a}(\,)\) for some nonzero \(r\) in \(R\).
**Corollary 3.8**.: _If \(0\to L\xrightarrow{f}M\xrightarrow{g}N\to 0\) is an e-exact sequence of \(R\)-modules where \(N\) is torsion-free, then there is a long e-exact sequence \(0\to{}_{e}H^{0}_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\to}}{}_{e}H^{0}_{a}(M)\stackrel{{\Gamma_{a}(g)}}{{\to}}{}_{e}H^{0}_{a}(N)\stackrel{{\sigma}}{{\to}}{}_{e}H^{1}_{a}(L)\stackrel{{{}_{e}H^{1}_{a}(f)}}{{\to}}{}_{e}H^{1}_{a}(M)\stackrel{{{}_{e}H^{1}_{a}(g)}}{{\to}}\dots\). In addition, if \(L\), \(M\) and \(N\) are torsion-free modules, then there is a long e-exact sequence \(0\to r_{1}\Gamma_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\to}}r_{2}\Gamma_{a}(M)\stackrel{{\Gamma_{a}(g)}}{{\to}}r_{3}\Gamma_{a}(N)\stackrel{{\sigma}}{{\to}}{}_{e}H^{1}_{a}(L)\stackrel{{{}_{e}H^{1}_{a}(f)}}{{\to}}{}_{e}H^{1}_{a}(M)\stackrel{{{}_{e}H^{1}_{a}(g)}}{{\to}}\dots\) for some nonzero \(r_{1},r_{2},r_{3}\) in \(R\)._
Proof.: Directly follows from Theorem 3.6 and Theorem 3.7.
**Theorem 3.9**.: _Given an e-commutative diagram of R-modules having e-exact rows where \(A^{\prime\prime}\) and \(C^{\prime\prime}\) are torsion-free:_
_Then there is an e-commutative diagram with e-exact rows,_
_Proof._: By Proposition 2.14, we have an e-exact sequence of deleted complexes \(0\to E^{A^{\prime}}\to E^{A}\to E^{A^{\prime\prime}}\to 0\). If \(T=\Gamma_{a}\), then \(0\to TE^{A^{\prime}}\to TE^{A}\to TE^{A^{\prime\prime}}\to 0\) is still e-exact by Lemma 3.1. By [2, Remark 3.3] there is an \(e\)-commutative diagram of \(R\)-modules and \(R\)-morphisms, and the rest is achieved by applying the definition of \(e\)-cohomology \({}_{e}H^{n}_{a}(\,)=H^{n}(\Gamma_{a}(E^{A}))\).
**Corollary 3.10**.: _Let \(M\) be an \(a\)-torsion \(R\)-module. Then there exists an \(e\)-injective resolution of \(M\) in which each term is an \(a\)-torsion \(R\)-module._
In local cohomology, every \(a\)-torsion \(R\)-module \(M\) has zero \(i_{th}\) local cohomology modules, that is, \(H^{i}_{a}(M)=0\) for all \(i>0\), while in \(e\)-cohomology this is not true in general. To be more precise, we present the following example.
**Example 3.11**.: Consider the \(e\)-injective resolution \(0\to\frac{\mathbb{Z}}{2\mathbb{Z}}\xrightarrow{f}\frac{\mathbb{Z}}{8\mathbb{Z}}\xrightarrow{g}\frac{\mathbb{Z}}{16\mathbb{Z}}\to 0\) of the \(\mathbb{Z}\)-module \(\frac{\mathbb{Z}}{2\mathbb{Z}}\), where \(f(1+2\mathbb{Z})=4+8\mathbb{Z}\) and \(g(n+8\mathbb{Z})=8n+16\mathbb{Z}\). Each term of the resolution is a \(2\mathbb{Z}\)-torsion module, while the \(e\)-cohomology module \({}_{e}H^{1}_{2\mathbb{Z}}(\frac{\mathbb{Z}}{2\mathbb{Z}})=\frac{\mathbb{Z}}{8\mathbb{Z}}\) is non-zero.
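The claimed value of \({}_{e}H^{1}_{2\mathbb{Z}}(\frac{\mathbb{Z}}{2\mathbb{Z}})\) can be verified directly; the following short computation is ours, using only Definition 3.2. Since every element of \(\frac{\mathbb{Z}}{8\mathbb{Z}}\) and \(\frac{\mathbb{Z}}{16\mathbb{Z}}\) is annihilated by a power of \(2\mathbb{Z}\), applying \(\Gamma_{2\mathbb{Z}}(\,)\) to the deleted complex leaves it unchanged:
\[0\longrightarrow\frac{\mathbb{Z}}{8\mathbb{Z}}\stackrel{{ g}}{{\longrightarrow}}\frac{\mathbb{Z}}{16\mathbb{Z}}\longrightarrow 0,\qquad g(n+8\mathbb{Z})=8n+16\mathbb{Z}.\]
Hence
\[{}_{e}H^{1}_{2\mathbb{Z}}\Big{(}\frac{\mathbb{Z}}{2\mathbb{Z}}\Big{)}=\frac{\mathbb{Z}/16\mathbb{Z}}{Im\,g}=\frac{\mathbb{Z}/16\mathbb{Z}}{\{0+16\mathbb{Z},\,8+16\mathbb{Z}\}}\cong\frac{\mathbb{Z}}{8\mathbb{Z}}\neq 0.\]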
At the end of the paper, and as future work, one can investigate the \(e\)-cohomology dimension and its connection with the cohomology dimension.
**Acknowledgments**: We would like to thank the referees for their thoughtful comments and efforts towards improving the manuscript.
| In this article, we give a generalization of injective modules using the $e$-exact sequences introduced by Akray [1], call them $e$-injective modules, and investigate their properties. We reprove the Baer criterion and the comparison theorem of homology using $e$-injective modules and $e$-injective resolutions. Furthermore, we apply $e$-injective modules to local cohomology to construct a new form of cohomology modules, which we call essential ($e$-)cohomology modules. We show that the torsion functor $\Gamma_a ( - )$ is $e$-exact on torsion-free modules. To investigate the relationship of $e$-cohomology with classical cohomology, this article examines the $ |
2309.11994 | Enhancing SAEAs with Unevaluated Solutions: A Case Study of Relation
Model for Expensive Optimization | Surrogate-assisted evolutionary algorithms (SAEAs) hold significant
importance in resolving expensive optimization problems (EOPs). Extensive
efforts have been devoted to improving the efficacy of SAEAs through the
development of proficient model-assisted selection methods. However, generating
high-quality solutions is a prerequisite for selection. The fundamental
paradigm of evaluating a limited number of solutions in each generation within
SAEAs reduces the variance of adjacent populations, thus impacting the quality
of offspring solutions. This is a frequently encountered issue, yet it has not
gained widespread attention. This paper presents a framework using unevaluated
solutions to enhance the efficiency of SAEAs. The surrogate model is employed
to identify high-quality solutions for direct generation of new solutions
without evaluation. To ensure dependable selection, we have introduced two
tailored relation models for the selection of the optimal solution and the
unevaluated population. A comprehensive experimental analysis is performed on
two test suites, which showcases the superiority of the relation model over
regression and classification models in the selection phase. Furthermore, the
surrogate-selected unevaluated solutions with high potential have been shown to
significantly enhance the efficiency of the algorithm. | Hao Hao, Xiaoqun Zhang, Aimin Zhou | 2023-09-21T12:09:55 | http://arxiv.org/abs/2309.11994v2 | # Enhancing SAEAs with Unevaluated Solutions: A Case Study of Relation Model for Expensive Optimization
###### Abstract
Surrogate-assisted evolutionary algorithms (SAEAs) hold significant importance in resolving expensive optimization problems (EOPs). Extensive efforts have been devoted to improving the efficacy of SAEAs through the development of proficient model-assisted selection methods. However, generating high-quality solutions is a prerequisite for selection. The fundamental paradigm of evaluating a limited number of solutions in each generation within SAEAs reduces the variance of adjacent populations, thus impacting the quality of offspring solutions. This is a frequently encountered issue, yet it has not gained widespread attention. This paper presents a framework using unevaluated solutions to enhance the efficiency of SAEAs. The surrogate model is employed to identify high-quality solutions for direct generation of new solutions without evaluation. To ensure dependable selection, we have introduced two tailored relation models for the selection of the optimal solution and the unevaluated population. A comprehensive experimental analysis is performed on two test suites, which showcases the superiority of the relation model over regression and classification models in the selection phase. Furthermore, the surrogate-selected unevaluated solutions with high potential have been shown to significantly enhance the efficiency of the algorithm.
Key words: expensive optimization, unevaluated solutions, relation model, surrogate-assisted evolutionary algorithm
Hao Hao, Xiaoqun Zhang, Aimin Zhou
Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai Frontiers Science Center of Molecule Intelligent Synthesis, Shanghai 200062, China; School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
## 1 Introduction
Valued for their global search capability and adaptability, evolutionary algorithms (EAs) are extensively utilized in various fields [1]. Despite the common presumption of a method to assess each solution's fitness, expensive optimization problems often present significant challenges due to the extensive computational resources or costly experiments they require [2, 3]. Addressing such practical limitations, surrogate-assisted evolutionary algorithms (SAEAs) have gained prominence. By integrating the robust global search of EAs with cost-effective surrogate model estimation, SAEAs have emerged as a mainstream method for solving these resource-intensive problems [4]. The SAEA framework is depicted in Figure 1, with 'reproduction operators' and 'surrogate-assisted selection operators' being the pivots around which the framework revolves. It is the duty of the reproduction operators to generate innovative trial solutions, thus expanding the exploration of the search space. Concurrently, the surrogate-assisted selection operators strategically select prospective high-quality solutions for real fitness evaluation. These two operators alternate execution to drive the population toward the optimal solution region.
An important question is how the aforementioned two modules, 'reproduction operators' and 'surrogate-assisted selection operators', cooperate in SAEAs. Given the limited evaluation budget, the choice of which solutions, and how many, to evaluate with the real function will impact SAEAs' preferences in terms of exploration and exploitation. Simultaneously, the decision of which solutions to use as parent solutions for generating new ones also dictates the SAEAs' search direction. The answer to this issue depends on the algorithm's
selection strategy, which can also be referred to as the model management strategy [2, 5]. There are two typical methods, 'select \(N\)' and 'select 1'. In 'select \(N\)' [6, 7], a large number of trial solutions are generated using the reproduction operator, exceeding the population size (\(N\)). Then, a surrogate model is utilized to select \(N\) solutions as offspring solutions for real evaluation. This methodology has the benefit of augmenting the quantity of superior solutions in each generation and allowing them entry into the following generation to promote population mobility. However, it aggravates the depletion of the real function evaluation budget. On the contrary, in the 'select 1' method [8, 9], the benefit is a noteworthy reduction in the real evaluation budget. However, ideally, only one solution in the subsequent generation's population will be altered. As a result, there is a possibility that the distribution of the population may not be substantially enriched, and newly generated solutions may remain confined to the current search area (in Section 2.4, a simple visualization experiment will confirm this). The acquisition function is used to balance the search preferences and improve the diversity of the population to a certain extent. However, its ability to improve the current population distribution is limited. Therefore, when only one solution is sampled, it is difficult for the new solutions in the next generation to escape the current local optimum.
The aforementioned 'select \(N\)' and 'select 1' strategies both present unique advantages and challenges. This prompts the question: Can we devise a simple method that amalgamates the strengths of both 'select \(N\)' and 'select 1' strategies? A method where, in the current population, the solution deemed best by the surrogate model is evaluated with the real function, and the external archive and surrogate model are updated accordingly. At the same time, certain high-quality solutions identified by the model, without real evaluations, are chosen to directly contribute to the generation of solutions for the following iteration. Even though these solutions may not necessarily be optimal, their potential to surpass some of the parent solutions in quality is plausible. Implementing such a method would not escalate the algorithm's evaluation cost, but could augment the population's diversity and accelerate the algorithm's progression towards the optimal region.
The successful implementation of the aforementioned proposal is contingent upon a pivotal prerequisite of dependable prediction results from surrogate models. A variety of regression [10, 11] and classification [12, 13] models can be employed to ascertain solution quality [7]. Despite the significant contributions of existing models, our goal in this paper is to develop surrogate models that are better aligned with the specific needs of the problem at hand. Considering the accomplishments of widely-used regression and classification models, we believe there's still room to create even more reliable surrogate models. To that end, we introduce the relation model, a new surrogate model variant that we previously proposed [7]. The relation model diverges from traditional regression and classification models in its learning objective: it doesn't target the quality of a single solution (as in regression-based models) or the category of a solution (as in classification-based models), but rather the superiority relationship between two solutions. The relation model exploits the comparative nature of evolutionary algorithms [14] and has demonstrated remarkable performance in single-objective [7] and multi-objective problems [15, 16, 17, 18, 19].
In this study, we strive to customize the construction strategy of the relation model to fulfill the framework's demand for model selection accuracy amidst the requirement for potential quality solutions. Therefore, we propose a dual relation models-assisted single-objective optimization algorithm (DRSO) and design two methods for constructing relation models. These methods respectively select the optimal solution (\(\mathcal{Q}_{best}\)) and high-quality unevaluated solutions (\(\mathcal{P}_{u}\)). We employ the distribution estimation algorithm (EDA) to study the population's distribution information and generate offspring solutions. While the strategy of utilizing unevaluated solutions has been implemented for multi-objective optimization [20], our current work specifically focuses on designing a relation model for the selection of unevaluated solutions in single-objective optimization, instead of using a classifier. The main contributions of this paper can be summarized as follows:
* Illumination of the issue of offspring quality degradation in SAEAs when only a single offspring per generation is selected. In response, we propose a simple and universal method fueled by unevaluated solutions.
Figure 1: Flowchart of surrogate-assisted evolutionary algorithms.
* Proposal of two methods for constructing relation models, known as the fitness-based and category-based criteria. These methods leverage data relationships to construct surrogate models.
* Introduction of a novel strategy, based on the EDA, for generating solutions by integrating evaluated and unevaluated solutions. The efficacy of this novel algorithm is validated on two test suites, highlighting both the effectiveness of the relation model and the significance of incorporating unevaluated solutions.
The rest of the article unfolds as follows. Section 2 presents some preliminaries. Section 3 outlines the unevaluated solutions driven SAEAs framework, covering the construction of the relation model and the generation of trial solutions. Section 4 showcases an empirical evaluation of the proposed method and compares it with other methods across two test suites. Finally, Section 5 provides a summary of the paper and explores potential directions for future research.
## 2 Preliminaries
This section provides the preliminary knowledge related to this work. Section 2.1 presents the basic concepts of EOPs. Section 2.2 introduces different strategies for offspring selection. Section 2.3 provides an overview of surrogate models, particularly focusing on relation models. Section 2.4 discusses the impact of population variance on the efficiency of SAEAs.
### Expensive optimization problems
An unconstrained minimization expensive optimization problem can be formulated as follows:
\[\min_{\mathbf{x}\in\Omega}\ f(\mathbf{x}) \tag{1}\]
where \(\mathbf{x}=(x_{1},\ldots,x_{n})^{T}\) is a decision variable vector, \(\Omega\in R^{n}\) defined the feasible region of the search space. Given that \(f:R^{n}\to R\) is the objective function, which is essentially a black-box due to the difficulty in tracking its internal workings, optimization problems in real-world applications that involve \(f(\cdot)\) can be quite costly. In fact, the lack of a closed-form objective function and the expensive nature of evaluating \(f(\cdot)\) pose significant challenges to both numerical and heuristic optimization techniques that are traditionally employed.
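As a concrete illustration of this setting, the following minimal Python snippet (our own sketch, not part of the original paper) wraps a black-box objective with an evaluation counter so that a hard budget on calls to \(f(\cdot)\) can be enforced; a cheap Ellipsoid-style test function, whose exact weighting here is only an assumption for illustration, stands in for a genuinely expensive objective.

```python
import numpy as np

class ExpensiveProblem:
    """Black-box objective with a hard evaluation budget (illustrative only)."""

    def __init__(self, fe_max, n=20):
        self.fe_max = fe_max   # maximum number of real function evaluations
        self.fe_used = 0       # evaluations consumed so far
        self.n = n             # number of decision variables

    def evaluate(self, x):
        """One real (expensive) evaluation of f; raises once the budget is spent."""
        if self.fe_used >= self.fe_max:
            raise RuntimeError("real evaluation budget exhausted")
        self.fe_used += 1
        x = np.asarray(x, dtype=float)
        # Ellipsoid-style test function sum_i i * x_i^2, standing in for f(.)
        return float(np.sum(np.arange(1, x.size + 1) * x**2))

problem = ExpensiveProblem(fe_max=500, n=20)
print(problem.evaluate(np.ones(20)))   # consumes 1 of the 500 evaluations
```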
### Offspring Selection Methods
The purpose of offspring selection is to guide population movement toward the optimal regions while ensuring a certain level of diversity in the distribution. Depending on the offspring selection strategy, representative works in SAEAs can be categorized into three groups as follows:
\(\bullet\) select \(N\): This strategy is employed during the iterative process of several algorithms such as BCPS [6], FCPS [21], RCPS [7] and SAMFEO [22]. With the use of the reproduction operator, it generates a significant number of trial solutions surpassing the population size (\(N\)). Following this, a surrogate model is applied to select \(N\) solutions for real evaluation, creating the offspring solutions.
\(\bullet\) select 1: In every generation, only the top solution is chosen for real function evaluation and preserved in an archive. Acquisition functions are employed to enhance the exploratory capability of the algorithm (their standard forms are recalled at the end of this subsection). Specifically, GPEME [8] and SADE-Sammon [9] utilize the lower confidence bound (LCB) [23] to guide the search, EGO [24] adopts the expected improvement (EI) method [25], and SA-EDA [26] integrates multiple acquisition strategies using the GP-Hedge method [27] to enhance the robustness of the selection.
\(\bullet\) others: Customized approaches have been proposed in SAMSO [28] and SACOSO [29], where multiple particle swarms are utilized to increase diversity through mutual interactions between swarms. In LLSO [30] and DFC-MOEA [20], a hierarchical strategy is employed for solution selection. In addition, LLSO enhances population diversity by introducing random solutions, while DFC-MOEA selects solutions with medium membership degrees using a classifier.
Each of the aforementioned methods has its own advantages, with the core consideration being the balance of interests under a limited computation budget of EOPs.
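For reference, the two acquisition criteria named in the 'select 1' entry above have well-known standard forms; we recall them here from the general literature [23, 25], not from this paper. For a surrogate with predicted mean \(\hat{\mu}(\mathbf{x})\) and standard deviation \(\hat{\sigma}(\mathbf{x})\), and current best evaluated value \(f_{\min}\) (minimization),
\[\mathrm{LCB}(\mathbf{x})=\hat{\mu}(\mathbf{x})-w\,\hat{\sigma}(\mathbf{x}),\]
where \(w>0\) trades exploitation against exploration, and
\[\mathrm{EI}(\mathbf{x})=\bigl(f_{\min}-\hat{\mu}(\mathbf{x})\bigr)\Phi(z)+\hat{\sigma}(\mathbf{x})\,\phi(z),\qquad z=\frac{f_{\min}-\hat{\mu}(\mathbf{x})}{\hat{\sigma}(\mathbf{x})},\]
with \(\Phi\) and \(\phi\) the standard normal CDF and PDF. A candidate minimizing LCB (or maximizing EI) is the single solution chosen for real evaluation.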
### Surrogate model
In SAEAs, surrogate models typically fall into two main categories [7]: regression and classification models. In regression-based SAEAs, the original function is replaced with a curve fitting the data points' distribution. Examples of such models include polynomial regression [31], radial basis function (RBF) networks [10], and Gaussian processes (GPs) [11]. Classification-based SAEAs, on the other hand, label solutions based on their quality, using models such as support vector machines (SVM) [12], artificial neural networks (ANN) [13], and fuzzy K-nearest neighbor (KNN) [32].
A newer category in SAEAs is relation learning [7, 15, 16, 17, 18], where the model is trained on the relationships between solutions, as opposed to using a single solution in regression or classification-based SAEAs. This approach shows promise in single-objective optimization, as it leverages the superiority and inferiority relationships between solutions for pre-selection operations on offspring solutions, resulting in improved performance [7]. In multi-objective optimization, methods like REMO [15] and CREMO [17] use a penalty-based boundary intersection (PBI) [33] approach to categorize solutions in the multi-objective space. A relation dataset is constructed based on the belonging relationship between samples, and a neural network is trained to learn the sample features. This process has proven effective in creating reliable surrogates for both continuous and discrete optimization problems. Methods like \(\theta\)-DEA-DP [18] directly apply the dominance relationship as the relationship definition for solutions, focusing on the dominance relationship learning and prediction.
Previous studies have demonstrated the advantages of using relation models in SAEAs. The construction of the relation model can generally be divided into three steps: data preparation, model training, and model usage. In data preparation, a certain criterion is used to construct relation samples \(\mathcal{D}=\{(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle,l)\,|\,\mathbf{x}_{i},\mathbf{x}_{j}\in\mathcal{P}\}\), where \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) is a feature vector composed of each pair of solutions, and \(l\) is the label of \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\). Machine learning methods are then used to learn from the relation data, and a prediction method based on the relation model is designed to select solutions. In this work, we address the specific needs of selecting the best solution (\(\mathcal{Q}_{best}\)) and high-quality unevaluated solutions (\(\mathcal{P}_{u}\)) and propose new methods for constructing relation models.
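To make these three steps concrete, here is a minimal Python sketch of the generic recipe; the function names and the choice of a random-forest classifier are ours, for illustration only, and the paper's own criteria are introduced in Section 3. Pairs of evaluated solutions are labeled by which member has the better fitness, a classifier is fitted to the pairs, and a candidate is scored by how often it is predicted to beat the current population.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_relation_dataset(X, y):
    """Feature = concatenated pair <x_i, x_j>; label = +1 if x_i is better (minimization)."""
    feats, labels = [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j:
                feats.append(np.concatenate([X[i], X[j]]))
                labels.append(1 if y[i] < y[j] else -1)
    return np.array(feats), np.array(labels)

def train_relation_model(X, y):
    feats, labels = build_relation_dataset(X, y)
    return RandomForestClassifier(n_estimators=100).fit(feats, labels)

def relation_score(model, candidate, population):
    """Fraction of population members the candidate is predicted to beat."""
    pairs = np.array([np.concatenate([candidate, p]) for p in population])
    return float(np.mean(model.predict(pairs) == 1))
```

Unevaluated trial solutions can then be ranked by `relation_score` without spending any real evaluations, which is the role the relation model plays in the selection phase.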
### Impact of variance among adjacent generations' populations
When confronted with the EOP in Eq. (1), it is tempting to evaluate only the best solution in each generation in order to conserve the evaluation budget. However, this paradigm reduces the variance between successive populations, so the new solutions generated in the next iteration remain confined to the current search region, leading to low search effectiveness. Figure 2 visualizes five successive generations of search on a 2-dimensional Ellipsoid function [8] using the genetic algorithm (GA) [34]. The first row shows the selection of \(N\) solutions per generation, whereas the second row illustrates the selection of only the optimal solution for the next generation. The outcomes indicate that using a single solution to update the population lowers the search efficiency of the original GA: selecting only the best solution causes a loss of diversity in the population and hinders exploration of the search space.
Additionally, we carried out 30 independent runs of GA, differential evolution (DE) [35], and EDA [36] (three fundamental EAs) on the LZG test suite [8]. According to the experimental results shown in Table 1 and analyzed using the Wilcoxon rank sum test [37], selecting a single solution degrades the performance of all three algorithms. This shows that the performance degradation caused by selecting only the optimal solution is common across evolutionary algorithms.
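The following is a minimal, self-contained sketch of this select-N versus select-1 comparison on the Ellipsoid function; the blend crossover and Gaussian mutation are simplified stand-ins, not the exact GA configuration behind Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def ellipsoid(X):
    return np.sum(np.arange(1, X.shape[1] + 1) * X**2, axis=1)

def run_ga(select_n, n=20, N=30, gens=50):
    pop = rng.uniform(-5.12, 5.12, (N, n))
    fit = ellipsoid(pop)
    for _ in range(gens):
        # simplified blend crossover + Gaussian mutation (stand-in operators)
        parents = pop[rng.integers(0, N, (N, 2))]
        w = rng.random((N, 1))
        off = w * parents[:, 0] + (1 - w) * parents[:, 1]
        off += rng.normal(0, 0.1, off.shape)
        off_fit = ellipsoid(off)
        if select_n:
            # 'select N': keep the best N of parents + offspring
            merged = np.vstack([pop, off])
            merged_fit = np.concatenate([fit, off_fit])
            keep = np.argsort(merged_fit)[:N]
            pop, fit = merged[keep], merged_fit[keep]
        else:
            # 'select 1': only the single best offspring may enter the population
            b = np.argmin(off_fit)
            worst = np.argmax(fit)
            if off_fit[b] < fit[worst]:
                pop[worst], fit[worst] = off[b], off_fit[b]
    return fit.min()

print('select N:', run_ga(True), '  select 1:', run_ga(False))
```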
The aforementioned toy studies demonstrate that a decrease in inter-population variance can lead to a decline in the performance of some fundamental algorithm operators. Therefore, adding some unevaluated
\begin{table}
\begin{tabular}{c c c c c c} \hline Alg & method & Ellipsoid & Rosenbrock & Ackley & Griewank \\ \hline GA & select N & 3.97e-01(1.37e-01) & 5.17e+01(2.68e+01) & 2.25e+00(3.57e-01) & 1.01e+00(1.79e-03) \\ select 1 & 1.45e+01(7.57e+00)(\(-\)) & 9.27e+01(4.04e+01)(\(-\)) & 8.33e+00(2.84e+00)(\(-\)) & 1.22e+00(1.02e-01)(\(-\)) \\ \hline \multirow{2}{*}{DE} & select N & 2.64e-01(8.02e-02) & 3.36e+01(1.51e+01) & 2.60e+00(4.07e-01) & 1.01e+00(2.39e-03) \\ & select 1 & 3.27e+01(2.50e-01)(\(-\)) & 1.29e+02(4.29e+01)(\(-\)) & 9.42e+001(1.51e+00)(\(-\)) & 1.57e+00(3.91e-01)(\(-\)) \\ \hline \multirow{2}{*}{EDA} & select N & 4.62e-02(7.29e-02) & 1.92e+01(3.08e+00) & 3.69e-01(1.64e-01) & 9.61e+01(1.27e-02) \\ & select 1 & 1.00e+01(2.82e+00)(\(-\)) & 5.72e+01(1.18e+01)(\(-\)) & 7.31e+00(5.81e-01)(\(-\)) & 1.21e+00(4.18e-02)(\(-\)) \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of mean and standard deviation results obtained by GA, DE, and EDA on the LZG test suite with \(n=20\).
solutions to supplement diversity can be a direct and simple way to improve the performance of SAEAs.
## 3 Proposed method
In this section, we begin by introducing a basic framework for surrogate-assisted selection and unevaluated solution driven reproduction. Then, we will present two innovative approaches for constructing relation models. Finally, we will provide a detailed explanation of the reproduction process within this framework.
### Main Framework
```
0:\(N\) (population size); \(FE_{max}\) (maximum number of FEs); \(\alpha\) (size of training data set).
0:\(\mathcal{A}_{best}\) (the optimum solution).
1:\(\mathcal{P}_{e}\leftarrow\mathbf{Initialization}(N)\). /* initialize population*/
2:\(\mathcal{A}\leftarrow\mathcal{P}_{e}\). /* update archive*/
3:\(fes\gets N\). /* update evaluation counter*/
4:\(\mathcal{P}_{u}\leftarrow\emptyset\). /* initialize an empty set*/
5:while\(fes\leqslant FE_{max}\)do
6:\(\mathcal{Q}\leftarrow\mathbf{Reproduction}(\mathcal{P}_{e},\mathcal{P}_{u},N)\). /* generate new solutions*/
7:\(\mathcal{M}\leftarrow\mathbf{Training}(\mathcal{A}_{1:\alpha})\). /* train surrogate model*/
8:\([\mathcal{Q}_{best},\mathcal{P}_{u}]\leftarrow\mathbf{SA\_selection}(\mathcal{Q},\mathcal{M})\). /* surrogate-assisted selection*/
9:\(\mathcal{A}\leftarrow\mathcal{A}\cup\mathbf{Evaluation}(\mathcal{Q}_{best})\). /* evaluate new solution and update archive*/
10:\(\mathcal{P}_{e}\leftarrow\mathcal{A}_{1:N}\). /* update population*/
11:\(fes\gets fes+1\). /* update evaluation counter*/
12:endwhile
```
**Algorithm 1** Main Framework
Algorithm 1 presents a basic framework put forward in this article, comprising surrogate-assisted selection and unevaluated solution driven reproduction. The specifics are succinctly summarized as follows.
\(\bullet\)**Initialization (lines 1-4)**: A set of \(N\) initial solutions are sampled from \(\Pi_{i=1}^{n}\left[a_{i},b_{i}\right]\) by means of the Latin hypercube sampling method (LHS) [38], with each of these solutions undergoing an evaluation by the real function and subsequently being stored in the archive \(\mathcal{A}\). The fitness evaluation count of these evaluations, denoted by the \(fes\), is updated accordingly. Eventually, an empty set \(\mathcal{P}_{u}\) needs to be initialized to store the unevaluated solutions selected by the surrogate model in the subsequent steps.
\(\bullet\)**Stop condition (line 5)**: The algorithm halts once the \(fes\) surpasses the designated maximum number of evaluations (\(FE_{max}\)).
\(\bullet\)**Generate new solutions (line 6)**: Based on the current evaluated population \(\mathcal{P}_{e}\) and unevaluated population \(\mathcal{P}_{u}\), an offspring population \(\mathcal{Q}\) containing \(N\) individuals is generated utilizing various heuristic operators, such as DE, GA, EDA, among others. In this study, an approach combining a variable-width histogram (VWH) model and local search will be employed to generate new solutions [36].
Figure 2: Distribution of the population during continuous evolution.
\(\bullet\)**Train surrogate model (line 7)**: The best \(\alpha\) solutions are selected from archive \(\mathcal{A}\) to train the surrogate models. In this work, two customized methods for constructing relation models will be provided.
\(\bullet\)**Surrogate-assisted selection (line 8)**: The surrogate model is utilized to evaluate the solutions in the offspring population \(\mathcal{Q}\), with the optimal solution being selected as \(\mathcal{Q}_{best}\). A portion of the high-quality solutions in \(\mathcal{Q}\) will be selected as unevaluated solutions and stored in \(\mathcal{P}_{u}\).
\(\bullet\)**Update archive (line 9)**: The solution \(\mathcal{Q}_{best}\) will be evaluated by the real objective function and saved in archive \(\mathcal{A}\).
\(\bullet\)**Select solution for next generation (line 10)**: \(N\) evaluated solutions are selected from the archive \(\mathcal{A}\) based on their objective function values to constitute the population \(\mathcal{P}_{e}\).
\(\bullet\)**Update the counter (line 11)**: Since only one solution, \(\mathcal{Q}_{best}\), undergoes real evaluation during each iteration, \(fes\) is incremented by one.
In order to facilitate the model-assisted selection (line 8), it is necessary to devise surrogate models that can accurately select the optimal solution \(\mathcal{Q}_{best}\) from \(\mathcal{Q}\), as well as identify a subset of potentially good solutions that have not been evaluated but meet a certain threshold to form \(\mathcal{P}_{u}\). Additionally, we need to design a method to generate offspring solutions using these unevaluated solutions. Therefore, in the following sections, we will provide a detailed description of the design of the surrogate model and the generation of new solutions.
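For readers who prefer code, the following is a compact sketch of Algorithm 1. The `reproduce`, `train_models`, and `sa_selection` bodies are placeholders for the components described in Sections 3.2 and 3.3; the trivial stand-ins below exist only to keep the sketch runnable and are not the paper's actual operators.

```python
import numpy as np

def drso_skeleton(f, lo, hi, N=50, fe_max=500, alpha=150,
                  rng=np.random.default_rng(0)):
    n = len(lo)
    P_e = lo + (hi - lo) * rng.random((N, n))        # line 1 (LHS replaced by uniform sampling)
    archive = [(x, f(x)) for x in P_e]               # line 2
    fes = N                                          # line 3
    P_u = np.empty((0, n))                           # line 4
    while fes <= fe_max:                             # line 5
        Q = reproduce(P_e, P_u, N, rng)              # line 6
        data = sorted(archive, key=lambda a: a[1])[:alpha]
        M = train_models(data)                       # line 7
        q_best, P_u = sa_selection(Q, M)             # line 8
        archive.append((q_best, f(q_best)))          # line 9
        archive.sort(key=lambda a: a[1])
        P_e = np.array([x for x, _ in archive[:N]])  # line 10
        fes += 1                                     # line 11
    return archive[0]

# trivial stand-ins so the skeleton runs; Sections 3.2-3.3 define the real ones
def reproduce(P_e, P_u, N, rng):
    P = np.vstack([P_e, P_u]) if len(P_u) else P_e
    return P[rng.integers(0, len(P), N)] + rng.normal(0, 0.05, (N, P.shape[1]))

def train_models(data):
    return None  # placeholder for the two relation models

def sa_selection(Q, M):
    idx = np.random.permutation(len(Q))  # placeholder: random pick
    return Q[idx[0]], Q[idx[1:len(Q) // 2]]
```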
### Relation model
This subsection proposes two relation-based methods for constructing surrogate models, which are referred to as the fitness-based criterion (C1) and the category-based criterion (C2), respectively, and are used for two specific applications. The C1 criterion is used for selecting \(\mathcal{Q}_{best}\), while the C2 criterion is used for selecting \(\mathcal{P}_{u}\). Each model consists of three components: data preparation, model training, and model usage. The following sections will provide a detailed description of the implementation details of each component.
#### 3.2.1 Data preparation
Data preparation refers to how to construct relation pairs from the original training data \(\mathcal{D}\). We have designed two data construction methods for C1 and C2 criteria.
```
0:\(\mathcal{D}=\{(\mathbf{x}_{1},f(\mathbf{x}_{1})),\cdots,(\mathbf{x}_{\alpha},f(\mathbf{x}_{\alpha}))\}\) (Training Data).
0:\(\mathcal{D}_{r}=\{(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle,l)\mid i,j\in[1,\alpha],l\in\{-1,+1\}\}\) (Relation Data).
1:\(\mathcal{D}_{r}\leftarrow\{(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle,l)\mid\mathbf{x}_{i},\mathbf{x}_{j}\in P,i\neq j\}\), where the label \(l\) is assigned as follows: \[l(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)=\begin{cases}+1,&f(\mathbf{x}_{i})<f(\mathbf{x}_{j})\\ -1,&f(\mathbf{x}_{i})\geqslant f(\mathbf{x}_{j})\end{cases}\]
```
**Algorithm 2** Data preparation in fitness criterion (C1)
\(\bullet\) Fitness-based criterion (C1): To determine the superiority or inferiority between any given pairs of relations \(\langle x_{i},x_{j}\rangle\), their corresponding fitness values (i.e., objective function values) are used as a pivotal criterion. This allows for the assignment of a label to each pair. The process is elaborated in Algorithm 2, which generates a labeled training dataset \(\mathcal{D}_{r}\) consisting of two classes. Here, \(\alpha\) denotes the total number of elements present in the dataset \(\mathcal{D}\).
\(\bullet\) Category-based criterion (C2): First, a threshold is set based on the distribution of fitness values in the current population. Then, according to the comparison between the solutions in \(\mathcal{D}\) and the threshold, they are classified into different categories (\(\mathbf{X}_{good}\) and \(\mathbf{X}_{bad}\)). Finally, labels are assigned to the relation pairs \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) based on the categories of the solutions that make up each pair. The specific details are shown in Algorithm 3; a sketch of the C2 preparation is given at the end of this subsection. In lines 1-2, based on the classification threshold \(t\), the top \(t\) fraction of solutions in the data set \(\mathcal{D}\), ranked by fitness from best to worst, are selected as \(\mathbf{X}_{good}\) samples, while the rest are assigned as \(\mathbf{X}_{bad}\) samples. In line 3, the relation pair \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) is assigned a label according to the categories to which its two solutions belong. Since \(t\) is not necessarily equal to 50%, the labels of the pairs in dataset \(\mathcal{D}_{r}\) may not be balanced. To address this, we further implement the balancing strategy (line 4) proposed in [15].
The label balancing strategy is described as follows. Let \(L(+1)\), \(L(-1)\), \(L(+0)\), and \(L(-0)\) represent the sets of pairs labeled as '\(+1\)', '\(-1\)', '\(+0\)' and '\(-0\)' respectively. The symbol \(|\cdot|\) denotes the cardinality of a set. It is apparent that \(|L(+1)|=|L(-1)|\), and \((|L(+0)|+|L(-0)|)>|L(+1)|\). In order to balance the training data sets, certain points from \(L(+0)\cup L(-0)\) must be removed. Let \(|L(+1)|=|L(-1)|=\theta\). There exist three situations.
* If \(|L(+0)|>0.5\theta\) and \(|L(-0)|>0.5\theta\), \(0.5\theta\) points are arbitrarily retained from both \(L(+0)\) and \(L(-0)\).
* If \(|L(+0)|>0.5\theta\) and \(|L(-0)|<0.5\theta\), \(L(-0)\) is retained, and \(\theta-|L(-0)|\) points are randomly selected from \(L(+0)\).
* If \(|L(+0)|<0.5\theta\) and \(|L(-0)|>0.5\theta\), \(L(+0)\) is retained, and \(\theta-|L(+0)|\) points are randomly selected from \(L(-0)\). By following this method, the three label classes all have a size of \(\theta\).
After employing two data preparation strategies and customizing the training data based on the C1 and C2 criteria, we have generated a 2-class dataset for \(\mathcal{D}_{r}\) using the C1 strategy and a 3-class dataset using the C2 strategy. In the following section, we will introduce the model training process.
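As noted above, here is a minimal sketch of the C2 data preparation including the label-balancing rule; the earlier `build_relation_pairs` sketch covers the C1 case. The pair feature is again assumed to be the concatenation of the two decision vectors, and the '+0'/'-0' pairs are merged into a single '0' class after balancing, consistent with the 3-class dataset described above.

```python
import numpy as np

def build_c2_pairs(X, f, t=0.5, rng=np.random.default_rng(0)):
    """Category-based (C2) relation data with label balancing.

    Labels: +1 = (good, bad), -1 = (bad, good), 0 = same category;
    the '+0'/'-0' distinction below mirrors the balancing rule.
    """
    order = np.argsort(f)
    good = set(order[:max(1, int(t * len(X)))].tolist())
    pos, neg, z_gg, z_bb = [], [], [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            pair = np.concatenate([X[i], X[j]])
            gi, gj = i in good, j in good
            if gi and not gj:
                pos.append(pair)    # label +1
            elif gj and not gi:
                neg.append(pair)    # label -1
            elif gi and gj:
                z_gg.append(pair)   # label '+0'
            else:
                z_bb.append(pair)   # label '-0'
    theta = len(pos)                # |L(+1)| == |L(-1)| by construction

    def pick(lst, k):
        k = min(max(k, 0), len(lst))
        idx = rng.choice(len(lst), size=k, replace=False)
        return [lst[i] for i in idx]

    half = theta // 2               # keep theta zero-labelled pairs in total
    if len(z_gg) > half and len(z_bb) > half:
        zeros = pick(z_gg, half) + pick(z_bb, theta - half)
    elif len(z_bb) <= half:
        zeros = z_bb + pick(z_gg, theta - len(z_bb))
    else:
        zeros = z_gg + pick(z_bb, theta - len(z_gg))
    pairs = np.array(pos + neg + zeros)
    labels = np.array([1] * len(pos) + [-1] * len(neg) + [0] * len(zeros))
    return pairs, labels
```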
#### 3.2.2 Model training
Extreme Gradient Boosting (XGBoost) [39] is a machine learning algorithm that is widely used in various data-driven applications. XGBoost is based on the concept of gradient boosting, where weak models are combined to create a strong model. In this work, XGBoost was used to learn the data features of \(\mathcal{D}_{r}\).
The relation pair samples \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) are the data features of \(\mathcal{D}_{r}\), and the label \(l\) indicates the relationship between two solutions in a set of pairs. The model \(\mathcal{M}\) is trained using the XGBoost algorithm, as shown in Eq. (2).
\[l=\mathcal{M}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle) \tag{2}\]
In line 7 of Algorithm 1, two models need to be trained for the two criteria, hence we differentiate them as \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\). The next step is to explain how to select the potential solutions based on the two models.
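A minimal training sketch using the XGBoost scikit-learn interface follows; `build_relation_pairs` is the earlier C1 data-preparation sketch, the synthetic data is a placeholder, and the hyperparameters are illustrative rather than the paper's settings.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-5.12, 5.12, (100, 20))        # evaluated solutions (placeholder)
f = np.sum(X**2, axis=1)                       # their objective values
pairs, labels = build_relation_pairs(X, f)     # C1 relation data (earlier sketch)

classes = np.unique(labels)                    # XGBoost expects labels 0..K-1
model = XGBClassifier(n_estimators=100, max_depth=4)
model.fit(pairs, np.searchsorted(classes, labels))

def predict_labels(P):
    """Map encoded predictions back to the relation labels {-1, 0, +1}."""
    return classes[model.predict(P)]
```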
#### 3.2.3 Model usage
For selecting appropriate solutions based on the two models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), we propose two different model usage strategies, corresponding to the selection of the \(\mathcal{Q}_{best}\) and \(\mathcal{P}_{u}\). Specifically, we adopt the basic idea of 'voting-scoring' used in previous works [15] and redesign the rules for its implementation.
The term 'vote' pertains to the prediction process of labeling \(\langle\mathbf{u},\mathbf{x}\rangle\) and \(\langle\mathbf{x},\mathbf{u}\rangle\), where an unknown solution \(\mathbf{u}\) and an evaluated solution \(\mathbf{x}\) are combined. This procedure can be regarded as an assessment of the unknown solution's quality based on the quality of the known solution \(\mathbf{x}\). As such, we refer to this process as a 'voting' mechanism. The 'score' is determined based on the voting outcomes of all solutions \(\mathbf{x}\) in the training dataset \(\mathcal{D}\), and a specific rule is employed for statistical analysis. The rule's configuration
necessitates consideration of the position between \(\mathbf{x}\) and \(\mathbf{u}\), as well as \(\mathbf{x}\)'s fitness or category. Next, we will introduce the 'vote-score' strategies that are devised based on the C1 and C2 criteria.
* Fitness-based criterion (C1): For a newly generated solution \(\mathbf{u}\in\mathcal{Q}\), it combines all the evaluated solutions \(\mathbf{x}\in\mathcal{D}\). Based on the positions of the two solutions, two sets of relation pairs can be obtained, e.g., \(\langle\mathbf{x},\mathbf{u}\rangle\) and \(\langle\mathbf{u},\mathbf{x}\rangle\). Thus, utilizing Eq. (3), two sets of predicted outcomes, \(l^{I}\) and \(l^{II}\), can be derived.
\[\begin{split} l^{I}&=\{\mathcal{M}_{1}(\langle \mathbf{x},\mathbf{u}\rangle),\mathbf{x}\in\mathbf{X}\}\\ l^{II}&=\{\mathcal{M}_{1}(\langle\mathbf{u}, \mathbf{x}\rangle),\mathbf{x}\in\mathbf{X}\}\end{split} \tag{3}\]
The scoring rules are defined by Eq. (4).
\[S_{1}(\mathbf{u})=c(l^{II}_{+1})+c(l^{I}_{-1})-c(l^{I}_{+1})-c(l^{II}_{-1}) \tag{4}\]
Here, the function \(c(\cdot)\) returns the cardinality of elements present in the input set. The subscript of \(l\) denotes the labels of the relation pairs that constitute the current subset. For example, \(l^{I}_{+1}\) denotes a set that encompasses all elements in the set \(l^{I}\) whose predicted label equals \(+1\). The quality of solution \(\mathbf{u}\) can be assessed by utilizing Eq. (4), where a higher value indicates superior quality of \(\mathbf{u}\). Under the C1 criterion, the ultimate learning outcome can be perceived as a regression process for the original data distribution.
* Category-based criterion (C2): Under the C2 criterion, the 'voting' rule is formulated as Eq. (5). As \(\mathbf{x}\) possesses a categorical attribute ('good', 'bad'), the voting outcomes are classified into four categories based on the position and category of \(\mathbf{x}\). The relation model \(\mathcal{M}_{2}\) forecasts the outcomes of the four groups of relation pairs, denoted by set \(l^{I}\), \(l^{II}\), \(l^{III}\), and \(l^{IV}\), respectively.
\[\begin{split} l^{I}=&\{\mathcal{M}_{2}(\langle \mathbf{x},\mathbf{u}\rangle),\mathbf{x}\in\mathbf{X}_{good}\}\\ l^{II}=&\{\mathcal{M}_{2}(\langle\mathbf{u}, \mathbf{x}\rangle),\mathbf{x}\in\mathbf{X}_{good}\}\\ l^{III}=&\{\mathcal{M}_{2}(\langle\mathbf{x}, \mathbf{u}\rangle),\mathbf{x}\in\mathbf{X}_{bad}\}\\ l^{IV}=&\{\mathcal{M}_{2}(\langle\mathbf{u}, \mathbf{x}\rangle),\mathbf{x}\in\mathbf{X}_{bad}\}\end{split} \tag{5}\]
The scoring rules are defined by Eq.(6).
\[\begin{split} S_{2}(\mathbf{u})=\frac{1}{|\mathbf{X}|}\times(c(l ^{II}_{+1})+c(l^{IV}_{+1})+c(l^{I}_{0})+c(l^{II}_{0})+c(l^{I}_{-1})+c(l^{III}_{ -1})\\ -c(l^{I}_{+1})-c(l^{III}_{+1})-c(l^{III}_{0})-c(l^{IV}_{0})-c(l^{ II}_{-1})-c(l^{IV}_{-1}))\end{split} \tag{6}\]
In Eq. (6), the symbolism is similar to that of Eq. (4), but with a focus on the processing of the '0' label. According to the definition of relation pairs in the C2 criterion (Algorithm 3), the '0' label indicates that the two solutions in the pair belong to the same category. Therefore, based on the category of \(\mathbf{x}\), the contribution to the scoring can be determined. For instance, \(l^{II}_{+1}\) denotes the prediction result of \(\langle\mathbf{u},\mathbf{x}\rangle\) as '\(+1\)', indicating that \(\mathbf{u}\) is considered better than \(\mathbf{x}\). As a result, the score \(c(l^{II}_{+1})\) has a positive impact on the quality of \(S_{2}(\cdot)\). \(S_{2}(\cdot)\) can be scaled to \([-1,+1]\) by multiplying it with \(\frac{1}{|\mathbf{X}|}\). When \(S_{2}(\mathbf{u})>0\), it indicates that the relation model considers the current solution \(\mathbf{u}\) to be in the 'good' category, whereas when \(S_{2}(\mathbf{u})<0\), it indicates that the relation model considers the current solution \(\mathbf{u}\) to be in the 'bad' category. Moreover, the larger the \(|S_{2}(\mathbf{u})|\) value, the greater the likelihood of belonging to either of the two categories. Under the C2 criterion, the final learning outcome can be viewed as a classification process for the original data distribution.
After processing the features of the original training data (data preparation) and training the models, we obtain two models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) based on the relation of the solutions. These models can be used to select solutions in line 8 of Algorithm 1. Specifically, each solution in the offspring population \(\mathcal{Q}\) will be predicted by \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), and then based on the C1 criterion, the solution with the maximum \(S_{1}\) value will be selected as \(\mathcal{Q}_{best}\), and based on the C2 criterion, all solutions that satisfy \(S_{2}>0\) will be selected as the \(\mathcal{P}_{u}\) population.
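The C1 'vote-score' of Eq. (4) can be sketched as follows; `predict_labels` is assumed to return labels in \(\{-1,+1\}\), as in the training sketch above, `s2_scores` is assumed to be precomputed from Eq. (6), and the C2 score follows the same pattern with the four voter groups of Eq. (5).

```python
import numpy as np

def score_c1(u, X, predict_labels):
    """Eq. (4): every evaluated x votes on u from both pair positions."""
    U = np.tile(u, (len(X), 1))
    l_I = predict_labels(np.hstack([X, U]))    # pairs <x, u>
    l_II = predict_labels(np.hstack([U, X]))   # pairs <u, x>
    return (np.sum(l_II == +1) + np.sum(l_I == -1)
            - np.sum(l_I == +1) - np.sum(l_II == -1))

def surrogate_assisted_selection(Q, X, predict_labels_c1, s2_scores):
    """Line 8 of Algorithm 1: Q_best maximizes S1; P_u keeps all S2 > 0."""
    s1 = np.array([score_c1(u, X, predict_labels_c1) for u in Q])
    q_best = Q[np.argmax(s1)]
    P_u = Q[s2_scores > 0]
    return q_best, P_u
```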
### Reproduction
This work employs the EDA/LS, proposed by Zhou et al. [36], as the fundamental method for generating new solutions, while incorporating information from the unevaluated solutions in population \(\mathcal{P}_{u}\) to generate the offspring population \(\mathcal{Q}\). The EDA/LS algorithm includes two key operators, namely the variable-width histogram (VWH) and the local search method that combines global statistical information with individual location information to improve the performance of an EDA. First, a brief introduction to the VWH is presented, followed by an explanation of the local search method. Finally, the method for integrating unevaluated solutions to generate the offspring population \(\mathcal{Q}\) is described.
#### 3.3.1 Variable-width histogram model
An Estimation of Distribution Algorithm (EDA) is an evolutionary algorithm variant that uses a probabilistic model to guide the search toward promising regions of the search space. It replaces traditional crossover or mutation operators with this model. A specific type, Variable-width histograms (VWH) [36], assumes no cross-dimensional correlations and uses a histogram model to track the population distribution. VWH emphasizes promising areas, reducing probabilities in other regions to prevent premature convergence, making it ideal for enhancing convergence in EOPs.
Fig. 3 illustrates the process of VWH. For the \(j\)-th variable, the search space \([a_{j},b_{j}]\) is partitioned into \(M\) bins, where the \(M-2\) bins in the middle correspond to the regions containing solutions of the current population \(\mathcal{P}\). The values of the bins are determined by the number of solutions in each bin's interval, while the first and the last bins are assigned a lower value. To generate a new solution, a bin is randomly selected according to its value, and then a uniform random value is sampled from the selected bin's interval as the value of the new solution for the \(j\)-th variable. This process is repeated \(n\) times to obtain a complete solution from the probability model VWH. By repeating this process \(N\) times, \(N\) offspring solutions are generated. For details on the modeling and sampling process, please refer to [36]. It is worth noting that the modeling and sampling stages of VWH only use the distribution information in the decision space, making it suitable to incorporate unevaluated solutions to update VWH.
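A minimal sketch of per-variable VWH modeling and sampling follows; the bin weighting and boundary handling are simplified relative to the original EDA/LS implementation [36], and the population is assumed to lie strictly inside the search bounds.

```python
import numpy as np

def vwh_sample(P, lo, hi, M=10, n_samples=50, rng=np.random.default_rng(0)):
    """Fit a per-variable variable-width histogram to P and sample from it.

    The M-2 inner bins span [min, max] of each variable and are weighted
    by their solution counts; the two boundary bins receive a small
    constant weight so the search can still leave the current region.
    """
    _, n = P.shape
    out = np.empty((n_samples, n))
    for j in range(n):
        x = P[:, j]
        edges = np.concatenate(([lo[j]],
                                np.linspace(x.min(), x.max(), M - 1),
                                [hi[j]]))
        counts, _ = np.histogram(x, bins=edges)
        w = counts.astype(float)
        w[0] = w[-1] = 0.1                 # low weight for the boundary bins
        w /= w.sum()
        bins = rng.choice(M, size=n_samples, p=w)
        out[:, j] = rng.uniform(edges[bins], edges[bins + 1])
    return out
```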
#### 3.3.2 Local search
In order to compensate for the lack of local solution information, EDA/LS [36] proposes incorporating the results of local search into the offspring generated by the VWH model, since the EDA model only uses the global information of the population to generate new solutions. In particular, a local model is constructed based on some of the best solutions from the current population \(\mathcal{P}\), which is then utilized to generate a portion of the new solutions. Afterward, these solutions are randomly combined with the solutions sampled from VWH to form the final offspring population \(\mathcal{Q}\). For more details, please refer to EDA/LS. Only the evaluated solutions are used for local search in this work, since local search is driven by objective values.
#### 3.3.3 Unevaluated solutions driven reproduction
In each iteration, the offspring population is generated using a combination of VWH and local search with both \(\mathcal{P}_{e}\) and \(\mathcal{P}_{u}\), following the flowchart illustrated in Fig. 4.
Figure 3: Illustration of the VWH model for the population at early and late search stages.
The two populations, one consisting of evaluated solutions and the other consisting of unevaluated solutions, will be merged and modeled using the VWH model to capture their distribution. The resulting distribution will be sampled to generate a new population. Since the VWH only utilizes information about the search space, whether a solution has been evaluated or not does not affect the operation of the model. The local search method only uses the population \(\mathcal{P}_{e}\) to generate a new population, which is then randomly merged with the new population generated by the VWH model to obtain the final offspring population \(\mathcal{Q}\). The implementation details and parameter settings of the VWH model, as well as the local search method and the ratio for merging the two temporary populations, will be set to the default values specified in EDA/LS [36].
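To summarize Fig. 4 in code: a minimal sketch in which `vwh_sample` is the earlier VWH sketch, the local-search step is replaced by a small perturbation of the best evaluated solutions, \(\mathcal{P}_{e}\) is assumed to be sorted from best to worst, and `local_frac` is an illustrative mixing ratio rather than the EDA/LS default.

```python
import numpy as np

def generate_offspring(P_e, P_u, lo, hi, N, local_frac=0.2,
                       rng=np.random.default_rng(0)):
    """Offspring generation of Fig. 4: the VWH is fitted on P_e U P_u
    (it only needs decision-space information), while the local-search
    part uses the evaluated population P_e only."""
    P = np.vstack([P_e, P_u]) if len(P_u) else P_e
    n_local = int(local_frac * N)
    Q_vwh = vwh_sample(P, lo, hi, n_samples=N - n_local)  # earlier sketch
    elite = P_e[:n_local]                                 # assumes sorted P_e
    Q_loc = elite + rng.normal(0, 0.01, elite.shape) * (hi - lo)
    Q = np.vstack([Q_vwh, Q_loc])
    rng.shuffle(Q)
    return Q
```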
## 4 Experimental study
In this section, we will evaluate the performance of the proposed algorithm and the relation model through comprehensive numerical experiments. Specifically, these experiments encompass comparative studies, ablation studies, and further analyses of the relation model and unevaluated solutions.
### Experimental Settings
#### 4.1.1 Test suites
In the empirical study, we utilized two well-known test suites. The first test suite, LZG [8], consists of four test functions: Ellipsoid, Rosenbrock, Ackley, and Griewank. These functions exhibit a range of landscapes, including unimodal, gully, and multi-modal. The second test suite used was the YLL test suite [40], which contains functions F1 through F4 with unimodal landscapes, and F5, F8 through F13 with multimodal landscapes. Function F6 has a step landscape, while function F7 has random noise. We evaluated the problems in both test suites in dimensions \(n=20\) for small-scale and \(n=50\) for medium-scale.
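For reference, common definitions of the four LZG functions are sketched below; the exact shifts and domains used in [8] may differ slightly from these standard forms.

```python
import numpy as np

def ellipsoid(x):
    return np.sum(np.arange(1, len(x) + 1) * x**2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def ackley(x):
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))
```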
#### 4.1.2 Algorithms in study
For the empirical study, seven algorithms have been selected, namely CMA-ES [41], FCPS-CoDE [32], EDA/LS [36], SAMSO [28], Skopt\({}^{1}\), GPEME [8], and DRSO. These algorithms can be classified into three categories.
Footnote 1: [https://github.com/scikit-optimize/scikit-optimize](https://github.com/scikit-optimize/scikit-optimize)
* Basic EAs: CMA-ES and EDA/LS are two generic EAs, not explicitly tailored for expensive optimization.
* Bayesian optimization: Skopt is an effective global optimization algorithm that operates within the Bayesian optimization framework. It employs GPs as the surrogate model.
* Surrogate-assisted EAs: FCPS-CoDE utilizes a fuzzy K-nearest neighbor-based classification model for evaluating candidate solutions. GPEME employs GPs for the evaluation of candidate solutions. SAMSO is a surrogate-assisted PSO algorithm that incorporates RBFs. DRSO is a dual relation models-assisted EDA that incorporates unevaluated solutions to generate new candidate solutions, which is proposed in this work.
Figure 4: Flowchart for generating new solutions.
Due to the high computational complexity of Gaussian processes in high-dimensional spaces, GPEME and Skopt were only compared in the experiments for \(n=20\).
#### 4.1.3 Parameter settings
To ensure a fair comparison in the empirical study, we employ the recommended parameters specified in the original literature for each algorithm\({}^{2}\). The specifics of these parameters are outlined below.
Footnote 2: CMA-ES and SAMSO are implemented in PlatEMO [42]; Skopt: [https://github.com/scikit-optimize/scikit-optimize](https://github.com/scikit-optimize/scikit-optimize); FCPS-CoDE and GPEME are implemented by us based on the original reports.
* Termination condition: The maximum number of function evaluations (\(FE_{max}\)) is employed as the termination condition, set at 500 for all instances.
* Population size: \(N=30\) for CMA-ES, EDA/LS, and FCPS-CoDE; \(N=40\) for SAMSO (the default in PlatEMO [42]); \(N=50\) for GPEME and DRSO.
* DRSO employs \(t=50\%\) for the C2 criterion to choose \(\mathcal{P}_{u}\).
* Parameters of compared algorithms: default settings according to the original versions.
Each algorithm is executed on each test instance for 30 independent runs to account for randomness. First, the Friedman test [43] is used to assess whether the results produced by the algorithms on each test instance differ significantly. The Wilcoxon rank sum test [37] is then employed to compare the results pairwise. In the tables, the symbols '+', '-', '\(\sim\)' signify that the value achieved by an algorithm is smaller than, greater than, or similar to the value obtained by DRSO, at a significance level of 0.05.
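This statistical protocol can be reproduced along these lines with SciPy; `results` is a placeholder mapping each algorithm name to its 30 best-found values on one test instance, filled here with synthetic data so the sketch runs.

```python
import numpy as np
from scipy.stats import friedmanchisquare, ranksums

# synthetic placeholder: 30 best-found values per algorithm on one instance
rng = np.random.default_rng(0)
results = {a: rng.normal(i, 1.0, 30) for i, a in
           enumerate(['CMA-ES', 'EDA/LS', 'Skopt', 'DRSO'])}

stat, p_friedman = friedmanchisquare(*results.values())
print(f'Friedman p-value: {p_friedman:.3g}')

for name, vals in results.items():
    if name == 'DRSO':
        continue
    p = ranksums(vals, results['DRSO']).pvalue
    if p >= 0.05:
        mark = '~'   # statistically similar
    else:
        mark = '+' if np.median(vals) < np.median(results['DRSO']) else '-'
    print(f'{name}: {mark} (p = {p:.3g})')
```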
### Comparison study
Table 2 presents the statistical results of seven optimization algorithms evaluated on two test suites. The results are presented in terms of p-values obtained from the Friedman test, mean ranks, and the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline problem & p-value & CMA-ES & FCPS-CoDE & EDA/LS & Skopt & GPEME & SAMSO & DRSO \\ \hline \multirow{2}{*}{Ellipsoid} & \multirow{2}{*}{1.80e-33} & 1.50e+02[7](\(-\)) & 1.30e+02[6](\(-\)) & 7.17e+01[5](\(-\)) & 8.20e-02[16](\(-\)) & 3.20e-01[2](\(+\)) & 1.87e+01[4](\(\approx\)) & 6.17e+00[3] \\ & & & (4.25e+01) & (3.13e+01) & (1.56e+01) & (1.91e-02) & (1.77e-01) & (2.52e+01) & (4.57e+00) \\ \hline \multirow{2}{*}{Rosenbrock} & \multirow{2}{*}{4.94e-32} & 3.03e+02[6](\(-\)) & 3.22e+02[7](\(-\)) & 2.37e+02[2](\(-\)) & 2.27e+01[2](\(+\)) & 1.27e+02[4](\(-\)) & 8.57e+01[16](\(+\)) & 1.02e+02[3] \\ & & & (7.50e+01) & (1.05e-02) & (4.02e+01) & (1.21e+01) & (4.32e-01) & (2.47e+01) & (2.94e+01) \\ \hline \multirow{2}{*}{Ackley} & \multirow{2}{*}{2.45e-34} & 1.59e+01[6](\(-\)) & 1.48e+01[5](\(-\)) & 1.33e+01[4](\(-\)) & 7.16e+00[3](\(-\)) & 3.77e+00[16](\(-\)) & 1.83e+01[7](\(-\)) & 6.08e+00[2] \\ & & & (1.28e+00) & (1.00e+00) & (7.37e-01) & (3.27e-01) & (5.52e-01) & (1.33e+00) & (1.05e+00) \\ \hline \multirow{2}{*}{Griewank} & \multirow{2}{*}{6.07e-34} & 5.09e+01[6](\(-\)) & 5.46e+01[7](\(-\)) & 2.96e+01[5](\(-\)) & 1.02e+00[16](\(-\)) & 1.17e+00[2](\(+\)) & 2.06e+01[4](\(-\)) & 3.08e+00[3] \\ & & & (1.06e-01) & (1.33e+01) & (7.62e+00) & (1.52e-02) & (8.7e-02) & (1.33e+01) & (1.54e+00) \\ \hline \multirow{2}{*}{YLLF01} & \multirow{2}{*}{1.12e-34} & 6.32e+03[7](\(-\)) & 5.37e+03[6](\(-\)) & 3.17e+03[5](\(-\)) & 2.07e+00[16](\(+\)) & 1.67e+01[2](\(+\)) & 6.73e+02[4](\(-\)) & 2.47e+02[3] \\ & & & (1.98e+03) & (1.86e+03) & (5.92e+02) & (1.36e+00) & (1.09e+01) & (6.43e+02) & (1.37e+02) \\ \hline \multirow{2}{*}{YLLF02} & \multirow{2}{*}{6.34e-26} & 1.15e+02[6](\(-\)) & 2.44e+01[3](\(-\)) & 2.67e+01[4](\(-\)) & 4.30e+18[18](\(\approx\)) & 5.03e+02[7](\(-\)) & 3.27e+01[5](\(-\)) & 4.06e+00[2] \\ & & & (2.00e-02) & (3.67e+00) & (0.39e+00) & (3.55e+18) & (1.04e+03) & (1.72e+01) & (1.60e+00) \\ \hline \multirow{2}{*}{YLLF03} & \multirow{2}{*}{9.88e-23} & 2.09e+04[5](\(-\)) & 1.26e+04[2](\(+\)) & 2.32e+04[6](\(-\)) & 5.79e+01[1](\(+\)) & 2.99e+04[7](\(-\)) & 1.71e+04[4](\(\approx\)) & 1.53e+04[3] \\ & & & (5.68e+03) & (3.48e+03) & (4.21e+03) & (2.57e+03) & (6.03e+03) & (1.38e+04) & (3.94e+03) \\ \hline \multirow{2}{*}{YLLF04} & \multirow{2}{*}{5.24e-23} & 4.03e+01[5](\(-\)) & 4.17e+01[6](\(-\)) & 3.28e+01[4](\(-\)) & 3.28e+01[18](\(-\)) & 3.09e+01[3](\(-\)) & 7.06e+01[7](\(-\)) & 2.62e+01[2] \\ & & & (1.25e+01) & (6.55e+00) & (3.26e+00) & (1.12e+01) & (1.89e+00) & (7.40e+00) & (7.09e+00) \\ \hline \multirow{2}{*}{YLLF05} & \multirow{2}{*}{4.69e-31} & 3.11e+06[6](\(-\)) & 4.71e+06[7](\(-\)) & 1.13e+06[5](\(-\)) & 1.65e+05[3](\(-\)) & 3.05e+05[4](\(-\)) & 1.12e+05[2](\(-\)) & 5.52e+041[18](\(-\)) & 5.52e+041[19](\(-\)) & 5.52e+041[18](\(-\)) \\ & & & (1.34e+06) & (2.84e+06) & (5.76e+05) & (7.14e+04) & (1.60e+05) & (8.27e+04) & (8.19e+04) \\ \hline \multirow{2}{*}{YLLF06} & \multirow{2}{*}{1.00e-34} & 5.79e+03[6](\(-\)) & 2.66e+03[7](\(-\)) & 3.20e+03[3](\(-\)) & 2.27e+01[4](\(+\)) & 3.23e+01[21](\(+\)) & 8.59e+04[2](\(-\)) & 2.48e+02[3] \\ & & & (1.20e+03) & (1.28e+03) & (3.05e+02) & (1.49e+00) & (1.06e+01) & (1.12e+03) & (1.42e+02) \\ \hline \multirow{2}{*}{YLLF07} & \multirow{2}{*}{3.93e-28} & 1.62e+00[6](\(-\)) & 2.09e+00[7](\(-\)) & 6.67e-01[5](\(-\)) & 2.39e-01[2](\(\approx\)) & 2.90e-01[3
corresponding Wilcoxon rank sum test. The highest rank in each row is denoted by grey shading, and the rank of each result is enclosed in brackets. The p-value obtained from the Friedman test is considerably lower than 0.05, signifying a substantial difference between the outcomes. The analysis demonstrates that DRSO achieves the best mean rank of 2.29 out of seven algorithms across 17 test instances. The Skopt algorithm secured second position, and GPEME ranked third. Although EDA/LS is not primarily designed for expensive optimization, it still displays strongly competitive performance due to the use of VWH and local search methods. The FCPS-CoDE algorithm selects \(N\) solutions for evaluation at each iteration, so its advantages are limited by the 500 evaluations allowed; nevertheless, it still outperforms CMA-ES. According to the Wilcoxon rank sum test results, compared to DRSO, the most competitive algorithm, Skopt, achieved 7 better results, 7 worse results, and 3 roughly equivalent results. In low-dimensional problems, DRSO, driven by the relation model, achieves statistical results similar to the most advanced BO algorithms, and DRSO even has an advantage in mean rank. Therefore, based on the aforementioned analysis, the DRSO algorithm demonstrates the best overall performance in the 20-dimensional search space.
The statistical results on the 50-dimensional problems are shown in Table 3, and DRSO still shows the best performance, achieving an average rank of 1.47 among the five algorithms. Based on the results of the Wilcoxon rank sum test, the four compared algorithms produce 15, 14, 13, and 13 results inferior to DRSO out of 17 problems, respectively.
### Ablation study
In this section, we will conduct ablation experiments on several important components of the DRSO, including the offspring selection strategy, the relation model, and the generation of new solutions. The details of the algorithm variants are shown in Table 4.
\begin{table}
\begin{tabular}{c c c c c c c} \hline problem & p-value & CMA-ES & EDA/LS & SAMSO & FCPS-CoDE & DRSO \\ \hline Ellipsoid & 4.04e-22 & 1.91e+03[5](\(-\)) & 1.52e+03[3](\(-\)) & 1.31e+03[2](\(\approx\)) & 1.61e+03[4](\(-\)) & 6.66e+02[1] \\ & (2.77e+02) & (2.23e+02) & (1.13e+03) & (3.39e+02) & (1.19e+02) \\ \hline Rosenbrock & 1.88e-17 & 2.09e+03[5](\(-\)) & 1.78e+03[3](\(-\)) & 1.58e+03[2](\(\approx\)) & 1.99e+03[4](\(-\)) & 8.81e+02[1] \\ & & (4.12e+02) & (2.84e+02) & (1.70e+03) & (5.11e+02) & (1.66e+02) \\ \hline Ackley & 5.24e-29 & 1.85e+01[4](\(-\)) & 1.76e+01[3](\(-\)) & 1.86e+01[5](\(-\)) & 1.75e+01[2](\(-\)) & 1.34e+01[1] \\ & & (9.68e-01) & (4.24e-01) & (1.18e+00) & (6.00e-01) & (5.64e+01) \\ \hline Griewank & 3.37e-22 & 2.38e+02[2](\(-\)) & 2.41e+02[3](\(-\)) & 6.19e+02[5](\(-\)) & 2.69e+02[4](\(-\)) & 1.81e+02[2](\(-\)) \\ & & (3.80e+01) & (2.86e+01) & (3.66e+02) & (6.72e+01) & (2.26e+01) \\ \hline YLLF01 & 7.52e-24 & 2.61e+04[2](\(-\)) & 2.77e+04[3](\(-\)) & 6.69e+04[5](\(-\)) & 2.95e+04[4](\(-\)) & 2.92e+04[4](\(-\)) & 2.92e+04[4](\(-\)) \\ & & (3.48e-03) & (3.92e+03) & (3.70e+04) & (5.86e+03) & (2.56e+03) \\ \hline YLLF02 & 3.17e-33 & 8.62e+13[4](\(-\)) & 2.79e+04[3](\(-\)) & 1.05e+18[5](\(-\)) & 8.92e+01[2](\(-\)) & 8.54e+01[1] \\ & & (3.40e+14) & (9.04e+04) & (3.77e+18) & (8.24e+00) & (7.20e+00) \\ \hline YLLF03 & 4.41e-21 & 1.36e+05[3](\(-\)) & 1.58e+05[4](\(-\)) & 2.97e+05[3](\(-\)) & 8.26e+04[1](\(-\)) & 1.34e+05[2] \\ & & (2.57e+04) & (2.51e+04) & (9.13e+04) & (8.16e+04) & (2.25e+04) \\ \hline YLLF04 & 5.21e-22 & 9.16e+01[5](\(-\)) & 5.80e+01[1](\(\approx\)) & 8.74e+01[4](\(-\)) & 5.88e+01[2](\(\approx\)) & 5.96e+01[3] \\ & & (1.26e+01) & (2.78e+00) & (8.06e+00) & (5.58e+00) & (4.95e+00) \\ \hline YLLF05 & 1.86e-25 & 3.78e+07[3](\(-\)) & 2.93e+07[2](\(-\)) & 1.55e+08[5](\(-\)) & 4.49e+07[4](\(-\)) & 1.16e+07[1] \\ & & (1.25e+07) & (5.30e+06) & (8.04e+07) & (1.82e+07) & (3.71e+06) \\ \hline YLLF06 & 1.67e-28 & 2.82e+04[3](\(-\)) & 2.74e+04[2](\(-\)) & 7.82e+04[5](\(-\)) & 2.92e+04[4](\(-\)) & 1.19e+04[1] \\ & & (5.82e+03) & (4.14e+03) & (3.06e+04) & (7.35e+03) & (2.11e+03) \\ \hline YLLF07 & 1.42e-25 & 3.14e+04[1](\(-\)) & 2.17e+01[2](\(-\)) & 1.31e+02[5](\(-\)) & 2.96e+01[3](\(-\)) & 8.80e+001[1] \\ & & (9.15e+00) & (4.75e+00) & (7.91e+01) & (1.28e+01) & (3.10e+00) \\ \hline YLLF08 & 2.68e-28 & 1.65e+04[4](\(-\)) & 1.51e+04[2](\(-\)) & 1.66e+04[5](\(-\)) & 1.63e+04[3](\(-\)) & 1.38e+04[1] \\ & & (5.98e+02) & (5.84e+02) & (6.62e+02) & (6.11e+02) & (6.19e+02) \\ \hline YLLF09 & 6.58e-26 & 6.21e+02[5](\(-\)) & 5.19e+04[2](\(-\)) & 4.97e+02[3](\(\approx\)) & 3.80e+02[1](\(\approx\)) & 4.82e+02[2] \\ & & (4.52e+01) & (1.78e+01) & (1.11e+02) & (3.15e+01) & (2.39e+01) \\ \hline YLLF10 & 2.76e-30 & 1.89e+01[4](\(-\)) & 1.75e+01[3](\(-\)) & 1.94e+01[5](\(-\)) & 1.72e+01[2](\(-\)) & 4.78e+001[3] \\ & & (1.45e+00) & (3.88e-01) & (2.95e+00) & (8.67e+01) & (2.54e+01) \\ \hline YLLF11 & 1.13e-21 & 2.47e+02[3](\(-\)) & 2.42e+02[2](\(-\)) & 6.31e+02[5](\(-\)) & 2.52e+02[4](\(-\)) & **1.10e+02[1]** \\ & & (3.82e+01) & (2.55e+01) & (3.41e+02) & (6.26e+01) & (2.07e+01) \\ \hline YLLF12 & 5.66e-222 & 4.07e+07[3](\(-\)) & 1.64e+07[2](\(-\)) & 5.10e+08[5](\(-\)) & 5.44e+07[4](\(-\)) & **.61e+06[1]** \\ & & (2.32e+07) & (5.98e+06) & (3.16e+08) & (3.49e+07) & (4.32e+06) \\ \hline YLLF13 & 8.00e-29 & 1.03e+08[2](\(+\)) & 5.38
DRSO-Sel-1 and DRSO-Sel-2 are used to verify the importance of reliably selecting \(\mathcal{P}_{u}\) and \(\mathcal{Q}_{best}\). DRSO-Gen-1 and DRSO-Gen-2 serve to verify the significance of \(\mathcal{P}_{u}\) and the local search algorithm. DRSO-Mod is utilized to validate the effectiveness of the relation model. Experiments were independently conducted 30 times on the LZG test suite with 20 and 50 dimensions. The experimental design and result statistics are consistent with Section 4.2. The experimental results are presented in Table 5.
Broadly speaking, all the algorithmic variants that omit a particular module are inferior to the original algorithm. Specifically, the results of DRSO-Gen-1 and DRSO-Gen-2 are significantly inferior to the original version on 7 problems, indicating the importance of \(\mathcal{P}_{u}\) and local search in generating solutions. The results of DRSO-Sel-2 are also poor, being inferior to the original algorithm on all problems, highlighting the importance of the reliable selection of \(\mathcal{Q}_{best}\). The performance of DRSO-Mod is significantly worse than DRSO on 7 problems, demonstrating the significance of the relation model. The performance deterioration of DRSO-Sel-1 is not obvious, as it is inferior to the original algorithm on only one problem, but its mean rank is still worse than that of DRSO, indicating that the contribution of \(\mathcal{P}_{u}\) to the algorithm is not as significant as that of \(\mathcal{Q}_{best}\). In summary, each component of DRSO is effective, and their synergistic effect is also effective.
### Analysis of relation model
To analyze the fitting capacity of the relation model, four representative functions from the LZG test suite were chosen. The fitting ability of the relation model was visualized in a two-dimensional search space. Additionally, a comparison was made between the relation model's capability and that of regression and classification models in selecting \(\mathcal{Q}_{best}\) and \(\mathcal{P}_{u}\) in 20-dimensional problems. The results demonstrate the advantages of the relation model in model-assisted selection.
#### 4.4.1 Visualization analysis
In the first row of Figure 5, the contour distributions of the four test functions are depicted in their corresponding search spaces. Following this, LHS sampling was utilized to generate 100 points from the space as the original training data. Subsequently, a relation model (\(\mathcal{M}_{1}\)) is constructed using the C1 criterion on the training data. Based on the predicted values from the \(\mathcal{M}_{1}\) model, the contour distribution is displayed in the second row of Figure 5. It is apparent that the relation model, under the C1 criterion, resembles a regression-like process and is capable of acquiring knowledge on the original
\begin{table}
\begin{tabular}{c|c} \hline algorithm & details \\ \hline DRSO & Utilizes default settings \\ DRSO-Sel-1 & Selects \(\mathcal{P}_{u}\) at random, \(\mathcal{M}_{2}\) model excluded \\ DRSO-Sel-2 & Selects \(\mathcal{Q}_{best}\) at random, \(\mathcal{M}_{1}\) model excluded \\ DRSO-Gen-1 & Employs \(\mathcal{P}_{e}\) for new solutions in EDA, without \(\mathcal{P}_{u}\) \\ DRSO-Gen-2 & Excludes local search in the improvement of solution quality \\ DRSO-Mod & Excludes relation model, employs only XGBoost for classification and regression, selects \(\mathcal{P}_{u}\) and \(\mathcal{Q}_{best}\) \\ \hline \end{tabular}
\end{table}
Table 4: Design of algorithm variants.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline n & problem & p-value & DRSO-GEN-1 & DRSO-GEN-2 & DRSO-SEL-1 & DRSO-SEL-2 & DRSO-MODEL & DRSO \\ \hline \multirow{5}{*}{20} & \multirow{2}{*}{Ellipsoid} & \multirow{2}{*}{1.13e-15} & 3.36e+01[5](\(-\)) & 1.18e+01[3](\(-\)) & 7.24e+00[2](\(\approx\)) & 4.09e+01[6](\(-\)) & 1.39e+01[4](\(-\)) & 6.17e+001[0] \\ & & & (3.73e+01) & (5.15e+00) & (4.82e+00) & (1.33e+01) & (7.24e+00) & (4.57e+00) \\ \cline{2-9} & \multirow{2}{*}{Rosenbrock} & \multirow{2}{*}{5.04e-05} & 1.27e+02[5](\(-\)) & 1.05e+02[3](\(\approx\)) & 9.54e+01[16](\(-\)) & 1.39e+02[6](\(-\)) & 1.24e+02[4](\(\approx\)) & 1.02e+02[2] \\ & & & (4.79e+01) & (3.19e+01) & (3.26e+01) & (3.35e+01) & (4.50e+01) & (2.94e+01) \\ \cline{2-9} & \multirow{2}{*}{Ackley} & \multirow{2}{*}{6.18e-21} & 8.17e+00[4](\(-\)) & 7.86e+00[3](\(-\)) & 8.88e+00[10](\(\approx\)) & 1.12e+01[6](\(-\)) & 9.94e+00[5](\(-\)) & 6.08e+00[02] \\ & & & (2.52e-00) & (9.18e-01) & (9.91e-01) & (9.96e-01) & (1.39e+00) & (1.05e+00) \\ \cline{2-9} & \multirow{2}{*}{Griewank} & \multirow{2}{*}{1.06e-19} & 1.31e+01[5](\(-\)) & 6.45e+00[3](\(-\)) & 6.56e+00[2](\(\approx\)) & 1.65e+01[6](\(-\)) & 8.57e+00[4](\(-\)) & 8.08e+00[01] \\ & & & (1.67e+01) & (2.09e+00) & (1.49e+00) & (4.59e+00) & (3.01e+00) & (1.54e+00) \\ \hline \multirow{5}{*}{50} & \multirow{2}{*}{Ellipsoid} & \multirow{2}{*}{9.52e-22} & 7.61e+02[3](\(-\)) & 1.28e+03[6](\(-\)) & 6.97e+02[2](\(\approx\)) & 1.05e+04[3](\(-\)) & 1.13e+03[5](\(-\)) & 6.66e+02[4] \\ & & & (2.08e-02) & (2.01e+02) & (9.68e+01) & (1.65e+02) & (2.08e+02) & (1.19e+02) \\ \cline{2-9} & \multirow{2}{*}{Rosenbrock} & \multirow{2}{*}{1.51e-21} & 1.02e+03[3](\(-\)) & 2.01e+03[6](\(-\)) & 9.69e+02[2](\(\approx\)) & 1.13e+03[4](\(-\)) & 1.54e+03[5](\(-\)) & 8.81e+02[1] \\ & & & (2.10e+02) & (3.02e+02) & (1.99e+02) & (2.18e+02) & (2.88e+02) & (1.66e+02) \\ \cline{2-9} & \multirow{2}{*}{Ackley} & \multirow{2}{*}{4.03e-23} & 1.50e+01[3](\(-\)) & 1.73e+01[6](\(-\)) & 1.49e+01[21](\(-\)) & 1.59e+01[4](\(-\)) & 1.71e+01[5](\(-\)) & 1.45e+01[1] \\ & & & (6.13e-01) & (4.31e-01) & (5.92e-01) & (6.57e-01) & (5.49e-01) & (5.64e-01) \\ \cline{2-9} & \multirow{2}{*}{Griewank} & \multirow{2}{*}{4.37e-22} & 1.26e+02[3](\(\approx\)) & 2.06e+02[6](\(-\)) & 1.16e+02[2](\(\approx\)) & 1.70e+02[4](\(-\)) & 1.98e+02[5](\(-\)) & 1.12e+02[1] \\ & & & (3.32e+01) & (2.90e+01) & (1.58e+01) & (2.61e+01) & (2.95e+01) & (2.26e+01) \\ \hline \multirow{5}{*}{50} & \multirow{2}{*}{mean rank} & \multirow{2}{*}{3.875} & 4.50 & 1.7 & 5.00 & 4.625 & 1.25 \\ & + / - / \(\approx\) & & 0/7/1 & 0/7/1 & 0/1/7 & 0/8/0 & 0/7/1 \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study results comparing DRSO and its five variants on LZG test suites with n=20,50.
function's landscape. For instance, in the case of the Ellipsoid function, distinguished by its unimodal feature, and the Rosenbrock function, identified by a gully, the relation model does not intentionally fit the distribution of local extremums, yet it can still effectively represent the overall landscape of these two functions, which is vital for model-assisted search.
In the first row of Figure 6, the distribution of data based on the true objective function and the threshold is depicted, where Figures 6(a)-6(d) correspond to a classification threshold of \(t=10\%\), and Figures 6(e)-6(h) correspond to a threshold of \(t=30\%\). A smaller threshold indicates a narrower focus area. LHS is utilized to extract 100 data points from the decision space. Subsequently, a relation model \(\mathcal{M}_{2}\) is trained based on the C2 criterion and the specified threshold. The prediction outcomes are presented in the second row of Figure 6. Notably, the C2 criterion resembles a classification-like model, proficient in recognizing the classification boundaries of the original data and modifying the range of the model fitting boundary as per the threshold \(t\). Additionally, the relation data's label balance strategy ensures that the model training remains unaffected by imbalanced class proportions, even when \(t=10\%\).
#### 4.4.2 Accuracy analysis
The relation model showcases properties akin to both classification and regression models. This raises a valid question: why not directly employ a classification or regression model? In the subsequent analysis, we will explore the advantages of utilizing the relation model over classification and regression models in the context of model-assisted selection.
Figure 5: Contour plot of predicted results for the C1 criterion relation model in 2-dimensional space. The first row shows results based on real function values, while the second row shows predicted results.
Figure 6: Contour plot of predicted results for the C2 criterion relation model in 2-dimensional space. The first row shows results based on true function values, while the second row shows predicted results. Fig (a)-(d) show results for \(t=10\%\), while Fig (e)-(h) show results for \(t=30\%\).
To accentuate the importance of the data preparation and model usage stages in the relation model, we exclude differences in the learning abilities of the machine learning algorithms. We opt for XGBoost with default parameters as the fundamental method for regression, classification, and the two relation models. These models are denoted as XGBR, XGBC, R-C1, and R-C2, respectively. To eliminate the randomness in the operation of EAs, the parent population \(\mathcal{P}\) and offspring population \(\mathcal{Q}\) generated by GA in 50 consecutive generations on the 20-dimensional LZG test suites are stored. The population size \(N\) is set to 50. The parent population is used as the training data for the model, while the offspring population is used as the testing data. To uniformly evaluate the capabilities of each model, two accuracy indicators, \(acc_{1}\) and \(acc_{2}\), are used to evaluate the performance on \(\mathcal{Q}_{best}\) and \(\mathcal{P}_{u}\), respectively. The calculation methods for \(acc_{1}\) and \(acc_{2}\) are as follows:
\[acc_{1}=R(Q^{\prime}_{best},\mathcal{Q}) \tag{7}\]
where \(\mathcal{Q}\) refers to both the offspring population and the test data. \(\mathcal{Q}^{\prime}_{best}\) denotes the best solution that is selected by the model within \(\mathcal{Q}\). The function \(R(\cdot)\) returns the ranking of \(\mathcal{Q}^{\prime}_{best}\) within the \(\mathcal{Q}\) based on the real objective values. A smaller value of \(acc_{1}\) indicates a higher effectiveness of the model in selecting the best solution.
\[acc_{2}=\frac{|\mathcal{P}_{u}\cap\mathcal{P}^{\prime}_{u}|}{|\mathcal{P}_{u}|} \tag{8}\]
\(\mathcal{P}_{u}\) represents the top \(t\) fraction of solutions selected based on the real objective values. \(\mathcal{P}^{\prime}_{u}\) denotes the selection made by the model, while \(acc_{2}\) represents the proportion of cases where the model's selection matches the actual result. A higher value of \(acc_{2}\) indicates a stronger ability of the model to make accurate selections.
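The two indicators can be computed as follows, assuming the model assigns a score to every offspring (higher means better) and, for \(acc_{2}\), that the model's selection is taken as its top-\(t\) scored subset.

```python
import numpy as np

def acc1(scores, f_true):
    """Rank (1 = best) of the model-chosen solution within Q (Eq. 7)."""
    chosen = np.argmax(scores)                # model's Q'_best
    ranks = np.argsort(np.argsort(f_true))    # 0 = truly best (minimization)
    return ranks[chosen] + 1

def acc2(scores, f_true, t=0.5):
    """Overlap between model-selected and true top-t subsets (Eq. 8)."""
    k = max(1, int(t * len(f_true)))
    true_top = set(np.argsort(f_true)[:k].tolist())
    model_top = set(np.argsort(-scores)[:k].tolist())
    return len(true_top & model_top) / k
```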
Based on the results shown in Figure 7, which presents bar charts of the \(acc_{1}\) metric for selecting \(Q_{best}\) over 50 generations, it can be observed that R-C1 performs the best across all problems, with the smallest average rank value and error bar. This suggests that R-C1 is more suitable for scenarios where the top 1 solution needs to be selected from a set. R-C2 performs worse than the regression model XGBR, but better than the classification model XGBC.
Figure 8 shows the ability of different models to select \(\mathcal{P}_{u}\), with \(t=50\%\). It can be observed that the R-C1 and R-C2 criteria exhibit better performance than XGBC and XGBR. Among them, the interquartile range of R-C1 is more concentrated, but there are some outliers, while the maximum value range of R-C2 is more optimal.
Based on the analysis above, it can be concluded that the relation model can provide more accurate and detailed partitioning than classification models, while avoiding both the overfitting to the data seen in regression models and the loss of rank order between test points. Therefore, it is more suitable for use in model-assisted scenarios.
Figure 7: Bar chart of the \(acc_{1}\) statistics of \(\mathcal{Q}_{best}\) selections for different surrogate models.
Figure 8: Box chart of the \(acc_{2}\) statistics of \(\mathcal{P}_{u}\) selections for different surrogate models.
### Importance of unevaluated solutions
Another key aspect to consider is whether the algorithm's efficiency is truly improved by the unevaluated solutions. To investigate this, we conducted an ablation experiment by designing a variant DRSO' in which the population \(\mathcal{P}_{u}\) is removed. Moreover, we designed variants DRSO-10%, DRSO-30%, and DRSO-50%, corresponding to varied values of \(t\), to examine the influence of the parameter \(t\) on the algorithm's efficacy.
Figure 9 depicts the runtime curve of DRSO under various parameters on the LZG test suite, with a search space dimension of 20. Other parameters remain consistent with Section 4.1. The figure reveals that the DRSO', lacking unevaluated solutions, converges at a slower pace than DRSO, indicating that high-quality unevaluated solutions can effectively enhance the algorithm's convergence speed. With regards to the performance of DRSO under different values of \(t\), it can be observed that DRSO performs better under \(t=30\%\) and \(50\%\) than under \(t=10\%\) on the Ellipsoid and Griewank test functions. On the Ackley function, the performance of all three values of \(t\) is comparable. On the Rosenbrock function, the performance of \(t=10\%\) is intermediate to that of \(t=30\%\) and \(t=50\%\). We surmise that when \(t=10\%\), the algorithm is excessively focused on the optimal region, leading to inadequate diversity provided by unevaluated solutions in population \(\mathcal{P}\) and resulting in a decrease in performance. Therefore, we recommend \(t=50\%\) as the default parameter for the algorithm in this context.
## 5 Conclusions
This paper highlights an objective but often overlooked issue in SAEAs, which is the lack of variance in adjacent populations \(\mathcal{P}\) due to the limited number of solutions selected for real evaluation in each iteration. To tackle this problem, this work proposes a simple method of generating new solutions based on unevaluated solutions. Employing surrogate models, the best solution in the offspring population is selected for real evaluation, and the archive and surrogate model are updated accordingly. Additionally, some potentially 'good' solutions are directly used to generate offspring without evaluation. We have designed customized relation-based surrogate models for SAEAs. Two specific relation model construction methods, namely the fitness criterion (C1) and the category criterion (C2), are proposed to address two selection scenarios. The C1 criterion constructs relation pairs based on relative fitness, while the C2 criterion divides the data into categories and constructs relation pairs based on category. XGBoost is utilized for data learning, and 'voting-scoring' strategies are designed to enhance the model's predictive ability. Reproduction methods from EDA/LS are employed to generate offspring solutions, and unevaluated solutions are utilized to update the VWH model. Ultimately, a dual relation models-assisted single-objective optimization algorithm (DRSO) is designed.
To verify the effectiveness of the relation model and to demonstrate the search capability of DRSO, this work conducted experiments on the LZG and YLL test suites in 20- and 50-dimensional search spaces. The DRSO algorithm was compared with EAs, SAEAs, and BO, and it showed strong competitiveness. Through ablation experiments, the efficacy of each module was verified. Furthermore, the paper also scrutinized the fitting ability of the relation model to the function landscape and its predictive ability for new-solution quality. The effectiveness of unevaluated solutions in the algorithm's search process was affirmed through experiments and analysis of the algorithm's hyperparameters. Overall, the results of the experiments testify to the effectiveness of the relation model and the competitiveness of the proposed DRSO with unevaluated solutions.
In future work, it is worth exploring more detailed strategies for using unevaluated solutions to improve the quality of new solutions. The relation model can also be applied to more algorithm frameworks and problem types.
Figure 9: The runtime performance of DRSO and its variant on LZG test suite over 30 independent runs. | Surrogate-assisted evolutionary algorithms (SAEAs) play an important role in solving expensive optimization problems. Much effort has been devoted to developing sophisticated model-assisted selection methods to improve the efficiency of SAEAs. However, the generation of high-quality solutions is a prerequisite for selection. The fundamental paradigm of evaluating only a limited number of solutions in each generation of an SAEA reduces the variance between neighboring populations and degrades the quality of offspring; this is a frequently encountered but largely overlooked problem. This paper proposes a framework that uses unevaluated solutions: the surrogate model is employed to generate new solutions directly, without evaluation. To ensure reliable selection, two tailored relation models are introduced for selecting the best solution and the unevaluated population. On two test suites, the relation model
2309.16023 | Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature | Point cloud registration has seen recent success with several learning-based methods that focus on correspondence matching and, as such, optimize only for this objective. Following the learning step of correspondence matching, they evaluate the estimated rigid transformation with a RANSAC-like framework. While it is an indispensable component of these methods, it prevents a fully end-to-end training, leaving the objective to minimize the pose error non-served. We present a novel solution, Q-REG, which utilizes rich geometric information to estimate the rigid pose from a single correspondence. Q-REG allows to formalize the robust estimation as an exhaustive search, hence enabling end-to-end training that optimizes over both objectives of correspondence matching and rigid pose estimation. We demonstrate in the experiments that Q-REG is agnostic to the correspondence matching method and provides consistent improvement both when used only in inference and in end-to-end training. It sets a new state-of-the-art on the 3DMatch, KITTI, and ModelNet benchmarks. | Shengze Jin, Daniel Barath, Marc Pollefeys, Iro Armeni | 2023-09-27T20:58:53 | http://arxiv.org/abs/2309.16023v1 | # Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature
###### Abstract
Point cloud registration has seen recent success with several learning-based methods that focus on correspondence matching, and as such, optimize only for this objective. Following the learning step of correspondence matching, they evaluate the estimated rigid transformation with a RANSAC-like framework. While it is an indispensable component of these methods, it prevents a fully end-to-end training, leaving the objective to minimize the pose error non-served. We present a novel solution, Q-REG, which utilizes rich geometric information to estimate the rigid pose from a single correspondence. Q-REG allows to formalize the robust estimation as an exhaustive search, hence enabling end-to-end training that optimizes over both objectives of correspondence matching and rigid pose estimation. We demonstrate in the experiments that Q-REG is agnostic to the correspondence matching method and provides consistent improvement both when used only in inference and in end-to-end training. It sets a new state-of-the-art on the 3DMatch, KITTI and ModelNet benchmarks.
## 1 Introduction
Point cloud registration is the task of estimating the rigid transformation that aligns two partially overlapping point clouds. It is commonly solved by establishing a set of tentative correspondences between the two point clouds, followed by estimating their rigid transformation. The field has seen substantial progress in recent years with methods that introduce a learning component to solve the task.
Most learning methods focus on solving the correspondence task [6, 12, 18, 21], where a feature extractor is trained to extract point correspondences between two input point clouds. Once the learning step is over, they use the estimated correspondences for computing the rigid pose. Due to the low inlier ratio in putative correspondences, these methods strongly rely on hypothesize-and-verify frameworks, _e.g._, RANSAC [15], to compute the pose in a robust manner. Recent methods [28, 42] employ advances in the field of transformers to improve the final estimated set of correspondences and remove the dependency on RANSAC, achieving close-to-RANSAC performance. However, in these methods too, the objective in the learning process remains to find the best and cleanest matches, ignoring the objective to estimate the rigid pose. In addition, they do not achieve end-to-end differentiable training since they still employ robust estimation (_e.g._, [21, 28]) combined with the Kabsch-Umeyama algorithm [42].
Other learning-based methods, such as [35, 36, 41], directly solve the registration problem by incorporating the pose estimation in their training pipeline. Since RANSAC is non-differentiable due to the random sampling, they choose to estimate the alignment using soft correspondences that are computed from local feature similarity scores. In contrast to these methods, we employ the aforementioned works on estimating hard correspondences and develop a robust solution to replace RANSAC, that allows for end-to-end differentiable training.
In general, RANSAC-like robust estimation is non-differentiable only due to the employed randomized sampling function. Such a sampler is essential to cope with the combinatorics of the problem via selecting random subsets of \(m\) correspondences (_e.g_., \(m=3\) for rigid pose estimation). This allows progressively exploring the \(\binom{n}{m}\) possible combinations, where \(n\) is the total number of matches. Actually testing all of them is unbearably expensive in practice, which is what methods like [28, 42] try to avoid. This computation bottleneck would be resolved if \(m=1\). Hence, we design a 1-point solution, _Q-REG_, that utilizes rich geometric cues extracted from local surface patches
Figure 1: _Q-REG solver._ Given (a) two partially overlapping point clouds as input and (b) the estimated correspondences of a matching method, (c) _Q-REG_ leverages the rich local geometry to estimate the rigid pose from a single correspondence, hence enabling end-to-end training of the matcher. _(Best viewed on screen.)_
estimated from observed points (Figure 1). Specifically, we utilize rich geometric information by fitting quadrics (_e.g_., an ellipsoid) locally to the neighborhoods of an estimated correspondence. Moreover, such a solution allows quick outlier rejection by filtering degenerate surfaces and rigid poses inconsistent with motion priors (_e.g_., to avoid unrealistically large scaling). _Q-REG_ is designed to be deterministic and differentiable, and it replaces RANSAC for point cloud registration. It can be used together with any feature-matching or correspondence-matching method.
Since _Q-REG_ is fully differentiable, we achieve end-to-end training that optimizes both the correspondence matching and final pose objectives. As such, any learning-based matcher can be extended to being end-to-end trainable. In our experiments, we demonstrate how _Q-REG_ consistently improves the performance of state-of-the-art matchers on the _3DMatch_[44], _KITTI_[16] and _ModelNet_[38] datasets. It sets new state-of-the-art results on all benchmarks.
Our contributions can be summarized as follows:
* We develop _Q-REG_, a solution for point cloud registration, estimating the pose from a single correspondence via leveraging local surface patches. It is agnostic to the correspondence matching method. _Q-REG_ allows for quick outlier rejection by filtering degenerate solutions and assumption inconsistent motions (_e.g_., related to scaling).
* We extend the above formulation of _Q-REG_ to a differentiable setting that allows for end-to-end training of correspondence matching methods with our solver. Thus, we optimize not only over the correspondence matching but also over the final pose.
* We demonstrate the effectiveness of _Q-REG_ with different baselines on several benchmarks and achieve new state-of-the-art performance across all of them.
## 2 Related Work
**Correspondence-based Registration Methods.** The 3D point cloud registration field is well-established and active. Approaches can be grouped into two main categories: _feature-based_ and _end-to-end_ registration. Feature-based methods comprise two steps: local feature extraction and pose estimation using robust estimators, like RANSAC [15]. Traditional methods use hand-crafted features [22, 29, 30, 32, 33] to capture local geometry and, while having good generalization abilities across scenes, they often lack robustness against occlusions. Learned local features have taken over in the past few years, and, instead of using heuristics, they rely on deep models and metric learning [20, 31] to extract dataset-specific discriminative local descriptors. These learned descriptors can be divided into patch-based and fully convolutional methods depending on the input. Patch-based ones [3, 17] treat each point independently, while fully convolutional methods [6, 12, 21] extract all local descriptors simultaneously for the whole scene in a single forward pass.
**Direct Registration Methods.** Recently, end-to-end methods have appeared replacing RANSAC with a differentiable optimization algorithm that targets to incorporate direct supervision from ground truth poses. The majority of these methods [35, 36, 41] use a weighted Kabsch solver [5] for pose estimation. Deep Closest Point (DCP) [35] iteratively computes soft correspondences based on features extracted by a dynamic graph convolutional neural network [37], which are then used in the Kabsch algorithm to estimate the transformation parameters. To handle partially overlapping point clouds, methods relax the one-to-one correspondence constraint with keypoint detection [36] or optimal transport [14, 41]. Another line of work replaces local with global feature vectors that are used to regress the pose. PointNetLK [4] registers point clouds by minimizing the distance of their latent vectors, in an iterative fashion that resembles the Lucas-Kanade algorithm [25]. In [39], an approach is proposed for rejecting non-overlapping regions via masking the global feature vector. However, due to the weak feature extractors, there is still a large performance gap compared to hard matching methods. These direct registration methods primarily work on synthetic shape datasets [38] and often fail in large-scale scenes [21]. _Q-REG_ uses _hard_ correspondences while still being differentiable, via introducing an additional loss component that minimizes the pose error. In addition, as demonstrated in Sec. 4, it works for both real-world indoor [44] and outdoor [16] scene large-scale point clouds and synthetic object-level datasets [38], by setting a new state-of-the-art.
**Learned Robust Estimators.** To address the fact that RANSAC is non-differentiable, other methods either modify it [9] or learn to filter outliers followed by a hypothesize-and-verify framework [7] or a weighted Kabsch optimization [13, 19, 27]. In the latter case, outliers are filtered by a dedicated network, which infers the correspondence weights to be used in the weighted Kabsch algorithm. Similarly, we employ the correspondence confidence predicted by a feature extraction network (_e.g_., by [21, 28, 42]) as weights in the pose-induced loss.
## 3 Point Cloud Registration with Quadrics
We first describe the definition of the point cloud registration problem followed by ways of extracting local surface patches that can be exploited for point cloud registration.
**Problem Definition.** Suppose that we are given two 3D point clouds \(\mathcal{P}=\{\mathbf{p}_{i}\in\mathbb{R}^{3}\mid i=1,...,N\}\) and \(\mathcal{Q}=\{\mathbf{q}_{i}\in\mathbb{R}^{3}\mid i=1,...,M\}\), and a set of 3D-3D point correspondences \(\mathcal{C}=\{(p_{i},q_{i})\mid p_{i}\in\mathcal{P},q_{i}\in\mathcal{Q},\;i\in [1,K]\}\) extracted, _e.g_., by the state-of-the-art matchers [21, 28, 42]. The objective is to estimate the rigid transformation \(\mathbf{T}=\{\mathbf{R},\mathbf{t}\}\) that aligns the point clouds as follows:
\[\min_{\mathbf{R},\mathbf{t}}\sum\nolimits_{(\mathbf{p}_{x}^{*},\mathbf{q}_{y}^{* })\in\mathcal{C}^{*}}\|\mathbf{R}\mathbf{p}_{x}^{*}+\mathbf{t}-\mathbf{q}_{y}^ {*}\|_{2}^{2}, \tag{1}\]
where \(\mathbf{R}\in\text{SO}(3)\) is a 3D rotation and \(\mathbf{t}\in\mathbb{R}^{3}\) is a translation vector, and \(\mathcal{C}^{*}\) is the set of ground truth correspondences between \(\mathcal{P}\) and \(\mathcal{Q}\). In practice, we use putative correspondences instead of ground truth matches, and the set of correspondences often contains a large number of incorrect matches, _i.e_., outliers. Therefore, the objective is formulated as follows:
\[\min_{\mathbf{R},\mathbf{t}}\sum\nolimits_{(\mathbf{p}_{x},\mathbf{q}_{y}) \in\mathcal{C}}\rho(\|\mathbf{R}\mathbf{p}_{x}+\mathbf{t}-\mathbf{q}_{y}\|_{2 }^{2}), \tag{2}\]
where \(\rho:\mathbb{R}\rightarrow\mathbb{R}\) is a robust loss, _e.g_., the Huber loss. The problem is solved by a RANSAC-like [15] hypothesize-and-verify framework combined with the Kabsch-Umeyama algorithm [5]. We argue in the next sections that, when employing higher-order geometric information, RANSAC can be replaced by an exhaustive search, improving both the performance and run-time. Figure 2 illustrates the developed approach, called _Q-REG_.
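To make the robust objective in Eq. (2) concrete, the following NumPy sketch evaluates it with a Huber loss as the robust function \(\rho\); the function names and the threshold `delta` are illustrative choices, not taken from the paper.

```python
import numpy as np

def huber(r, delta=1.0):
    # Huber-style robust loss rho: quadratic for small inputs, linear beyond delta.
    r = np.asarray(r)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

def robust_cost(R, t, P, Q, delta=1.0):
    # Eq. (2): sum of rho(||R p_x + t - q_y||^2) over the putative correspondences.
    # P, Q: (K, 3) arrays of matched points; R: (3, 3) rotation; t: (3,) translation.
    sq_res = np.sum((P @ R.T + t - Q) ** 2, axis=1)
    return float(huber(sq_res, delta).sum())
```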
### Local Surface Patches
The main goal in this section is to determine a pair of local coordinate systems \((\mathbf{R}_{\mathbf{p}},\mathbf{R}_{\mathbf{q}})\) for each correspondence \((\mathbf{p},\mathbf{q})\in\mathcal{C}\), where \(\mathbf{R}_{\mathbf{p}},\mathbf{R}_{\mathbf{q}}\in\text{SO}(3)\). These coordinate systems will then be used to determine the rotation \(\mathbf{R}\) between the point clouds as \(\mathbf{R}=\mathbf{R}_{\mathbf{q}}\mathbf{R}_{\mathbf{p}}^{\text{T}}\). We will describe the method for calculating \(\mathbf{R}_{\mathbf{p}}\), which is the same for \(\mathbf{R}_{\mathbf{q}}\). Note that determining translation \(\mathbf{t}\) is straightforward as \(\mathbf{t}=\mathbf{q}-\mathbf{p}\).
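A minimal sketch of this construction, assuming orthonormal frames `R_p` and `R_q` given as 3×3 matrices with axes as columns; in the sketch we take \(\mathbf{t}=\mathbf{q}-\mathbf{R}\mathbf{p}\) as one consistent convention, chosen so that the resulting pose maps \(\mathbf{p}\) exactly onto \(\mathbf{q}\) under the alignment objective of Eq. (1).

```python
import numpy as np

def pose_from_frames(p, q, R_p, R_q):
    # Rotation between the clouds from the two matched local coordinate systems.
    R = R_q @ R_p.T
    # Translation chosen so that R p + t = q for this correspondence.
    t = q - R @ p
    return R, t
```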
Suppose that we are given a point \(\mathbf{p}\in\mathcal{P}\) and its \(k\)-nearest-neighbors \(\mathcal{N}\subseteq\mathcal{P}\) such that there exists a correspondence \((\mathbf{p},\mathbf{q})\in\mathcal{C}\), \(k\in\mathbb{N}^{+}\). One possible solution is to fit a general quadratic surface to the given point and the points in \(\mathcal{N}\) and find the principal directions via the first and second-order derivatives at point \(\mathbf{p}\). These directions can give us a local coordinate system that is determined by the translation and rotation of the local surface in the canonical coordinate system. Even though this algorithm is widely used in practice, it can suffer from degenerate cases and slow computation time. To address these limitations, we develop the following approach.
The approach which we adopt in this paper is based on fitting a local quadric, _e.g_. ellipsoid, to the point \(\mathbf{p}\) and the points in its neighborhood \(\mathcal{N}\). The general constraint that a 3D quadric surface imposes on a 3D homogeneous point \(\hat{\mathbf{p}}^{\text{T}}=(x,y,z,1)\in\mathcal{N}\) lying on the surface is
\[\hat{\mathbf{p}}^{\text{T}}\mathbf{Q}\hat{\mathbf{p}}=0, \tag{3}\]
where \(\mathbf{Q}\) is the quadric parameters in matrix form [26] as:
\[\mathbf{Q}=\begin{pmatrix}A&D&E&G\\ D&B&F&H\\ E&F&C&I\\ G&H&I&J\end{pmatrix}. \tag{4}\]
We can rewrite constraint (3) into the form \(\mathbf{k}^{\text{T}}\mathbf{w}=d\), where
\[\mathbf{k}^{\text{T}} = (x^{2}+y^{2}-2z^{2},x^{2}+z^{2}-2y^{2},2xy,\] \[2xz,2yz,2x,2y,2z,1),\] \[\mathbf{w}^{\text{T}} = (A^{\prime},B^{\prime},D,E,F,G,H,I,J),\] \[d = x^{2}+y^{2}+z^{2},\] \[A^{\prime} = \frac{2A+B}{3}+1,\] \[B^{\prime} = \frac{A-B}{3}.\]
By imposing the constraints on all points, we have:
\[\sum_{i=1}^{|\mathcal{N}|}\mathbf{k}_{i}\mathbf{k}_{i}^{\text{T}}\mathbf{w}= \sum_{i=1}^{|\mathcal{N}|}\mathbf{k}_{i}d_{i}. \tag{5}\]
Figure 2: **Overview of _Q-REG_.** During inference, given (a) an input pair of partially overlapping point clouds and (b) the output of a correspondence matcher, we (c) perform quadric fitting for each estimated correspondence from which (d) we estimate the rigid pose and (e) compute the inliers given this pose. We iterate over all estimated correspondences, and choose the estimated pose that yields the most inliers. We further improve the result with (f) the local optimization and output the final estimated pose. During training, we back-propagate the gradients to the correspondence matcher and, in addition to its standard loss formulation, we minimize the proposed loss (\(L_{\text{pose}}\)) based on the single-correspondence pose estimation. _(Best viewed on screen.)_
\(|\mathcal{N}|\) is the number of neighbors to which the quadric is fitted (set to 50 in our experiments); \(d_{i}\) is the squared distance of the \(i\)-th neighbor from the origin. By solving the above linear equation, we get the coefficients of the quadric surface \(\mathbf{Q}\).
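A possible NumPy transcription of this least-squares fit is sketched below. Recovering \(A\), \(B\), \(C\) from \(A^{\prime}\), \(B^{\prime}\) follows by inverting the substitution above (with \(C=-2A^{\prime}+B^{\prime}-1\) induced by the \(z^{2}\) coefficient); the handling of the offset \(J\) via Eq. (6) is omitted for brevity.

```python
import numpy as np

def fit_quadric(nbrs):
    """Least-squares solution of Eq. (5) for the 9 unknowns w.

    nbrs: (n, 3) array of neighborhood points (n = |N|, e.g., 50)."""
    x, y, z = nbrs[:, 0], nbrs[:, 1], nbrs[:, 2]
    K = np.stack([x**2 + y**2 - 2*z**2,   # rows are the vectors k_i^T
                  x**2 + z**2 - 2*y**2,
                  2*x*y, 2*x*z, 2*y*z,
                  2*x, 2*y, 2*z,
                  np.ones_like(x)], axis=1)
    d = x**2 + y**2 + z**2                # d_i: squared distance from the origin
    w, *_ = np.linalg.lstsq(K, d, rcond=None)
    Ap, Bp, D, E, F, G, H, I, J = w
    # Invert A' = (2A+B)/3 + 1 and B' = (A-B)/3; C follows from the z^2 coefficient.
    A = Ap + Bp - 1.0
    B = Ap - 2.0 * Bp - 1.0
    C = -2.0 * Ap + Bp - 1.0
    return A, B, C, D, E, F, G, H, I, J
```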
As we are interested in finding \(\mathbf{Q}\) such that the observed point \(\mathbf{p}\) lies on its surface, we express coefficient \(J\) (_i.e._, the offset from the origin) in Eq. 4 with formula
\[\mathbf{p}^{\mathsf{T}}\mathbf{Q}\mathbf{p}=0. \tag{6}\]
Thus, \(J\) will not be estimated from the neighborhood but it is preset to ensure that the quadric lies on point \(\mathbf{p}\).
In order to find a local coordinate system, we calculate the new coefficient matrix \(\mathbf{P}\) as follows:
\[\mathbf{P}=\frac{1}{J}\begin{pmatrix}A&D&E\\ D&B&F\\ E&F&C\end{pmatrix}. \tag{7}\]
Matrix \(\mathbf{P}\) can be decomposed by Eigen-decomposition into \(\mathbf{P}=\mathbf{V}\boldsymbol{\Sigma}\mathbf{V}^{\mathsf{T}}\), where \(\mathbf{V}=(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})\), projecting the fitted points to canonical coordinates, and \(\boldsymbol{\Sigma}=\text{diag}(\lambda_{1},\lambda_{2},\lambda_{3})\) comprises the eigenvalues.
Matrix \(\mathbf{V}\) contains the three main axes that map the quadric, fitted to point \(\mathbf{p}\) and its local neighborhood \(\mathcal{N}\), to canonical form. It is easy to see that, due to its local nature, the local surface is invariant to rigid translation and rotation of the point cloud. Thus, it is a repeatable feature under rotations and translations of the underlying 3D point cloud. \(\boldsymbol{\Sigma}\) contains the three eigenvalues, which are proportional to the reciprocals of the squared lengths \(l_{1}\), \(l_{2}\), \(l_{3}\) of the three axes.
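In code, the frame extraction of Eq. (7) reduces to a single symmetric eigen-decomposition; the small epsilon guarding the square root is an implementation detail assumed here, not taken from the paper.

```python
import numpy as np

def local_frame(A, B, C, D, E, F, J):
    # Eq. (7): quadratic part of the quadric normalized by the offset J.
    P = np.array([[A, D, E],
                  [D, B, F],
                  [E, F, C]]) / J
    lam, V = np.linalg.eigh(P)            # P = V Sigma V^T with orthonormal V
    # Eigenvalues are proportional to 1/l_i^2, so axis lengths follow up to scale.
    lengths = 1.0 / np.sqrt(np.abs(lam) + 1e-12)
    return V, lengths
```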
### Rigid Transformation from Surface Matches
Suppose that we are given sets of local coordinate systems \(\mathcal{V}^{\mathcal{P}}\) and \(\mathcal{V}^{\mathcal{Q}}\) associated with points on the found 3D-3D point correspondences, estimated as explained in the previous section. Given correspondence \((\mathbf{p},\mathbf{q})\in\mathcal{C}\), we know the local coordinate systems \(\mathbf{V}_{\mathbf{p}}^{\mathcal{P}}\in\mathcal{V}^{\mathcal{P}}\) and \(\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\in\mathcal{V}^{\mathcal{Q}}\) at, respectively, points \(\mathbf{p}\) and \(\mathbf{q}\). Due to the local surfaces being translation and rotation invariant, the coordinate systems must preserve the rigid transformation applied to the entire point cloud. Thus, \(\mathbf{R}=\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\mathbf{P}(\mathbf{V}_{ \mathbf{p}}^{\mathcal{P}})^{\mathsf{T}}\in\text{SO}(3)\) is the rotation between the point clouds, where \(\mathbf{P}\) is an unknown permutation matrix assigning the axes in the first coordinate system to the axes in the second one.
There are three cases that we have to account for. Ideally, the lengths of the three axes \(\mathbf{L}^{a}=(l_{1}^{a},l_{2}^{a},l_{3}^{a})^{\mathsf{T}}\) have a distinct ordering such that \(l_{1}^{a}>l_{2}^{a}>l_{3}^{a}\), \(a\in\{\mathcal{P},\mathcal{Q}\}\). In this case, the permutation matrix can be determined such that it assigns the longest axis in \(\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\) to the longest one in \(\mathbf{V}_{\mathbf{p}}^{\mathcal{P}}\), and so on. This procedure builds on the assumption that there is no or negligible anisotropic scaling in the point clouds and thus, the relative axis lengths remain unchanged. Also, having this assignment allows us to do the matching in a scale-invariant manner while enabling us to calculate a uniform scaling of the point clouds - or to early reject incorrect matches that imply unrealistic scaling. In this case, the problem is solved from a single correspondence.
The second case is when two axes have the same lengths, _e.g._, \(l_{1}^{a}\approx l_{2}^{a}\), and \(l_{3}^{a}\) is either shorter or longer than them. In this scenario, only \(l_{3}^{a}\) can be matched between the point clouds. This case is equivalent to having a corresponding oriented point pair. It gives us an additional constraint for estimating the rotation matrix. However, the rotation around axis \(l_{3}^{a}\) is unknown and has to be estimated from another point correspondence. While this is a useful solution to reduce the required number of points from three to two, it does not allow solving from a single correspondence.
In the third case, when \(l_{1}^{a}\approx l_{2}^{a}\approx l_{3}^{a}\), we basically are given a pair of corresponding spheres that provide no extra constraints on the unknown rotation.
In the proposed algorithm, we keep only those correspondences from \(\mathcal{C}\) where the local surface patches are of the first type, _i.e._, they lead to enough constraints to estimate the rigid transformation from a single correspondence. Specifically, we keep only those correspondences, where \(l_{1}^{a}\neq l_{2}^{a}\neq l_{3}^{a}\) with \(10^{-3}\) tolerance. Next, we will discuss how this approach can be used for training 3D-3D correspondence matching algorithms with robust estimation in an end-to-end manner.
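Summarizing the case analysis above in code, a sketch of the first-case axis matching with the stated \(10^{-3}\) tolerance follows; note that eigenvectors carry a sign ambiguity, which a full implementation must resolve more carefully than the single determinant flip shown here.

```python
import numpy as np

def rotation_from_patches(V_p, len_p, V_q, len_q, tol=1e-3):
    """Case 1 of Sec. 3.2: all three axis lengths distinct; returns None otherwise."""
    for lens in (len_p, len_q):
        srt = np.sort(lens)
        if srt[1] - srt[0] < tol or srt[2] - srt[1] < tol:
            return None                    # degenerate patch: early reject
    # Permutation P: assign axes longest-to-longest (scale-invariant matching).
    Rp = V_p[:, np.argsort(-len_p)]
    Rq = V_q[:, np.argsort(-len_q)]
    R = Rq @ Rp.T
    if np.linalg.det(R) < 0:               # resolve one reflection from eigenvector signs
        Rq[:, 2] = -Rq[:, 2]
        R = Rq @ Rp.T
    return R
```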
### End-to-End Training
Benefiting from the rich information extracted via local surfaces (described in the previous section), the presented solver estimates the rigid pose from a single 3D quadric match. This unlocks end-to-end differentiability, where the gradients of the matcher network can be propagated through the robust estimator to a loss directly measuring the pose error per correspondence. This enables using test-time evaluation metrics to optimize the end-to-end training.
**Loss.** In order to calculate a pose-induced loss from each correspondence, we first fit quadrics to local neighborhoods. This step has to be done only once, prior to the loss calculation, as the point clouds do not change. Suppose that we are given a set of correspondences \(\mathcal{C}=\{(\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\mathcal{P}}, \mathbf{V}_{\mathbf{q}}^{\mathcal{Q}})\mid\mathbf{p}\in\mathcal{P},\ \mathbf{q}\in\mathcal{Q},\ \mathbf{V}_{\mathbf{p}}^{\mathcal{P}}\in \mathcal{V}^{\mathcal{P}},\ \mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\in\mathcal{V}^{\mathcal{Q}}\}\) equipped with their local quadrics, and a solver \(\phi:\mathcal{P}\times\mathcal{Q}\times\mathcal{V}^{\mathcal{P}}\times \mathcal{V}^{\mathcal{Q}}\rightarrow\text{SE}(3)\) as described in Sec. 3.2, which estimates the rigid transformation \(\mathbf{T}=(\mathbf{R},\mathbf{t})\in\text{SE}(3)\) from a single correspondence. Given a correspondence \((\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\mathcal{P}},\mathbf{V}_{ \mathbf{q}}^{\mathcal{Q}})\) and the pose estimated from it, \(\mathbf{T}_{\mathbf{p},\mathbf{q}}=\phi(\mathbf{p},\mathbf{q},\mathbf{V}_{ \mathbf{p}}^{\mathcal{P}},\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}})\), the error is formalized as follows:
\[\epsilon(\mathbf{T}_{\mathbf{p},\mathbf{q}})=\sqrt{\frac{1}{|\mathcal{C}|}\sum_ {(\mathbf{p}_{i},\mathbf{q}_{i},\ldots)\in\mathcal{C}}\|\mathbf{T}_{\mathbf{p}, \mathbf{q}}\mathbf{p}_{i}-\mathbf{q}_{i}\|_{2}^{2}}, \tag{8}\]
where the RMSE of the pose is calculated by transforming
the correspondences. The loss obtained by iterating through all correspondences is as follows:
\[L_{\text{pose}}=\sum_{(\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\text{P}}, \mathbf{V}_{\mathbf{q}}^{\text{Q}})\in\mathcal{C}}\left(1-\frac{\min(\epsilon( \mathbf{T}_{\mathbf{p},\mathbf{q}}),\gamma)}{\gamma}-s\right), \tag{9}\]
where \(\gamma\in\mathbb{R}\) is a threshold and \(s\) is the score of the point correspondence predicted by the matching network. The proposed \(L_{\text{pose}}\) can be combined with any of the widely used loss functions, _e.g_., registration loss. It bridges the gap between correspondence matching and registration and unlocks end-to-end training.
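A direct transcription of Eqs. (8) and (9) is sketched below; in actual training these quantities would live in an autodiff framework and be combined with the matcher's own loss, and the value of `gamma` is an assumed placeholder.

```python
import numpy as np

def pose_rmse(R, t, P, Q):
    # Eq. (8): RMSE of all correspondences transformed by the single-match pose.
    return float(np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1))))

def pose_loss(poses, scores, P, Q, gamma=0.2):
    # Eq. (9): per-correspondence pose error traded off against the match score s.
    total = 0.0
    for (R, t), s in zip(poses, scores):
        eps = pose_rmse(R, t, P, Q)
        total += 1.0 - min(eps, gamma) / gamma - s
    return total
```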
**Inference Time.** While the proposed _Q-REG_ propagates the gradients at training time, during inference, we equip it with components that ensure high accuracy but are non-differentiable. _Q-REG_ iterates through the poses calculated from all tentative correspondences, by the proposed single-match solver, in an exhaustive manner. For each match, the pose quality is calculated as the cardinality of its support, _i.e_., the number of inliers. After the best model is found, we apply local optimization similar to [23], a local re-sampling and re-fitting of inlier correspondences based on their normals (coming from the fitted quadrics) and positions.
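The inference loop then amounts to an exhaustive 1-point search, as in the sketch below; `rotation_from_patches` is the Sec. 3.2 sketch from above, the inlier threshold is an assumed parameter, and the final local-optimization refinement is elided.

```python
import numpy as np

def qreg_inference(P, Q, frames_P, frames_Q, thresh=0.1):
    """Score every single-match pose (P[i] <-> Q[i]) by its inlier count."""
    best_pose, best_inl = None, -1
    for i in range(len(P)):
        R = rotation_from_patches(*frames_P[i], *frames_Q[i])
        if R is None:                      # degenerate local patch, skip
            continue
        t = Q[i] - R @ P[i]
        res = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inl = int((res < thresh).sum())
        if inl > best_inl:
            best_pose, best_inl = (R, t), inl
    return best_pose                       # followed by local optimization in Q-REG
```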
## 4 Experiments
We evaluate _Q-REG_ with three state-of-the-art matchers (Predator [21], RegTR [42], and GeoTr [28]) on real, indoor point cloud datasets _3DMatch_[44] and _3DLoMatch_[21]. We also evaluate _Q-REG_ with Predator and GeoTr on the real, outdoor dataset _KITTI_[16] and on the synthetic, object-centric datasets _ModelNet_[38] and _ModelLoNet_[21]. We compare _Q-REG_ with other estimators that predict rigid pose from correspondences on the _3DLoMatch_ dataset. Furthermore, we evaluate the importance of different _Q-REG_ components on the best-performing matcher on _3DLoMatch_, as well as run-time during inference.
**3DMatch & 3DLoMatch.** The _3DMatch_[44] dataset contains 62 scenes in total, with 46 used for training, 8 for validation, and 8 for testing. We use the training data preprocessed by Huang et al. [21] and evaluate on both _3DMatch_ and _3DLoMatch_[21] protocols. The point cloud pairs in _3DMatch_ have more than 30% overlap, whereas those in _3DLoMatch_ have a low overlap of 10% - 30%. Following prior work [28, 42], we evaluate the following metrics: (i) Registration Recall (RR), which measures the fraction of successfully registered pairs, defined as having a correspondence RMSE below 0.2 m; (ii) Relative Rotation Error (RRE); and (iii) Relative Translation Error (RTE). Both (ii) and (iii) measure the accuracy of successful registrations. Additionally, we report the mean RRE, RTE, and RMSE. In this setting, we evaluate over all valid pairs1 instead of only those with an RMSE below 0.2 m, and we provide a simple average over all valid pairs instead of the median value of each scene followed by the average over all scenes. These metrics will show how consistently well (or not) a method performs in registering scenes.
Footnote 1: According to [44], a valid pair is a pair of non-consecutive frames.
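For reference, the rotation and translation errors reported throughout the tables can be computed as follows (standard definitions, not code from the paper):

```python
import numpy as np

def rre_deg(R_est, R_gt):
    # Relative Rotation Error: geodesic angle between estimated and GT rotations.
    c = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(c)))

def rte_cm(t_est, t_gt):
    # Relative Translation Error, reported in centimeters.
    return 100.0 * float(np.linalg.norm(t_est - t_gt))
```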
We report several learned correspondence-based algorithms on the two datasets. For [6, 12, 18], we tabulate the results as reported in their original papers. For [21, 28, 42], we evaluate them with and without the _Q-REG_ solver on all metrics. We also report methods that do not employ RANSAC [10, 13, 39] - results are taken from [42].
The results for _3DLoMatch_ and _3DMatch_ are tabulated in Tables 1 and 2 respectively. Note that, unless stated otherwise, hereafter the best values per group are in **bold** and the absolute best is **underlined**. Also, _Q-REG_ means that the solver is used only in inference and _Q-REG*_ means it is used in both end-to-end training and inference. In the latter case, we train the correspondence matching network from scratch with the addition of the pose-induced loss. We use the default training procedure and parameters specified for each particular matcher when retraining. \(50K\) refers to the number of RANSAC iterations. Last, if nothing is added next to a method, the standard formulation is used.
In all three matchers, incorporating _Q-REG_ at inference time yields an increase in RR that ranges from 1.0 to 6.2% in _3DLoMatch_ and from 0.9 to 1.6% in _3DMatch_. The range difference between the two datasets is expected, since _3DMatch_ is more saturated and the gap for improvement is small. Using _Q-REG_ for inference achieves the second-best results overall (GeoTr + Q-REG). Even in the case of RegTR, where the putative correspondence set is smaller than in the other two methods and applying RANSAC decreases performance [42], _Q-REG_ still provides a boost in all metrics. When training the best-performing matcher, GeoTr, end-to-end, we gain a further boost and achieve the best results overall in both datasets, setting a new benchmark (GeoTr + _Q-REG*_). We observe this behavior not only on the standard metrics (RR, RRE, RTE), but also on the mean RRE, RTE, and RMSE. As expected, _Q-REG_ results in smaller errors regardless of the matcher. Additional results on Inlier Ratio (IR) and Feature Matching Recall (FMR) can be found in the supplementary material.
Qualitative results are shown in Figure 3. In the first and second row, GeoTr+_Q-REG*_ achieves a good alignment of the point clouds when all other methods fail. This means that using _Q-REG_ in end-to-end training can provide additional improvements in performance by learning how to better match correspondences together with the objective of rigid pose estimation, and not in isolation as it happens in all other cases. In the third row, the standard formulation already produces well-aligned point clouds, and the addition of _Q-REG_ slightly refines this output. However, in the case of RegTR, we can see the most improvement. The standard
formulation fails to achieve a good alignment and _Q-REG_ is able to recover a good pose. This means that our method is able to identify robust correspondences and remove spurious ones. When RegTR finds a correct pose, _Q-REG_ can further optimize it, as shown in the fourth row. In the same example, although GeoTR fails to infer a good pose, both _Q-REG_ and _Q-REG*_ are able to recover it. Additional qualitative results and plots can be found in the supp. material.
**KITTI.** The _KITTI_ odometry dataset [16] contains 11 sequences of LiDAR-scanned outdoor driving scenarios. We follow [6, 12, 21, 28] and split it into train/val/test sets as follows: sequences 0-5 for training, 6-7 for validation and 8-10 for testing. As in [6, 12, 21, 28], we refine the provided ground truth poses using ICP [8] and only use point cloud pairs that are captured within 10m range of each other. Following prior work [21, 28], we evaluate the following metrics: (1) Registration Recall (RR), which is the fraction of point cloud pairs with RRE and RTE both below certain thresholds (_i.e._, RRE\(<\)5\({}^{\circ}\) and RTE\(<\)2m), (2) Relative Rotation Error (RRE), and (3) Relative Translation Error (RTE).
We report several recent algorithms on the _KITTI_ odometry dataset [16]. For [6, 12, 40], results are taken from [28]. For [21, 28], we evaluate them with and without _Q-REG_, as before. The results on the _KITTI_ dataset are in Table 3. Here as well, we observe a similar trend in the results, with _Q-REG_ boosting the performance of all matchers. Despite the saturation of methods on _KITTI_, using the _Q-REG_ solver during inference provides improvements in both RRE and RTE. Predator with _Q-REG_ achieves the best results overall (Predator + Q-REG). In addition, when _Q-REG_ is used for both inference and end-to-end training, the results of GeoTr also improve with respect to its standard formulation (GeoTr + _Q-REG*_). This indicates that _Q-REG_ behaves similarly on point clouds of lower density and different distribution. Additional qualitative results and plots are provided in the supp. material.
**ModelNet & ModelLoNet.** The _ModelNet_[38] dataset contains 12,311 3D CAD models of man-made objects from 40 categories, with 5,112 used for training, 1,202 for validation, and 1,266 for testing. We use the partial scans created by Yew et al. [41] for evaluating on _ModelNet_ and those created by Huang et al. [21] for evaluating on _ModelLoNet_. The point cloud pairs in _ModelNet_ have 73.5% overlap on average, whereas those in _ModelLoNet_ have 53.6%. Following prior work [21, 42], we evaluate the following metrics: (i) Chamfer Distance (CD) between registered point clouds; (ii) Relative Rotation Error (RRE); and (iii) Relative Translation Error (RTE). We report several recent algorithms on the two datasets. For [4, 21], we tabulate the results as reported in their original papers. For [35, 39, 41], results are taken from [42]. For [28, 42], we evaluate them with and without _Q-REG_, similarly as before.
The results for _ModelNet_[38] and _ModelLoNet_[21] are tabulated in Tables 4 and 5, respectively. Here as well, we observe a similar trend in the results, with _Q-REG_ boosting the performance of all matchers. RegTR with _Q-REG_
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & RR & RRE & RTE & \multicolumn{3}{c}{_Mean_} \\ & (\%)\(\uparrow\) & (\({}^{\circ}\))\(\downarrow\) & (cm)\(\downarrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RMSE (cm)\(\downarrow\) \\ \hline
3DSN [18] & 78.4 & 2.20 & 7.1 & - & - & - \\ FCGF [12] & 85.1 & 1.95 & 6.6 & - & - & - \\ D3Feat [6] & 81.6 & 2.16 & 6.7 & - & - & - \\ OMNet [39] & 35.9 & 4.17 & 10.5 & - & - & - \\ DGR [13] & 85.3 & 2.10 & 6.7 & - & - & - \\ PCAM [10] & **85.5** & **1.81** & **5.9** & - & - & - \\ \hline Predator [21] + 50K & 89.3 & 1.98 & 6.5 & 6.80 & 20.2 & 18.3 \\ Predator [21] + Q-REG & **90.6** & **1.74** & **5.7** & **6.78** & **20.0** & **18.1** \\ \hline RegTR [42] & 92.0 & **1.57** & **4.9** & 5.31 & 17.0 & 13.8 \\ RegTR [42] + 50K & 91.3 & 1.72 & 5.9 & 5.26 & 17.5 & 14.7 \\ RegTR [42] + Q-REG & **92.1** & **1.57** & **4.9** & **5.13** & **16.5** & **13.6** \\ \hline GeoTr [28] & 92.5 & 1.54 & **5.1** & 7.04 & 19.4 & 17.6 \\ GeoTr [28] + 50K & 92.2 & 1.66 & 5.6 & 6.85 & 18.7 & 17.1 \\ GeoTr [28] + Q-REG & 93.8 & 1.57 & 5.3 & 4.74 & 15.0 & 12.8 \\ GeoTr [28] + _Q-REG*_ & **95.2** & **1.53** & 5.3 & **3.70** & **12.5** & **10.7** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of state-of-the-art matchers on the _3DMatch_[44] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Method & RR (\%)\(\uparrow\) & RRE (\({}^{\circ}\))\(\downarrow\) & RTE (cm)\(\downarrow\) \\ \hline
3DFeat-Net [40] & 96.0 & **0.25** & 25.9 \\ FCGF [12] & 96.0 & 0.30 & 9.5 \\ D3Feat [6] & **99.8** & 0.30 & **7.2** \\ \hline Predator [21] + 50K & **99.8** & 0.27 & 6.8 \\ Predator [21] + Q-REG & **99.8** & **0.16** & **3.9** \\ \hline GeoTr [28] & **99.8** & 0.24 & 6.8 \\ GeoTr [28] + 50K & **99.8** & 0.26 & 7.5 \\ GeoTr [28] + Q-REG & **99.8** & 0.20 & 6.0 \\ GeoTr [28] + _Q-REG*_ & **99.8** & **0.18** & **5.4** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation of state-of-the-art matchers on the _KITTI_[16] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
achieves the best results overall on both datasets (RegTR + Q-REG). In addition, when _Q-REG*_ is used for both inference and end-to-end training, the results of GeoTr also improve with respect to its standard formulation.
### Comparison with Other Estimators
We compare _Q-REG_ with other estimators that predict rigid pose from correspondences on the _3DLoMatch_ dataset, using the state-of-the-art matcher GeoTr [28] as the correspondence extractor (best performing on this dataset). We evaluate the following estimators: (i) **GeoTr + WK**: weighted variant of the Kabsch-Umeyama algorithm [34]. (ii) **GeoTr + ICP**: Iterative closest point (ICP) [8] initialized with 50K RANSAC. (iii) **GeoTr + PointDSC**: PointDSC [7] with their pre-trained model from [2]. (iv) **GeoTr + SC2-PCR**: GeoTr with SC2-PCR [11]. (v) **GeoTr + Q-REG w/ PCA**: PCA instead of our quadric fitting to determine the local coordinate system. (vi) **GeoTr + Q-REG w/ PCD**: Use principal direction as explained in Sec. 3.1. (vii) **GeoTr + Q-REG**: Our quadric-fitting solver is used only in inference. (viii) **GeoTr + _Q-REG*_**: Our solver is used in both end-to-end training and inference.
The results are in Table 6 (_best results in **bold**, second best underlined_). Among all methods, _Q-REG*_
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ModelNet [38]} \\ & CD \(\downarrow\) & RRE (\({}^{\circ}\))\(\downarrow\) & RTE (cm)\(\downarrow\) \\ \hline PointNetLK [4] & 0.02350 & 29.73 & 29.7 \\ OMNet [39] & 0.00150 & 2.95 & 3.2 \\ DCP-v2 [35] & 0.01170 & 11.98 & 17.1 \\ RPM-Net [41] & **0.00085** & **1.71** & **1.8** \\ Predator [21] & 0.00089 & 1.74 & 1.9 \\ \hline RegTR [42] & 0.00078 & 1.47 & 1.4 \\ RegTR [42] + 50K & 0.00091 & 1.82 & 1.8 \\ RegTR [42] + Q-REG & **0.00074** & **1.35** & **1.3** \\ \hline GeoTr [28] & 0.00083 & 2.16 & 2.0 \\ GeoTr [28] + 50K & 0.00095 & 2.40 & 2.2 \\ GeoTr [28] + Q-REG & 0.00078 & 1.84 & 1.7 \\ GeoTr [28] + _Q-REG*_ & **0.00076** & **1.73** & **1.5** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation of state-of-the-art matchers on the _ModelNet_[38] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ModelLoNet [21]} \\ & CD \(\downarrow\) & RRE (\({}^{\circ}\))\(\downarrow\) & RTE (cm)\(\downarrow\) \\ \hline PointNetLK [4] & 0.0367 & 48.57 & 50.7 \\ OMNet [39] & 0.0074 & 6.52 & 12.9 \\ DCP-v2 [35] & 0.0268 & 6.50 & 30.0 \\ RPM-Net [41] & **0.0050** & 7.34 & **12.4** \\ Predator [21] & 0.0083 & **5.24** & 13.2 \\ \hline RegTR [42] & 0.0037 & 3.93 & 8.7 \\ RegTR [42] + 50K & 0.0039 & 4.23 & 9.2 \\ RegTR [42] + Q-REG & **0.0034** & **3.65** & **8.1** \\ \hline GeoTr [28] & 0.0050 & 4.49 & 7.6 \\ GeoTr [28] + 50K & 0.0050 & 4.27 & 8.0 \\ GeoTr [28] + Q-REG & 0.0044 & 3.87 & 7.0 \\ GeoTr [28] + _Q-REG*_ & **0.0040** & **3.73** & **6.5** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of state-of-the-art matchers on the _ModelLoNet_[21] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
Figure 3: **Qualitative Results.** We showcase registration examples of RegTR [42] and GeoTr [28] with and without _Q-REG_ for the _3DLoMatch_ (first and third rows) and _3DMatch_ (second and fourth rows) datasets. _(Best viewed on screen.)_
performs the best in the majority of the metrics. GeoTr + WK shows a large performance gap to the other methods since it utilizes soft correspondences and is not robust enough to outliers. GeoTr + ICP relies heavily on the initialization and, thus, often fails to converge to a good solution. GeoTr + PointDSC and GeoTr + SC2-PCR have results comparable to LGR but are noticeably worse than _Q-REG_. GeoTr + Q-REG w/ PCA leads to less accurate results than quadric fitting, which demonstrates the superiority of _Q-REG_'s use of quadrics. GeoTr + Q-REG w/ PCD shows performance comparable to our _Q-REG_. This is expected since both methods have the same geometric meaning. However, there is a substantial difference in runtime (secs): 0.166 for quadric fitting versus 0.246 for PCD. Similar comparison results on the _3DMatch_ dataset are in the supplementary material.
### Ablation Studies
We evaluate the contribution of each component in the _Q-REG_ solver to the best-performing matcher on the _3DLoMatch_ dataset, the state-of-the-art GeoTr [28]. We evaluate the following self-baselines (i, ii and iii are inference only): (i) **GeoTr + Q**: Our quadric-fitting 1-point solver. (ii) **GeoTr + QL**: We extend the quadric fitting with local optimization (LO) as discussed in Sec. 3.3. (iii) **GeoTr + RL**: We replace quadric fitting with RANSAC 50K in (ii). (iv) **GeoTr + QT**: Our quadric-fitting solver is used in end-to-end training - during inference we do not employ LO; and (v) **GeoTr + QTL**: Our quadric-fitting 1-point solver is used in end-to-end training followed by inference with LO.
The results are reported in Table 7 (_best results in **bold**, second best are underlined_). _Q-REG*_ performs the best in the majority of the metrics. Specifically, we observe a substantial increase in RR of 4.2%. When our solver is used only during inference, we still see a 3.0% increase in RR. Even though the performance of our _Q-REG_ decreases by 1.6% in RR without LO, it provides a good initial pose and the performance gap can be easily bridged with a refinement step. For RANSAC 50K, the increase in RR is only 0.4% after applying local optimization, indicating that many of the initially predicted poses are unreasonable and cannot be improved with further refinement. We can also observe a noticeable difference in performance between GeoTr + RL and GeoTr + QL, which further highlights the superiority of our quadric fitting approach. When considering the mean RRE, RTE, and RMSE, we observe that our self-baselines provide consistently more robust results over all valid pairs than the standard GeoTr (the top two rows in Table 7). The ablation on the _3DMatch_ dataset is in the supplementary.
**Run-time.** We compute the average run-time in seconds per component in Table 8 (evaluated with GeoTr on _3DLoMatch_). Compared to RANSAC 50K, which yields at least 2% lower RR, _Q-REG_ provides better results while being an order of magnitude faster. On average, GeoTr's correspondence matcher runs for \(0.134\)s. The overall inference time of each method can be obtained by adding it to Table 8. These experiments were run on 8 Intel Xeon Gold 6150 CPUs and an NVIDIA GeForce RTX 3090 GPU.
## 5 Conclusion
We present a novel solution for point cloud registration, _Q-REG_, that utilizes rich geometric information to estimate the rigid pose from a single correspondence. This formalizes the robust estimation as an exhaustive search, allowing us to iterate through all single-match hypotheses and select the best rigid pose among them. It performs quick outlier rejection by filtering degenerate solutions and assumption-inconsistent motions (_e.g_., related to scale). _Q-REG_ is agnostic to the matching method and consistently improves performance on all reported datasets, setting a new state-of-the-art on these benchmarks.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & RR & RRE & RTE & \multicolumn{3}{c}{_Mean_} \\ & (\%)\(\uparrow\) & (\({}^{\circ}\))\(\downarrow\) & (cm)\(\downarrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RMSE \(\downarrow\) \\ \hline GeoTr + LGR & 74.1 & 2.99 & 7.3 & 23.15 & 88.3 & 57.8 \\ GeoTr + 50K & 75.0 & 2.54 & 7.7 & 22.69 & 57.8 & 57.3 \\ \hline i) GeoTr + WK [34] & 58.6 & 3.01 & 8.8 & 33.74 & 84.7 & 76.1 \\ ii) GeoTr + ICP [8] & 75.1 & 2.43 & 8.1 & 22.68 & 66.5 & 66.5 \\ iii) GeoTr + PointDSC [7] & 74.0 & 2.55 & 7.2 & 23.95 & 61.6 & 60.7 \\ iv) GeoTr + SC2-PCR [11] & 74.2 & 2.58 & 7.5 & 22.90 & 59.1 & 58.4 \\ v) GeoTr + Q-REG w/ PCA & 75.1 & 2.44 & 7.6 & 22.66 & 57.4 & 56.9 \\ vi) GeoTr + Q-REG w/ PCD & 76.5 & 2.47 & 7.5 & 16.81 & 46.4 & 44.6 \\ vii) GeoTr + Q-REG & 77.1 & 2.44 & 7.7 & 16.70 & **44.6** & 44.6 \\ viii) GeoTr + _Q-REG*_ & **78.3** & **2.38** & **7.2** & **15.65** & 46.3 & **42.5** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results on the _3DLoMatch_[21] dataset of GeoTr [28] with different estimators. The best values are **bold** and the 2nd best are underlined.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & RR & RRE & RTE & \multicolumn{3}{c}{_Mean_} \\ & (\%)\(\uparrow\) & (\({}^{\circ}\))\(\downarrow\) & (cm)\(\downarrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RMSE \(\downarrow\) \\ \hline GeoTr + LGR & 74.1 & 2.99 & 7.3 & 23.15 & 88.3 & 57.8 \\ GeoTr + 50K & 75.0 & 2.54 & 7.7 & 22.69 & 57.8 & 57.3 \\ \hline i) GeoTr + Q & 75.5 & 2.47 & 7.6 & 22.38 & 57.6 & 57.3 \\ ii) GeoTr + QL (Q-REG) & 77.1 & 2.44 & 7.7 & 16.20 & **46.0** & 44.6 \\ iii) GeoTr + RL & 75.4 & 2.46 & 7.6 & 22.86 & 58.5 & 68.0 \\ iv) GeoTr + QT & 72.2 & **2.37** & 7.5 & 17.32 & 50.3 & 47.4 \\ v) GeoTr + QTL (_Q-REG*_) & **78.3** & 2.38 & **7.2** & **15.65** & 46.3 & **42.5** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation results on the _3DLoMatch_[21] dataset of GeoTr [28] with different aspects of the _Q-REG_ solver. The best values are **bold** and the 2nd best are underlined.
\begin{table}
\begin{tabular}{l c c|c c} \hline \hline LGR & +1K & +50K & +Q & +QL (Q-REG) \\ \hline
0.016 & 0.053 & 1.809 & 0.085 & 0.166 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Run-time evaluation in seconds during inference using GeoTr [28] on the _3DLoMatch_ dataset. Times shown for LGR, RANSAC running 1K and 50K iterations, Quadric solvers (Sec. 3.2), and with the entire _Q-REG_ algorithm. | ```
Point cloud registration has recently seen success with several learning-based methods that focus on correspondence matching and optimize only for this objective. After the learning step of correspondence matching, these methods evaluate the estimated rigid transformation with a RANSAC-style framework. While this is an indispensable component, it prevents fully end-to-end training and leaves the objective of minimizing the pose error unaddressed. We propose a novel solution, Q-REG, which utilizes rich geometric information to estimate the rigid pose from a single correspondence. By formalizing the robust estimation as an exhaustive search, Q-REG enables end-to-end training that optimizes over both objectives of correspondence matching and rigid pose estimation. Through experiments, we demonstrate that Q-REG is agnostic to the correspondence matching method
```
2302.14546 | Dynamic Logic of Communicating Hybrid Programs | This paper presents a dynamic logic $d\mathcal{L}_\text{CHP}$ for
compositional deductive verification of communicating hybrid programs (CHPs).
CHPs go beyond the traditional mixed discrete and continuous dynamics of hybrid
systems by adding CSP-style operators for communication and parallelism. A
compositional proof calculus is presented that modularly verifies CHPs
including their parallel compositions from proofs of their subprograms by
assumption-commitment reasoning in dynamic logic. Unlike Hoare-style
assumption-commitments, $d\mathcal{L}_\text{CHP}$ supports intuitive symbolic
execution via explicit recorder variables for communication primitives. Since
$d\mathcal{L}_\text{CHP}$ is a conservative extension of differential dynamic
logic $d\mathcal{L}$, it can be used soundly along with the $d\mathcal{L}$
proof calculus and $d\mathcal{L}$'s complete axiomatization for differential
equation invariants. | Marvin Brieger, Stefan Mitsch, André Platzer | 2023-02-28T13:10:23 | http://arxiv.org/abs/2302.14546v2 | # Dynamic Logic of Communicating Hybrid Programs
###### Abstract
This paper presents a dynamic logic \(d\mathcal{L}_{\mathrm{CHP}}\) for compositional deductive verification of communicating hybrid programs (CHPs). CHPs go beyond the traditional mixed discrete and continuous dynamics of hybrid systems by adding CSP-style operators for communication and parallelism. A compositional proof calculus is presented that modularly verifies CHPs including their parallel compositions from proofs of their subprograms by assumption-commitment reasoning in dynamic logic. Unlike Hoare-style assumption-commitments, \(d\mathcal{L}_{\mathrm{CHP}}\) supports intuitive symbolic execution via explicit recorder variables for communication primitives. Since \(d\mathcal{L}_{\mathrm{CHP}}\) is a conservative extension of differential dynamic logic \(d\mathcal{L}\), it can be used soundly along with the \(d\mathcal{L}\) proof calculus and \(d\mathcal{L}\)'s complete axiomatization for differential equation invariants.
Keywords:Compositional verification Hybrid systems Parallel programs Differential dynamic logic Assumption-commitment reasoning CSP
## 1 Introduction
Their prevalence in safety-critical applications and ample technical subtleties make both cyber-physical systems (CPS) verification [1, 2, 30, 12] and parallel program verification [3, 20, 27, 38] important challenges. CPS verification complexity stems from subtle interactions of their discrete control decisions and differential equations. Parallel program verification complexity comes from intricate interactions caused by synchronization via state or communication interdependencies between parallel components. But their combination becomes intrinsically complicated because parallel CPS are _always_ interdependent as they _always_ synchronize implicitly by sharing the same global time. Moreover, many real-world CPS have heterogeneous subsystems whose controllers do not operate in lock-step, and their communication is potentially unreliable (see Fig. 1). Unlike hybrid systems model checking [2, 23, 4, 8]--which needs to compute products of communicating parallel automata of significant size even when reducing the size of the subautomata--deductive approaches can be truly compositional. Existing deductive verification approaches, however, fail to properly tackle the above
challenges, since they are restricted to homogeneous subsystems [32], specific component shapes and interaction patterns [19; 22; 26], do not support symbolic execution [10; 21; 41], or are non-compositional [10; 21; 41], where even explicit attempts on compositionality [10; 41] turn out to be non-compositional again.
Neither compositionality to tame complexity by reasoning separately about discrete, continuous, and communication behavior, nor generality in expressing models, nor symbolic execution to boost the feasibility of deductive reasoning are dispensable for a practical approach. Thus, to tackle all three of these challenges, this paper presents \(d\mathcal{L}_{\mathrm{CHP}}\), a _dynamic logic for communicating hybrid programs_ (_CHPs_), that extends differential dynamic logic \(d\mathcal{L}\) for hybrid programs of differential equations [30; 31; 33; 34] with CSP-style operators for parallel composition and message passing communication [14; 15] as well as assumption-commitment (ac) reasoning [25; 43]. Parallel CHPs cannot share state but can communicate. They run synchronously in global time and their communication is instantaneous.
There are two fundamental approaches to reasoning about parallelism. Parallelism can either be understood via unfolding its explicit operational semantics, or via its denotational semantics implicitly characterizing parallel behavior by matching behavior of the subprograms on their respective part of the state. Verification based on the operational semantics unrolls the exponentially many possible interleavings, as in hybrid automata [2] or hybrid CSP [10; 41]. Such approaches admit superficial mitigation of the state space explosion but always resort to product automata after reducing the size of the subautomata [4; 8; 13], or merely postpone the burden of reasoning about exponentially many trace interleavings [10; 41] since they reveal the internal structure of subprograms [41]. In contrast, verification based on the denotational semantics is compositional for discrete programs with ac-reasoning [43; 44], _if_ the semantics is compositional and aligns well with the intended reasoning. This paper generalizes ac-reasoning for the purpose of integration with a compositional hybrid systems logic [30; 31; 33; 34].
Our central contribution is a sound _compositional_ proof calculus for \(d\mathcal{L}_{\mathrm{CHP}}\). For compositional verification of parallel communication, it embeds ac-reasoning into dynamic logic via the explicit ac-modality \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\). The ac-modality expresses that for all runs of \(\alpha\) whose incoming communication meets assumption \(\mathsf{A}\) the outgoing communication fulfills commitment \(\mathsf{C}\) and that \(\psi\) holds in the final state. Formulas \(\mathsf{A}\) and \(\mathsf{C}\) specify the communication behavior of \(\alpha\), and
Figure 1: The interaction of \(\mathsf{CPS}_{1}\) and \(\mathsf{CPS}_{2}\) has (delayed) (- - - -), lossy (- - -\(\times\)), and noisy (- -) communication. Discrete change (\(\mathsf{o}\)) is independent of discrete change in parallel unless synchronization takes place. Continuous evolution (- - -) is not interrupted by parallel discrete behavior.
enable parallel composition if they mutually agree for the contracts of subprograms. Crucially, these formulas directly interface the communication history via history variables such that communication can remain implicit in the parallel composition axiom just as in the underlying denotational semantics. Since we prove that \(d\mathcal{L}_{\mathrm{CHP}}\) is a conservative extension of \(d\mathcal{L}\), it inherits \(d\mathcal{L}\)'s complete axiomatization of differential equation invariants [35].
Unlike approaches built on Hoare-logics [10, 21, 41], our calculus supports statement-by-statement symbolic execution instead of executing them backward. Since executing communication primitives extends the former history, the new history state requires a fresh name. As a consequence, it is unsound to adopt Hoare-style ac-reasoning [43] verbatim with a distinguished variable for the communication history. Instead, \(d\mathcal{L}_{\mathrm{CHP}}\) reconciles ac-reasoning and symbolic execution via recorder variables whose evolution is maintainable, as done by our communication axiom along the way.
In summary, we provide the first truly _compositional_ approach to the verification of communicating parallel hybrid system models and prove its soundness. Our logical treatment separates the essentials of discrete, continuous, and communication reasoning into axioms that are simple and modular compared to earlier approaches [21, 41]. Even though the technical development is challenging because of a prefix-closed dynamic semantics, a subtle static semantics due to the global time and recorder variables, and the mutual dependency of formulas and programs in dynamic logic, this complexity ultimately remains under the hood. We demonstrate the flexibility of \(d\mathcal{L}_{\mathrm{CHP}}\) and its proof calculus with an example in autonomous driving considering the challenge of lossy communication, where a follower and leader car communicate to form a convoy.
## 2 Dynamic Logic of Communicating Hybrid Programs
We introduce \(d\mathcal{L}_{\mathrm{CHP}}\), a _dynamic logic_ to reason about _communicating hybrid programs_ (CHPs). CHPs combine \(d\mathcal{L}\)'s hybrid programs [30, 33, 34] with communication primitives and a parallel operator similar to CSP [14, 15]. On the logical side, \(d\mathcal{L}_{\mathrm{CHP}}\) introduces ac-reasoning [43, 25, 44] into the dynamic logic setup [11] of \(d\mathcal{L}\), allowing compositional reasoning about communication in a way that preserves symbolic execution as an intuitive reasoning principle.
### Syntax
The syntax uses channel names \(\Omega\) and variables \(V=V_{\mathbb{R}}\cup V_{\mathbb{N}}\cup V_{\mathcal{T}}\) with pairwise disjoint sets of real variables \(V_{\mathbb{R}}\), integer variables \(V_{\mathbb{N}}\), and trace variables \(V_{\mathcal{T}}\). The variable \(\mu\in V_{\mathbb{R}}\) is designated to reflect the global time. By convention \(x,y,t\in V_{\mathbb{R}}\), \(n,n_{i}\in V_{\mathbb{N}}\), \(h,h_{i}\in V_{\mathcal{T}}\), \(ch,ch_{i}\in\Omega\), and \(z,z_{i}\in V\). Notions \(FV(\cdot)\) of free and \(B\!\!V(\cdot)\) of bound variables in formulas and programs are defined as usual by simultaneous induction with the syntax (see Appendix 0.B). \(V(\cdot)=FV(\cdot)\cup B\!\!V(\cdot)\) is the set of all variables, whether read or written.
All real variables \(V_{\mathbb{R}}\) can be read _and_ written in programs but the global time \(\mu\) is _not meant_ to be written manually.4 Instead, the built-in evolution of \(\mu\) with every continuous behavior makes it represent the global flow of time. Trace variables are bound by programs when they record communication. For a program \(\alpha\), we call the remaining set \((\mathcal{BV}(\alpha)\cap V_{\mathbb{R}})\setminus\{\mu\}\) the state of \(\alpha\) and say that \(\alpha\) operates over a real state. In parallel compositions \(\alpha\parallel\beta\), the programs \(\alpha\) and \(\beta\) may communicate explicitly but do _not_ share state.
Footnote 4: Programs writing the global time \(\mu\) manually are likely to be meaningless such as \(\mu:=0\parallel\mu:=1\) that sets \(\mu\) to \(0\) and \(1\) in parallel, which fortunately has no runs.
The logical language of \(d\mathcal{L}_{\mathrm{CHP}}\) features trace algebra to reason about communication behavior following the approach of Zwiers _et al_. [42, 44]. During symbolic program execution, communication events are collected syntactically in logical trace variables that were explicitly designated to record the history. This way, the communication behavior of a program can be specified using recorder variables as interface. In analogy to a distinguished history variable, this interface is crucial to obtain a compositional proof rule for parallel composition [44, 17, 38].
Definition 1 (Terms): Real terms \(\mathrm{Trm}_{\mathbb{R}}(V,\Omega)\), integer terms \(\mathrm{Trm}_{\mathbb{N}}(V,\Omega)\), channel terms \(\mathrm{Trm}_{\Omega}(V,\Omega)\), and trace terms \(\mathrm{Trm}_{\mathcal{T}}(V,\Omega)\) are defined by the grammar below, where \(c\in\mathbb{Q}\) is a rational constant, \(\mathit{ch}\in\Omega\) a channel name, \(\theta_{1},\theta_{2}\in\mathbb{Q}[V_{\mathbb{R}}]\) are polynomials in \(V_{\mathbb{R}}\), and \(C\subseteq\Omega\) is a finite set of channel names. The set of all terms is denoted by \(\mathrm{Trm}(V,\Omega)\).
\[\mathrm{Trm}_{\mathbb{R}}(V,\Omega): \eta_{1},\eta_{2} ::=x\mid c\mid\eta_{1}+\eta_{2}\mid\eta_{1}\cdot\eta_{2}\mid \mathtt{val}(te[ie])\mid\mathtt{time}(te[ie])\] \[\mathrm{Trm}_{\mathbb{N}}(V,\Omega): ie_{1},ie_{2} ::=n\mid 0\mid 1\mid ie_{1}+ie_{2}\mid|te|\] \[\mathrm{Trm}_{\Omega}(V,\Omega): ce_{1},ce_{2} ::=\mathit{ch}\mid\mathtt{chan}(te[ie])\] \[\mathrm{Trm}_{\mathcal{T}}(V,\Omega): te_{1},te_{2} ::=h\mid\epsilon\mid\langle\mathit{ch},\theta_{1},\theta_{2}\rangle\mid te _{1}\cdot te_{2}\mid te\downarrow C\]
Real terms \(\mathrm{Trm}_{\mathbb{R}}(V,\Omega)\) are formed by arithmetic operators, variables \(x\) (including \(\mu\)), and rational constants \(c\). Additionally, \(\mathtt{val}(te[ie])\) accesses the value and \(\mathtt{time}(te[ie])\) the timestamp of the \(ie\)-th communication in trace \(te\). In CHPs only polynomials \(\theta\in\mathbb{Q}[V_{\mathbb{R}}]\subset\mathrm{Trm}_{\mathbb{R}}(V,\Omega)\) in \(V_{\mathbb{R}}\) over rational coefficients occur, i.e., without trace terms, since CHPs operate over a real state. By convention, \(\theta,\theta_{i}\) denote terms from \(\mathbb{Q}[V_{\mathbb{R}}]\).
For integers, we use Presburger arithmetic (no multiplication) since it is decidable [37] and sufficient for our purposes.5 The integer term \(|te|\) denotes the length of trace \(te\). In analogy to \(\mathtt{val}(te[ie])\), the term \(\mathtt{chan}(te[ie])\) accesses the channel name of the \(ie\)-th communication in trace \(te\). The trace term \(\epsilon\) represents empty communication, \(te_{1}\cdot te_{2}\) is the concatenation of trace terms \(te_{1}\) and \(te_{2}\), and \(te\downarrow C\) the projection of \(te\) onto the set of channel names \(C\subseteq\Omega\). The tuple \(\langle\mathrm{ch},\theta_{1},\theta_{2}\rangle\) represents communication of \(\theta_{1}\) along channel ch at time \(\theta_{2}\), where \(\theta_{1},\theta_{2}\in\mathbb{Q}[V_{\mathbb{R}}]\) since communication is between programs over a real state.
Footnote 5: Presburger arithmetic is the subset of \(\mathrm{Trm}_{\mathbb{N}}(V,\Omega)\) without length computations \(|te|\).
A trace variable \(h\) refers to a sequence of communication events. By symbolic execution, proofs collect communication items in trace variables designated
to record the history. Then, communication behavior is specified against these recorder variables using projections onto the channels of interest (see Example 1 below). This allows specifications to hide the internal structure of programs leading to compositional reasoning in the presence of communication [17, 38, 44].
NotationWe write \(\mathtt{val}(te)\) to abbreviate \(\mathtt{val}(te[|te|-1])\), i.e., access to the value of the last item on trace \(te\). Likewise, we use \(\mathtt{time}(te)\) and \(\mathtt{chan}(te)\).
Example 1: The formula \(|h\!\downarrow\!\mathrm{ch}|>0\rightarrow\mathtt{val}(h\!\downarrow\! \mathrm{ch})>0\) expresses that the last value sent along channel \(\mathrm{ch}\) recorded by \(h\) is positive. The precondition \(|h\!\downarrow\!\mathrm{ch}|>0\) ensures that the value is accessed only for a non-empty history.
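The following sketch makes the semantics of the example concrete over a recorded communication history, with events represented as \(\langle\mathrm{channel},\mathrm{value},\mathrm{time}\rangle\) triples; the Python representation is purely illustrative.

```python
from typing import NamedTuple

class Event(NamedTuple):
    chan: str    # channel name
    val: float   # communicated value
    time: float  # global time stamp

def proj(trace, channels):
    # Trace projection te|C: keep only the events on the given channels.
    return [e for e in trace if e.chan in channels]

def example1_holds(trace):
    # |h|ch| > 0 -> val(h|ch) > 0: the last value sent along ch is positive,
    # guarded so that val is only accessed on a non-empty projection.
    h_ch = proj(trace, {"ch"})
    return len(h_ch) == 0 or h_ch[-1].val > 0
```

For instance, `example1_holds([Event("ch", 2.0, 0.5)])` evaluates to `True`, while an empty history satisfies the formula vacuously.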
Definition 2 (Communicating hybrid programs): The set \(\mathrm{CHP}(V,\Omega)\) of communicating hybrid programs is defined by the grammar below, where \(x\in V_{\mathbb{R}}\) for \(x,x^{\prime}\), and \(\theta\in\mathbb{Q}[V_{\mathbb{R}}]\) is a polynomial in \(V_{\mathbb{R}}\), and \(\chi\in\mathrm{FOL}_{\mathbb{R}}(V_{\mathbb{R}})\) is a formula of first-order real-arithmetic. In the parallel composition \(\alpha\parallel\beta\), the constituents must not share state, i.e., \(V(\alpha)\cap B\!V(\beta)=V(\beta)\cap B\!V(\alpha)\subseteq\{\mu\}\cup V_{ \mathcal{T}}\).
\[\alpha,\beta::= x:=\theta\mid x:=*\mid\{x^{\prime}=\theta\mathbin{\&}\chi\}\mid? \chi\mid\alpha;\beta\mid\alpha\cup\beta\mid\alpha^{*}\mid\quad\text{(standard $d\mathcal{L}$)}\] \[\text{ch}(h)!\theta\mid\text{ch}(h)?x\mid\alpha\parallel\beta\] (CSP extension)
The statement \(x:=\theta\) instantly changes \(x\) to \(\theta\) and nondeterministic assignment \(x:=*\) sets \(x\) to an arbitrary value. Assignment to the global time \(\mu\) is only meant to be used by axioms. As in \(d\mathcal{L}\)[30], continuous evolution \(\{x^{\prime}=\theta\mathbin{\&}\chi\}\) follows the differential equation \(x^{\prime}=\theta\) for a nondeterministically chosen duration but only as long as the domain constraint \(\chi\) is fulfilled.
In \(d\mathcal{L}_{\mathrm{CHP}}\), the global time \(\mu\) always evolves during a continuous evolution according to \(\mu^{\prime}=1\) even if it does not occur syntactically. Hence, an evolution \(\mu^{\prime}=\theta\) is considered ill-formed if \(\theta\not\equiv 1\) just like an ODE \(x^{\prime}=1,x^{\prime}=2\) is considered ill-formed. Since programs operate over a real state, terms \(\theta\in\mathbb{Q}[V_{\mathbb{R}}]\) are polynomials in \(V_{\mathbb{R}}\) and \(\chi\in\mathrm{FOL}_{\mathbb{R}}(V_{\mathbb{R}})\) is a formula of first-order real-arithmetic.
The test \(?\chi\) passes if formula \(\chi\) is satisfied. Otherwise, execution is aborted. The sequential composition \(\alpha;\beta\) executes \(\beta\) after \(\alpha\). The choice \(\alpha\cup\beta\) follows \(\alpha\) or \(\beta\) nondeterministically, and \(\alpha^{*}\) repeats \(\alpha\) zero or more times.
Communication and parallelism are inspired by CSP [14]. The primitive \(\mathrm{ch}(h)!\theta\) sends the value of term \(\theta\) along channel \(\mathrm{ch}\) and \(\mathrm{ch}(h)?x\) receives a value from \(\mathrm{ch}\) binding it to variable \(x\). For both statements, \(h\) is the trace variable designated to record the communication. In an ongoing symbolic execution, this variable can be renamed to refer to the variable keeping the most recent communication history. In system models, all recorder variables are meant to be the same. In this case, we also write \(\mathrm{ch}!\theta\) and \(\mathrm{ch}?x\) instead of \(\mathrm{ch}(h)!\theta\) and \(\mathrm{ch}(h)?x\).
Finally, \(\alpha\parallel\beta\) executes \(\alpha\) and \(\beta\) in parallel for equal duration, i.e., their final states agree on the value of the global time \(\mu\). If \(\mu\) is not manipulated manually, its increase equals the duration of continuous behavior. As in CSP, \(\alpha\) and \(\beta\) can perform synchronous message passing but cannot share state.6 All programs
participating in communication over a channel must agree on all communication along that channel and share the same values and recorder variables at the same time, i.e., message passing does not consume time. Since shared recorder variables agree on the communication for all subprograms, they provide the interface that allows for decomposition of parallel behavior. If the recorders are not the same as in \(\mathrm{ch}(h_{0})!\theta\parallel\mathrm{ch}(h_{1})?x\), there are no runs. The need for matching recorders in parallel composition must not be confused with renaming of the history by symbolic execution along the sequential structure of programs.
As usual in CSP [14], the syntax does not enforce unidirectional communication such that several programs may send and receive on the same channel at the same time as long as they agree on the recorder variables and values. For example, \(\mathrm{ch}(h)?x\parallel\mathrm{ch}(h)?y\) is a well-formed program. Its semantics has all runs where \(x\) and \(y\) receive the same values. Likewise, \(\mathrm{ch}(h)!\theta_{1}\parallel\ldots\parallel\mathrm{ch}(h)!\theta_{n}\) has terminating runs if the values of all \(\theta_{i}\) agree. Consequently, a receive statement \(\mathrm{ch}(h)?x\) can be replaced with the semantical equivalent \(x:=*;\mathrm{ch}(h)!x\).
Notation. As usual, \(\mathtt{if}\left(\varphi\right)\{\alpha\}\) is short for \((?\varphi;\alpha)\cup?\neg\varphi\).
Example 2: Fig. 2 models two cars in a convoy safely adjusting their speed. From time to time, the leader changes its speed \(v_{l}\) in the range \(0\) to \(V\) and notifies the follower about the change. The communication, however, is lossy (\(\mathrm{vel}!v_{l}\cup\mathsf{skip}\)). Sending position updates by updt succeeds at least every \(\epsilon\) time units. On such an update, the follower's controller \(\mathsf{dist}\) awakes. If the distance \(d\) falls below \(\epsilon V\), the follower slows down to avoid a collision before the next position update.
Regularly, the follower adopts the speed update in \(\mathsf{velo}\), but crucially refuses to do so if the last known distance was not safe (\(d>\epsilon V\)). If the leader could overwrite the follower's speed, it could cause a future collision (see Fig. 3 below) even though obeying would be perfectly fine at the moment. This is because a subsequent notification of the leader slowing down could be lost.
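As a plausibility check of Example 2 (not a rendering of the formal CHP semantics), the discrete-time Python simulation below sketches the protocol with a lossy vel channel and a reliable position update at least every \(\epsilon\) time units; all constants, probabilities, and the loop structure are our own simplifications of Fig. 2.

```python
import random

random.seed(1)
EPS, V = 1.0, 2.0            # position-update bound and speed cap (our choice)
dt = 0.01                    # simulation step, not part of the CHP model
x_f, x_l = 0.0, 5.0          # initial gap d = 5 > EPS * V
v_f, v_l = 1.0, 1.0          # initial speeds satisfy v_f <= d / EPS and v_f <= V
d = x_l - x_f                # follower's last known distance

for step in range(50_000):
    if step % int(round(EPS / dt)) == 0:   # reliable position update (updt/dist)
        d = x_l - x_f
        if d <= EPS * V:                   # too close: slow down to d / EPS
            v_f = min(v_f, d / EPS)
    if random.random() < 0.01:             # leader picks a new speed in [0, V]
        v_l = random.uniform(0.0, V)
        lost = random.random() < 0.5       # lossy vel channel: send or skip
        if not lost and d > EPS * V:       # velo: adopt only if last distance safe
            v_f = v_l
    x_f += v_f * dt
    x_l += v_l * dt
    assert x_f < x_l, "collision"
```

The refusal condition `d > EPS * V` is exactly the conservative behavior described above: between reliable position updates the follower never exceeds \(d/\epsilon\), so it cannot close the last known gap.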
Definition 3 (Formulas): The set of \(d\mathcal{L}_{\mathrm{CHP}}\) formulas \(\mathrm{Fml}(V,\Omega)\) is defined by the following grammar, where \(z\in V\), terms \(e_{1},e_{2}\in\mathrm{Trm}(V,\Omega)\) are of equal sort, \(\eta_{i}\in\mathrm{Trm}_{\mathbb{R}}(V,\Omega)\), \(ie_{i}\in\mathrm{Trm}_{\mathbb{N}}(V,\Omega)\), \(te_{i}\in\mathrm{Trm}_{\mathcal{T}}(V,\Omega)\), and the ac-formulas \(\mathsf{A}\) and \(\mathsf{C}\) do not refer to state and time of \(\alpha\), i.e., \((\mathit{FV}(\mathsf{A})\cup\mathit{FV}(\mathsf{C}))\cap\mathit{BV}(\alpha) \subseteq V_{\mathcal{T}}\).
\[\varphi,\psi,\mathsf{A},\mathsf{C} ::=e_{1}=e_{2}\mid\eta_{1}\geq\eta_{2}\mid ie_{1}\geq ie_{2}\mid te_{1}\preceq te_{2}\mid\neg\varphi\mid\varphi\land\psi\mid\varphi\lor\psi\mid\] \[\varphi\to\psi\mid\forall z\,\varphi\mid\exists z\,\varphi\mid[\alpha]\psi\mid[\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\]
Figure 2: Models of two moving cars (follower and leader) intended to form the convoy follower \(\parallel\) leader by parallel composition, communicating target speed.
For specifying the (communication) behavior of CHPs, we combine first-order dynamic logic [11] with ac-reasoning [43]. Equality is defined on each sort of terms. On real and integer terms, \(\geq\) has the usual meaning. On trace terms, \(te_{1}\preceq te_{2}\) means that \(te_{1}\) is a prefix of \(te_{2}\). There is no order on channel terms. Quantified variables \(z\in V\) are of arbitrary sort. Since our primary interest is safety, we omit the dynamic modality \(\langle\alpha\rangle\psi\) and give no dual \(\langle\alpha\rangle_{\{\mathsf{A},\mathsf{C}\}}\psi\) for \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\).
Besides the dynamic modality \([\alpha]\psi\), \(d\mathcal{L}_{\text{CHP}}\) prominently features the ac-box \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\) that reshapes Hoare-style ac-reasoning [43] into the modal approach of dynamic logic. In an ac-contract \(\varphi\rightarrow[\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\), assumption \(\mathsf{A}\) and commitment \(\mathsf{C}\) specify \(\alpha\)'s communication behavior along the interface of recorder variables but without access to \(\alpha\)'s state and time as required by \((\mathit{FV}(\mathsf{A})\cup\mathit{FV}(\mathsf{C}))\cap\mathit{BV}(\alpha)\subseteq V_{\mathcal{T}}\), whereas formulas \(\varphi\) and \(\psi\) act as pre- and postcondition as usual. Formula \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\) promises that \(\mathsf{C}\) holds after each communication event of \(\alpha\) assuming \(\mathsf{A}\) held before each event. Moreover, if the program terminated _and_ \(\mathsf{A}\) held before and after each communication event, the final state satisfies \(\psi\).
Example 3: The safety condition about follower and leader below expresses: If they start driving with a distance of at least \(d\) and a speed \(\leq d/\epsilon\) that prevents the follower from reaching the leader within \(\epsilon\) time units, then the cars never collide. Section 3 shows a proof of this formula.

\[\epsilon\geq 0\wedge 0\leq v_{f}\leq d/\epsilon\wedge v_{f}\leq V\wedge x_{f}+d<x_{l}\rightarrow[\mathtt{follower}\parallel\mathtt{leader}]\,x_{f}<x_{l}\]
Closed systems, where communication has an internal partner, can be specified using boxes \([\cdot]\psi\) (see Example 3) since their safety does not depend on the environment. Ac-boxes \([\cdot]_{\{\mathsf{A},\mathsf{C}\}}\psi\) come into play when such systems are decomposed since the constituents follower and leader are each other's environment.
### Semantics
CHPs have a denotational linear history semantics merging ideas from \(d\mathcal{L}\)[30] and ac-reasoning [44] and adding synchronization in the global time. The basic domains are traces \(\mathcal{T}\) and states \(\mathcal{S}\). A _trace_ \(\tau\in\mathcal{T}\) is a finite sequence \((\tau_{1},...,\tau_{n})\) of communication events \(\tau_{i}=\langle\mathrm{ch}_{i},a_{i},s_{i}\rangle\) with channel \(\mathrm{ch}_{i}\in\Omega\), value \(a_{i}\in\mathbb{R}\), and timestamp \(s_{i}\in\mathbb{R}\) that is _chronological_, i.e., \(s_{i}\leq s_{j}\) for all \(1\leq i<j\leq n\). The empty trace is denoted \(\epsilon\), the concatenation of traces \(\tau_{1}\) and \(\tau_{2}\) is \(\tau_{1}\cdot\tau_{2}\), and for \(C\subseteq\Omega\), the projection \(\tau\!\downarrow\!C\) is the subsequence of \(\tau\) consisting exactly of those \(\langle\mathrm{ch},a,s\rangle\) and \(\langle h,\mathrm{ch},a,s\rangle\) with \(\mathrm{ch}\in C\).7 By \(\tau^{\prime}\preceq\tau\) and \(\tau^{\prime}\prec\tau\), we express that \(\tau^{\prime}\) is a prefix or proper prefix of \(\tau\), respectively. A _recorded trace_ \(\tau\in\mathcal{T}_{\mathrm{rec}}\) is a trace that has an additional recorder variable \(h_{i}\in V_{\mathcal{T}}\) for each communication event such that \(\tau_{i}=\langle h_{i},\mathrm{ch}_{i},a_{i},s_{i}\rangle\). Raw traces \(\mathcal{T}\) represent trace terms in a state, whereas recorded traces originate from programs.

Figure 3: Plot of example positions \(x_{f}\) and \(x_{l}\) of the cars over time. First, a speed update is accepted. The next update is lost. After a position update, the follower adjusts its speed. Crucially, it conservatively rejects the speed update when a crash with a slowing leader is possible, since speed communication may fail until the next reliable position update is expected (dashed trajectory).
Footnote 7: We use the same operators for corresponding syntax and semantics, i.e., \(te\!\downarrow\!C\) and \(\tau\!\downarrow\!C\) are the projection on \(C\) for trace term \(te\) and semantic trace \(\tau\), respectively.
A _state_ is a map \(v:V\to\mathbb{R}\cup\mathbb{N}\cup\mathcal{T}\) that assigns a value from \(type(z)\) to each variable \(z\in V\), where \(type(e)=\mathbb{M}\) if \(e\in\mathrm{Trm}_{\mathbb{M}}(V,\Omega)\) for \(\mathbb{M}\in\{\mathbb{R},\mathbb{N},\mathcal{T}\}\). The _updated state_\(v_{z}^{d}\) is defined by \(v_{z}^{d}=v\) on \(\{z\}^{\complement}\) and \(v_{z}^{d}(z)=d\). _State-trace concatenation_\(v\cdot\tau\) appends recorded communication \(\tau\in\mathcal{T}_{\mathrm{rec}}\) to the corresponding trace variable in \(v\in\mathcal{S}\). It is defined by \(v\cdot\tau=v\) on \(V_{\mathcal{T}}^{\complement}\) and \((v\cdot\tau)(h)=v(h)\cdot\tau(h)\) for all \(h\in V_{\mathcal{T}}\), where \(\tau(h)\) denotes the subtrace of \(\tau\) consisting of the raw versions \(\langle\mathrm{ch},a,s\rangle\) of communication events \(\langle h,\mathrm{ch},a,s\rangle\) in \(\tau\) recorded by \(h\).
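State-trace concatenation is easy to sketch. Assuming states are dictionaries and recorded events are tuples \((h,\mathrm{ch},a,s)\), which is our own encoding, the helper below appends each event to the raw trace of its recorder variable.

```python
def state_trace_concat(v: dict, tau_rec) -> dict:
    """v . tau: append each recorded event <h, ch, a, s> in tau_rec to the
    raw trace held by its recorder variable h in state v; all other
    variables are unchanged."""
    w = dict(v)
    for h, ch, a, s in tau_rec:
        w[h] = w.get(h, ()) + ((ch, a, s),)   # the raw event <ch, a, s>
    return w

v = {"x": 1.0, "h": ()}
tau = [("h", "ch", 2.0, 0.5), ("h", "pos", 3.0, 0.7)]
assert state_trace_concat(v, tau)["h"] == (("ch", 2.0, 0.5), ("pos", 3.0, 0.7))
```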
#### Term Semantics.
The value \(\llbracket e\rrbracket v\in type(e)\) of term \(e\) at state \(v\in\mathcal{S}\) is according to its sort \(type(e)\) (see Appendix 0.A). The evaluation of real and integer terms is as usual. Additionally, \(\mathtt{val}(te\llbracket ie\rrbracket)\) evaluates to the value, \(\mathtt{time}(te\llbracket ie\rrbracket)\) to the timestamp, and \(\mathtt{chan}(te\llbracket ie\rrbracket)\) to the channel name of the \(ie\)-th communication event in \(te\) with indices from \(0\) to \(|te|-1\). Moreover, \(|te|\) evaluates to the length of \(te\). The evaluation of trace terms is aligned with the semantic operators on traces [42, 44], e.g., \(\llbracket te\!\downarrow\!C\rrbracket v=(\llbracket te\rrbracket v)\! \downarrow\!C\) and \(\llbracket\langle\mathrm{ch},\theta_{1},\theta_{2}\rangle\rrbracket v=\langle \mathrm{ch},\llbracket\theta_{1}\rrbracket v,\llbracket\theta_{2}\rrbracket v\rangle\).
#### Domain of Computation.
The denotational semantics \(\llbracket\alpha\rrbracket\subseteq\mathcal{D}\) of a CHP \(\alpha\) has domain \(\mathcal{D}=\mathcal{S}\times\mathcal{T}_{\mathrm{rec}}\times\mathcal{S}_{\perp}\) with \(\mathcal{S}_{\perp}=\mathcal{S}\cup\{\perp\}\), i.e., the observables of a CHP started from a state are communication and a final state. The marker \(\perp\) indicates an unfinished execution that either can be continued or was aborted due to a failing test. Since communication can even be observed from unfinished computations, a meaningful semantics of communicating programs is prefix-closed and total (see Def. 4 below). Totality captures that every program can at least start computation even if it aborts immediately like \(?\bot\) and has not emitted communication initially. For Def. 4, we extend the prefix relation \(\preceq\) on traces \(\mathcal{T}_{\mathrm{rec}}\) to a partial order \(\preceq\) on observable behavior \(\mathcal{T}_{\mathrm{rec}}\times\mathcal{S}_{\perp}\) expressing that \((\tau^{\prime},w^{\prime})\) is a prefix of \((\tau,w)\) if \(\bigl{(}(w=w^{\prime}\text{ and }\tau=\tau^{\prime})\text{ or }(w^{\prime}=\perp\text{ and }\tau^{\prime}\preceq\tau)\bigr{)}\).
Definition 4 (Prefix-closedness and totality): A set \(U\subseteq\mathcal{D}\) is _prefix-closed_ if \((v,\tau,w)\in U\) and \((\tau^{\prime},w^{\prime})\preceq(\tau,w)\) implies \((v,\tau^{\prime},w^{\prime})\in U\). The set is _total_ if \(\perp_{\mathcal{D}}\subseteq U\) with \(\perp_{\mathcal{D}}=\mathcal{S}\times\{\epsilon\}\times\{\perp\}\), i.e., \((v,\epsilon,\perp)\in U\) for every \(v\in\mathcal{S}\).
Program Semantics. The semantics of compound programs is compositionally defined in terms of semantical operators: For \(U,M\subseteq\mathcal{D}\), we define \(U_{\perp}=\{(v,\tau,\perp)\mid(v,\tau,w)\in U\}\) and \((v,\tau,w)\in U\triangleright M\) if \((v,\tau_{1},u)\in U\) and \(u\neq\perp\) and \((u,\tau_{2},w)\in M\) exists such that \(\tau=\tau_{1}\cdot\tau_{2}\). The operator \(\hat{\circ}\) is for sequential composition. For \(U,M\subseteq\mathcal{D}\), we define \(U\,\hat{\circ}\,M=U_{\perp}\cup(U\triangleright M)\). Semantic iteration \(U^{n}\) is defined by \(U^{0}=\mathrm{I}_{\mathcal{D}}=\perp_{\mathcal{D}}\cup(\mathcal{S}\times\{\epsilon\}\times\mathcal{S})\) and \(U^{n+1}=U\,\hat{\circ}\,U^{n}\) for \(n\geq 0\). Accordingly, \(\alpha^{0}\equiv\,?\mathsf{T}\) and \(\alpha^{n+1}\equiv\alpha;\alpha^{n}\) defines syntactic iteration.
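For intuition, the prefix order, prefix-closedness, totality, and the sequential operator can be replayed on finite sets of runs. In the Python sketch below, runs are triples `(v, tau, w)` with `BOT` standing for \(\perp\); the representation is ours and only meant for illustration.

```python
BOT = None  # marks an unfinished computation (the bottom marker)

def prefixes(tau, w):
    """All observable prefixes of (tau, w) in the order of Def. 4."""
    ps = [(tau[:i], BOT) for i in range(len(tau) + 1)]
    if w is not BOT:
        ps.append((tau, w))
    return ps

def is_prefix_closed(U):
    """U is prefix-closed if it contains every prefix of each of its runs."""
    return all((v, t, u) in U for (v, tau, w) in U for (t, u) in prefixes(tau, w))

def is_total(U, states):
    """U is total if (v, empty trace, BOT) is in U for every start state."""
    return all((v, (), BOT) in U for v in states)

def seq_compose(U, M):
    """U o M = U_bot union (U |> M): continue M from finished runs of U."""
    u_bot = {(v, tau, BOT) for (v, tau, w) in U}
    joined = {(v, t1 + t2, w)
              for (v, t1, u) in U if u is not BOT
              for (u2, t2, w) in M if u2 == u}
    return u_bot | joined

# runs of a single send from state 0 (event encoded as (ch, value, time))
e = ("ch", 2.0, 0.0)
U = {(0, (), BOT), (0, (e,), BOT), (0, (e,), 0)}
assert is_prefix_closed(U) and is_total(U, {0})
assert is_prefix_closed(seq_compose(U, U))
```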
Parallel composition \(\alpha\parallel\beta\) requires that \(\alpha\) and \(\beta\) have disjoint bound variables (Def. 2) except for \(\{\mu\}\cup V_{\mathcal{T}}\), where they will always agree. Thus, the merged state \(w_{\alpha}\oplus w_{\beta}\in\mathcal{S}_{\perp}\) for states \(w_{\alpha},w_{\beta}\in\mathcal{S}_{\perp}\) can be unambiguously determined as follows: \(w_{\alpha}\oplus w_{\beta}=\perp\) if at least one of the states is \(\perp\). Otherwise, define \((w_{\alpha}\oplus w_{\beta})(z)=w_{\alpha}(z)\) if \(z\in B\!V(\alpha)\) and \((w_{\alpha}\oplus w_{\beta})(z)=w_{\beta}(z)\) if \(z\not\in B\!V(\alpha)\).8 For program \(\alpha\), the set \(C\!N(\alpha)\subseteq\Omega\) consists of all channel names occurring in \(\alpha\), i.e., in send \(\mathrm{ch}(h)!\theta\) and receive \(\mathrm{ch}(h)?x\) statements. The projection \(\tau\!\downarrow\!C\!N(\alpha)\) is abbreviated as \(\tau\!\downarrow\!\alpha\). The _semantic parallel operator_ is defined as follows for programs \(\alpha,\beta\in\mathrm{CHP}(V,\Omega)\):
Footnote 8: The alternative condition \(z\in B\!V(\beta)\) leads to an equivalent definition when \(w_{\alpha}=w_{\beta}\) on \((B\!V(\alpha)\cup B\!V(\beta))^{\complement}\), which is the case for the final states in parallel composition.
\[\llbracket\alpha\rrbracket\parallel\llbracket\beta\rrbracket=\left\{(v,\tau,w_ {\alpha}\oplus w_{\beta})\in\mathcal{D}\,\middle|\,\begin{aligned} &(v,\tau\downarrow\alpha,w_{\alpha})\in\llbracket\alpha \rrbracket,(v,\tau\downarrow\beta,w_{\beta})\in\llbracket\beta\rrbracket,\\ & w_{\alpha}(\mu)=w_{\beta}(\mu),\tau=\tau\downarrow(\alpha \parallel\beta)\end{aligned}\right\}\]
Instead of computing explicit interleavings, the parallel operator \(\parallel\) characterizes the joint communication \(\tau\) implicitly via any order that the subprograms can agree on. Thereby \(\tau\in\mathcal{T}_{\mathrm{rec}}\) rules out non-chronological ordering of communication events that are exclusive to either \(\tau\!\downarrow\!\alpha\) or \(\tau\!\downarrow\!\beta\). Moreover, by \(\tau=\tau\!\downarrow\!(\alpha\parallel\beta)\), the trace \(\tau\) must not contain any junk, i.e., communication not caused by \(\alpha\) or \(\beta\). Communication along joint channels of \(\alpha\) and \(\beta\) must agree in its recorder variable, value, and timestamp as it occurs in \(\tau\!\downarrow\!\alpha\) and \(\tau\!\downarrow\!\beta\). By \(w_{\alpha}(\mu)=w_{\beta}(\mu)\) both computations need to meet at the same point in global time.9
Footnote 9: We consider \(w_{\alpha}(\mu)=w_{\beta}(\mu)\) fulfilled if \(w_{\alpha}=\perp\) or \(w_{\beta}=\perp\).
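The defining conditions of the semantic parallel operator can likewise be checked on finite data. In the sketch below, events are `(ch, val, time)` tuples and final states are abbreviated to dictionaries carrying the global time under the key `"mu"`, an encoding we chose for illustration; the function tests whether a candidate joint trace matches two component runs.

```python
BOT = None

def proj(tau, C):
    """tau down C: the subsequence of events on channels in C."""
    return tuple(e for e in tau if e[0] in C)

def parallel_match(v, tau, run_a, run_b, cn_a, cn_b):
    """Check the defining conditions of [[a]] || [[b]] for a candidate joint
    trace tau: chronological timestamps, no junk communication, matching
    component projections, and agreement on the global time mu."""
    (va, ta, wa), (vb, tb, wb) = run_a, run_b
    chrono = all(tau[i][2] <= tau[i + 1][2] for i in range(len(tau) - 1))
    no_junk = proj(tau, cn_a | cn_b) == tau
    sync = wa is BOT or wb is BOT or wa["mu"] == wb["mu"]  # footnote 9
    return (va == v == vb and chrono and no_junk and sync
            and proj(tau, cn_a) == ta and proj(tau, cn_b) == tb)

# ch(h)?x || ch(h)?y: both receives observe the same joint event on ch
v = {"mu": 0.0}                   # states abbreviated to their global time
e = ("ch", 2.0, 0.0)
assert parallel_match(v, (e,), (v, (e,), {"mu": 0.0}),
                      (v, (e,), {"mu": 0.0}), {"ch"}, {"ch"})
```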
Definition 5 (Program semantics): The semantics \(\llbracket\alpha\rrbracket\subseteq\mathcal{D}\) of a program \(\alpha\in\mathrm{CHP}(V,\Omega)\) is inductively defined as follows, where \(\perp_{\mathcal{D}}=\mathcal{S}\times\{\epsilon\}\times\{\perp\}\) and \(\vDash\) is the satisfaction relation for formulas (Def. 6):
\[\llbracket x:=\theta\rrbracket=\perp_{\mathcal{D}}\cup\{(v, \epsilon,w)\mid w=v_{x}^{\llbracket\theta\rrbracket v}\}\] \[\llbracket x:=*\rrbracket=\perp_{\mathcal{D}}\cup\{(v,\epsilon,w) \mid w=v_{x}^{a}\text{ where }a\in\mathbb{R}\}\] \[\llbracket?\chi\rrbracket=\perp_{\mathcal{D}}\cup\{(v,\epsilon,v) \mid v\vDash\chi\}\] \[\llbracket\{x^{\prime}=\theta\,\&\,\chi\}\rrbracket= \perp_{\mathcal{D}}\cup\big{\{}(\varphi(0),\epsilon,\varphi(s))\mid\varphi( \zeta)\vDash\mu^{\prime}=1\wedge x^{\prime}=\theta\wedge\chi\text{ and }\] \[\varphi(\zeta)=\varphi(0)\text{ on }\{x,\mu\}^{\complement}\text{ for all }\zeta\in[0,s]\text{ and a solution }\] \[\varphi:[0,s]\rightarrow\mathcal{S}\text{ with }\varphi(\zeta)(z^{\prime})=\frac{d \varphi(t)(z)}{dt}(\zeta)\text{ for }z\in\{x,\mu\}\big{\}}\]
\[\llbracket ch(h)!\theta\rrbracket=\{(v,\tau,w)\ |\ (\tau,w) \preceq(\langle h,ch,\llbracket\!\theta\rrbracket v,v(\mu)\rangle,v)\}\] \[\llbracket ch(h)?x\rrbracket=\{(v,\tau,w)\ |\ (\tau,w)\preceq( \langle h,ch,a,v(\mu)\rangle,v_{x}^{a})\ \text{where }a\in\mathbb{R}\}\] \[\llbracket\alpha\cup\beta\rrbracket=\llbracket\alpha\rrbracket\cup \llbracket\beta\rrbracket\] \[\llbracket\alpha;\beta\rrbracket=\llbracket\alpha\rrbracket\circ \llbracket\beta\rrbracket=\llbracket\alpha\rrbracket_{\perp}\cup(\llbracket \alpha\rrbracket\triangleright\llbracket\beta\rrbracket)\] \[\llbracket\alpha^{*}\rrbracket=\bigcup_{n\in\mathbb{N}} \llbracket\alpha\rrbracket^{n}=\bigcup_{n\in\mathbb{N}}\llbracket\alpha^{n}\rrbracket\] \[\llbracket\alpha\parallel\beta\rrbracket=\llbracket\alpha \rrbracket\parallel\llbracket\beta\rrbracket\]
In the semantics of continuous evolution, the solution for the ODE gives meaning to the primed variable \(x^{\prime}\) as in \(d\mathcal{L}\)[33]. By \(\mu^{\prime}=1\), the global time \(\mu\) always evolves with slope \(1\) with every continuous evolution.
The semantics \(\llbracket\alpha\rrbracket\subseteq\mathcal{D}\) is prefix-closed and total (Def. 4) for every program \(\alpha\) (see Appendix 0.A). For atomic non-communicating programs, \(\bot_{\mathcal{D}}=\mathcal{S}\times\{\epsilon\}\times\{\bot\}\) ensures prefix-closedness. If \(?\chi\) or \(\{x^{\prime}=\theta\&\chi\}\) abort, \(\bot_{\mathcal{D}}\) also guarantees totality. Keeping the unfinished computations \(\llbracket\alpha\rrbracket_{\bot}\) preserves prefix-closedness of \(\alpha;\beta\).
#### Formula Semantics.
The semantics of the first-order fragment is as usual. Like in dynamic logic [11], the box \([\alpha]\psi\) means that \(\psi\) is true after all finished computations, i.e., the final state and communication of \((v,\tau,w)\in\llbracket\!\alpha\rrbracket\) with \(w\neq\bot\). The ac-box \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\) additionally means that the communication of (un)finished computations fulfills commitment \(\mathsf{C}\). In our modal treatment of ac-reasoning [43], assumption \(\mathsf{A}\) and program \(\alpha\) determine the reachable worlds together, i.e., only computations need to be considered whose incoming communication meets \(\mathsf{A}\).
Definition 6 (Formula semantics): The semantics \(\llbracket\varphi\rrbracket\subseteq\mathcal{S}\) of a formula \(\varphi\in\mathrm{Fml}(V,\Omega)\) is defined as \(\llbracket\varphi\rrbracket=\{v\ |\ v\vDash\varphi\}\) using the _satisfaction relation \(\vDash\)_. The relation \(\vDash\) is defined by induction on the structure of \(\varphi\) as follows:
1. \(v\vDash e_{1}{=}e_{2}\) if \(\llbracket\!e_{1}\rrbracket v=\llbracket\!e_{2}\rrbracket v\). Accordingly, for \(\eta_{1}{\geq}\eta_{2}\), \(ie_{1}{\geq}ie_{2}\), \(te_{1}{\preceq}te_{2}\)
2. \(v\vDash\varphi\wedge\psi\) if \(v\vDash\varphi\) and \(v\vDash\psi\). Accordingly, for \(\neg,\vee,\rightarrow\)
3. \(v\vDash\forall z\,\varphi\) if \(v_{z}^{d}\vDash\varphi\) for all \(d\in type(z)\)
4. \(v\vDash\exists z\,\varphi\) if \(v_{z}^{d}\vDash\varphi\) for some \(d\in type(z)\)
5. \(v\vDash[\alpha]\psi\) if \(w\cdot\tau\vDash\psi\) for all \((v,\tau,w)\in\llbracket\!\alpha\rrbracket\) with \(w\neq\bot\)
6. \(v\vDash[\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\) if for all \((v,\tau,w)\in\llbracket\!\alpha\rrbracket\) the following conditions hold:
\[\{v\cdot\tau^{\prime}\ |\ \tau^{\prime}\prec\tau\}\vDash\mathsf{A}\text{ implies }v\cdot\tau\vDash\mathsf{C}\] (commit) \[\big{(}\{v\cdot\tau^{\prime}\ |\ \tau^{\prime}\preceq\tau\}\vDash\mathsf{A}\text{ and }w\neq\bot\big{)}\text{ implies }w\cdot\tau\vDash\psi\] (post)
Here, \(U\vDash\varphi\) for a set of states \(U\subseteq\mathcal{S}\) and any formula \(\varphi\in\mathrm{Fml}(V,\Omega)\) if \(v\vDash\varphi\) for all \(v\in U\). In particular, \(\emptyset\vDash\varphi\).
In item 6, (commit) is checked after each communication event as desired since _all_ communication prefixes are reachable worlds by prefix-closedness of the program semantics \(\llbracket\!\alpha\rrbracket\) (Def. 4). Via state-trace concatenation \(v\cdot\tau\) and \(w\cdot\tau\) in item 5 and item 6, the communication events recorded in \(\tau\) become observable. This follows the realization that the reachable worlds of a CHP consist of the final state and the communication trace.
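The conditions (commit) and (post) can be phrased directly as a check over a finite, prefix-closed set of runs. In the Python sketch below, `A`, `C`, and `psi` are predicates on (state, trace) pairs, which abstracts from the state-trace concatenation of Def. 6; the final assertion replays the circularity example of Remark 1 below and confirms that strictness rules it out.

```python
BOT = None  # marks an unfinished computation

def ac_box_holds(runs, A, C, psi):
    """v |= [alpha]_{A,C} psi over a finite, prefix-closed set of runs
    (v, tau, w) started in v; A, C, psi are predicates on (state, trace)."""
    for v, tau, w in runs:
        assumed_strictly_before = all(A(v, tau[:i]) for i in range(len(tau)))
        if assumed_strictly_before and not C(v, tau):
            return False                                  # (commit) violated
        if (w is not BOT and assumed_strictly_before and A(v, tau)
                and not psi(w, tau)):
            return False                                  # (post) violated
    return True

# Remark 1 below: y = 0 -> [ch(h)!y]_{A,C} true, with
# A = C = (|h down ch| > 0 -> val(h down ch) = 1), is rightly not valid
v, e = {"y": 0.0}, ("ch", 0.0, 0.0)           # the send transmits the value 0
runs = [(v, (), BOT), (v, (e,), BOT), (v, (e,), v)]
A = C = lambda state, tau: len(tau) == 0 or tau[-1][1] == 1.0
true = lambda state, tau: True
assert not ac_box_holds(runs, A, C, true)
```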
Remark 1: In (commit), assumptions are only available about the communication _strictly_ before to prevent unsound circular reasoning [17, 43]. With a non-strict definition, the formula \(y=0\to[\operatorname{ch}(h)!y]_{\{\mathsf{A},\mathsf{C}\}}\mathsf{T}\), where \(\mathsf{A}\equiv\mathsf{C}\equiv|h\downarrow\operatorname{ch}|>0\to\mathtt{val}(h\downarrow\operatorname{ch})=1\), would become valid. Locally, we are aware of the contradiction between the precondition \(y=0\) and what is assumed by \(\mathsf{A}\), whereas the environment is not and would trust in the promise \(\mathsf{C}\).
Proposition 1 (Conservative extension): _The logic \(d\mathcal{L}_{\mathrm{CHP}}\) is a conservative extension of \(d\mathcal{L}\). That is, a formula \(\varphi\in\operatorname{Fml}(V,\Omega)\cap\operatorname{Fml}_{d\mathcal{L}}\) is valid in \(d\mathcal{L}_{\mathrm{CHP}}\) iff it is valid in \(d\mathcal{L}\), where \(\operatorname{Fml}_{d\mathcal{L}}\) is the set of \(d\mathcal{L}\) formulas (see Appendix 0.A)._
### Calculus
This section develops a sound (see Theorem 2.1) proof calculus for \(d\mathcal{L}_{\mathrm{CHP}}\), summarized in Fig. 4. In Fig. 5, we provide common derived rules. Since \(d\mathcal{L}_{\mathrm{CHP}}\) is a conservative extension of \(d\mathcal{L}\) (Proposition 1), the entire \(d\mathcal{L}\) sequent calculus [33, 34, 36] can be used soundly for reasoning about \(d\mathcal{L}_{\mathrm{CHP}}\) formulas. A _sequent_ \(\Gamma\vdash\Delta\) with finite lists of formulas \(\Gamma\), \(\Delta\) is short for \(\bigwedge_{\varphi\in\Gamma}\varphi\to\bigvee_{\psi\in\Delta}\psi\).
Each program statement is axiomatized by a dynamic box \([\cdot]\psi\) or an ac-box \([\cdot]_{\{\mathsf{A},\mathsf{C}\}}\psi\). Axioms \([\epsilon]_{\mathsf{AC}}\) and \([]_{\top,\top}\) mediate between dynamic boxes and ac-boxes. The dynamic axioms are as usual in differential [33] dynamic logic [11]. The ac-axioms re-express Hoare-style ac-reasoning [43] as a dynamic logic. However, we design more atomic axioms for parallel composition and communication, from which the proof rule \([]_{\mathsf{AC}}\mathrm{R}\) for parallel composition and the proof rules for communication derive.
Noninterference (Def. 7) identifies valid instances of the formula \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\to[\alpha\parallel\beta]_{\{\mathsf{ A},\mathsf{C}\}}\psi\), which we introduce as axiom \([]_{\mathsf{AC}}\) in Fig. 4. For formula \(\chi\), the accessed channels \(C\!N(\chi)\subseteq\Omega\) are those channels whose communication may influence the truth value of \(\chi\), e.g., ch in \(|h\downarrow\operatorname{ch}|>0\). That is, \(C\!N(\chi)\) plays a similar role for the communication traces \(v|_{V_{\mathcal{T}}}\) (the state restricted to \(V_{\mathcal{T}}\)) with \(v\in\mathcal{S}\) as \(F\!V(\chi)\) does for the overall state \(v\). For program \(\alpha\), the set \(C\!N(\alpha)\) denotes the communication channels used.
Definition 7 (Noninterference): Given an ac-box \([\alpha\parallel\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\), the \(\mathit{CHP}\) \(\beta\) _does not interfere_ with its surrounding contract if the following conditions hold:10
Footnote 10: The definition only restricts \(\beta\)’s influence on formulas \(\mathsf{A}\), \(\mathsf{C}\), \(\psi\) but not on program \(\alpha\) because in parallel composition \(\alpha\parallel\beta\), the subprograms must not share state anyway.
\[F\!V(\psi)\cap B\!V(\beta)\subseteq\{\mu\}\cup V_{\mathcal{T}} \tag{1}\] \[F\!V(\chi)\cap B\!V(\beta)\subseteq V_{\mathcal{T}}\;\;(\text{for } \chi\in\{\mathsf{A},\mathsf{C}\})\] (2) \[C\!N(\chi)\cap C\!N(\beta)\subseteq C\!N(\alpha)\;\;(\text{for } \chi\in\{\mathsf{A},\mathsf{C},\psi\}) \tag{3}\]
Clearly, state variables bound by \(\beta\) and free in \(\chi\in\{\mathsf{A},\mathsf{C},\psi\}\) would influence \(\chi\)'s truth in \([\alpha\parallel\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\). But equation (1) and equation (2) do not capture trace variables \(V_{\mathcal{T}}\) since they are also \(\alpha\)'s interface with communication that might be joint with \(\beta\). However, equation (3) restricts access to trace variables in \(\chi\) to those channels whose communication can be observed either exclusively from \(\alpha\) or as joint communication between \(\alpha\) and \(\beta\), thus prevents influence of \(\beta\) on \(\chi\) beyond what is already caused by \(\alpha\). Still Def. 7 allows full access to \(\alpha\)'s communication including the joint communication with \(\beta\).
#### Dynamic ac-reasoning.
In Hoare-style ac-reasoning [43], a distinguished history variable records communication globally. Assuming that \(h\) is such a variable in \(d\mathcal{L}_{\text{CHP}}\), a tempting but _wrong_ axiomatization of the send statement would be
\[[\text{ch}!\theta]\psi(h)\leftrightarrow\forall h_{0}\,\big{(}h_{0}=h\cdot\langle\text{ch},\theta,\mu\rangle\to\psi(h_{0})\big{)}.\qquad(\mathsection)\]
Applying it to
\[\vdash[\text{ch}_{1}!\theta_{1}][\text{ch}_{2}!\theta_{2}]\psi(h) \tag{4}\]
results in \(h_{0}=h\cdot\langle\text{ch}_{1},\theta_{1},\mu\rangle\vdash[\text{ch}_{2}! \theta_{2}]\psi(h_{0})\). After this step, the ongoing history is \(h_{0}\). However, another application leads to \(h_{0}=h\cdot\langle\text{ch}_{1},\theta_{1},\mu\rangle,h_{1}=h\cdot\langle \text{ch}_{2},\theta_{2},\mu\rangle\vdash\psi(h_{0})\). Incorrectly, communication is appended to \(h\) again and \(\psi(h_{0})\) still refers to \(h_{0}\). Problematically, the substitution \(([\text{ch}_{2}!\theta_{2}]\psi(h))\{h\mapsto h_{0}\}\) during the first application does not guide \(\text{ch}_{2}!\theta_{2}\) to append its communication to \(h_{0}\). Without \(h\) occurring syntactically but being free in \(\text{ch}_{2}!\theta_{2}\) the substitution does not even have a meaningful definition. For a similar reason, axiom
\[[\text{ch}!\theta]\psi(h)\leftrightarrow\psi(h\cdot\langle\text{ch},\theta,\mu\rangle)\qquad(\mathsection)\]
is unsound as applying it twice to equation (4) leads to \(\vdash\psi(h\cdot\langle\text{ch}_{2},\theta_{2},\mu\rangle\cdot\langle\text{ch}_{1},\theta_{1},\mu\rangle)\) with the communication items in the wrong order. Here the axiom is not able to append the second item at the right position because there is no symbolic name for the state \(h\cdot\langle\text{ch}_{1},\theta_{1},\mu\rangle\) of the history after the first application.
To enable symbolic execution, we drop the assumption of a distinguished history variable and annotate each communication statement \(\text{ch}(h)!\theta\) and \(\text{ch}(h)?x\) with an explicit recorder variable \(h\). Now, substitution \(\alpha\{h\mapsto h_{0}\}\) is defined easily as \(\text{ch}(h_{0})!\theta\) for \(\alpha\equiv\text{ch}(h)!\theta\), and as \(\text{ch}(h_{0})?x\) for \(\alpha\equiv\text{ch}(h)?x\), and as \(\alpha\) for other atomic programs, and by recursive application otherwise.
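Because the substitution is purely syntactic, it can be sketched in a few lines. The tuple-based program encoding below is our own choice for illustration; only send and receive statements carry a recorder to rename.

```python
def subst_recorder(prog, h, h0):
    """alpha{h -> h0}: rename recorder variable h to h0 in send and receive
    statements; other atomic programs are returned unchanged."""
    tag = prog[0]
    if tag in ("send", "recv"):                 # ("send", ch, h, theta), ...
        ch, rec, arg = prog[1], prog[2], prog[3]
        return (tag, ch, h0 if rec == h else rec, arg)
    if tag in ("seq", "choice", "par"):
        return (tag,) + tuple(subst_recorder(p, h, h0) for p in prog[1:])
    if tag == "loop":
        return ("loop", subst_recorder(prog[1], h, h0))
    return prog                                 # assignments, tests, ODEs, ...

p = ("seq", ("send", "ch1", "h", "theta1"), ("send", "ch2", "h", "theta2"))
assert subst_recorder(p, "h", "h0")[1] == ("send", "ch1", "h0", "theta1")
```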
#### Atomic Hybrid Programs.
For an (atomic) non-communicating program \(\alpha\), we can flatten \([\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\) by axiom \([\epsilon]_{\mathsf{AC}}\) to a dynamic formula because \(\mathsf{A}\) and \(\mathsf{C}\) only refer to the initial state when \(\mathit{CN}(\alpha)=\emptyset\). Subsequently, we can execute the program by its dynamic axiom ([:=], [:*], [?], [']). Note that by conservative extension (Proposition 1) axiom \([']\) only applies to \([\{x^{\prime}=\theta\,\&\,\chi\}]\psi\) if the ODE \(x^{\prime}=\theta\) matches the underlying semantics, i.e., if \(\mu\in x\), whose right-hand side is 1 for well-formed continuous evolutions \(x^{\prime}=\theta\). Therefore, axiom \([\mu]\) allows materializing the flow of global time \(\mu\) as evolution \(\mu^{\prime}=1\) whenever necessary.
\[\begin{aligned}
&[:=]\ \ [x:=\theta]\psi(x)\leftrightarrow\psi(\theta) &&[;]_{\mathsf{AC}}\ \ [\alpha;\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\leftrightarrow[\alpha]_{\{\mathsf{A},\mathsf{C}\}}[\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\\
&[:*]\ \ [x:=*]\psi\leftrightarrow\forall x\,\psi &&[\cup]_{\mathsf{AC}}\ \ [\alpha\cup\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\leftrightarrow[\alpha]_{\{\mathsf{A},\mathsf{C}\}}\psi\wedge[\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\\
&[?]\ \ [?\chi]\psi\leftrightarrow(\chi\to\psi) &&[*]_{\mathsf{AC}}\ \ [\alpha^{*}]_{\{\mathsf{A},\mathsf{C}\}}\psi\leftrightarrow[\alpha^{0}]_{\{\mathsf{A},\mathsf{C}\}}\psi\wedge[\alpha]_{\{\mathsf{A},\mathsf{C}\}}[\alpha^{*}]_{\{\mathsf{A},\mathsf{C}\}}\psi
\end{aligned}\]

Figure 4: Axioms of the \(d\mathcal{L}_{\mathrm{CHP}}\) proof calculus (excerpt).
**Communication Statements.** Axiom \([\mbox{ch!}]_{\mathsf{AC}}\) unfolds (commit) for an ac-box of a single send statement into a dynamic box. The effect on the recorder variable \(h\) of executing the send \(\mbox{ch}(h)!\theta\) is captured by axiom [ch!]. It records the event \(\langle\mbox{ch},\theta,\mu\rangle\) using the current global time \(\mu\) as timestamp and renames the history in the postcondition for subsequent proof steps. Axiom \([\mbox{ch?}]_{\mathsf{AC}}\) allows executing a receive statement by its duality with send. Derived rule \([!\theta]_{\mathsf{AC}}\)R combines \([\epsilon]_{\mathsf{AC}}\) and [ch!], and decomposes the statement into two premises for (commit) and one for (post). Derived rule CG is useful as it eliminates the need for case distinction about empty history by prefixing the history with additional ghost communication.
**Parallel Composition.** A non-interfering program \(\beta\) (Def. 7) can be dropped from parallel composition \([\alpha\parallel\beta]_{\{\mathsf{A},\mathsf{C}\}}\psi\) by axiom \([]_{\mathsf{AC}}\) because it has no influence on the surrounding contract \([\alpha\parallel\_]_{\{\mathsf{A},\mathsf{C}\}}\psi\). Since \(\parallel\) is associative and commutative (see Appendix 0.C), axiom \([]_{\mathsf{AC}}\) can drop any subprogram in a chain of parallel statements. In parallel composition of \([\alpha_{j}]_{\{\mathsf{A}_{j},\mathsf{C}_{j}\}}\psi_{j}\) for \(j=1,2\), the commitments mutually contribute to the assumptions. This can weaken the assumption of \(\alpha_{1}\parallel\alpha_{2}\) about its environment to \(\mathsf{A}\) by axiom \([]_{\mathsf{K}}\) if the compositionality condition \(\mathsf{K}\equiv(\mathsf{A}\wedge\mathsf{C}_{1}\to\mathsf{A}_{2})\wedge(\mathsf{A}\wedge\mathsf{C}_{2}\to\mathsf{A}_{1})\) is valid.

The derived rule \([]_{\mathsf{AC}}\mathrm{R}\) combines axioms \([]_{\mathsf{AC}}\) and \([]_{\mathsf{K}}\) for full decomposition of parallelism. Reasoning about a parallel \([\alpha_{1}\parallel\alpha_{2}]_{\{\mathsf{A},\mathsf{C}\}}\psi\) with arbitrary \(\mathsf{A}\), \(\mathsf{C}\), and \(\psi\) is the task of constructing \(\mathsf{A}_{j}\), \(\mathsf{C}_{j}\), and \(\psi_{j}\) for \(j=1,2\) such that \(\mathsf{C}_{1}\wedge\mathsf{C}_{2}\to\mathsf{C}\) and \(\psi_{1}\wedge\psi_{2}\to\psi\) are valid and such that \([]_{\mathsf{AC}}\mathrm{R}\) is applicable. Since the side condition of \([]_{\mathsf{AC}}\mathrm{R}\) about noninterference still allows \(\mathsf{C}_{j}\) and \(\psi_{j}\) for \(j=1,2\) to access \(\alpha_{j}\)'s communication including the joint communication with \(\alpha_{3-j}\), the formulas can cover the complete communication of \(\alpha_{1}\parallel\alpha_{2}\).
#### Miscellaneous.
Ac-boxes distribute over conjunctions by axiom \([]_{\mathsf{AC}}\wedge\) except for assumptions, just as preconditions \(\varphi_{j}\) do not distribute in \(\varphi_{1}\wedge\varphi_{2}\to[\alpha]\psi\).
Figure 5: Derived \(d\mathcal{L}_{\mathrm{CHP}}\) proof rules
Rule \(\mathrm{M}[\cdot]_{\mathsf{AC}}\) generalizes monotonicity from dynamic to ac-boxes. Ac-weakening \(\mathrm{W}[]_{\mathsf{AC}}\) exploits totality of the program semantics \([\![\alpha]\!]\) to add or drop \(\mathsf{C}\) in the initial state. Moreover, adding or dropping \(\mathsf{C}\) and \(\mathsf{A}\to\psi\) by \(\mathrm{W}[]_{\mathsf{AC}}\) in the final state is due to (commit) and (post), respectively. Rule \(\mathrm{G}_{\mathsf{AC}}\) is the ac-version of the Gödel generalization rule.
First-order formulas \(\mathrm{FOL}_{\mathbb{N}}(V_{\mathbb{N}})\) over \(\mathrm{Trm}_{\mathbb{N}}(V,\Omega)\) without length computations \(|te|\) can be handled by an effective oracle proof rule (called \(\mathbb{N}\)) since Presburger arithmetic is decidable [37]. Likewise, first-order real arithmetic \(\mathrm{FOL}_{\mathbb{R}}(V_{\mathbb{R}})\) is decidable [40], and we use an oracle rule for it (called \(\mathbb{R}\)) as in \(d\mathcal{L}\)[30]. However, the full first-order fragment \(\mathrm{FOL}(V,\Omega)\) of \(d\mathcal{L}_{\mathrm{CHP}}\) is not decidable because of alternating quantifiers over trace and integer variables [5].
Instead, reasoning about trace terms is by simple algebraic laws for successive simplification (see Appendix 0.C) [42]. For applicability of rules \(\mathbb{R}\) and \(\mathbb{N}\), trace subterms can be rewritten with fresh variables. For example, \(\mathtt{val}(te_{1}[ie])<\mathtt{val}(te_{2}[ie])\to\mathtt{val}(te_{1}[ie])+\theta<\mathtt{val}(te_{2}[ie])+\theta\) is valid since \(x<y\to x+\theta<y+\theta\) is valid in \(\mathrm{FOL}_{\mathbb{R}}(V_{\mathbb{R}})\). Ultimately, we use \(\mathbb{N}\) and \(\mathbb{R}\) modulo trace terms, i.e., perform these rewritings silently.
Our central contribution is the soundness theorem for the compositional \(d\mathcal{L}_{\mathrm{CHP}}\) proof calculus. The proof rules given in Fig. 5 are derivable (see Appendix 0.C).
Theorem 2.1: _The \(d\mathcal{L}_{\mathrm{CHP}}\) calculus (see Fig. 4) is sound (see Appendix 0.C)._
## 3 Demonstration of \(d\mathcal{L}_{\mathrm{CHP}}\)
We demonstrate our calculus by outlining a proof of the safety condition from Example 3 about the convoy in Fig. 2. After decomposing the parallel statement, the proof proceeds purely mechanically by statement-by-statement symbolic execution. It starts in Fig. 7 using a standard pattern for decomposing a parallel statement: First, introduce commitments and postconditions (see Fig. 6) by axiom \([]_{\top,\top}\) and rule \(\mathrm{M}[\cdot]_{\mathsf{AC}}\), which relate the subprograms, such that second, rule \([]_{\mathsf{AC}}\mathrm{R}\) becomes applicable. The latter also makes the commitments mutual assumptions.
The formulas in Fig. 6 relating the subprograms clearly reduce their complex interior and only depend on the small communication interface and local variables, thus are independent of any knowledge about the internal structure of the respective other car. Adding initial ghost communication (rule CG) exploits the flexibility of explicit history variables and dynamic logic to avoid cumbersome case distinction about empty history.
Fig. 7 decomposes \(x_{f}<x_{l}\) into \(\psi_{f}\) and \(\psi_{l}\) since follower stays behind leader's last known position \(\mathtt{val}(h_{\mathrm{pos}})\), whereas leader never drives backward by \(\psi_{l}\). Indeed, follower stores the last known distance \(\mathtt{val}(h_{\mathrm{pos}})-x_{f}\) in \(d\) and \(\epsilon-\Delta(\mu,h_{\mathrm{pos}})\) bounds the waiting time till the next position update along channel pos. Thus, follower stays behind leader when driving with speed \(\nicefrac{{d}}{{\epsilon}}\).
Fig. 8 continues from Fig. 7 and demonstrates the reasoning along one execution path of follower. The invariant F for induction \(\mathtt{ind}_{\mathbf{AC}}\) bounds follower's
speed by \(d/\epsilon\) such that it stays behind leader before the next position update. TA indicates trace algebra reasoning. The remaining proof is mechanical symbolic execution. In particular, combining \([\epsilon]_{\mathsf{AC}}\) and \(\mathrm{W}[]_{\mathsf{AC}}\) swallows the ac-formulas \(\{\mathsf{A},\mathsf{T}\}\), and using duality axiom \([\mbox{ch?}]_{\mathsf{AC}}\) allows executing the communication by rule \([!\theta]_{\mathsf{AC}}\mathrm{R}\). Axiom \([\mu]\) materializes the flow of the global time \(\mu\), making the solution axiom \([']\) applicable. Weakening WL drops irrelevant premises, \(\forall\mathrm{R}\) introduces fresh variables for quantified variables, and \(=\!\!L\) and \(=\!\!R\) perform substitution on the left and right, respectively. Trace algebra TA evaluates the assumption \(\mathsf{A}\). Finally, the proof concludes by real arithmetic \(\mathbb{R}\) modulo trace terms.
## 4 Related Work
Unlike CHPs, Hybrid CSP (HCSP) [18] extends CSP [14] with _eager_ continuous evolution terminating on violation of the evolution constraint. This reduced nondeterminism leaves negligible room for parallel programs to agree on a duration, which easily results in empty behavior and vacuous proofs. Non-eager evolution in CHPs subsumes eager runs. Instead of exploiting their compositional models as in \(d\mathcal{L}_{\mathrm{CHP}}\), other hybrid process algebras are verified non-compositionally by translation to model checking [6, 24, 39]. Unlike CHPs, which have been shown to model and reason about loss of communication out of the box, meta-level components [4, 7, 13, 19, 22, 23, 26] would need to be rethought to integrate lossy communication, as for every other new application.
Hybrid Hoare-logic (HHL) for HCSP [21] is non-compositional [41]. Wang _et al_. [41] extend it with assume-guarantee reasoning (AGR) in a way that, unlike \(d\mathcal{L}_{\mathrm{CHP}}\), becomes non-compositional again. Unfortunately, their rule for parallel composition still explicitly unrolls all interleavings in the postcondition for communication traces, reflecting the structure of the subprograms. Assumptions and guarantees in HHL cannot specify the communication history; readiness for reasoning about deadlock freedom is left for future work [41]. Externalizing the complete observable behavior (and program structure) in this way devalues the whole point of compositionality [38, Section 1.6.2] and only postpones reasoning about the exponentially many interleavings. Similarly, Guelev _et al_. encode the semantics of the parallel composition into the postcondition [10].
Hoare-style ac-reasoning [16, 43, 44] including Hoare-style reasoning for HCSP [10, 21, 41] lacks symbolic execution as an intuitive reasoning principle but manages
with a distinguished history variable since multiple Hoare-triples cannot be considered together. \(d\mathcal{L}_{\mathrm{CHP}}\) makes symbolic execution possible despite communication through explicit trace variables referring to the different possible states of the history in a proof. The resulting combination of ac-reasoning and dynamic logic allows a flexible switch between first-order, dynamic, and ac-reasoning, while the axioms remain simple, each capturing discrete, continuous, or communication behavior. Unlike \(d\mathcal{L}_{\mathrm{CHP}}\), which has a global flow of time due to continuous evolution, calculi for distributed real-time computation [16, 17] need to consider the waiting for termination of time-consuming discrete statements.
Unlike other \(d\mathcal{L}\) approaches [19, 22, 26], \(d\mathcal{L}_{\mathrm{CHP}}\) has a parallel operator with built-in time-synchronization as a first-class citizen in hybrid programs that can be arbitrarily nested with other programs, rather than parallel composition of meta-level components with an explicit time model. Modeling of parallelism by nondeterministic choice additionally requires extra care to ensure execution periodicity [22]. In contrast to first-order constraints relating at most consecutive I/O events [19, 22, 26], \(d\mathcal{L}_{\mathrm{CHP}}\) can reason about invariants of the whole communication history. Different from our integrated reasoning about discrete, hybrid, and communication behavior, Kamburjan _et al_. [19] separate reasoning about communication from hybrid systems reasoning.
Quantified differential dynamic logic \(\mathrm{Q}d\mathcal{L}\)[32] allows reasoning about parallel compositions of an unbounded number of distributed CPSs. Unlike \(d\mathcal{L}_{\mathrm{CHP}}\) that can reason about interactions of entirely different programs, parallelism in \(\mathrm{Q}d\mathcal{L}\) is restricted to subprograms with a _homogeneous_ structure.
Different from the denotational semantics of CHPs, parallel composition of hybrid automata [4, 8, 13, 23], just like Hoare-style reasoning about HCSP [10, 41], always falls back to the combinatorial exploration of parallelism. Consequently, even AGR approaches [4, 8, 12, 23] for hybrid automata that mitigate the state space explosion for subautomata eventually resort to large product automata later. In contrast, \(d\mathcal{L}_{\mathrm{CHP}}\)'s proof rule for parallel composition exploits the built-in compositionality of its semantics, enabling verification of subprograms truly independently of their environment except for the communication interface. Unlike ac-formulas in \(d\mathcal{L}_{\mathrm{CHP}}\), which can capture change, rate, delay, or noise for arbitrary pairings of communication channels, overapproximation is limited to coarse abstractions by timed transition systems [8], components completely dropping knowledge about continuous behavior [13], or static global contracts [4]. Where \(d\mathcal{L}_{\mathrm{CHP}}\) inherits complete reasoning about differential equation invariants from \(d\mathcal{L}\), automata approaches are often limited to linear continuous dynamics [8, 13].
Concurrent dynamic logic (CDL) has no way for parallel programs to interact [29]. CDL with communication [28] has CSP-style [14] communication primitives but lacks continuous behavior and a proof calculus for verification.
## 5 Conclusion
This paper presented a dynamic logic \(d\mathcal{L}_{\mathrm{CHP}}\) for communicating hybrid programs (CHPs) with synchronous parallel composition in global time. The \(d\mathcal{L}_{\mathrm{CHP}}\) proof
calculus is the first truly compositional verification approach for communicating parallel hybrid systems. To this end, \(d\mathcal{L}_{\mathrm{CHP}}\) exploits the flexibility of dynamic logic by complementing necessity and possibility modalities with assumption-commitment (ac) modalities. Crucially, this embedding of ac-reasoning enables compositional specification and verification of parallel hybrid behavior in a way that tames its complexity. The practical feasibility of \(d\mathcal{L}_{\mathrm{CHP}}\) increases as it supports reasoning via intuitive symbolic execution in the presence of communication. All technical subtleties in the semantic construction remain under the hood such that the actual calculus naturally generalizes dynamic logic reasoning.
Future work includes developing a uniform substitution calculus [33] for \(d\mathcal{L}_{\mathrm{CHP}}\) in order to enable parsimonious theorem prover implementations [9].
#### Funding Statement.
This project was funded in part by the Deutsche Forschungsgemeinschaft (DFG) - 378803395 (ConVeY) and an Alexander von Humboldt Professorship.
| This paper presents the dynamic logic $d\mathcal{L}_\text{CHP}$ for compositional deductive verification of communicating hybrid programs (CHPs). Beyond the conventional mix of discrete and continuous dynamics, CHPs add CSP-style operators for communication and parallelism. The compositional proof calculus verifies parallel compositions of CHPs modularly from proofs of their subprograms, based on assumption-commitment reasoning in dynamic logic. Unlike Hoare-style assumption-commitment reasoning, $d\mathcal{L}_\text{CHP}$ supports intuitive symbolic execution via variables that explicitly record the communication primitives. $d\mathcal{L}_\text{CHP}$ is a conservative extension of differential dynamic logic $d\mathcal{L}$, and $d\mathcal{L |
2309.09103 | Optimal Estimation under a Semiparametric Density Ratio Model | In many statistical and econometric applications, we gather individual
samples from various interconnected populations that undeniably exhibit common
latent structures. Utilizing a model that incorporates these latent structures
for such data enhances the efficiency of inferences. Recently, many researchers
have been adopting the semiparametric density ratio model (DRM) to address the
presence of latent structures. The DRM enables estimation of each population
distribution using pooled data, resulting in statistically more efficient
estimations in contrast to nonparametric methods that analyze each sample in
isolation. In this article, we investigate the limit of the efficiency
improvement attainable through the DRM. We focus on situations where one
population's sample size significantly exceeds those of the other populations.
In such scenarios, we demonstrate that the DRM-based inferences for populations
with smaller sample sizes achieve the highest attainable asymptotic efficiency
as if a parametric model is assumed. The estimands we consider include the
model parameters, distribution functions, and quantiles. We use simulation
experiments to support the theoretical findings with a specific focus on
quantile estimation. Additionally, we provide an analysis of real revenue data
from U.S. collegiate sports to illustrate the efficacy of our contribution. | Archer Gong Zhang, Jiahua Chen | 2023-09-16T22:06:22 | http://arxiv.org/abs/2309.09103v1 | # Optimal Estimation under a Semiparametric Density Ratio Model
###### Abstract
In many statistical and econometric applications, we gather individual samples from various interconnected populations that undeniably exhibit common latent structures. Utilizing a model that incorporates these latent structures for such data enhances the efficiency of inferences. Recently, many researchers have been adopting the semiparametric density ratio model (DRM) to address the presence of latent structures. The DRM enables estimation of each population distribution using pooled data, resulting in statistically more efficient estimations in contrast to nonparametric methods that analyze each sample in isolation. In this article, we investigate the limit of the efficiency improvement attainable through the DRM. We focus on situations where one population's sample size significantly exceeds those of the other populations. In such scenarios, we demonstrate that the DRM-based inferences for populations with smaller sample sizes achieve the highest attainable asymptotic efficiency as if a parametric model is assumed. The estimands we consider include the model parameters, distribution functions, and quantiles. We use simulation experiments to support the theoretical findings with a specific focus on quantile estimation. Additionally, we provide an analysis of real revenue data from U.S. collegiate sports to illustrate the efficacy of our contribution.
_Keywords_: Biased sampling; Empirical likelihood; Exponential family; Exponential tilting; Statistical efficiency.
## 1 Introduction
In numerous applications, scientists may independently collect a single sample from each of the connected and similar populations. The dataset hence contains independent multiple samples. For example, to gauge the economic state of a country, econometricians may study the evolution of income distribution over time (Roine and Waldenstrom, 2015). They may hence collect cross-sectional or panel survey data from year to year (Wooldridge, 2010), and the underlying populations for these multiple samples are naturally connected. There are various approaches to data analysis of this nature. One may postulate a parametric model such as normal or gamma on these populations (Davison, 2003). However, some statistical inference procedures such as quantile estimation may have poor performance when the model assumptions are merely mildly violated. For instance, statistical analyses of a dataset under two similar models, log-normal and gamma models, can yield markedly different results (Wiens, 1999). To mitigate the risks of model misspecification, one may take a nonparametric approach (Wasserman, 2006) by not imposing any parametric assumption. However, the nonparametric approaches ignore the latent structures shared by the multiple populations from which the samples are collected, and hence fail to utilize the potential gain in statistical efficiency. As a balanced trade-off between the model misspecification risks and the statistical efficiency, we advocate semiparametric approaches (Hardle et al., 2004; Tsiatis, 2006).
Recently, there have been many research studies on the semiparametric density ratio model (DRM) (Anderson, 1979). Let \(g_{0}(x),\ldots,g_{m}(x)\) be the density functions for the multiple populations. The DRM postulates that
\[g_{k}(x)=g_{0}(x)\exp\{\boldsymbol{\theta}_{k}^{\top}\mathbf{q}(x)\},\quad k= 1,\ldots,m, \tag{1}\]
where \(\mathbf{q}(x)\) is a prespecified vector-valued function and \(\boldsymbol{\theta}_{k}\) is an unknown vector-valued parameter. The DRM effectively utilizes the latent structures shared by the multiple populations. Not surprisingly, methods developed under the DRM are often statistically more efficient than the
nonparametric methods that ignore the latent structures and use the samples separately for individual populations. In particular, the DRM-based empirical likelihood (EL) approaches (Owen, 2001) are found to have nice asymptotic properties and superior performances (Qin, 1993; Chen and Liu, 2013; Cai et al., 2017). Under the framework of the DRM and EL, the multiple samples are used together to draw inferences on each population.
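For a concrete instance of the DRM, any two normal densities satisfy (1) with basis function \(\mathbf{q}(x)=(1,x,x^{2})^{\top}\). The following Python sketch verifies this numerically; the particular means and standard deviations are our own choices.

```python
import numpy as np
from scipy.stats import norm

mu0, s0, mu1, s1 = 0.0, 1.0, 0.5, 1.5        # two normal populations (our choice)

# closed-form DRM parameters for q(x) = (1, x, x^2)
alpha = np.log(s0 / s1) + mu0**2 / (2 * s0**2) - mu1**2 / (2 * s1**2)
beta1 = mu1 / s1**2 - mu0 / s0**2
beta2 = 1 / (2 * s0**2) - 1 / (2 * s1**2)

x = np.linspace(-4.0, 6.0, 200)
lhs = norm.logpdf(x, mu1, s1) - norm.logpdf(x, mu0, s0)  # log(g1(x) / g0(x))
rhs = alpha + beta1 * x + beta2 * x**2                   # theta^T q(x)
assert np.allclose(lhs, rhs)
```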
These results lead to an interesting question: what is the limit of the efficiency gain through the DRM-based approach? A gold-standard target to consider is the parametric efficiency. As a semiparametric model, a DRM naturally contains many parametric models, with each forming a so-called parametric submodel. For every such parametric submodel, there is a classical Cramer-Rao lower bound. As a result, the asymptotic variance for any consistent and asymptotically normal semiparametric estimator is no smaller than the supremum of the Cramer-Rao lower bounds for all parametric submodels. Such a supremum is famously known as the semiparametric efficiency bound (Newey, 1990). When a parametric submodel contains the true distributions that generate the data and is regular, the parametric maximum likelihood estimator (MLE) has the smallest asymptotic variance that attains the Cramer-Rao lower bound (Casella and Berger, 2002). Therefore, no methods can achieve higher efficiency under the DRM than it can under its parametric submodel that the true distributions reside. This gives an efficiency upper bound for any semiparametric approach including the DRM-based estimators. Motivated by this observation, our research question becomes: is it likely that the DRM-based approaches achieve parametric efficiency under a parametric submodel? If yes, when would that happen?
In this article, we focus on the scenario where the sample size from one population is significantly larger than the sample sizes from the other populations. We are interested in the efficiency of the DRM-based estimators for the populations with smaller samples. In other words, we want to quantify how much help the large sample population can offer to improve the efficiency of the estimators for the small sample populations. The motivation for us to consider this scenario is as follows. Suppose the sample size from one population is infinite. The corresponding population distribution can therefore be regarded as known. Consequently, treating the infinite-sample population density as \(g_{0}(x)\), the semiparametric DRM in (1) would reduce to a parametric model for the other populations. We therefore expect that the DRM-based estimators for the small sample populations will achieve parametric efficiency under the corresponding parametric models.
This article establishes and proves this claim for the model parameters, distribution functions, and quantiles. Although we prove these results mostly in a two-sample scenario for clarity, our discoveries are general for the multi-sample scenario. More specifically, when one population has an extra large sample, the DRM-based estimators for the other populations are as efficient as the best possible under the true parametric submodel. For convenience, we refer to this scenario as the two-sample scenario throughout the article.
In the existing literature, there have been some discussions on the asymptotic efficiency of the DRM-based approaches, such as Zhang (2000) and Chen and Liu (2013). They focus on the situation where the sample sizes all go to infinity at the same rate and find that the DRM-based quantile estimators are asymptotically more efficient than the nonparametric empirical quantiles. However, none of these studies systematically investigate the boundary of the efficiency gain under the DRM. This article tackles this important research problem by examining the aforementioned two-sample scenario. Furthermore, their theoretical results rely on the assumption that the ratio of any two sample sizes remains fixed, positive, and finite as the sample sizes approach infinity. In contrast, we study the asymptotic efficiency of the DRM-based approaches in a different and more general context that allows the sample sizes to grow at different rates and the ratios of sample sizes to evolve.
The rest of the article is organized as follows. In Section 2, we elaborate on the DRM and state the research problem. For comparison, in Section 3 we present the gold-standard parametric efficiency under a specific parametric submodel of the DRM. In Section 4, we give an overview of the EL-based inferences under the DRM and study the asymptotic efficiency of the EL-DRM estimators of the model parameters, distribution functions, and quantiles. For illustration, we show in Section 5 how the asymptotic variance of the DRM-based quantile estimator evolves as a function of the ratio of the sample sizes when the true population distributions \(G_{k}\) are all identical but not assumed to be identical in the estimation. We conduct simulation studies in Section 6 by generating data from normal and exponential distributions, whose results support our theoretical findings. In Section 7, we report results from a real-data analysis of the revenues of U.S. collegiate sports. Finally, Section 8 concludes the article with a discussion of our contributions. Proofs of the theoretical results are provided in the Appendix.
## 2 The research problem
Suppose we have \(m+1\) independent sets of independent and identically distributed (i.i.d.) samples respectively from population distributions \(G_{0},\ldots,G_{m}\):
\[x_{k,1},\ldots,x_{k,n_{k}}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}G_{k}, \hskip 14.226378ptk=0,\ldots,m. \tag{2}\]
Let \(g_{0}(\cdot),\ldots,g_{m}(\cdot)\) be the corresponding density functions with respect to some common \(\sigma\)-finite measure. They satisfy the DRM if
\[g_{k}(x)=g_{0}(x)\exp\{\boldsymbol{\theta}_{k}^{\top}\mathbf{q}(x)\},\hskip 14.226378pt k=1,\ldots,m,\]
with \(\mathbf{q}(x)\) a prespecified vector-valued function of dimension \(d\) and \(\boldsymbol{\theta}^{\top}\coloneqq(\boldsymbol{\theta}_{1}^{\top},\ldots,\boldsymbol{\theta}_{m}^{\top})\) an unknown vector-valued parameter. The first element of \(\mathbf{q}(x)\) is set to be 1 so that the corresponding coefficient in \(\boldsymbol{\theta}_{k}\) is a normalization constant. Therefore, we may write \(\mathbf{q}(x)\) as
\[\mathbf{q}^{\top}(x)=(1,\mathbf{q}_{-}^{\top}(x)), \tag{3}\]
for some vector-valued function \(\mathbf{q}_{-}(x)\) of dimension \(d-1\). Correspondingly, we may also decompose the parameter \(\boldsymbol{\theta}_{k}\) into \((\alpha_{k},\boldsymbol{\beta}_{k})\). We also require the elements of \(\mathbf{q}(x)\) to be linearly independent; otherwise, some elements of \(\mathbf{q}(x)\) are redundant. By convention, we call \(G_{0}\) the base distribution and \(\mathbf{q}(x)\) the basis function. The DRM and its inference, which we introduce later, are invariant to the choice of the base distribution. In this article, we select the population distribution from which we have a much larger sample as the base distribution.
The DRM covers many well-known parametric distribution families, with different choices of the base distribution \(G_{0}\) and the basis function \(\mathbf{q}(x)\). For example, the Poisson distribution family satisfies the DRM with \(G_{0}\) being Poisson and \(\mathbf{q}(x)=(1,x)^{\top}\); the normal distribution family satisfies the DRM with \(G_{0}\) being normal and \(\mathbf{q}(x)=(1,x,x^{2})^{\top}\); and the gamma distribution family also satisfies the DRM with \(G_{0}\) being gamma and \(\mathbf{q}(x)=(1,x,\log x)^{\top}\). In fact, any exponential family model satisfies the DRM. On the other hand, when using DRM, we do not assume that each population distribution belongs to any parametric distribution family. Instead of directly modelling the distribution of each population, the DRM takes a semiparametric approach to modelling the relationship between the multiple populations. Therefore, we may see that the
DRM is a very flexible model and has a low risk of model misspecification. A more appealing feature of the DRM is that, with an appropriately chosen basis function \(\mathbf{q}(x)\), the DRM allows us to utilize the combined data to estimate each distribution \(G_{k}\). This would lead to efficiency gain compared with estimating \(G_{k}\) individually using the sample from itself. The DRM has been widely applied in many fields; see the books Sugiyama et al. (2012) and Qin (2017). It is also closely connected with other semiparametric models such as models for biased sampling (Vardi, 1982, 1985), the exponential tilting model (Rathouz and Gao, 2009), and the proportional likelihood ratio model (Luo and Tsai, 2012). Many studies on a variety of inference problems in the literature have confirmed this efficiency gain through either theoretical or empirical analyses, including Zhang (2000); Chen and Liu (2013); Cai et al. (2017).
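For concreteness, the normal case mentioned above can be verified directly. With \(g_{k}\) the density of \(N(\mu_{k},\sigma_{k}^{2})\), expanding the quadratic terms gives

\[\frac{g_{1}(x)}{g_{0}(x)}=\exp\left\{\alpha+\left(\frac{\mu_{1}}{\sigma_{1}^{2}}-\frac{\mu_{0}}{\sigma_{0}^{2}}\right)x+\left(\frac{1}{2\sigma_{0}^{2}}-\frac{1}{2\sigma_{1}^{2}}\right)x^{2}\right\},\quad\alpha=\log\frac{\sigma_{0}}{\sigma_{1}}+\frac{\mu_{0}^{2}}{2\sigma_{0}^{2}}-\frac{\mu_{1}^{2}}{2\sigma_{1}^{2}},\]

which is exactly of the DRM form \(\exp\{\boldsymbol{\theta}_{1}^{\top}\mathbf{q}(x)\}\) with \(\mathbf{q}(x)=(1,x,x^{2})^{\top}\).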
The research problem we consider in this article is: when the sample from \(G_{0}\) has a much larger size than the samples from the other \(G_{k}\), would the DRM-based estimators of some functionals of \(G_{k}\) achieve parametric efficiency under a parametric submodel? Without loss of generality and for clarity, hereafter we present our results mostly in the two-sample scenario, namely \(m=1\). Our results are general for the multiple samples as long as we have a much larger sample from one population than the others. We denote by \(n_{k}\) the size of the sample from \(G_{k}\), and let \(n=\sum_{k=0}^{m}n_{k}\) be the total sample size. Formally, when \(G_{0},G_{1}\) satisfy the DRM, we study the asymptotic efficiency of the estimators of some functionals of \(G_{1}\) when
\[n_{0}/n_{1}\rightarrow\infty\ \ \ \text{as}\ \ \ n_{0},n_{1}\rightarrow\infty. \tag{4}\]
The functionals of \(G_{1}\) we consider include the model parameters \(\mathbf{\theta}\), the cumulative distribution function \(G_{1}(x)\) for every \(x\), and the quantiles of \(G_{1}\).
The research problem has many applications in statistics and econometrics. For example, an important aspect of econometrics is to analyze cross-sectional and panel data with some structures (Ng, 2006; Chen et al., 2012; Lin and Ng, 2012; Boneva et al., 2015; Su et al., 2016; Gao et al., 2020; Su et al., 2023), which is useful for areas of economic research such as understanding the economic state of a population at specific time points and evaluating the effects of a policy across different groups, regions, or time periods. The DRM has been found successful for the analysis of such data in the long-term monitoring of lumber strength properties (Zidek and Lum, 2018; Chen et al., 2022). This is because the DRM-based methods are able to borrow strength across the multiple cross-sectional samples to exploit the shared latent structures between different populations over years and regions. Therefore, it is of scientific significance to quantify the resulting efficiency gain when analyzing such data. Our contributions in this article achieve this aim by focusing on the situation where one wishes to make inferences on a population with a small data set, aided by a large data set from another connected population. In the real-data analysis section of this article, we study the efficiency of the DRM-based quantile estimators under such a scenario based on revenue data from U.S. collegiate sports.
## 3 Estimation efficiency under a parametric submodel
To study the efficiency of the DRM-based inferences, we first reveal the efficiency limit under its parametric submodels. A parametric submodel is a parametric family of distributions, selected for the data, that satisfies the assumptions of the semiparametric model (in our case, the DRM). In this article, the parametric submodel we consider for \(G_{0},G_{1}\) is an exponential family model:
\[g_{0}(x) =B(x)\exp\{\boldsymbol{\eta}_{0}^{\top}\mathbf{q}_{-}(x)+A( \boldsymbol{\eta}_{0})\},\] \[g_{1}(x) =B(x)\exp\{\boldsymbol{\eta}_{1}^{\top}\mathbf{q}_{-}(x)+A( \boldsymbol{\eta}_{1})\}, \tag{5}\]
where \(\boldsymbol{\eta}_{0},\boldsymbol{\eta}_{1}\) are the unknown natural parameters to be estimated, \(\mathbf{q}_{-}(x)\) is the sufficient statistic for a single observation \(x\) that is the same as the one in (3) in the DRM, \(B(\cdot)\) is a known nonnegative function, and \(A(\boldsymbol{\eta}_{k})\) is uniquely determined by \(\boldsymbol{\eta}_{k}\):
\[A(\boldsymbol{\eta}_{k})=-\log\int B(x)\exp\{\boldsymbol{\eta}_{k}^{\top} \mathbf{q}_{-}(x)\}\mathrm{d}x,\]
such that \(g_{k}\) is indeed a density function for \(k=0,1\).
**Remark**: The results presented in the article remain valid if we consider another parametric submodel: \(g_{0}(x)\) completely known and \(g_{1}(x)\) from an exponential family model. This submodel consists of joint distributions of two independent sets of i.i.d. samples of the form \(\prod_{i=1}^{n_{0}}g_{0}(x_{0i})\prod_{j=1}^{n_{1}}g_{0}(x_{1j})\exp\{\alpha+\boldsymbol{\beta}^{\top}\mathbf{q}_{-}(x_{1j})\}\).
With the decomposition of \(\mathbf{q}(x)\) and \(\boldsymbol{\theta}\) in (3), we get an equivalent form of the DRM in (1):
\[g_{1}(x;\boldsymbol{\theta})=g_{0}(x)\exp\{\boldsymbol{\theta}^{\top}\mathbf{ q}(x)\}=g_{0}(x)\exp\{\alpha+\boldsymbol{\beta}^{\top}\mathbf{q}_{-}(x)\}, \tag{6}\]
where \(\alpha\) is a normalization constant determined by
\[\alpha=-\log\int g_{0}(x)\exp\{\mathbf{\beta}^{\top}\mathbf{q}_{-}(x)\}\mathrm{d}x.\]
Therefore, the exponential family model in (5) satisfies the DRM in (6) with
\[\mathbf{\theta}=\begin{pmatrix}\alpha\\ \mathbf{\beta}\end{pmatrix}=\begin{pmatrix}A(\mathbf{\eta}_{1})-A(\mathbf{\eta}_{0})\\ \mathbf{\eta}_{1}-\mathbf{\eta}_{0}\end{pmatrix}.\]
In other words, the exponential family model in (5) is indeed a parametric submodel of the DRM. In this section, we obtain the asymptotic results for \(G_{1}\) under this parametric submodel.
### Estimation of \(\mathbf{\theta}\) under the parametric submodel
We first study the efficiency of the parametric MLE of \(\mathbf{\theta}\) under the parametric submodel (5). This will be our gold-standard target with which we will compare the efficiency of the DRM estimator of \(\mathbf{\theta}\) to be presented later. We obtain a parametric estimator of \(\mathbf{\theta}\) using the maximum likelihood method. The parametric log-likelihood function given the two independent i.i.d. samples is
\[\ell_{\mathrm{para}}(\mathbf{\eta}_{0},\mathbf{\eta}_{1}) =\sum_{j=1}^{n_{0}}\log g_{0}(x_{0,j})+\sum_{j=1}^{n_{1}}\log g_{1}(x_{1,j})\] \[=n_{0}A(\mathbf{\eta}_{0})+\mathbf{\eta}_{0}^{\top}\sum_{j=1}^{n_{0}}\mathbf{q}_{-}(x_{0,j})+n_{1}A(\mathbf{\eta}_{1})+\mathbf{\eta}_{1}^{\top}\sum_{j=1}^{n_{1}}\mathbf{q}_{-}(x_{1,j})+\sum_{k,j}\log B(x_{kj}).\]
From this, we get the score function: for \(k=0,1\),
\[\frac{\partial\ell_{\mathrm{para}}(\mathbf{\eta}_{0},\mathbf{\eta}_{1})}{\partial\mathbf{ \eta}_{k}}=n_{k}\frac{\partial A(\mathbf{\eta}_{k})}{\partial\mathbf{\eta}_{k}}+\sum_ {j=1}^{n_{k}}\mathbf{q}_{-}(x_{kj}).\]
We let \(\tilde{\mathbf{\eta}}_{k}\) denote the parametric MLE of \(\mathbf{\eta}_{k}\), and they satisfy
\[\frac{\partial A(\tilde{\mathbf{\eta}}_{k})}{\partial\mathbf{\eta}_{k}}=-n_{k}^{-1} \sum_{j=1}^{n_{k}}\mathbf{q}_{-}(x_{kj}).\]
We remark that for convenience, with a generic function \(f(\mathbf{y})\), we have used the notation
\[\frac{\partial f(\mathbf{y}^{*})}{\partial\mathbf{y}}=\left.\frac{\partial f(\mathbf{y})}{ \partial\mathbf{y}}\right|_{\mathbf{y}=\mathbf{y}^{*}}\]
in the preceding expression, and we retain the use of such notation hereafter.
By the invariance property of maximum likelihood estimation (see Theorem 3.1 below), which states that any function of the MLE is an MLE of the corresponding function of the parameter, the MLE of \(\mathbf{\theta}\), denoted by \(\tilde{\mathbf{\theta}}\), is given by
\[\tilde{\mathbf{\theta}}=\begin{pmatrix}A(\tilde{\mathbf{\eta}}_{1})-A(\tilde{\mathbf{\eta }}_{0})\\ \tilde{\mathbf{\eta}}_{1}-\tilde{\mathbf{\eta}}_{0}\end{pmatrix}. \tag{7}\]
Before we formally state the invariance property of the MLEs, we first introduce the so-called induced likelihood (Casella and Berger, 2002), which is also known as the profile likelihood in the literature (Barndorff-Nielsen, 1988). Suppose a distribution family is indexed by a parameter \(\theta\), and we want to find the MLE of some function of \(\theta\), denoted by \(\tau(\theta)\). When \(\tau(\cdot)\) is one-to-one, then given the MLE of \(\theta\) being \(\hat{\theta}\), it can be seen that the MLE of \(\tau(\theta)\) is \(\tau(\hat{\theta})\). When \(\tau(\cdot)\) is not one-to-one, there are some technical issues under the current definition of likelihood. To overcome these problems, we need a more general definition of the likelihood function of \(\tau(\theta)\), called the induced likelihood function, and correspondingly a more general definition of the MLE. They are defined as follows.
**Definition 3.1** (Extended definitions of likelihood and MLE (Casella and Berger, 2002)).: _Denote by \(L(\theta)\) the likelihood function of \(\theta\). The induced likelihood function for \(\eta=\tau(\theta)\) is defined as_
\[L^{*}(\eta)=\sup_{\{\theta:\tau(\theta)=\eta\}}L(\theta).\]
_Further, \(\hat{\eta}\) that maximizes the induced likelihood \(L^{*}(\eta)\) is defined as the MLE of \(\eta=\tau(\theta)\)._
With this extended definition of the MLE, we are now ready to formally state the invariance property of MLEs in the following theorem.
**Theorem 3.1** (Invariance property of MLE (Casella and Berger, 2002)).: _If \(\tilde{\theta}\) is the MLE of \(\theta\), then for any function \(\tau(\theta)\), \(\tau(\tilde{\theta})\) is an MLE of \(\tau(\theta)\)._
After obtaining the parametric MLE \(\tilde{\mathbf{\theta}}\), we now study its asymptotic properties. By standard maximum likelihood theory, under the exponential family model in (5) and suitable regularity conditions, the MLE \(\tilde{\mathbf{\eta}}_{k}\) is asymptotically normal:
\[\sqrt{n_{k}}(\tilde{\mathbf{\eta}}_{k}-\mathbf{\eta}_{k}^{*})\overset{d}{ \rightarrow}N(\mathbf{0},\mathrm{Var}_{k}^{-1}[\mathbf{q}_{-}(X)]), \tag{8}\]
as \(n_{k}\rightarrow\infty\), where \(\mathbf{\eta}_{k}^{*}\) denotes the true value of \(\mathbf{\eta}_{k}\). Applying the delta method (Casella and Berger, 2002), we have that \((A(\tilde{\mathbf{\eta}}_{k}),\tilde{\mathbf{\eta}}_{k})\) is also asymptotically normal:
\[\sqrt{n_{k}}\begin{pmatrix}A(\tilde{\mathbf{\eta}}_{k})-A(\mathbf{\eta}_{k}^{*})\\ \tilde{\mathbf{\eta}}_{k}-\mathbf{\eta}_{k}^{*}\end{pmatrix}\overset{d}{\rightarrow}N\left(\mathbf{0},\begin{pmatrix}-\operatorname{\mathbb{E}}_{k}[\mathbf{q}_{-}^{\top}(X)]\\ \mathbf{I}_{d-1}\end{pmatrix}\mathrm{Var}_{k}^{-1}[\mathbf{q}_{-}(X)]\left(-\operatorname{\mathbb{E}}_{k}[\mathbf{q}_{-}(X)],\,\mathbf{I}_{d-1}\right)\right),\]
as \(n_{k}\rightarrow\infty\), where \(\mathbf{I}_{d-1}\) denotes the identity matrix of dimension \(d-1\). Note that here we have used a standard result on the exponential family model:
\[\frac{\partial A(\mathbf{\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}}=- \operatorname{\mathbb{E}}_{1}[\mathbf{q}_{-}(X)]. \tag{9}\]
Taking advantage of the nice mathematical properties of exponential families, we forgo the otherwise necessary regularity verifications.
Finally, because the two samples are independent of each other, we have
\[\sqrt{n_{1}}(\tilde{\mathbf{\theta}}-\mathbf{\theta}^{*}) =\sqrt{n_{1}}\begin{pmatrix}A(\tilde{\mathbf{\eta}}_{1})-A(\mathbf{\eta}_{1}^{*})\\ \tilde{\mathbf{\eta}}_{1}-\mathbf{\eta}_{1}^{*}\end{pmatrix}-(n_{1}/n_{0})^{1/2}\sqrt{n_{0}}\begin{pmatrix}A(\tilde{\mathbf{\eta}}_{0})-A(\mathbf{\eta}_{0}^{*})\\ \tilde{\mathbf{\eta}}_{0}-\mathbf{\eta}_{0}^{*}\end{pmatrix}\] \[\overset{d}{\rightarrow}N\left(\mathbf{0},\begin{pmatrix}-\operatorname{\mathbb{E}}_{1}[\mathbf{q}_{-}^{\top}(X)]\\ \mathbf{I}_{d-1}\end{pmatrix}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\left(-\operatorname{\mathbb{E}}_{1}[\mathbf{q}_{-}(X)],\,\mathbf{I}_{d-1}\right)\right), \tag{10}\]

because \(\sqrt{n_{0}}\big(A(\tilde{\mathbf{\eta}}_{0})-A(\mathbf{\eta}_{0}^{*}),\,\tilde{\mathbf{\eta}}_{0}-\mathbf{\eta}_{0}^{*}\big)=O_{p}(1)\) while \(n_{1}/n_{0}\to 0\) as \(n_{0},n_{1}\rightarrow\infty\), so the second term vanishes in the limit. We will compare the asymptotic variance in (10) of the parametric MLE \(\tilde{\mathbf{\theta}}\) with the asymptotic variance of the DRM-based estimator of \(\mathbf{\theta}\) later.
### Distribution estimation under the parametric submodel
We next study the inference on the distribution function \(G_{1}(x)\) under the two-sample exponential family model in (5). By Theorem 3.1, the MLE of \(G_{1}(x)=G_{1}(x;\mathbf{\eta}_{1}^{*})\) is given by
\[\tilde{G}_{1}(x)=G_{1}(x;\tilde{\mathbf{\eta}}_{1})=\int_{-\infty}^{x}B(t)\exp\{ \tilde{\mathbf{\eta}}_{1}^{\top}\mathbf{q}_{-}(t)+A(\tilde{\mathbf{\eta}}_{1})\}\mathrm{d}t,\]
where we recall that \(\tilde{\mathbf{\eta}}_{1}\) is the MLE of \(\mathbf{\eta}_{1}\).
The limiting distribution of such a parametric distribution estimator \(\tilde{G}_{1}(x)\) can be straightforwardly obtained, as follows. Recall that the MLE \(\tilde{\mathbf{\eta}}_{1}\) is asymptotically normal:
\[\sqrt{n_{1}}(\tilde{\mathbf{\eta}}_{1}-\mathbf{\eta}_{1}^{*})\overset{d}{ \rightarrow}N(\mathbf{0},\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]),\]
as \(n_{1}\rightarrow\infty\). Applying the delta method, we have that for every fixed \(x\), the MLE \(\tilde{G}_{1}(x)\) is also asymptotically normal:
\[\sqrt{n_{1}}\{\tilde{G}_{1}(x)-G_{1}(x)\} =\sqrt{n_{1}}\{G_{1}(x;\tilde{\mathbf{\eta}}_{1})-G_{1}(x;\mathbf{\eta}_{ 1}^{*})\}\] \[\overset{d}{\rightarrow}N\left(\mathbf{0},\frac{\partial G_{1}( x;\mathbf{\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}^{\top}}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{ -}(X)]\frac{\partial G_{1}(x;\mathbf{\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}}\right),\ \ \text{as }n_{1}\rightarrow\infty. \tag{11}\]
Under regularity conditions, we can further simplify the variance matrix in (11) as follows. First, we note that
\[\frac{\partial G_{1}(x;\mathbf{\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}} =\int_{-\infty}^{x}\left.\frac{\partial\exp\{\mathbf{\eta}_{1}^{\top }\mathbf{q}_{-}(t)+A(\mathbf{\eta}_{1})\}}{\partial\mathbf{\eta}_{1}}\right|_{\mathbf{\eta} _{1}=\mathbf{\eta}_{1}^{*}}B(t)\mathrm{d}t\] \[=\int_{-\infty}^{x}\left\{\mathbf{q}_{-}(t)+\frac{\partial A(\bm {\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}}\right\}\exp\{\mathbf{\eta}_{1}^{*\top} \mathbf{q}_{-}(t)+A(\mathbf{\eta}_{1}^{*})\}B(t)\mathrm{d}t\] \[=\int_{-\infty}^{x}\mathbf{q}_{-}(t)\mathrm{d}G_{1}(t)+\frac{ \partial A(\mathbf{\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}}\int_{-\infty}^{x} \mathrm{d}G_{1}(t)\] \[=\mathbf{Q}(x)+G_{1}(x)\frac{\partial A(\mathbf{\eta}_{1}^{*})}{ \partial\mathbf{\eta}_{1}}, \tag{12}\]
by the definition of \(\mathbf{Q}(x)\) given in Theorem 4.3. Further, recall from (9) that \(\partial A(\mathbf{\eta}_{1}^{*})/\partial\mathbf{\eta}_{1}=-\mathbb{E}_{1}[\mathbf{q}_{-}(X)].\) Therefore, the preparatory results in (9) and (12) together lead to a simpler expression for the variance matrix in (11):
\[\frac{\partial G_{1}(x;\mathbf{\eta}_{1}^{*})}{\partial\mathbf{\eta}_{1}^ {\top}}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}]\frac{\partial G_{1}(x;\mathbf{\eta}_ {1}^{*})}{\partial\mathbf{\eta}_{1}}\] \[=\{\mathbf{Q}(x)-\mathbb{E}_{1}[\mathbf{q}_{-}]G_{1}(x)\}^{\top} \mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}]\left\{\mathbf{Q}(x)-\mathbb{E}_{1}[ \mathbf{q}_{-}]G_{1}(x)\right\}. \tag{13}\]
We will show later that the DRM-based estimator of \(G_{1}(x)\) attains the same asymptotic efficiency as above.
### Quantile estimation under the parametric submodel
Quantiles are very important population parameters in many applications. In this section, we study the asymptotic efficiency of the parametric quantile estimator under the exponential family model in (5). We first define, for any quantile level \(p\in(0,1)\), the \(p\)th quantile of \(G_{1}\) as
\[\xi_{p}=G_{1}^{-1}(p)\coloneqq\inf\{t:G_{1}(t)\geq p\},\]
where the inverse function \(G_{1}^{-1}(\cdot)\) is also known as the quantile function. We assume that the distribution \(G_{1}(x)\) has a density function \(g_{1}(x)\) that is positive and continuous at \(x=\xi_{p}\). Then, by Theorem 3.1, the MLE of \(\xi_{p}=G_{1}^{-1}(p;\boldsymbol{\eta}_{1}^{*})\) is given by
\[\tilde{\xi}_{p}=G_{1}^{-1}(p;\tilde{\boldsymbol{\eta}}_{1}),\]
where \(\tilde{\boldsymbol{\eta}}_{1}\) is the MLE of \(\boldsymbol{\eta}_{1}\) under the parametric model in (5).
Because the MLE \(\tilde{\boldsymbol{\eta}}_{1}\) is asymptotically normal (see (8)), the quantile MLE \(\tilde{\xi}_{p}\) is also asymptotically normal by the delta method:
\[\sqrt{n_{1}}\{\tilde{\xi}_{p}-\xi_{p}\} =\sqrt{n_{1}}\{G_{1}^{-1}(p;\tilde{\boldsymbol{\eta}}_{1})-G_{1}^ {-1}(p;\boldsymbol{\eta}_{1}^{*})\}\] \[\overset{d}{\to}N\left(\mathbf{0},\frac{\partial G_{1}^{-1}(p; \boldsymbol{\eta}_{1}^{*})}{\partial\boldsymbol{\eta}_{1}^{\top}}\mathrm{Var} _{1}^{-1}[\mathbf{q}_{-}(X)]\frac{\partial G_{1}^{-1}(p;\boldsymbol{\eta}_{1}^ {*})}{\partial\boldsymbol{\eta}_{1}}\right),\ \ \ \text{as }n_{1}\to\infty. \tag{14}\]
In addition, the variance matrix in (14) has a more specific expression, as follows. Firstly,
\[\frac{\partial G_{1}^{-1}(p;\boldsymbol{\eta}_{1}^{*})}{\partial\boldsymbol{ \eta}_{1}}=-\frac{1}{g_{1}(\xi_{p})}\frac{\partial G_{1}(\xi_{p};\boldsymbol{ \eta}_{1}^{*})}{\partial\boldsymbol{\eta}_{1}}.\]
Then, from previous derivations given in (9) and (12), we have
\[\frac{\partial G_{1}(\xi_{p};\boldsymbol{\eta}_{1}^{*})}{\partial\boldsymbol{ \eta}_{1}}=\mathbf{Q}(\xi_{p})-G_{1}(\xi_{p})\mathbb{E}_{1}[\mathbf{q}_{-}(X) ]=\mathbf{Q}(\xi_{p})-p\mathbb{E}_{1}[\mathbf{q}_{-}(X)].\]
Hence, the variance matrix in (14) can be written as
\[\frac{\partial G_{1}^{-1}(p;\boldsymbol{\eta}_{1}^{*})}{\partial \boldsymbol{\eta}_{1}^{\top}}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\frac{ \partial G_{1}^{-1}(p;\boldsymbol{\eta}_{1}^{*})}{\partial\boldsymbol{\eta}_{1}}\] \[=\{\mathbf{Q}(\xi_{p})-p\mathbb{E}_{1}[\mathbf{q}_{-}(X)]\}^{ \top}\frac{\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]}{g_{1}^{2}(\xi_{p})}\left\{ \mathbf{Q}(\xi_{p})-p\mathbb{E}_{1}[\mathbf{q}_{-}(X)]\right\}. \tag{15}\]
Our objective, to be realized later, is to investigate whether the DRM-based estimator of \(\xi_{p}\) has an asymptotic variance as low as the variance in (14), or equivalently in (15).
## 4 Estimation efficiency under the DRM
Under the DRM, the base distribution \(G_{0}\) is left unspecified. If we impose a parametric form on \(G_{0}\), the DRM would reduce to a parametric model. In this case, we gain model simplicity but bear a higher risk of model misspecification. We thus leave \(G_{0}\) unspecified and use the nonparametric EL of Owen (1988) as a platform for statistical inference under the DRM. There has been a rich literature on the coupling of the EL and the DRM; see, for example, Qin (1993); Qin and Zhang (1997); Qin (1998); Fokianos et al. (2001).
### EL-based inference under the DRM
We first review the EL method under the DRM. For convenience, let \(p_{kj}=\mathrm{d}G_{0}(x_{kj})=P(X=x_{kj};G_{0})\), the probability of observing \(x_{kj}\) under \(G_{0}\) for all applicable \(k,j\). Applying the likelihood principle, we obtain the EL based on the multiple samples under the DRM:
\[L_{n}(G_{0},\ldots,G_{m})=\prod_{k,j}P(X=x_{kj};G_{k})=\prod_{k=0}^{m}\prod_{j =1}^{n_{k}}p_{kj}\exp\{\boldsymbol{\theta}_{k}^{\top}\mathbf{q}(x_{kj})\}, \tag{16}\]
with \(\boldsymbol{\theta}_{0}\coloneqq\mathbf{0}\) by convention. We observe that \(L_{n}(\cdot)=0\) if \(G_{k}\) are continuous distribution functions. This seemingly devastating property does little harm to the usefulness of the EL. As we will see, in the EL we search for the distribution estimator within the space of discrete distributions that assign positive probability mass to the observed data. This does not eliminate much generality because every distribution can be precisely approximated by such discrete distributions when the sample size grows. Since \(L_{n}(\cdot)\) in (16) is also a function of the parameters \(\boldsymbol{\theta}\) and base distribution \(G_{0}\), we may also write its logarithm as
\[\ell_{n}(\boldsymbol{\theta},G_{0})=\sum_{k,j}\log p_{kj}+\sum_{k=1}^{m}\sum_{ j=1}^{n_{k}}\boldsymbol{\theta}_{k}^{\top}\mathbf{q}(x_{kj}). \tag{17}\]
The EL-based inferences on the population parameters are usually carried out through a profile likelihood function. We first observe that the DRM assumption in (1) implies
\[1=\int\mathrm{d}G_{0}(x),\quad 1=\int\mathrm{d}G_{r}=\int\exp\{\boldsymbol{ \theta}_{r}^{\top}\mathbf{q}(x)\}\mathrm{d}G_{0}(x),\;\;\;r=1,\ldots,m.\]
Confining the common support of \(G_{0},\ldots,G_{m}\) to the observed data \(\{x_{kj}\}_{k,j}\) yields the EL-version constraints
\[\sum_{k,j}p_{kj}=1,\quad\sum_{k,j}p_{kj}\exp\{\boldsymbol{\theta}_{r}^{\top} \mathbf{q}(x_{kj})\}=1,\ \ r=1,\ldots,m.\]
With these preparations and following the foundational work by Qin and Lawless (1994), we define the profile log-EL function of \(\boldsymbol{\theta}\) as the supremum of the log-EL \(\ell_{n}(\boldsymbol{\theta},G_{0})\) in (17) over \(G_{0}\) subject to the above constraints:
\[\tilde{\ell}_{n}(\boldsymbol{\theta})=\sup_{G_{0}}\Big\{\ell_{n}(\boldsymbol{\theta},G_{0}):\sum_{k,j}p_{kj}=1,\ \ \sum_{k,j}p_{kj}\exp\{\boldsymbol{\theta}_{r}^{\top}\mathbf{q}(x_{kj})\}=1,\ \ r=1,\ldots,m\Big\}.\]
The above optimization problem has a simple solution by the Lagrange multiplier method. For direct relevance to our setting, we present the results for \(m=1\). We have
\[\tilde{\ell}_{n}(\boldsymbol{\theta})=-\sum_{k,j}\log\Big{[}n+\hat{\lambda}_ {1}\big{(}\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}(x_{kj})\}-1\big{)}\Big{]} +\sum_{j=1}^{n_{1}}\boldsymbol{\theta}^{\top}\mathbf{q}(x_{1j}),\]
for some \(\hat{\lambda}_{1}\) satisfying \(\sum_{k,j}1/[n+\hat{\lambda}_{1}(\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}(x _{kj})\}-1)]=1\). One may estimate \(\boldsymbol{\theta}\) by the maximum empirical likelihood estimator (MELE):
\[\hat{\boldsymbol{\theta}}=\arg\max\tilde{\ell}_{n}(\boldsymbol{\theta}). \tag{18}\]
At \(\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}\), some algebra gives \(\hat{\lambda}_{1}=n_{1}\), and we naturally get another function by replacing \(\hat{\lambda}_{1}\) with \(n_{1}\) in the profile log-EL \(\tilde{\ell}_{n}(\boldsymbol{\theta})\):
\[\ell_{n}(\boldsymbol{\theta})=-\sum_{k,j}\log\big{[}n_{0}+n_{1}\exp\{ \boldsymbol{\theta}^{\top}\mathbf{q}(x_{kj})\}\big{]}+\sum_{j=1}^{n_{1}} \boldsymbol{\theta}^{\top}\mathbf{q}(x_{1j}). \tag{19}\]
The profile log-EL \(\tilde{\ell}_{n}(\boldsymbol{\theta})\) and \(\ell_{n}(\boldsymbol{\theta})\) have the same maximum value and maximizer. Because of this, we study the asymptotic properties of the MELE \(\hat{\boldsymbol{\theta}}\) through the analytically simpler \(\ell_{n}(\boldsymbol{\theta})\). By convention, we call this function a _dual function_ of the profile log-EL. Following Chen and Liu (2013), we regard this dual function \(\ell_{n}(\boldsymbol{\theta})\) as if it were the profile log-EL.
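To make the procedure concrete, the following minimal sketch implements the dual function \(\ell_{n}(\boldsymbol{\theta})\) in (19) and maximizes it numerically to obtain the MELE in (18). It assumes NumPy and SciPy; the function names (`dual_log_el`, `mele`) and the normal-data example are illustrative, not part of the formal development.

```python
import numpy as np
from scipy.optimize import minimize

def dual_log_el(theta, x0, x1, q):
    """Dual profile log-EL l_n(theta) in (19) for the two-sample DRM."""
    n0, n1 = len(x0), len(x1)
    x = np.concatenate([x0, x1])           # pooled sample {x_kj}
    lin = q(x) @ theta                     # theta^T q(x_kj) on the pooled points
    return -np.sum(np.log(n0 + n1 * np.exp(lin))) + np.sum(q(x1) @ theta)

def mele(x0, x1, q, d):
    """Maximum empirical likelihood estimator (18): maximize the dual log-EL."""
    res = minimize(lambda t: -dual_log_el(t, x0, x1, q), np.zeros(d))
    return res.x

# Illustration: two normal samples and the basis q(x) = (1, x, x^2).
rng = np.random.default_rng(1)
q = lambda x: np.column_stack([np.ones_like(x), x, x**2])
x0 = rng.normal(0.0, 1.0, size=10000)      # large sample from G_0
x1 = rng.normal(0.5, 1.0, size=100)        # small sample from G_1
theta_hat = mele(x0, x1, q, d=3)
```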
Our ultimate goal is to study the asymptotic efficiency of the DRM-based inferences on \(G_{1}\), the population with the smaller sample, under the two-sample scenario where \(n_{0}/n_{1}\to\infty\). To achieve this goal, we first show that the MELE \(\hat{\mathbf{\theta}}\) under the DRM is asymptotically normal with the same low variance as the MLE \(\tilde{\mathbf{\theta}}\) in (7) derived under the parametric submodel (5). Therefore, when \(n_{0}/n_{1}\to\infty\) and the DRM holds, the inference on \(G_{1}\) is **asymptotically as efficient** as it would be under a correctly specified parametric model for \(G_{1}\). We further prove that the DRM-based estimators of the cumulative distribution function \(G_{1}(x)\) and of the quantiles of \(G_{1}\) both achieve parametric efficiency under the two-sample scenario. Proofs of the main results in this section are provided in the Appendix.
### Estimation of \(\mathbf{\theta}\) under the DRM
In this section, we show that the MELE \(\hat{\mathbf{\theta}}\) in (18) under the DRM is asymptotically normal with the same low asymptotic variance as the parametric MLE \(\tilde{\mathbf{\theta}}\) in (7) under the corresponding parametric submodel in (5). Therefore, when \(n_{0}/n_{1}\to\infty\) and the semiparametric DRM holds, the inference on \(G_{1}\) is as efficient as if we had postulated a correct parametric model for \(G_{1}\).
Our theories are established under the following mild conditions. We write \(\mathbb{E}_{0}\) and \(\mathbb{E}_{1}\) for expectations calculated when \(X\) has distribution \(G_{0}\) and \(G_{1}\), respectively. We denote the true parameter value by \(\mathbf{\theta}^{*}\).
1. As \(n_{0},n_{1}\to\infty\), \(n_{0}/n_{1}\to\infty\).
2. The matrix \(\mathbb{E}_{0}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\) is positive definite.
3. For \(\mathbf{\theta}\) in a neighbourhood of the true parameter value \(\mathbf{\theta}^{*}\) or of \(\mathbf{0}\), we have \[\mathbb{E}_{0}\left[\exp\bigl{(}\mathbf{\theta}^{\top}\mathbf{q}(X)\bigr{)}\right] <\infty,\hskip 14.226378pt\mathbb{E}_{1}\left[\exp\bigl{(}\mathbf{\theta}^{ \top}\mathbf{q}(X)\bigr{)}\right]<\infty.\]
Condition (ii) implies that the matrix \(\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\) is also positive definite. With respect to both \(G_{0}\) and \(G_{1}\), Condition (iii) states that the moment generating function of \(\mathbf{q}(X)\) exists in a neighbourhood of \(\mathbf{0}\). Hence, all finite-order moments of \(\|\mathbf{q}(X)\|\) are finite. Similarly, \(\|\mathbf{q}(X)\|^{L}\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(X)\}\) has finite expectation for all positive \(L\). Furthermore, when \(n\) is large enough and \(\mathbf{\theta}\) is in a small neighbourhood of the truth \(\mathbf{\theta}^{*}\), the derivatives of the log-EL \(\ell_{n}(\mathbf{\theta})\) are all bounded by some polynomials of \(\|\mathbf{q}(X)\|\), and therefore they are all integrable.
The main goal of this section is the efficiency of the MELE \(\hat{\mathbf{\theta}}\). The following intermediate results are helpful for comprehending the main result.
**Lemma 4.1**.: _Under Conditions_ (i) _to_ (iii)_, we have_
1. \(\mathbb{E}\big\{\partial\ell_{n}(\mathbf{\theta}^{*})/\partial\mathbf{\theta}\big\}=\mathbf{0}\);
2. \(n_{1}^{-1/2}\{\partial\ell_{n}(\mathbf{\theta}^{*})/\partial\mathbf{\theta}\} \overset{d}{\to}N(\mathbf{0},\mathrm{Var}_{1}[\mathbf{q}(X)])\) _as_ \(n_{0},n_{1}\to\infty\)_;_
3. \(-n_{1}^{-1}\{\partial^{2}\ell_{n}(\mathbf{\theta}^{*})/\partial\mathbf{\theta}\partial \mathbf{\theta}^{\top}\}\overset{p}{\to}\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\) _as_ \(n_{0},n_{1}\to\infty\)_._
Since the profile log-EL \(\ell_{n}(\mathbf{\theta})\) involves observations from both \(G_{0}\) and \(G_{1}\), the expectation \(\mathbb{E}\) in part 1 of the above lemma is taken with respect to the corresponding distribution term by term. The main result is as follows.
**Theorem 4.2**.: _Under Conditions_ (i) _to_ (iii)_, as \(n_{0},n_{1}\to\infty\), the MELE \(\hat{\mathbf{\theta}}\) defined in_ (18) _is asymptotically multivariate normal:_

\[\sqrt{n_{1}}(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})\overset{d}{\to}N\left(\mathbf{0},\left\{\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\right\}^{-1}-\begin{pmatrix}1&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}\right).\]
We can now answer the question: is the asymptotic variance as low as it can be under the parametric submodel in (5)? Recall that \(\mathbf{q}^{\top}(x)=(1,\mathbf{q}^{\top}_{-}(x))\), and let \(\mathbf{I}_{d-1}\) be the identity matrix of dimension \(d-1\). After some matrix algebra (Harville, 1997, Theorem 8.5.11), we have

\[\left\{\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\right\}^{-1}-\begin{pmatrix}1&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}=\begin{pmatrix}\mathbb{E}_{1}[\mathbf{q}^{\top}_{-}(X)]\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\mathbb{E}_{1}[\mathbf{q}_{-}(X)]&-\mathbb{E}_{1}[\mathbf{q}^{\top}_{-}(X)]\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\\ -\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\mathbb{E}_{1}[\mathbf{q}_{-}(X)]&\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\end{pmatrix}=\begin{pmatrix}-\mathbb{E}_{1}[\mathbf{q}^{\top}_{-}(X)]\\ \mathbf{I}_{d-1}\end{pmatrix}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\left(-\mathbb{E}_{1}[\mathbf{q}_{-}(X)],\;\mathbf{I}_{d-1}\right). \tag{20}\]
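The identity in (20) can also be checked numerically. Below is a minimal Monte Carlo sketch, assuming NumPy; the choice of \(N(0,1)\) draws and \(\mathbf{q}_{-}(x)=(x,x^{2})^{\top}\) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10**6)
qm = np.column_stack([x, x**2])              # q_-(x), dimension d - 1 = 2
q = np.column_stack([np.ones_like(x), qm])   # q(x) = (1, q_-(x)^T)^T

lhs = np.linalg.inv(q.T @ q / len(x))        # {E_1[q q^T]}^{-1}, Monte Carlo version
lhs[0, 0] -= 1.0                             # subtract the (1, 0; 0, 0) block
J = np.vstack([-qm.mean(axis=0), np.eye(2)]) # (-E_1[q_-^T] stacked over I_{d-1})
rhs = J @ np.linalg.inv(np.cov(qm.T)) @ J.T  # right-hand side of (20)
print(np.abs(lhs - rhs).max())               # close to 0, up to Monte Carlo error
```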
With the help of the identity in (20), we can see that the asymptotic variance in Theorem 4.2 is exactly equal to the asymptotic variance of the parametric MLE of \(\mathbf{\theta}\) in (10). In other words,
when \(n_{0}/n_{1}\to\infty\), the DRM-based estimator of \(\mathbf{\theta}\) achieves the same asymptotic efficiency as the parametric estimator under the submodel (5), namely the highest possible efficiency. Estimating the parameter \(\mathbf{\theta}\) and hence also the density ratio is an interesting problem by itself. We, however, focus more on the efficiency of estimating the cumulative distribution function and quantiles of \(G_{1}\). The following sections are devoted to these tasks. As a note for notation, hereafter we may drop the dummy variable inside the expectation and variance operators for a better presentation:
\[\mathbb{E}_{1}[\mathbf{q}_{-}]=\mathbb{E}_{1}[\mathbf{q}_{-}(X)],\quad\mathrm{ Var}_{1}[\mathbf{q}_{-}]=\mathrm{Var}_{1}[\mathbf{q}_{-}(X)].\]
### Distribution estimation under the DRM
In this section, we investigate the asymptotic efficiency of the DRM-based estimator of \(G_{1}(x)\) under the two-sample scenario. We first define this estimator. With the DRM-based MELE \(\hat{\mathbf{\theta}}\) defined in (18), we have the fitted values of \(p_{kj}=P(X=x_{kj};G_{0})\) that characterize \(G_{0}\):
\[\hat{p}_{kj}=[n_{0}+n_{1}\exp\{\hat{\mathbf{\theta}}^{\top}\mathbf{q}(x_{kj})\}]^{ -1}.\]
We then naturally obtain an estimator of \(G_{0}(x)\) as \(\hat{G}_{0}(x)=\sum_{k,j}\hat{p}_{kj}\mathbb{1}(x_{kj}\leq x)\), and an estimator of \(G_{1}(x)\) under the DRM:
\[\hat{G}_{1}(x)=\sum_{k,j}\hat{p}_{kj}\exp\{\hat{\mathbf{\theta}}^{\top}\mathbf{q}( x_{kj})\}\mathbb{1}(x_{kj}\leq x), \tag{21}\]
where \(\mathbb{1}(\cdot)\) is the indicator function.
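In computation, \(\hat{G}_{1}\) is simply a weighted empirical distribution over the pooled sample. A minimal sketch follows, reusing `mele` and `q` from the earlier sketch in Section 4.1; the function name is illustrative.

```python
import numpy as np

def drm_cdf_g1(theta_hat, x0, x1, q):
    """DRM-based estimator of G_1 in (21), as a step function t -> G1_hat(t)."""
    n0, n1 = len(x0), len(x1)
    x = np.concatenate([x0, x1])       # pooled sample {x_kj}
    w = np.exp(q(x) @ theta_hat)       # exp{theta_hat^T q(x_kj)}
    p = 1.0 / (n0 + n1 * w)            # fitted probabilities p_kj-hat
    return lambda t: float(np.sum(p * w * (x <= t)))
```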
The following theorem states that \(\hat{G}_{1}(x)\) is also asymptotically normal as \(n_{0}/n_{1}\to\infty\).
**Theorem 4.3**.: _Under Conditions_ (i) _to_ (iii)_, for every \(x\) in the support of \(G_{1}\), we have that as \(n_{0},n_{1}\to\infty\),_
\[\sqrt{n_{1}}\{\hat{G}_{1}(x)-G_{1}(x)\}\overset{d}{\to}\] \[N\left(0,\{\mathbf{Q}(x)-\mathbb{E}_{1}[\mathbf{q}_{-}]G_{1}(x) \}^{\top}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}]\{\mathbf{Q}(x)-\mathbb{E}_{1}[ \mathbf{q}_{-}]G_{1}(x)\}\right),\]
_where we define_
\[\mathbf{Q}(x)=\int_{-\infty}^{x}\mathbf{q}_{-}(t)\mathrm{d}G_{1}(t).\]
_The variance matrix in the limiting distribution can also be written as_
\[\mathrm{Cov}_{1}[\mathbf{q}_{-}^{\top}(X),\mathbb{1}(X\leq x)]\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\mathrm{Cov}_{1}[\mathbf{q}_{-}(X),\mathbb{1}(X\leq x)],\]
_where \(\mathrm{Cov}_{1}\) denotes the covariance calculated under distribution \(G_{1}\)._
We observe that the asymptotic variance in the above theorem is the same as that in (13), which in turn equals the asymptotic variance of the parametric MLE \(\tilde{G}_{1}(x)\) in (11). Therefore, as for the inference on \(\mathbf{\theta}\), the DRM-based estimator of the distribution function \(G_{1}(x)\) is as efficient as the parametric estimator under the submodel in (5) when \(n_{0}/n_{1}\to\infty\).
### Quantile estimation under the DRM
In this section, we derive the asymptotic variance of the DRM-based quantile estimator for \(G_{1}\). The \(p\)th quantile of \(G_{1}\) is denoted by \(\xi_{p}\). The estimator of \(\xi_{p}\) under the DRM is constructed from the distribution estimator \(\hat{G}_{1}(x)\) defined in (21):
\[\hat{\xi}_{p}=\inf\{t:\hat{G}_{1}(t)\geq p\}. \tag{22}\]
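Computationally, \(\hat{G}_{1}\) jumps only at the pooled data points, so the infimum in (22) can be found by scanning them. A minimal sketch, reusing `drm_cdf_g1` from the previous sketch (names illustrative):

```python
import numpy as np

def drm_quantile(theta_hat, x0, x1, q, p_level):
    """DRM-based quantile estimator in (22): inf{t : G1_hat(t) >= p}."""
    G1_hat = drm_cdf_g1(theta_hat, x0, x1, q)
    grid = np.sort(np.concatenate([x0, x1]))    # candidate jump points of G1_hat
    cdf = np.array([G1_hat(t) for t in grid])   # nondecreasing step heights
    idx = np.searchsorted(cdf, p_level)         # first index with cdf >= p_level
    return grid[min(idx, len(grid) - 1)]
```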
Because of the relationship between the quantile and the distribution function, we often study the asymptotic properties of a quantile estimator through the corresponding distribution estimator. With the help of Theorem 4.3 on the asymptotic normality of the DRM-based distribution estimator, we are able to derive the limiting distribution of the DRM-based quantile estimator \(\hat{\xi}_{p}\), as shown in the following theorem.
**Theorem 4.4**.: _Suppose Conditions_ (i) _to_ (iii) _hold and the density function \(g_{1}(\cdot)\) is continuous and positive at \(\xi_{p}\). Then the DRM-based quantile estimator is asymptotically normal:_
\[\sqrt{n_{1}}(\hat{\xi}_{p}-\xi_{p})\overset{d}{\to}N\left(0,\{\mathbf{Q}(\xi_{ p})-p\mathbb{E}_{1}[\mathbf{q}_{-}]\}^{\top}\frac{\mathrm{Var}_{1}^{-1}[ \mathbf{q}_{-}]}{g_{1}^{2}(\xi_{p})}\{\mathbf{Q}(\xi_{p})-p\mathbb{E}_{1}[ \mathbf{q}_{-}]\}\right),\]
_as \(n_{0},n_{1}\to\infty\)._
Evidently, the asymptotic variance in Theorem 4.4 equals the variance in (15), and equivalently is as low as the asymptotic variance in (14) of the parametric MLE \(\tilde{\xi}_{p}\). Hence, under the
two-sample scenario where \(n_{1}/n_{0}\to 0\), we have proved that the DRM-based quantile estimator for \(G_{1}\) attains parametric efficiency, as if the parametric model in (5) were assumed.
Beyond our core contribution, we also present a useful result that reveals a linear relationship between the quantile estimator and the distribution estimator. This type of result is famously known as a Bahadur representation.
**Theorem 4.5**.: _Suppose Conditions_ (i) _to_ (iii) _hold and the density function \(g_{1}(\cdot)\) is continuous and positive at \(\xi_{p}\). Then the DRM-based quantile estimator admits the Bahadur representation:_
\[\hat{\xi}_{p}=\xi_{p}+\frac{G_{1}(\xi_{p})-\hat{G}_{1}(\xi_{p})}{g _{1}(\xi_{p})}+O_{p}(n_{1}^{-3/4}\log^{1/2}n_{1}).\]
The conclusion in Theorem 4.5 is stronger than that in Theorem 4.4.
## 5 Efficiency of DRM-based quantile estimation when \(G_{0}=G_{1}\)
For illustration, in this section we demonstrate how the asymptotic variance of the DRM-based quantile estimator \(\hat{\xi}_{p}\) evolves as a function of the sample size ratio \(k=n_{0}/n_{1}\) when the two population distributions are actually identical. That is, we study the efficiency when the true model parameter \(\mathbf{\theta}^{*}=\mathbf{0}\) in the DRM (1). However, we do not assume knowledge of \(G_{0}=G_{1}\) when fitting the DRM to the data. Although our main focus is on the situation when \(k\to\infty\), we can learn a lot from the case when \(k\) is finite but large. Applying the results from Zhang (2000) and Chen and Liu (2013), the following corollary explicitly quantifies the efficiency gap between the DRM-based quantile estimator \(\hat{\xi}_{p}\) and the parametric MLE \(\tilde{\xi}_{p}\) of the quantile.
**Corollary 5.1**.: _Assume that the DRM holds with the true \(\mathbf{\theta}^{*}=\mathbf{0}\), and \(\mathbb{E}_{0}\left[\exp\bigl{(}\mathbf{\theta}^{\top}\mathbf{q}(X)\bigr{)}\right]<\infty\) for \(\mathbf{\theta}\) in a neighbourhood of \(\mathbf{0}\). Further, assume that \(k=n_{0}/n_{1}\) is fixed, finite, and positive as \(n_{0},n_{1}\to\infty\), and that the density function \(g_{1}(x)\) is continuous and positive at \(x=\xi_{p}\). Then, the centralized DRM-based quantile estimator \(\sqrt{n_{1}}(\hat{\xi}_{p}-\xi_{p})\) has a limiting normal distribution with variance_
\[\sigma^{2}_{\hat{\xi}_{p}}=\frac{1}{k+1}\frac{p(1-p)}{g_{1}^{2}(\xi_{p})}+\frac{k}{k+1}\sigma^{2}_{\tilde{\xi}_{p}}, \tag{23}\]
_where \(\sigma^{2}_{\tilde{\xi}_{p}}\) is the variance of the limiting normal distribution of the MLE \(\tilde{\xi}_{p}\) that has a matrix expression given in (15)._
The asymptotic variance in (23) depends on \(k=n_{0}/n_{1}\), providing insight into how the efficiency of the DRM quantile estimator evolves as \(n_{0}/n_{1}\to\infty\). We now take a closer look at the first term on the right-hand side of (23). Let \(F_{n,1}(x)=n_{1}^{-1}\sum_{j=1}^{n_{1}}\mathbb{1}(X_{1j}\leq x)\) denote the empirical distribution of the sample from \(G_{1}\). The empirical quantile of level \(p\) for \(G_{1}\) is \(\bar{\xi}_{p}=\inf\{x:F_{n,1}(x)\geq p\}.\) It is well known that if \(g_{1}(\xi_{p})>0\), then \(\bar{\xi}_{p}\) is asymptotically normal (Serfling, 1980):
\[\sqrt{n_{1}}(\bar{\xi}_{p}-\xi_{p})\stackrel{{ d}}{{\to}}N\left(0,\frac{p(1-p)}{g_{1}^{2}(\xi_{p})}\right),\quad\text{as }n_{1}\to\infty. \tag{24}\]
Therefore, the asymptotic variance of the DRM-based quantile estimator given in (23) is a weighted average of the asymptotic variance of the empirical quantile and the asymptotic variance of the MLE. Interestingly, as \(k=n_{0}/n_{1}\) increases, the efficiency of the DRM-based quantile estimator \(\hat{\xi}_{p}\) approaches the efficiency of the MLE \(\tilde{\xi}_{p}\). Although this intriguing observation is developed under a special situation where \(G_{0}=G_{1}\) and \(k\) is assumed to be fixed and positive, it sheds light on the efficiency of the DRM-based quantile estimator, complementing our theoretical result in Theorem 4.4, which is established under general conditions and evolving \(k\).
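The following minimal numerical sketch (assuming SciPy; the quantile level and ratios are illustrative) evaluates (23) for standard normal populations, using the normal-model MLE variance \(1+[\Phi^{-1}(p)]^{2}/2\) derived in Section 6.1; the printed values move from near the empirical variance toward the MLE variance as \(k\) grows.

```python
import numpy as np
from scipy.stats import norm

def drm_quantile_avar(p, k):
    """Asymptotic variance in (23) when G_0 = G_1 = N(0, 1)."""
    z = norm.ppf(p)                            # xi_p for the standard normal
    var_emp = p * (1 - p) / norm.pdf(z) ** 2   # empirical-quantile variance, (24)
    var_mle = 1 + z ** 2 / 2                   # normal-model MLE variance (Section 6.1)
    return var_emp / (k + 1) + k * var_mle / (k + 1)

for k in (1, 10, 100, 1000):
    print(k, round(drm_quantile_avar(0.01, k), 2))
```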
## 6 Simulation studies
In this section, we report simulation results on the efficiency of the DRM-based estimators of quantiles. We are particularly interested in their efficiency in estimating lower or higher quantiles, compared to the empirical quantiles and the parametric MLEs. Because the probability density function often has a small value at \(\xi_{p}\) when \(p\) is close to zero or one, the corresponding empirical quantile has a very large variance according to (24). In contrast, the parametric quantile estimators are largely free from this deficiency but suffer from serious bias when the model is misspecified. Based on our theoretical results, the DRM-based quantile estimator achieves the efficiency of the parametric estimator with a low risk of model misspecification, in the presence of an extra sample of much larger size. In the following sections, we use 1000 repetitions to obtain
the simulated biases and variances of the three quantile estimators: the DRM-based estimator, the parametric MLE, and the empirical quantile.
### Data generated from normal distributions
We first examine the performance of the DRM-based quantile estimator when data are from normal distributions. The family of normal distributions has many nice statistical properties. For example, the MLEs of the model parameters and quantiles under the normal family have closed forms. Such properties make it easier to compare the efficiency of the DRM-based estimator and the parametric MLE. Also, the normal distributions collectively satisfy the DRM with basis function \(\mathbf{q}(x)=(1,x,x^{2})^{\top}\). In the simulations, the DRM approach assumes knowledge of this basis function but not of the parametric form.
We generate the first sample from \(N(\mu_{0},\sigma_{0}^{2})\) and the second sample from \(N(\mu_{1},\sigma_{1}^{2})\). We observe the performance of the DRM-based quantile estimators for various choices of the means \(\mu_{0},\mu_{1}\), standard deviations \(\sigma_{0},\sigma_{1}\), and sample sizes \(n_{0},n_{1}\). Under the normal model, the parametric MLE of the quantile \(\xi_{p}\) is given by
\[\tilde{\xi}_{p}=\tilde{\mu}_{1}+\Phi^{-1}(p)\tilde{\sigma}_{1},\]
where \(\Phi^{-1}(p)\) is the \(p\)th quantile of the standard normal distribution, and \(\tilde{\mu}_{1}\) and \(\tilde{\sigma}_{1}\) are the sample mean and standard deviation based on \(\{x_{1j}:j=1,\ldots,n_{1}\}\). It can be shown that the asymptotic variance of the MLE \(\tilde{\xi}_{p}\) is given by
\[n_{1}\mathrm{Var}(\tilde{\xi}_{p})=\sigma_{1}^{2}\left\{1+\frac{[\Phi^{-1}(p)] ^{2}}{2}\right\}.\]
We first generate both samples from the standard normal distribution, namely, \(\mu_{0}=\mu_{1}=0\) and \(\sigma_{0}=\sigma_{1}=1\). Table 1 contains the simulated biases and variances of the three quantile estimators after being properly scaled by the sample size. We consider 4 quantile levels \(p\in\{0.01,0.05,0.10,0.50\}\) and 4 sample-size combinations \(n_{1}\in\{100,1000\}\) and \(n_{0}\in\{10n_{1},100n_{1}\}\). Due to the symmetry of the normal distribution, we do not include the higher-level quantiles. The biases and variances in the table are inflated by factors of \(\sqrt{n_{1}}\) and \(n_{1}\), respectively.
We observe that when \(n_{0}/n_{1}=100\), the asymptotic variances of the DRM-based quantile estimators approximately equal the weighted averages of the variances of the parametric MLEs (with weight \(k/(k+1)\)) and empirical quantiles (with weight \(1/(k+1)\)). This is consistent with our theoretical finding in Corollary 5.1. Further, when \(n_{0}/n_{1}\) increases from \(10\) to \(100\), the variances of the DRM-based quantile estimators \(\hat{\xi}_{p}\) rapidly approach the variances of the parametric MLEs \(\tilde{\xi}_{p}\). The improvement in efficiency due to increased \(n_{0}/n_{1}\) is especially significant for quantiles at levels \(p\) close to zero.
We next experiment with data from normal distributions with \(\mu_{0}=1,\mu_{1}=2\) and \(\sigma_{0}^{2}=1.5,\sigma_{1}^{2}=2\). The performance of the three quantile estimators is summarized in Table 2. Our previous comments on the efficiency of the DRM-based estimators as \(n_{0}/n_{1}\) increases also apply here, and therefore we do not offer further interpretation. We anticipate that other combinations of normal distributions, if not too different, will lead to similar results.
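A single repetition of this simulation design can be sketched as follows, reusing `mele`, `drm_quantile`, and `q` from the sketches in Section 4; looping over 1000 such repetitions and recording biases and variances yields entries like those in Table 1. The seed and sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)
n1, p = 100, 0.05
n0 = 100 * n1
x0 = rng.normal(size=n0)                        # large sample from G_0
x1 = rng.normal(size=n1)                        # small sample from G_1

theta_hat = mele(x0, x1, q, d=3)                # DRM fit with q(x) = (1, x, x^2)
xi_drm = drm_quantile(theta_hat, x0, x1, q, p)  # DRM-based estimator
xi_mle = x1.mean() + norm.ppf(p) * x1.std()     # parametric MLE (sigma MLE, ddof=0)
xi_emp = np.sort(x1)[int(np.ceil(p * n1)) - 1]  # empirical quantile
print(xi_drm, xi_mle, xi_emp, norm.ppf(p))      # last value is the true quantile
```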
### Data generated from exponential distributions
We then examine the performance of the three quantile estimators based on data from exponential distributions. The exponential distributions collectively fit into the DRM with the basis function \(\mathbf{q}(x)=(1,x)^{\top}\). In the simulations, the DRM approach assumes knowledge of this \(\mathbf{q}(x)\) but not of the parametric form.
The quantile of the exponential distribution with mean \(\mu\) has a simple form: \(\xi_{p}=-\mu\log(1-p)\). In the two-sample setting of this article, the parametric MLE of the second population mean is the corresponding sample mean: \(\tilde{\mu}=n_{1}^{-1}\sum_{j=1}^{n_{1}}x_{1j}\). This leads to the parametric MLE quantile estimator:
\[\tilde{\xi}_{p}=-\tilde{\mu}\log(1-p),\]
with variance

\[n_{1}\mathrm{Var}(\tilde{\xi}_{p})=\mu^{2}\log^{2}(1-p),\]

which follows because \(\tilde{\xi}_{p}\) is linear in \(\tilde{\mu}\) and \(\mathrm{Var}(\tilde{\mu})=\mu^{2}/n_{1}\) under the exponential model.
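For reference, the parametric MLE and the empirical quantile in this setting are one-liners; a minimal self-contained sketch, with \(\mu\), \(p\), and the seed chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, p, n1 = 2.0, 0.95, 1000
x1 = rng.exponential(scale=mu, size=n1)
mu_mle = x1.mean()                              # MLE of the exponential mean
xi_mle = -mu_mle * np.log(1 - p)                # parametric MLE quantile
xi_emp = np.sort(x1)[int(np.ceil(p * n1)) - 1]  # empirical quantile
xi_true = -mu * np.log(1 - p)
print(xi_true, xi_mle, xi_emp)
```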
We first generate both samples from exponential distributions with \(\mu=1\). We include the three quantile estimators at levels \(p\in\{0.5,0.9,0.95,0.99\}\). The exponential distribution has low
density values at higher-level quantiles. Therefore, the efficiency gain at higher-level quantiles is more meaningful for our investigation. We use the same sample-size combinations as in the previous section, and the results are given in Table 3. We again observe the phenomenon illustrated in Corollary 5.1: the asymptotic variances of the DRM-based quantile estimators are close to the weighted averages of the variances of the parametric MLEs (with weight \(k/(k+1)\)) and empirical quantiles (with weight \(1/(k+1)\)). As in the normal-data example, when \(n_{0}/n_{1}\) increases from \(10\) to \(100\), the efficiency of the DRM-based quantile estimator quickly approaches the efficiency of the parametric MLE.
| \(n_{1}\) | \(n_{0}\) | Level \(p\) | DRM Bias | DRM Var | MLE Bias | MLE Var | Emp. Bias | Emp. Var |
|---|---|---|---|---|---|---|---|---|
| 100 | \(10n_{1}\) | 0.01 | 0.11 | 5.19 | 0.25 | 3.73 | 1.97 | 10.44 |
| 100 | \(10n_{1}\) | 0.05 | 0.12 | 2.61 | 0.18 | 2.36 | 0.60 | 4.20 |
| 100 | \(10n_{1}\) | 0.10 | 0.11 | 1.97 | 0.15 | 1.82 | 0.33 | 2.77 |
| 100 | \(10n_{1}\) | 0.50 | 0.01 | 1.04 | 0.03 | 0.99 | 0.01 | 1.44 |
| 100 | \(100n_{1}\) | 0.01 | 0.17 | 3.73 | 0.18 | 3.57 | 1.90 | 8.91 |
| 100 | \(100n_{1}\) | 0.05 | 0.12 | 2.29 | 0.13 | 2.27 | 0.42 | 4.28 |
| 100 | \(100n_{1}\) | 0.10 | 0.10 | 1.77 | 0.10 | 1.75 | 0.21 | 2.63 |
| 100 | \(100n_{1}\) | 0.50 | -0.01 | 0.96 | -0.01 | 0.96 | 0.02 | 1.48 |
| 1000 | \(10n_{1}\) | 0.01 | -0.02 | 4.91 | 0.03 | 3.81 | 0.51 | 13.53 |
| 1000 | \(10n_{1}\) | 0.05 | -0.01 | 2.61 | 0.02 | 2.42 | 0.09 | 4.53 |
| 1000 | \(10n_{1}\) | 0.10 | 0.00 | 1.98 | 0.02 | 1.87 | 0.03 | 3.29 |
| 1000 | \(10n_{1}\) | 0.50 | 0.00 | 1.10 | 0.01 | 1.03 | 0.05 | 1.58 |
| 1000 | \(100n_{1}\) | 0.01 | -0.06 | 3.94 | -0.05 | 3.83 | 0.58 | 13.66 |
| 1000 | \(100n_{1}\) | 0.05 | -0.05 | 2.46 | -0.05 | 2.41 | 0.11 | 4.45 |
| 1000 | \(100n_{1}\) | 0.10 | -0.05 | 1.86 | -0.05 | 1.85 | 0.01 | 2.88 |
| 1000 | \(100n_{1}\) | 0.50 | -0.05 | 0.97 | -0.04 | 0.96 | -0.04 | 1.52 |

Table 1: Simulated bias (\(\times\sqrt{n_{1}}\)) and variance (\(\times n_{1}\)) of quantile estimators. Both samples are from the standard normal distribution.
The improvement is particularly obvious for quantiles at levels \(p\) close to one.
We next simulate data from exponential distributions with \(\mu_{0}=1/0.3\) and \(\mu_{1}=2\). The performance of the quantile estimators is summarized in Table 4. The same conclusions regarding the DRM efficiency with growing \(n_{0}/n_{1}\) can be drawn in this situation. We expect similar findings for other combinations of exponential distributions.
| \(n_{1}\) | \(n_{0}\) | Level \(p\) | DRM Bias | DRM Var | MLE Bias | MLE Var | Emp. Bias | Emp. Var |
|---|---|---|---|---|---|---|---|---|
| 100 | \(10n_{1}\) | 0.01 | 0.12 | 9.69 | 0.35 | 7.45 | 2.78 | 20.88 |
| 100 | \(10n_{1}\) | 0.05 | 0.16 | 5.27 | 0.26 | 4.72 | 0.85 | 8.41 |
| 100 | \(10n_{1}\) | 0.10 | 0.18 | 3.87 | 0.21 | 3.65 | 0.47 | 5.53 |
| 100 | \(10n_{1}\) | 0.50 | 0.06 | 2.16 | 0.04 | 1.98 | 0.02 | 2.88 |
| 100 | \(100n_{1}\) | 0.01 | 0.16 | 7.60 | 0.26 | 7.13 | 2.68 | 17.83 |
| 100 | \(100n_{1}\) | 0.05 | 0.13 | 4.69 | 0.18 | 4.53 | 0.59 | 8.56 |
| 100 | \(100n_{1}\) | 0.10 | 0.11 | 3.57 | 0.14 | 3.51 | 0.30 | 5.27 |
| 100 | \(100n_{1}\) | 0.50 | 0.02 | 1.95 | -0.01 | 1.91 | 0.02 | 2.96 |
| 1000 | \(10n_{1}\) | 0.01 | -0.11 | 9.84 | 0.04 | 7.63 | 0.73 | 27.06 |
| 1000 | \(10n_{1}\) | 0.05 | -0.01 | 5.54 | 0.03 | 4.84 | 0.13 | 9.06 |
| 1000 | \(10n_{1}\) | 0.10 | 0.01 | 4.11 | 0.02 | 3.75 | 0.04 | 6.58 |
| 1000 | \(10n_{1}\) | 0.50 | 0.03 | 2.24 | 0.01 | 2.07 | 0.07 | 3.16 |
| 1000 | \(100n_{1}\) | 0.01 | -0.10 | 8.27 | -0.07 | 7.65 | 0.82 | 27.32 |
| 1000 | \(100n_{1}\) | 0.05 | -0.08 | 5.03 | -0.07 | 4.81 | 0.16 | 8.90 |
| 1000 | \(100n_{1}\) | 0.10 | -0.07 | 3.80 | -0.06 | 3.69 | 0.02 | 5.76 |
| 1000 | \(100n_{1}\) | 0.50 | -0.05 | 1.94 | -0.06 | 1.92 | -0.05 | 3.04 |

Table 2: Simulated bias (\(\times\sqrt{n_{1}}\)) and variance (\(\times n_{1}\)) of quantile estimators. Both samples are from normal distributions.
## 7 Real-data analysis
In this section, we study the efficiency of the DRM-based quantile estimator and its parametric and nonparametric competitors with real-world data. We use the collegiate sports budgets dataset from the TidyTuesday data project (Mock, 2022), which is accessible from the GitHub repository ([https://github.com/rfordatascience/tidytuesday/tree/master/data/2022/2022-03-29](https://github.com/rfordatascience/tidytuesday/tree/master/data/2022/2022-03-29)). The dataset contains yearly samples concerning some demographics of collegiate sports in the U.S. from 2015 to 2019.
| \(n_{1}\) | \(n_{0}\) | Level \(p\) | DRM Bias | DRM Var | MLE Bias | MLE Var | Emp. Bias | Emp. Var |
|---|---|---|---|---|---|---|---|---|
| 100 | \(10n_{1}\) | 0.50 | 0.01 | 0.55 | 0.01 | 0.50 | 0.05 | 0.98 |
| 100 | \(10n_{1}\) | 0.90 | 0.02 | 6.06 | 0.03 | 5.57 | -0.33 | 9.09 |
| 100 | \(10n_{1}\) | 0.95 | -0.00 | 10.49 | 0.04 | 9.43 | -0.75 | 18.68 |
| 100 | \(10n_{1}\) | 0.99 | -0.58 | 25.40 | 0.07 | 22.27 | -4.17 | 64.04 |
| 100 | \(100n_{1}\) | 0.50 | -0.02 | 0.50 | -0.02 | 0.50 | 0.06 | 1.01 |
| 100 | \(100n_{1}\) | 0.90 | -0.06 | 5.53 | -0.06 | 5.48 | -0.45 | 8.66 |
| 100 | \(100n_{1}\) | 0.95 | -0.08 | 9.53 | -0.08 | 9.28 | -0.79 | 17.70 |
| 100 | \(100n_{1}\) | 0.99 | -0.19 | 23.08 | -0.12 | 21.92 | -3.76 | 74.20 |
| 1000 | \(10n_{1}\) | 0.50 | 0.03 | 0.54 | 0.03 | 0.49 | 0.05 | 0.98 |
| 1000 | \(10n_{1}\) | 0.90 | 0.11 | 5.54 | 0.11 | 5.42 | -0.01 | 9.00 |
| 1000 | \(10n_{1}\) | 0.95 | 0.12 | 10.11 | 0.14 | 9.18 | -0.21 | 19.57 |
| 1000 | \(10n_{1}\) | 0.99 | 0.06 | 29.28 | 0.22 | 21.68 | -1.54 | 88.77 |
| 1000 | \(100n_{1}\) | 0.50 | -0.01 | 0.46 | -0.01 | 0.46 | -0.02 | 0.99 |
| 1000 | \(100n_{1}\) | 0.90 | -0.05 | 5.09 | -0.05 | 5.08 | -0.19 | 8.22 |
| 1000 | \(100n_{1}\) | 0.95 | -0.07 | 8.63 | -0.06 | 8.60 | -0.13 | 19.14 |
| 1000 | \(100n_{1}\) | 0.99 | -0.08 | 21.81 | -0.10 | 20.31 | -1.29 | 88.06 |

Table 3: Simulated bias (\(\times\sqrt{n_{1}}\)) and variance (\(\times n_{1}\)) of quantile estimators. Both samples are from identical exponential distributions.
The variable we consider is the total revenue (in USD) per sports team for both men and women, for which we have approximately 17,000 observations each year; we log-transform the values to make the scale more suitable for numerical computation. As an exploratory data analysis, we plot in Figure 1 the histograms of the log-transformed revenue data for the years 2015-2019. The population distributions of the revenues in these years look similar. This is also reflected in their kernel density estimators (Silverman, 1986), depicted as the solid curves in Figure 2.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Level \(p\) & \multicolumn{2}{c}{DRM-based} & \multicolumn{2}{c}{Para MLE} & \multicolumn{2}{c}{Empirical} \\ \cline{2-7} & Bias & Var & Bias & Var & Bias & Var \\ \hline \multicolumn{7}{c}{\(n_{1}=100\), \(n_{0}=10n_{1}\)} \\ \cline{2-7}
0.50 & \(-0.01\) & \(2.13\) & \(0.02\) & \(2.02\) & \(0.11\) & \(3.93\) \\
0.90 & \(-0.02\) & \(24.17\) & \(0.07\) & \(22.27\) & \(-0.66\) & \(36.34\) \\
0.95 & \(-0.04\) & \(41.19\) & \(0.08\) & \(37.70\) & \(-1.50\) & \(74.72\) \\
0.99 & \(-0.13\) & \(100.11\) & \(0.13\) & \(89.09\) & \(-8.35\) & \(256.15\) \\ \multicolumn{7}{c}{\(n_{1}=100\), \(n_{0}=100n_{1}\)} \\ \cline{2-7}
0.50 & \(-0.04\) & \(2.01\) & \(-0.04\) & \(1.99\) & \(0.13\) & \(4.04\) \\
0.90 & \(-0.14\) & \(21.85\) & \(-0.12\) & \(21.92\) & \(-0.91\) & \(34.64\) \\
0.95 & \(-0.16\) & \(37.42\) & \(-0.16\) & \(37.10\) & \(-1.57\) & \(70.78\) \\
0.99 & \(-0.27\) & \(89.22\) & \(-0.24\) & \(87.68\) & \(-7.52\) & \(296.81\) \\ \multicolumn{7}{c}{\(n_{1}=1000\), \(n_{0}=10n_{1}\)} \\ \cline{2-7}
0.50 & \(0.07\) & \(2.16\) & \(0.07\) & \(1.96\) & \(0.11\) & \(3.91\) \\
0.90 & \(0.17\) & \(22.64\) & \(0.22\) & \(21.68\) & \(-0.02\) & \(36.01\) \\
0.95 & \(0.15\) & \(38.85\) & \(0.28\) & \(36.70\) & \(-0.42\) & \(78.29\) \\
0.99 & \(0.27\) & \(94.78\) & \(0.43\) & \(86.73\) & \(-3.08\) & \(355.08\) \\ \multicolumn{7}{c}{\(n_{1}=1000\), \(n_{0}=100n_{1}\)} \\ \cline{2-7}
0.50 & \(-0.02\) & \(1.86\) & \(-0.03\) & \(1.84\) & \(-0.05\) & \(3.97\) \\
0.90 & \(-0.11\) & \(20.34\) & \(-0.10\) & \(20.31\) & \(-0.38\) & \(32.88\) \\
0.95 & \(-0.15\) & \(34.68\) & \(-0.13\) & \(34.38\) & \(-0.26\) & \(76.55\) \\
0.99 & \(-0.22\) & \(81.58\) & \(-0.20\) & \(81.25\) & \(-2.58\) & \(352.23\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Simulated bias (\(\times\sqrt{n_{1}}\)) and variance (\(\times n_{1}\)) of quantile estimators. Both samples are from exponential distributions.
are close to bell-shaped, suggesting that quantile estimation based on the normal model is bearable. On the other hand, the estimated densities deviate sufficiently from normal densities, as they have two or more modes; this is better illustrated by comparing the estimated densities with the fitted normal densities depicted in the dashed curves. At the same time, the estimated densities appear to share some common structures. Therefore, a DRM-based approach is more convincing and may work well in this situation.
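As an illustration of this exploratory step, the following is a minimal sketch (not the authors' code) of how the two sets of curves in Figure 2 can be produced with standard tools; `log_revenue` is a hypothetical dict mapping a year to a NumPy array of log-transformed revenues.

```python
# Gaussian-kernel KDE with Silverman's rule-of-thumb bandwidth versus a
# fitted normal density, as compared in Figure 2 (illustrative sketch).
import numpy as np
from scipy.stats import gaussian_kde, norm

def kde_vs_normal(sample, grid):
    """Return (KDE density, fitted-normal density) evaluated on `grid`."""
    kde = gaussian_kde(sample, bw_method="silverman")  # Silverman (1986)
    mu, sigma = sample.mean(), sample.std(ddof=1)
    return kde(grid), norm.pdf(grid, loc=mu, scale=sigma)

# Example for one year:
# sample = log_revenue[2015]
# grid = np.linspace(sample.min(), sample.max(), 400)
# solid_curve, dashed_curve = kde_vs_normal(sample, grid)
```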
We conduct a real data-based simulation as follows. We regard the yearly samples from 2015–2019 as five populations, and repeatedly sample with replacement from these populations to form multiple samples. As remarked previously, our two-sample results are applicable to the multi-sample situation when \(n_{0}/n_{j}\rightarrow\infty\) for all \(j\neq 0\). We hence make the sample size from 2015 substantially larger to mimic the situation of \(n_{0}/n_{j}\) being very large. This is meaningful because it shows how a large historical dataset could help inference for later years with small datasets under the DRM. Specifically, we sample from 2016–2019 with equal sizes \(n\in\{200,500\}\), and sample from 2015 with size \(n_{0}\in\{200,1000,5000\}\). To apply the DRM-based approach, the user needs to specify
Figure 1: Histograms of log-transformed total revenues for 2015–2019.
a basis function \(\mathbf{q}(x)\). In this simulation, we use i) the data-adaptive basis function \(\mathbf{q}(x)\) (Zhang and Chen, 2022) learned using the full data with \(d=2\); ii) \(\mathbf{q}(x)=(1,x)^{\top}\), whose DRM contains the normal model with equal variances; and iii) \(\mathbf{q}(x)=(1,x,x^{2})^{\top}\), whose DRM contains the normal model without the equal variance assumption. The simulation also correspondingly includes iv) the parametric MLE of the quantile derived under the normal model with the common variance assumption; v) the parametric MLE under the normal model with no assumption of equal variances; and vi) the empirical quantiles. By regarding the empirical quantiles based on the full data (treated as the populations) as the truth, we compute the simulated absolute biases, variances, and mean squared errors (MSEs) of these quantile estimators at selected levels \(p\) for each population from 2016 to 2019. To save space, we report only the average values of these performance measures across 2016–2019 in Tables 5–6.
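A minimal sketch of this resampling scheme follows; here `pops` is a hypothetical dict mapping a year to the full-data array, and the empirical quantile is a stand-in to be replaced by a DRM-based fit at the marked line.

```python
# Illustrative sketch of the real data-based simulation (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def simulate(pops, n0, n, p, n_rep=1000):
    years = (2016, 2017, 2018, 2019)
    truth = {yr: np.quantile(pops[yr], p) for yr in years}  # full data as "population"
    est = {yr: [] for yr in years}
    for _ in range(n_rep):
        x0 = rng.choice(pops[2015], size=n0, replace=True)  # large 2015 sample,
        # used only by the DRM fit below (the empirical stand-in ignores it)
        for yr in years:
            x1 = rng.choice(pops[yr], size=n, replace=True)
            est[yr].append(np.quantile(x1, p))  # <-- swap in the DRM estimator here
    out = {}
    for yr in years:
        e = np.asarray(est[yr])
        bias = e.mean() - truth[yr]
        out[yr] = (abs(bias), e.var(ddof=1), bias**2 + e.var(ddof=1))
    return out  # (|bias|, variance, MSE) per year; average over years for Tables 5-6
```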
Figure 2: Kernel density estimators based on the log-transformed revenue data for 2015–2019, using the Gaussian kernel and Silverman’s rule-of-thumb bandwidth (Silverman, 1986). Each pair of solid and dashed curves depicts the estimated density function and fitted normal density function respectively for one year.
We first observe that when \(n\) is fixed at 200 and 500 while \(n_{0}\) increases from 200 to 5000, the variances of the DRM-based quantile estimators with various \(\mathbf{q}(x)\) decrease in nearly all the cases. This observation supports our theoretical results. Second, the DRM-based quantile estimators are more efficient than the nonparametric empirical quantiles, which suggests that data pooling via the DRM works well for this data. Third, when \(n_{0}=5000\), the quantile estimators derived under the DRM with \(\mathbf{q}(x)=(1,x)^{\top}\) overall have MSEs comparable to the parametric estimators under the normal model with the common variance assumption, except for the \(90\%\) quantile level. The same also applies to the comparison between the estimators under the DRM with \(\mathbf{q}(x)=(1,x,x^{2})^{\top}\) and the parametric estimators under the normal model with no equal variance assumption. This phenomenon can partly be explained by seeing in Figure 2 that the true distributions deviate noticeably from the fitted normal distributions at the right tails, which suggests the normal model is unsatisfactory there. Finally, the DRM quantile estimators using the data-adaptive basis function \(\mathbf{q}(x)\) generally beat the DRM estimators using the two prespecified \(\mathbf{q}(x)\) in all three performance measures, except for very few cases where the results are still comparable. This is expected because when the basis function \(\mathbf{q}(x)\) is adaptively learned from the data, the resulting DRM should fit the data better than a predetermined DRM. In fact, when \(n_{0}=5000\), the adaptive DRM produces overall the most accurate quantile estimators, indicating that an appropriately chosen DRM suits the data well.
## 8 Conclusions and discussion
The DRM for multi-sample data is generating growing research interest and has wide applications in statistics and econometrics. It has proven particularly useful in situations where the populations may possess shared underlying structures. The DRM provides a good trade-off between low model misspecification risk and satisfactory statistical efficiency. Its effectiveness primarily stems from its ability to enable users to draw inferences on each population using pooled data, which leads to an efficiency gain compared to using individual samples, an approach that overlooks the shared latent structures. The literature has engaged in discussions regarding the efficiency of the DRM approach in comparison to the nonparametric approach. However, none of these discussions have systematically explored the limit of the efficiency gain through the DRM, which is an important
research problem. This article addresses this problem by considering a scenario where one of the samples significantly outweighs the others in size. We establish through theoretical analysis that within this context, the DRM-based estimators of model parameters, distribution functions, and quantiles for the smaller sample populations attain the same efficiency as if a parametric model were assumed. In essence, for these estimands and in this scenario, we identify their highest achievable efficiency under a specific parametric model, investigate their asymptotic efficiency under the DRM, and demonstrate the equivalence between the two aforementioned efficiencies. Our simulation experiments and analyses of real-world data, with a particular focus on quantile estimation, support our theoretical discoveries. The significance of this article's contribution extends to practical scenarios where researchers aim to make inferences about a population with limited data but can rely on a substantial dataset from a related population for support.
While the scenario examined in this article addresses many real-world applications, there exist situations where this sampling scheme is not applicable. In such cases, we may encounter a growing number of samples, each of relatively similar sizes. For example, when economists investigate the evolution of income distribution over time, they may collect income samples year after year, and these yearly samples often have comparable sizes. Consequently, it becomes intriguing to investigate the asymptotic efficiency of the DRM approach when the sample size ratios \(n_{i}/n_{j}\to 1\) and the number of populations \(m\rightarrow\infty\) as \(n_{i}\rightarrow\infty\). Additionally, it is worthwhile to study the asymptotic efficiency of estimators of population parameters other than the ones we investigated. We leave these interesting problems for future work, anticipating that the technical methods and theoretical results in this article may be valuable for such inquiries.
## Acknowledgement
This research was partially supported by the Natural Sciences and Engineering Research Council of Canada (Grants RGPIN-2018-06484, RGPIN-2019-04204, and RGPIN-2020-05897), the Canadian Statistical Sciences Institute (Grant 592307), and the Department of Statistical Sciences in the University of Toronto. We also thank the Digital Research Alliance of Canada for computing support. This work was partially completed when Archer Gong Zhang was a PhD student at the University of British Columbia and a Postdoctoral Fellow at the University of Toronto.
\begin{table}
\begin{tabular}{l r r r r r r r r r r r r} \hline \hline \multicolumn{1}{c}{Level \(p\)} & \multicolumn{3}{c}{DRM (adaptive \(\mathbf{q}(x)\))} & \multicolumn{3}{c}{DRM (\(\mathbf{q}(x)=(1,x)^{\top}\))} & \multicolumn{3}{c}{MLE (common-var normal)} & \multicolumn{3}{c}{Empirical quantile} \\ \cline{2-13} & Abs Bias & Var & MSE & Abs Bias & Var & MSE & Abs Bias & Var & MSE & Abs Bias & Var & MSE \\ \hline \multicolumn{13}{c}{[Table body for \(n_{0}\in\{200,1000,5000\}\), \(n\in\{200,500\}\) missing from the source extraction.]} \\ \hline \hline \end{tabular}
\end{table}
Table 5: Simulated absolute bias, variance, and MSE of quantile estimators, averaged across 2016–2019.
\begin{table}
\begin{tabular}{l r r r r r r r r r r r r} \hline \hline \multicolumn{1}{c}{Level \(p\)} & \multicolumn{3}{c}{DRM (adaptive \(\mathbf{q}(x)\))} & \multicolumn{3}{c}{DRM (\(\mathbf{q}(x)=(1,x,x^{2})^{\top}\))} & \multicolumn{3}{c}{MLE (normal)} & \multicolumn{3}{c}{Empirical quantile} \\ \cline{2-13} & Abs Bias & Var & MSE & Abs Bias & Var & MSE & Abs Bias & Var & MSE & Abs Bias & Var & MSE \\ \hline \multicolumn{13}{c}{[Table body for \(n_{0}\in\{200,1000,5000\}\), \(n\in\{200,500\}\) missing from the source extraction.]} \\ \hline \hline \end{tabular}
\end{table}
Table 6: Simulated absolute bias, variance, and MSE of quantile estimators, averaged across 2016–2019.
## Appendix
In this section, we provide the proofs of the theoretical results in Section 4.
### Proof of Lemma 4.1
This lemma claims some properties of the profile log-EL \(\ell_{n}(\boldsymbol{\theta})\) and its derivatives.
Proof.: We start with the first conclusion of the lemma that the score function \(\partial\ell_{n}(\boldsymbol{\theta})/\partial\boldsymbol{\theta}\) has expectation zero. Let \(\rho_{n,0}=n_{0}/n,\,\rho_{n,1}=n_{1}/n\) denote the sample proportions, where \(n=n_{0}+n_{1}\) is the total sample size. By Condition (i), \(\rho_{n,0}\to 1,\,\rho_{n,1}\to 0\) as \(n_{0},n_{1}\to\infty\). With this notation, the profile log-EL in (19) can be written as
\[\ell_{n}(\boldsymbol{\theta})=-\sum_{k,j}\log\big{[}\rho_{n,0}+\rho_{n,1}\exp \big{\{}\boldsymbol{\theta}^{\top}\mathbf{q}(x_{kj})\big{\}}\big{]}+\sum_{j=1} ^{n_{1}}\boldsymbol{\theta}^{\top}\mathbf{q}(x_{1j})-n\log n.\]
The EL-based score function is then
\[\frac{\partial\ell_{n}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}=- \sum_{k,j}\frac{\rho_{n,1}\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}(x_{kj}) \}\mathbf{q}(x_{kj})}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{\top} \mathbf{q}(x_{kj})\}}+\sum_{j=1}^{n_{1}}\mathbf{q}(x_{1j}).\]
For notational convenience, we let
\[h_{n}(x,\boldsymbol{\theta})=\frac{\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}( x)\}}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}(x)\}},\]
where the subscript \(n\) is to emphasize that it evolves with the sample size \(n\).
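As a numerical aside (not part of the proof), the profile log-EL displayed above is straightforward to compute and maximize. A minimal sketch, assuming the basis \(\mathbf{q}(x)=(1,x)^{\top}\) and hypothetical sample arrays `x0` and `x1`:

```python
# Compute the profile log-EL above and locate the MELE by maximizing it
# (illustrative sketch; q(x) = (1, x)^T).
import numpy as np
from scipy.optimize import minimize

def profile_logEL(theta, x0, x1):
    n0, n1 = len(x0), len(x1)
    n = n0 + n1
    rho0, rho1 = n0 / n, n1 / n
    def q(x):  # one row q(x)^T per observation
        return np.column_stack([np.ones_like(x), x])
    pooled = np.concatenate([x0, x1])
    term1 = -np.log(rho0 + rho1 * np.exp(q(pooled) @ theta)).sum()
    term2 = (q(x1) @ theta).sum()
    return term1 + term2 - n * np.log(n)

def mele(x0, x1):
    # the profile log-EL is concave, so BFGS on its negative finds theta_hat
    obj = lambda th: -profile_logEL(th, x0, x1)
    return minimize(obj, np.zeros(2), method="BFGS").x
```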
It is then seen that at \(\boldsymbol{\theta}=\boldsymbol{\theta}^{*}\), we have
\[\mathbb{E}\Big{[}\frac{\partial\ell_{n}(\boldsymbol{\theta}^{*}) }{\partial\boldsymbol{\theta}}\Big{]} =-\sum_{k}n_{k}\mathbb{E}_{k}[\rho_{n,1}h_{n}(X,\boldsymbol{ \theta}^{*})\mathbf{q}(X)]+n_{1}\mathbb{E}_{1}[\mathbf{q}(X)]\] \[=-\mathbb{E}_{0}[\rho_{n,1}h_{n}(X,\boldsymbol{\theta}^{*}) \mathbf{q}(X)\{n_{0}+n_{1}\exp\bigl{(}\boldsymbol{\theta}^{*\top}\mathbf{q}(X) \bigr{)}\}]+n_{1}\mathbb{E}_{1}[\mathbf{q}(X)]\] \[=-\mathbb{E}_{0}[n_{1}\exp\{\boldsymbol{\theta}^{*\top}\mathbf{q }(X)\}\mathbf{q}(X)]+n_{1}\mathbb{E}_{1}[\mathbf{q}(X)]\] \[=\mathbf{0}.\]
This is the first conclusion of Lemma 4.1.
The second conclusion is the asymptotic normality of the score function. Despite its complex expression, the score function is a sum of three sets of i.i.d. random variables:
\[n_{1}^{-1/2}\frac{\partial\ell_{n}(\boldsymbol{\theta}^{*})}{\partial\boldsymbol{\theta}}= n_{1}^{-1/2}\bigg{\{}\frac{\partial\ell_{n}(\boldsymbol{\theta}^{*})}{\partial\boldsymbol{\theta}}-\mathbb{E}\big{[}\frac{\partial\ell_{n}(\boldsymbol{\theta}^{*})}{\partial\boldsymbol{\theta}}\big{]}\bigg{\}}\] \[= n_{1}^{-1/2}\sum_{j=1}^{n_{0}}\rho_{n,1}\{-h_{n}(x_{0j},\boldsymbol{\theta}^{*})\mathbf{q}(x_{0j})-\mathbb{E}_{0}[-h_{n}(X,\boldsymbol{\theta}^{*})\mathbf{q}(X)]\} \tag{25}\] \[+ n_{1}^{-1/2}\sum_{j=1}^{n_{1}}\rho_{n,1}\{-h_{n}(x_{1j},\boldsymbol{\theta}^{*})\mathbf{q}(x_{1j})-\mathbb{E}_{1}[-h_{n}(X,\boldsymbol{\theta}^{*})\mathbf{q}(X)]\}\] (26) \[+ n_{1}^{-1/2}\sum_{j=1}^{n_{1}}\{\mathbf{q}(x_{1j})-\mathbb{E}_{1}[\mathbf{q}(x_{1j})]\}. \tag{27}\]
We now study them term by term. Without loss of generality, we drop the expectations in the preceding terms (25)-(27) and proceed as if they are already centralized.
The first term (25) can be written as
\[-n_{1}^{-1/2}\sum_{j=1}^{n_{0}}\rho_{n,1}h_{n}(x_{0j},\boldsymbol{\theta}^{*})\mathbf{q}(x_{0j})=-(\rho_{n,0}\rho_{n,1})^{1/2}\Big{\{}n_{0}^{-1/2}\sum_{j=1}^{n_{0}}h_{n}(x_{0j},\boldsymbol{\theta}^{*})\mathbf{q}(x_{0j})\Big{\}}.\]
The term in the curly brackets is asymptotically normal with variance \(\mathrm{Var}_{0}\left[\exp\{\boldsymbol{\theta}^{*\top}\mathbf{q}(X)\} \mathbf{q}(X)\right]\) by the central limit theorem for triangular arrays (Durrett, 2019, Theorem 3.4.10) (also known as the Lindeberg-Feller central limit theorem). Note that the variance is finite by Condition (iii). Combined with that \((\rho_{n,0}\rho_{n,1})^{1/2}=o(1)\), the term (25) is \(o_{p}(1)\).
For the second term (26), by Chebyshev's inequality, \(\forall\epsilon>0\),
\[P\left(\Big{\|}-n_{1}^{-1/2}\sum_{j=1}^{n_{1}}\rho_{n,1}h_{n}(x_{1j}, \boldsymbol{\theta}^{*})\mathbf{q}(x_{1j})\Big{\|}\geq\epsilon\right)\leq \epsilon^{-2}\mathbb{E}_{1}\|\rho_{n,1}h_{n}(X,\boldsymbol{\theta}^{*}) \mathbf{q}(X)\|^{2}.\]
Since \(\|\rho_{n,1}h_{n}(x,\boldsymbol{\theta}^{*})\mathbf{q}(x)\|<\|\mathbf{q}(x)\|\) for all \(x\) with \(\mathbb{E}_{1}\|\mathbf{q}(X)\|^{2}<\infty\), and \(\rho_{n,1}h_{n}(x,\boldsymbol{\theta}^{*})\mathbf{q}(x)\to\mathbf{0}\) pointwise, we have
\[\mathbb{E}_{1}\|\rho_{n,1}h_{n}(X,\boldsymbol{\theta}^{*})\mathbf{q}(X)\|^{2 }\to 0,\]
by the dominated convergence theorem. Therefore, the term (26) is \(o_{p}(1)\) by definition. _(Remark: we cannot factor out \(\rho_{n,1}\) and apply the central limit theorem to the remaining part here because \(\operatorname{Var}_{1}\big{[}\exp\{\boldsymbol{\theta}^{*\top}\mathbf{q}(X)\} \mathbf{q}(X)\big{]}\) may not be finite.)_
The third term (27) is straightforward to study: by the central limit theorem, it has an asymptotic normal distribution with finite variance \(\operatorname{Var}_{1}[\mathbf{q}(X)]\) guaranteed by Condition (iii).
Therefore, combining all the arguments above for the terms (25)-(27), we arrive at the conclusion that as \(n_{0},n_{1}\to\infty\),
\[n_{1}^{-1/2}\frac{\partial\ell_{n}(\boldsymbol{\theta}^{*})}{\partial \boldsymbol{\theta}}=o_{p}(1)+o_{p}(1)+n_{1}^{-1/2}\sum_{j=1}^{n_{1}}\{ \mathbf{q}(x_{1j})-\mathbb{E}_{1}[\mathbf{q}(x_{1j})]\}\stackrel{{ d}}{{\to}}N(\mathbf{0}, \operatorname{Var}_{1}[\mathbf{q}(X)]),\]
by Slutsky's theorem. This proves the second conclusion of Lemma 4.1.
Finally, the third conclusion of Lemma 4.1 states that the negative Hessian has a positive definite limit. At \(\boldsymbol{\theta}=\boldsymbol{\theta}^{*}\), the Hessian is the sum of two sets of i.i.d. random variables:
\[-n_{1}^{-1}\frac{\partial^{2}\ell_{n}(\boldsymbol{\theta}^{*})}{ \partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^{\top}}= n_{1}^{-1}\sum_{j=1}^{n_{0}}\mathbf{q}(x_{0j})\mathbf{q}^{\top}(x_{0j} )\big{\{}\rho_{n,1}h_{n}(x_{0j},\boldsymbol{\theta}^{*})-\rho_{n,1}^{2}h_{n}^{ 2}(x_{0j},\boldsymbol{\theta}^{*})\big{\}} \tag{28}\] \[+n_{1}^{-1}\sum_{j=1}^{n_{1}}\mathbf{q}(x_{1j})\mathbf{q}^{\top} (x_{1j})\big{\{}\rho_{n,1}h_{n}(x_{1j},\boldsymbol{\theta}^{*})-\rho_{n,1}^{2} h_{n}^{2}(x_{1j},\boldsymbol{\theta}^{*})\big{\}}. \tag{29}\]
We again study them term by term. For the term (28), we can write
\[n_{1}^{-1}\sum_{j=1}^{n_{0}}\mathbf{q}(x_{0j})\mathbf{q}^{\top} (x_{0j})\big{\{}\rho_{n,1}h_{n}(x_{0j},\boldsymbol{\theta}^{*})-\rho_{n,1}^{2} h_{n}^{2}(x_{0j},\boldsymbol{\theta}^{*})\big{\}}\] \[= \rho_{n,0}\Big{\{}n_{0}^{-1}\sum_{j=1}^{n_{0}}\mathbf{q}(x_{0j}) \mathbf{q}^{\top}(x_{0j})h_{n}(x_{0j},\boldsymbol{\theta}^{*})\Big{\}}-\rho_{ n,0}\Big{\{}n_{0}^{-1}\sum_{j=1}^{n_{0}}\mathbf{q}(x_{0j})\mathbf{q}^{\top}(x_{0j}) \rho_{n,1}h_{n}^{2}(x_{0j},\boldsymbol{\theta}^{*})\Big{\}}.\]
The term in the first curly brackets is \(\mathbb{E}_{0}[\mathbf{q}(X)\mathbf{q}^{\top}(X)h_{n}(X,\boldsymbol{\theta}^{*})]+o_{p}(1)\) by the weak law of large numbers for triangular arrays (Durrett, 2019, Theorem 2.2.6), which further is \(\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]+o_{p}(1)\) by the dominated convergence theorem as \(h_{n}(x,\boldsymbol{\theta}^{*})\to\exp\{\boldsymbol{\theta}^{*\top}\mathbf{q}(x)\}\). Here we provide more details because similar derivations will appear many times in the remaining proofs. Note that \(h_{n}(x,\boldsymbol{\theta}^{*})<\rho_{n,0}^{-1}\exp\{\boldsymbol{\theta}^{*\top}\mathbf{q}(x)\}\), and \(\rho_{n,0}^{-1}\to 1\) is bounded by some constant
uniformly for all large enough \(n\). Therefore, \(\|\mathbf{q}(x)\mathbf{q}^{\top}(x)h_{n}(x,\mathbf{\theta}^{*})\|\) is uniformly bounded by \(C\|\mathbf{q}(x)\|^{2}\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(x)\}\), which has finite expectation by Condition (iii).
Similarly, the term in the second curly brackets is \(\mathbb{E}_{0}[\mathbf{q}(X)\mathbf{q}^{\top}(X)\rho_{n,1}h_{n}^{2}(X,\mathbf{\theta}^{*})]+o_{p}(1)\) by the weak law of large numbers for triangular arrays, which further is \(o_{p}(1)\) by the dominated convergence theorem because \(\rho_{n,1}h_{n}^{2}(x,\mathbf{\theta}^{*})\) is uniformly bounded by \(C\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(x)\}\) and converges pointwise to \(0\). _(Remark: we cannot factor out \(\rho_{n,1}\) and apply the law of large numbers to the remaining part here because the associated variance may not be finite.)_
Combining these results shows that (28) is \(\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]+o_{p}(1)\). For the term (29), because \(\rho_{n,1}h_{n}(x_{1j},\mathbf{\theta}^{*})\to 0\) and is uniformly bounded by \(1\), we conclude that (29) is \(o_{p}(1)\). Therefore, we finally have
\[-n_{1}^{-1}\frac{\partial^{2}\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta} \partial\mathbf{\theta}^{\top}}=\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)] +o_{p}(1),\]
which is positive definite. This proves the third conclusion in Lemma 4.1.
At this stage, we have completed the proof of Lemma 4.1.
### Proof of Theorem 4.2
This theorem states that the MELE \(\hat{\mathbf{\theta}}\), which is the maximizer of the profile log-EL \(\ell_{n}(\mathbf{\theta})\), is asymptotically normal.
Proof.: We first give a rough order assessment of \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}\) by showing that \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/3})\); this order will be refined later. The plan is as follows.
Since \(\ell_{n}(\mathbf{\theta})\) is a smooth function, there must be a maximizer of \(\ell_{n}(\mathbf{\theta})\) in the compact set \(\{\mathbf{\theta}:\|\mathbf{\theta}-\mathbf{\theta}^{*}\|\leq n_{1}^{-1/3}\}\). We prove that this maximizer is attained in the interior of the compact set with high probability. This will be done by showing that with high probability, \(\ell_{n}(\mathbf{\theta})<\ell_{n}(\mathbf{\theta}^{*})\) uniformly for \(\mathbf{\theta}\) on the boundary of the compact set. In other words, this maximizer is a stationary point of the profile log-EL \(\ell_{n}(\mathbf{\theta})\). Combined with the fact that the profile log-EL \(\ell_{n}(\mathbf{\theta})\) is a concave function, this maximizer must coincide with the global maximizer of \(\ell_{n}(\mathbf{\theta})\), which is \(\hat{\mathbf{\theta}}\). This conclusion would lead to \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/3})\).
We now proceed to prove \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/3})\). For any unit vector \(\mathbf{a}\) and \(\mathbf{\theta}=\mathbf{\theta}^{*}+n_{1}^{-1/3}\mathbf{a}\), expanding \(\ell_{n}(\mathbf{\theta})\) at \(\mathbf{\theta}^{*}\) yields
\[\ell_{n}(\mathbf{\theta})=\ell_{n}(\mathbf{\theta}^{*})+n_{1}^{-1/3}\frac{\partial \ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta}}\mathbf{a}+\frac{1}{2}n_{1}^{-2/3} \mathbf{a}^{\top}\frac{\partial^{2}\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta} \partial\mathbf{\theta}^{\top}}\mathbf{a}+\varepsilon_{n}, \tag{30}\]
where \(\varepsilon_{n}\) is the remainder term:
\[\varepsilon_{n}=n_{1}^{-1}\sum_{|\alpha|=3}\frac{1}{\alpha!}\frac{\partial^{ \alpha}\ell_{n}(\mathbf{\theta})}{\partial\mathbf{\theta}^{\alpha}}\ \mathbf{a}^{\alpha},\]
for some \(\underline{\mathbf{\theta}}\) between \(\mathbf{\theta}^{*}\) and \(\mathbf{\theta}\). Note that \(\alpha\) with \(|\alpha|=3\) is a vector of nonnegative integers with entries summing to 3. Examples include \(\alpha=(1,0,1,0,1)^{\top}\) and \(\alpha=(2,0,1,0,0)^{\top}\) when \(d=5\). The partial derivative \(\partial^{\alpha}\ell_{n}(\mathbf{\theta})/\partial\mathbf{\theta}^{\alpha}\) is then with respect to \(\theta_{j}\) with the corresponding order.
We first show that the remainder term \(\varepsilon_{n}\) in (30) is \(O_{p}(1)\) uniformly over \(\mathbf{a}\). We achieve this goal by showing that
\[\frac{\partial^{\alpha}\ell_{n}(\mathbf{\theta})}{\partial\mathbf{\theta}^{\alpha}}=O _{p}(n_{1}),\]
for \(|\alpha|=3\) and uniformly over \(\mathbf{a}\).
Note that the partial derivative is given by
\[\frac{\partial^{\alpha}\ell_{n}(\mathbf{\theta})}{\partial\mathbf{\theta}^{\alpha}}=- \sum_{k,j}\frac{\rho_{n,0}\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top} \mathbf{q}(x_{kj})\}[\rho_{n,0}-\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top} \mathbf{q}(x_{kj})\}]}{[\rho_{n,0}+\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{ \top}\mathbf{q}(x_{kj})\}]^{3}}\mathbf{q}^{\alpha}(x_{kj}),\]
which is a sum of two terms indexed by \(k=0,1\). The term with \(k=1\) is \(O_{p}(n_{1})\) by the law of large numbers and the fact that
\[\left|\frac{\rho_{n,0}\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top}\mathbf{q }(x_{kj})\}[\rho_{n,0}-\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top}\mathbf{q }(x_{kj})\}]}{[\rho_{n,0}+\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top} \mathbf{q}(x_{kj})\}]^{3}}\right|\leq 1.\]
Further, because
\[\left|\frac{\rho_{n,0}\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top}\mathbf{q }(x_{kj})\}[\rho_{n,0}-\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top}\mathbf{ q}(x_{kj})\}]}{[\rho_{n,0}+\rho_{n,1}\exp\{\underline{\mathbf{\theta}}^{\top} \mathbf{q}(x_{kj})\}]^{3}}\right|\leq\rho_{n,0}^{-1}\rho_{n,1}\exp\{\underline {\mathbf{\theta}}^{\top}\mathbf{q}(x_{kj})\},\]
the term with \(k=0\) is
\[\left|-\sum_{j=1}^{n_{0}}\frac{\rho_{n,0}\rho_{n,1}\exp\{\mathbf{\theta}^ {\top}\mathbf{q}(x_{0j})\}[\rho_{n,0}-\rho_{n,1}\exp\{\mathbf{\theta}^{\top}\mathbf{ q}(x_{0j})\}]}{[\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}^{\top}\mathbf{q}(x_{0j})\}]^{3}} \mathbf{q}^{\alpha}(x_{0j})\right|\] \[= O(\rho_{n,0}^{-1}\rho_{n,1})\sum_{j=1}^{n_{0}}\exp\{\mathbf{\theta}^ {\top}\mathbf{q}(x_{0j})\}|\mathbf{q}^{\alpha}(x_{0j})|\] \[= O(n_{1})n_{0}^{-1}\sum_{j=1}^{n_{0}}\exp\{\mathbf{\theta}^{\top} \mathbf{q}(x_{0j})\}|\mathbf{q}^{\alpha}(x_{0j})| \tag{31}\] \[= O_{p}(n_{1}).\]
The last equality is true because by the law of large numbers for triangular arrays and Condition (iii) (which implies the associated variance is finite), we have
\[n_{0}^{-1}\sum_{j=1}^{n_{0}}\exp\{\mathbf{\theta}^{\top}\mathbf{q}(x_ {0j})\}|\mathbf{q}^{\alpha}(x_{0j})|=\mathds{E}_{0}\left[\exp\{\mathbf{\theta}^{ \top}\mathbf{q}(X)\}|\mathbf{q}^{\alpha}(X)|\right]+o_{p}(1).\]
We remark that, without loss of generality, we treat \(\underline{\mathbf{\theta}}\) as if it were a sample-independent quantity satisfying \(\|\underline{\mathbf{\theta}}-\mathbf{\theta}^{*}\|\leq O_{p}(n_{1}^{-1/3})\) uniformly. Further, by Condition (iii) again and the Cauchy-Schwarz inequality, the main term on the right-hand side of the preceding equality is uniformly bounded for \(\underline{\mathbf{\theta}}\) in a neighbourhood of \(\mathbf{\theta}^{*}\). Therefore, the main term in (31) is \(O_{p}(1)\).
Thus, we have shown that
\[\frac{\partial^{\alpha}\ell_{n}(\mathbf{\theta})}{\partial\mathbf{\theta}^{\alpha}} =O_{p}(n_{1}) \tag{32}\]
for \(|\alpha|=3\), and \(\varepsilon_{n}=O_{p}(1)\) uniformly over \(\mathbf{a}\).
Next, we proceed with the main terms in the expansion in (30). For the first derivative term in (30), the asymptotic normality conclusion in Lemma 4.1 implies
\[\frac{\partial\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta}}=O_{p}(n_{1}^{1/ 2})=o_{p}(n_{1}^{2/3}).\]
For the second derivative term in (30), the Hessian conclusion in Lemma 4.1 implies
\[\frac{\partial^{2}\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta} \partial\mathbf{\theta}^{\top}}=-n_{1}\left\{\mathds{E}_{1}[\mathbf{q}\mathbf{q}^ {\top}]+o_{p}(1)\right\}.\]
Combining these order assessments for terms in (30), we find
\[\ell_{n}(\mathbf{\theta}^{*}+n_{1}^{-1/3}\mathbf{a})-\ell_{n}(\mathbf{\theta}^{*}) =n_{1}^{1/3}\mathbf{a}^{\top}\left\{-\mathbb{E}_{1}[\mathbf{q}\mathbf{q }^{\top}]+o_{p}(1)\right\}\mathbf{a}+o_{p}(n_{1}^{1/3})\] \[=n_{1}^{1/3}\{-C+o_{p}(1)\},\]
for some positive constant \(C\) because \(\mathbb{E}_{1}[\mathbf{q}\mathbf{q}^{\top}]\) is positive definite.
Note that the event \(\|\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}\|<n_{1}^{-1/3}\) is implied by the event that \(\ell_{n}(\mathbf{\theta}^{*}+n_{1}^{-1/3}\mathbf{a})<\ell_{n}(\mathbf{\theta}^{*})\) uniformly over unit vectors \(\mathbf{a}\), owing to the concavity of \(\ell_{n}(\mathbf{\theta})\). Therefore, with the same positive constant \(C\), we have
\[P(\|\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}\|<n_{1}^{-1/3}) \geq P(\ell_{n}(\mathbf{\theta}^{*}+n_{1}^{-1/3}\mathbf{a})<\ell_{n}(\mathbf{ \theta}^{*}))\] \[=P(o_{p}(1)<C),\]
which is arbitrarily close to 1. This proves that the unique maximizer \(\hat{\mathbf{\theta}}\) of \(\ell_{n}(\mathbf{\theta})\) satisfies
\[\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/3}). \tag{33}\]
We now prove the asymptotic normality of \(\hat{\mathbf{\theta}}\). Expanding \(\partial\ell_{n}(\hat{\mathbf{\theta}})/\partial\mathbf{\theta}\) at the truth \(\mathbf{\theta}^{*}\), we get
\[\mathbf{0}=\frac{\partial\ell_{n}(\hat{\mathbf{\theta}})}{\partial\mathbf{\theta}}=\frac{ \partial\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta}}+\frac{\partial^{2}\ell _{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta}\partial\mathbf{\theta}^{\top}}(\hat{\bm {\theta}}-\mathbf{\theta}^{*})+\varepsilon^{\prime}_{n}, \tag{34}\]
where \(\varepsilon^{\prime}_{n}\) is the remainder term:
\[\varepsilon^{\prime}_{n}=\sum_{|\alpha|=2}\frac{1}{\alpha!}\frac{\partial^{ \alpha+1}\ell_{n}(\bar{\mathbf{\theta}})}{\partial\mathbf{\theta}^{\alpha+1}}(\hat{ \mathbf{\theta}}-\mathbf{\theta}^{*})^{\alpha},\]
for some \(\bar{\mathbf{\theta}}\) between \(\mathbf{\theta}^{*}\) and \(\hat{\mathbf{\theta}}\). By the same technique as we earlier proved (32), we have
\[\frac{\partial^{\alpha+1}\ell_{n}(\bar{\mathbf{\theta}})}{\partial\mathbf{\theta}^{ \alpha+1}}=O_{p}(n_{1})\]
for \(|\alpha|=2\). Further, because \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/3})\) as shown in (33), we have
\[(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})^{\alpha}=O_{p}(n_{1}^{-2/3}).\]
These results lead to \(\varepsilon^{\prime}_{n}=O_{p}(n_{1}^{1/3})\).
We have shown in Lemma 4.1 that
\[-n_{1}^{-1}\frac{\partial^{2}\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta} \partial\mathbf{\theta}^{\top}}=\mathbb{E}_{1}[\mathbf{q}\mathbf{q}^{\top}]+o_{p}(1).\]
Therefore, rearranging the equation in (34) yields
\[\left[\mathbb{E}_{1}[\mathbf{q}\mathbf{q}^{\top}]+o_{p}(1)\right]\sqrt{n_{1}}( \hat{\mathbf{\theta}}-\mathbf{\theta}^{*})=n_{1}^{-1/2}\frac{\partial\ell_{n}(\mathbf{ \theta}^{*})}{\partial\mathbf{\theta}}+o_{p}(1). \tag{35}\]
By the asymptotic normality conclusion in Lemma 4.1, we have
\[\frac{\partial\ell_{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta}}=O_{p}(n_{1}^{1/ 2}).\]
Therefore, it must be true that \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/2})\); otherwise, the orders in the two sides of the equation (35) would not match. With this refinement of the order of \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}\), we have
\[\sqrt{n_{1}}(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})=\left\{\mathbb{E}_{1}[ \mathbf{q}\mathbf{q}^{\top}]\right\}^{-1}\left[n_{1}^{-1/2}\frac{\partial\ell _{n}(\mathbf{\theta}^{*})}{\partial\mathbf{\theta}}\right]+o_{p}(1).\]
Hence, by Slutsky's Theorem, we finally get
\[\sqrt{n_{1}}(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})\overset{d}{\to}N\left(\mathbf{0},\{\mathbb{E}_{1}[\mathbf{q}\mathbf{q}^{\top}]\}^{-1}\mathrm{Var}_{1}[\mathbf{ q}]\{\mathbb{E}_{1}[\mathbf{q}\mathbf{q}^{\top}]\}^{-1}\right), \tag{36}\]
as \(n_{0},n_{1}\to\infty\), where we recall that \(\mathrm{Var}_{1}[\mathbf{q}]\) is the covariance matrix in the limiting normal distribution of the score function \(n_{1}^{-1/2}\partial\ell_{n}(\mathbf{\theta}^{*})/\partial\mathbf{\theta}\).
We now simplify the covariance matrix in (36). Because \(\mathbf{q}^{\top}(x)=(1,\mathbf{q}_{-}^{\top}(x))\), we have
\[\mathrm{Var}_{1}[\mathbf{q}(X)]=\begin{pmatrix}0&\mathbf{0}\\ \mathbf{0}&\mathrm{Var}_{1}[\mathbf{q}_{-}(X)]\end{pmatrix}.\]
Also,
\[\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]=\begin{pmatrix}1&\mathbb{E} _{1}[\mathbf{q}_{-}^{\top}(X)]\\ \mathbb{E}_{1}[\mathbf{q}_{-}(X)]&\mathbb{E}_{1}[\mathbf{q}_{-}(X)\mathbf{q}_{ -}^{\top}(X)]\end{pmatrix},\]
whose inverse is given by (see Theorem 8.5.11 in Harville (1997))
\[\{\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\}^{-1}=\begin{pmatrix}1& \mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}+\begin{pmatrix}-\mathbb{E}_{1}[\mathbf{q}_{-}^{\top} (X)]\\ \mathbf{I}_{d}\end{pmatrix}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\left(- \mathbb{E}_{1}[\mathbf{q}_{-}(X)],\,\mathbf{I}_{d}\right).\]
These matrix results lead to a more friendly expression of the covariance matrix in (36):
\[\{\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\}^{-1}\mathrm{ Var}_{1}[\mathbf{q}(X)]\{\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\}^{-1}\] \[=\begin{pmatrix}-\mathbb{E}_{1}[\mathbf{q}_{-}^{\top}(X)]\\ \mathbf{I}_{d}\end{pmatrix}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}(X)]\left(- \mathbb{E}_{1}[\mathbf{q}_{-}(X)],\,\mathbf{I}_{d}\right)\] \[=\{\mathbb{E}_{1}[\mathbf{q}(X)\mathbf{q}^{\top}(X)]\}^{-1}- \begin{pmatrix}1&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}.\]
This completes the proof of Theorem 4.2.
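For illustration, the simplified covariance matrix above admits a direct plug-in estimate from sample moments of \(\mathbf{q}(X)\) under \(G_{1}\). A minimal sketch, assuming \(\mathbf{q}(x)=(1,x)^{\top}\) and a hypothetical sample `x1`:

```python
# Illustrative plug-in estimate of the asymptotic covariance of
# sqrt(n1)(theta_hat - theta*) in (36); q(x) = (1, x)^T.
import numpy as np

def asymptotic_cov(x1):
    Q = np.column_stack([np.ones_like(x1), x1])  # rows are q(x_{1j})^T
    E_qq = Q.T @ Q / len(x1)                     # estimates E_1[q q^T]
    V_q = np.cov(Q, rowvar=False, ddof=1)        # estimates Var_1[q]
    A = np.linalg.inv(E_qq)
    return A @ V_q @ A                           # sandwich form in (36)
```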
### Proof of Theorem 4.3
This theorem asserts that for every \(x\) in the support of the distribution function \(G_{1}\), the corresponding DRM-based distribution estimator \(\hat{G}_{1}(x)\) is asymptotically normal.
Proof.: Following Chen and Liu (2013, proof of Theorem 3.2), we can write \(\hat{G}_{1}(x)\) and \(G_{1}(x)\) as
\[\hat{G}_{1}(x)= n_{1}^{-1}\sum_{k=0,1}\sum_{j=1}^{n_{k}}\frac{\rho_{n,1}\exp\{ \hat{\boldsymbol{\theta}}^{\top}\mathbf{q}(x_{kj})\}}{\rho_{n,0}+\rho_{n,1}\exp \{\hat{\boldsymbol{\theta}}^{\top}\mathbf{q}(x_{kj})\}}\mathbb{1}\left(x_{kj} \leq x\right),\] \[G_{1}(x)= n_{1}^{-1}\sum_{k=0,1}\sum_{j=1}^{n_{k}}\mathbb{E}_{k}\left[ \frac{\rho_{n,1}\exp\{\boldsymbol{\theta}^{\ast\top}\mathbf{q}(x_{kj})\}}{ \rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{\ast\top}\mathbf{q}(x_{kj})\} }\mathbb{1}\left(x_{kj}\leq x\right)\right],\]
where for each \(k\), the variables \(\{x_{kj}:j=1,\ldots,n_{k}\}\) are taken expectation with respect to \(G_{k}\). Recall that we defined earlier:
\[h_{n}(x,\boldsymbol{\theta})=\frac{\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}( x)\}}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}(x)\}}.\]
Therefore, we may write their difference as
\[\hat{G}_{1}(x)-G_{1}(x)= n_{1}^{-1}\sum_{k,j}\rho_{n,1}\Big{\{}h_{n}(x_{kj},\hat{\mathbf{ \theta}})\mathbb{1}(x_{kj}\leq x)-\mathbb{E}_{k}[h_{n}(x_{kj},\mathbf{\theta}^{*}) \mathbb{1}(x_{kj}\leq x)]\Big{\}}\] \[= n_{1}^{-1}\sum_{k,j}\rho_{n,1}\Big{\{}h_{n}(x_{kj},\mathbf{\theta}^{* })\mathbb{1}(x_{kj}\leq x)-\mathbb{E}_{k}[h_{n}(x_{kj},\mathbf{\theta}^{*}) \mathbb{1}(x_{kj}\leq x)]\Big{\}} \tag{37}\] \[+ (\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})^{\top}\Big{\{}n_{1}^{-1}\sum _{k,j}\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\mathbf{\theta}^{*})\mathbf{q}(x_{kj} )}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(x_{kj})\}}\mathbb{ 1}(x_{kj}\leq x)\Big{\}}+R_{n}. \tag{38}\]
The remainder term is \(R_{n}=o_{p}(n_{1}^{-1/2})\); we postpone the proof of this claim to the end of this section.
Notice that (37) is a sum of two terms indexed by \(k=0,1\), which are respectively similar to (25) and (26) that we handled earlier in the proof of Lemma 4.1. Therefore, we omit some details here and conclude directly based on the central limit theorem for triangular arrays and the dominated convergence theorem that (37) is \(o_{p}(n_{1}^{-1/2})\).
Next, we deal with the fraction term in (38), which is also a sum of two terms indexed by \(k=0,1\). The term with \(k=0\) is
\[n_{1}^{-1}\sum_{j=1}^{n_{0}}\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{0 j},\mathbf{\theta}^{*})\mathbf{q}(x_{0j})}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}^{* \top}\mathbf{q}(x_{0j})\}}\mathbb{1}(x_{0j}\leq x)\] \[= \rho_{n,0}^{2}n_{0}^{-1}\sum_{j=1}^{n_{0}}\frac{h_{n}(x_{0j},\bm {\theta}^{*})\mathbf{q}(x_{0j})}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}^{* \top}\mathbf{q}(x_{0j})\}}\mathbb{1}(x_{0j}\leq x)\] \[= \mathbb{E}_{0}\left[\frac{\rho_{n,0}^{2}h_{n}(X,\mathbf{\theta}^{*}) \mathbf{q}(X)}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(X)\}} \mathbb{1}(X\leq x)\right]+o_{p}(1)\] \[= \mathbb{E}_{0}[\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(X)\}\mathbf{q }(X)\mathbb{1}(X\leq x)]+o_{p}(1)\] \[= \begin{pmatrix}G_{1}(x)\\ \mathbf{Q}(x)\end{pmatrix}+o_{p}(1).\]
Note that the third last equality is by applying the weak law of large numbers for triangular arrays; the second last equality is by the dominated convergence theorem because the term within the expectation is uniformly bounded by \(\|\mathbf{q}(X)\|\exp\{\mathbf{\theta}^{*\top}\mathbf{q}(X)\}\), which has finite expectation, and \(\rho_{n,0}\to 1,\rho_{n,1}\to 0\); and the last equality is by decomposing \(\mathbf{q}^{\top}(x)=(1,\mathbf{q}_{-}^{\top}(x))\), with \(\mathbf{Q}(x)\)
defined in the statement of Theorem 4.3. We can apply the same technique to the term with \(k=1\):
\[n_{1}^{-1}\sum_{j=1}^{n_{1}}\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{1j}, \boldsymbol{\theta}^{*})\mathbf{q}(x_{1j})}{\rho_{n,0}+\rho_{n,1}\exp\{ \boldsymbol{\theta}^{*\top}\mathbf{q}(x_{1j})\}}\mathbbm{1}\left(x_{1j}\leq x\right)\] \[= \mathbb{E}_{1}\left[\frac{\rho_{n,0}\rho_{n,1}h_{n}(X,\boldsymbol {\theta}^{*})\mathbf{q}(X)}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{* \top}\mathbf{q}(X)\}}\mathbbm{1}\left(X\leq x\right)\right]+o_{p}(1)=o_{p}(1).\]
Therefore, we have shown that the fraction term in (38) is \((G_{1}(x),\,\mathbf{Q}^{\top}(x))^{\top}+o_{p}(1)\).
Combining these results for (37)-(38) and \(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^{*}=O_{p}(n_{1}^{-1/2})\) from Theorem 4.2 yields
\[\hat{G}_{1}(x)-G_{1}(x)=(G_{1}(x),\,\mathbf{Q}^{\top}(x))\{\hat{\boldsymbol{ \theta}}-\boldsymbol{\theta}^{*}\}+o_{p}(n_{1}^{-1/2}).\]
Because we have proved in Theorem 4.2 that \(n_{1}^{1/2}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^{*})\) is asymptotically normal with covariance matrix in (20), it is clear that \(n_{1}^{1/2}\{\hat{G}_{1}(x)-G_{1}(x)\}\) is also asymptotically normal with mean zero and variance
\[(G_{1}(x),\,\mathbf{Q}^{\top}(x))\begin{pmatrix}-\mathbb{E}_{1}[ \mathbf{q}_{-}^{\top}]\\ \mathbf{I}_{d}\end{pmatrix}\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}]\left(- \mathbb{E}_{1}[\mathbf{q}_{-}],\,\mathbf{I}_{d}\right)\begin{pmatrix}G_{1}(x) \\ \mathbf{Q}(x)\end{pmatrix}\] \[=\{\mathbf{Q}(x)-\mathbb{E}_{1}[\mathbf{q}_{-}]G_{1}(x)\}^{\top} \mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}]\{\mathbf{Q}(x)-\mathbb{E}_{1}[\mathbf{q }_{-}]G_{1}(x)\}.\]
So far, the main conclusion of Theorem 4.3 has been proved, except for the claim that the remainder term \(R_{n}\) in (38) is \(o_{p}(n_{1}^{-1/2})\). The following is devoted to this task.
For simplicity, we define
\[B_{n}(\boldsymbol{\theta})=n_{1}^{-1}\sum_{k,j}\rho_{n,1}h_{n}(x_{kj}, \boldsymbol{\theta})\mathbbm{1}(x_{kj}\leq x).\]
For some \(\boldsymbol{\theta}_{\dagger}\) between \(\hat{\boldsymbol{\theta}}\) and \(\boldsymbol{\theta}^{*}\), we have
\[R_{n}=\frac{1}{2}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^{*})^{\top} \frac{\partial^{2}B_{n}(\boldsymbol{\theta}^{*})}{\partial\boldsymbol{\theta} \partial\boldsymbol{\theta}^{\top}}(\hat{\boldsymbol{\theta}}-\boldsymbol{ \theta}^{*})+\sum_{|\alpha|=3}\frac{1}{\alpha!}\frac{\partial^{\alpha}B_{n}( \boldsymbol{\theta}_{\dagger})}{\partial\boldsymbol{\theta}^{\alpha}}(\hat{ \boldsymbol{\theta}}-\boldsymbol{\theta}^{*})^{\alpha}, \tag{39}\]
where
\[\frac{\partial^{2}B_{n}(\boldsymbol{\theta}^{*})}{\partial\boldsymbol{\theta} \partial\boldsymbol{\theta}^{\top}}=n_{1}^{-1}\sum_{k,j}\frac{\rho_{n,0}\rho_{ n,1}h_{n}(x_{kj},\boldsymbol{\theta}^{*})[\rho_{n,0}-\rho_{n,1}\exp\{ \boldsymbol{\theta}^{*\top}\mathbf{q}(x_{kj})\}]}{[\rho_{n,0}+\rho_{n,1}\exp\{ \boldsymbol{\theta}^{*\top}\mathbf{q}(x_{kj})\}]^{2}}\mathbf{q}(x_{kj})\mathbf{ q}^{\top}(x_{kj})\mathbbm{1}\left(x_{kj}\leq x\right),\]
and for \(|\alpha|=3\),
\[\frac{\partial^{\alpha}B_{n}(\boldsymbol{\theta}_{\dagger})}{\partial \boldsymbol{\theta}^{\alpha}}=n_{1}^{-1}\sum_{k,j}\frac{\rho_{n,0}\rho_{n,1}h_{ n}(x_{kj},\boldsymbol{\theta}_{\dagger})[\rho_{n,0}-2\rho_{n,1}\exp\{ \boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{kj})\}]^{2}}{[\rho_{n,0}+ \rho_{n,1}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{kj})\}]^{3}} \mathbf{q}^{\alpha}(x_{kj})\mathbb{1}(x_{kj}\leq x).\]
By the law of large numbers for triangular arrays, we have \(\partial^{2}B_{n}(\boldsymbol{\theta}^{*})/\partial\boldsymbol{\theta} \partial\boldsymbol{\theta}^{\top}=O_{p}(1)\). Next, we show that \(\partial^{\alpha}B_{n}(\boldsymbol{\theta}_{\dagger})/\partial\boldsymbol{ \theta}^{\alpha}=O_{p}(1)\) for \(|\alpha|=3\).
Note that \(\partial^{\alpha}B_{n}(\boldsymbol{\theta}_{\dagger})/\partial\boldsymbol{ \theta}^{\alpha}\) is a sum of two terms indexed by \(k=0,1\). The term with \(k=1\) is \(O_{p}(1)\) by the law of large numbers and the fact that
\[\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\boldsymbol{\theta}_{\dagger})[\rho_{n,0}-2\rho_{n,1}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{kj})\}] ^{2}}{[\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{ q}(x_{kj})\}]^{3}}\leq 4.\]
Further, because
\[\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\boldsymbol{\theta}_{\dagger})[\rho_{n, 0}-2\rho_{n,1}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{kj})\}] ^{2}}{[\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{ q}(x_{kj})\}]^{3}}\leq 4\rho_{n,0}^{-1}\rho_{n,1}\exp\{\boldsymbol{\theta}_{ \dagger}^{\top}\mathbf{q}(x_{0j})\},\]
the term with \(k=0\) is
\[\left|n_{1}^{-1}\sum_{j=1}^{n_{0}}\frac{\rho_{n,0}\rho_{n,1}h_{n} (x_{0j},\boldsymbol{\theta}_{\dagger})[\rho_{n,0}-2\rho_{n,1}\exp\{ \boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{0j})\}]^{2}}{[\rho_{n,0}+ \rho_{n,1}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{0j})\}]^{3} }\mathbf{q}^{\alpha}(x_{0j})\mathbb{1}(x_{0j}\leq x)\right|\] \[= O(\rho_{n,0}^{-1}\rho_{n,1})n_{1}^{-1}\sum_{j=1}^{n_{0}}\exp\{ \boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(x_{0j})\}|\mathbf{q}^{\alpha} (x_{0j})|\mathbb{1}(x_{0j}\leq x)\] \[= O(1)n_{0}^{-1}\sum_{j=1}^{n_{0}}\exp\{\boldsymbol{\theta}_{ \dagger}^{\top}\mathbf{q}(x_{0j})\}|\mathbf{q}^{\alpha}(x_{0j})|\mathbb{1}(x _{0j}\leq x) \tag{40}\] \[= O_{p}(1).\]
The last equality is true because by the law of large numbers for triangular arrays and Condition (iii) (for finite variance),
\[n_{0}^{-1}\sum_{j=1}^{n_{0}}\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{ q}(x_{0j})\}|\mathbf{q}^{\alpha}(x_{0j})|\mathbb{1}(x_{0j}\leq x)=\mathbb{E}_{0} \left[\exp\{\boldsymbol{\theta}_{\dagger}^{\top}\mathbf{q}(X)\}|\mathbf{q}^{ \alpha}(X)|\mathbb{1}(X\leq x)\right]+o_{p}(1).\]
We note that, without loss of generality, we regard \(\mathbf{\theta}_{\dagger}\) as a sample-independent quantity satisfying \(\|\mathbf{\theta}_{\dagger}-\mathbf{\theta}^{*}\|\leq O_{p}(n_{1}^{-1/2})\) uniformly. Further, by Condition (iii) again and the Cauchy-Schwarz inequality, the main term on the right-hand side of the preceding equality is uniformly bounded for \(\mathbf{\theta}_{\dagger}\) in a neighbourhood of \(\mathbf{\theta}^{*}\). Therefore, the term in (40) is \(O_{p}(1)\).
By plugging these rate results in (39), along with that \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/2})\), we have thus successfully proved that
\[R_{n}=O_{p}(n_{1}^{-1})+O_{p}(n_{1}^{-3/2})=o_{p}(n_{1}^{-1/2}).\]
This completes the proof.
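Before moving on, the estimators \(\hat{G}_{1}\) and \(\hat{\xi}_{p}\) analysed here and in the next two proofs can also be written down computationally. A minimal sketch, assuming \(\mathbf{q}(x)=(1,x)^{\top}\), a fitted `theta_hat` (e.g. from the earlier MELE sketch), and hypothetical samples `x0` and `x1`:

```python
# Illustrative implementations of the DRM-based estimators G1_hat and xi_hat_p.
import numpy as np

def G1_hat(x, theta_hat, x0, x1):
    n0, n1 = len(x0), len(x1)
    rho0, rho1 = n0 / (n0 + n1), n1 / (n0 + n1)
    pooled = np.concatenate([x0, x1])
    e = np.exp(theta_hat[0] + theta_hat[1] * pooled)  # exp{theta_hat^T q(x_kj)}
    w = rho1 * e / (rho0 + rho1 * e)                  # rho_{n,1} h_n(x_kj, theta_hat)
    return (w * (pooled <= x)).sum() / n1

def quantile_hat(p, theta_hat, x0, x1):
    # xi_hat_p = inf{x : G1_hat(x) >= p}; G1_hat only jumps at sample points
    for x in np.sort(np.concatenate([x0, x1])):
        if G1_hat(x, theta_hat, x0, x1) >= p:
            return x
    raise ValueError("p exceeds the total estimated mass")
```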
### Proof of Theorem 4.4
This theorem states that the DRM-based quantile estimator \(\hat{\xi}_{p}\) of the \(p\)th quantile of distribution \(G_{1}\) has a limiting normal distribution.
Proof.: We prove the desired result by definition. First, note that
\[P(\sqrt{n_{1}}\{\hat{\xi}_{p}-\xi_{p}\}\leq t)=P(\hat{\xi}_{p}\leq\xi_{p}+tn_{1}^{-1/2})=P\Big{(}\hat{G}_{1}(\xi_{p}+tn_{1}^{-1/2})\geq p\Big{)}\]
by the equivalence between the last two events in the probabilities, which is implied by the definition of the quantile estimator \(\hat{\xi}_{p}\) in (22). Then, we have
\[P(\sqrt{n_{1}}\{\hat{\xi}_{p}-\xi_{p}\}\leq t)=P\Big{(}\hat{G}_{1}(\xi_{p}+tn_{1}^{-1/2})\geq p\Big{)}\] \[= P\Big{(}\hat{G}_{1}(\xi_{p}+tn_{1}^{-1/2})-G_{1}(\xi_{p}+tn_{1}^{-1/2})\geq G_{1}(\xi_{p})-G_{1}(\xi_{p}+tn_{1}^{-1/2})\Big{)}\] \[= P\Big{(}[g_{1}(\xi_{p})]^{-1}\sqrt{n_{1}}\{\hat{G}_{1}(\xi_{p}+tn_{1}^{-1/2})-G_{1}(\xi_{p}+tn_{1}^{-1/2})\}\geq -t+o(1)\Big{)}, \tag{41}\]
because the density function \(g_{1}(\cdot)\) is continuous and positive at \(\xi_{p}\).
We now work on the distribution terms in (41). Following the same line of the proof of
Theorem 4.3, with a generic \(x\) we write
\[\hat{G}_{1}(x)-G_{1}(x)\] \[= n_{1}^{-1}\sum_{k,j}\rho_{n,1}\Big{\{}h_{n}(x_{kj},\boldsymbol{ \theta}^{*})\mathbb{1}(x_{kj}\leq x)-\mathbb{E}_{k}[h_{n}(x_{kj},\boldsymbol{ \theta}^{*})\mathbb{1}(x_{kj}\leq x)]\Big{\}} \tag{42}\] \[+(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^{*})^{\top}\Big{\{} n_{1}^{-1}\sum_{k,j}\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\boldsymbol{ \theta}^{*})\mathbf{q}(x_{kj})}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta }^{*\top}\mathbf{q}(x_{kj})\}}\mathbb{1}(x_{kj}\leq x)\Big{\}}+R_{n}, \tag{43}\]
where
\[h_{n}(x,\boldsymbol{\theta})=\frac{\exp\{\boldsymbol{\theta}^{\top}\mathbf{q} (x)\}}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{\top}\mathbf{q}(x)\}}.\]
When \(x\) is fixed (independent of \(n\)), we have shown in the proof of Theorem 4.3 that:
1. the term (42) is \(o_{p}(n_{1}^{-1/2})\);
2. the fraction term in (43) is \((G_{1}(x),\,\mathbf{Q}^{\top}(x))^{\top}+o_{p}(1)\); and
3. the remainder term \(R_{n}\) is \(o_{p}(n_{1}^{-1/2})\).
When \(x=\xi_{p}+tn_{1}^{-1/2}\to\xi_{p}\), the same argument can be made for the above items 1 and 3, and for item 2:
\[n_{1}^{-1}\sum_{k,j}\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\boldsymbol{\theta} ^{*})\mathbf{q}(x_{kj})}{\rho_{n,0}+\rho_{n,1}\exp\{\boldsymbol{\theta}^{* \top}\mathbf{q}(x_{kj})\}}\mathbb{1}(x_{kj}\leq\xi_{p}+tn_{1}^{-1/2})=\begin{pmatrix} p\\ \mathbf{Q}(\xi_{p})\end{pmatrix}+o_{p}(1).\]
These claims can all be proved by applying the same techniques as we used earlier in the proof of Theorem 4.3 for handling (37)-(38), and hence we omit some details here.
These rate results, along with the conclusions from Theorem 4.2 and (20) on the asymptotic normality of \(\sqrt{n_{1}}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^{*})\), lead to that the main term in (41) is also asymptotically normal:
\[[g_{1}(\xi_{p})]^{-1}\sqrt{n_{1}}\{ \hat{G}_{1}(\xi_{p}+tn_{1}^{-1/2})-G_{1}(\xi_{p}+tn_{1}^{-1/2})\}\] \[\overset{d}{\to}N\left(0,\{\mathbf{Q}(\xi_{p})-p\mathbb{E}_{1}[ \mathbf{q}_{-}]\}^{\top}\frac{\mathrm{Var}_{1}^{-1}[\mathbf{q}_{-}]}{g_{1}^{2} (\xi_{p})}\{\mathbf{Q}(\xi_{p})-p\mathbb{E}_{1}[\mathbf{q}_{-}]\}\right). \tag{44}\]
Therefore, by Slutsky's theorem, the \(o(1)\) term in (41) can be ignored; since the limiting normal distribution in (44) is symmetric about zero, the desired probability \(P(\sqrt{n_{1}}\{\hat{\xi}_{p}-\xi_{p}\}\leq t)\) converges to the cumulative distribution function of the normal distribution in (44) evaluated at \(t\).
This completes the proof.
### Proof of Theorem 4.5
This theorem presents the Bahadur representation for the DRM-based quantile estimator \(\hat{\xi}_{p}\) at quantile level \(p\) for distribution \(G_{1}\).
Proof.: We first state a rate result for the distribution estimator \(\hat{G}_{1}\), as follows.
\[\sup_{|x-\xi_{p}|\leq O_{p}(n_{1}^{-1/2})}|\{\hat{G}_{1}(x)-\hat{G}_{1}(\xi_{p} )\}-\{G_{1}(x)-G_{1}(\xi_{p})\}|=O_{p}(n_{1}^{-3/4}\log^{1/2}n_{1}). \tag{45}\]
For a better presentation, we delay the proof of the claim in (45) to the end of this proof.
We proceed to show the Bahadur representation.
Because the density \(g_{1}(\cdot)\) is continuous at \(\xi_{p}\), we have
\[G_{1}(\hat{\xi}_{p})-G_{1}(\xi_{p})=g_{1}(\xi_{p})\{\hat{\xi}_{p}-\xi_{p}\}+O _{p}(n_{1}^{-1}).\]
Since \(\hat{G}_{1}(x)\) is a discrete step function that has an increment of size \(O(n_{1}^{-1})\) at each observation (jump point), by the definition of \(\hat{\xi}_{p}\), we have \(\hat{G}_{1}(\hat{\xi}_{p})-p=O_{p}(n_{1}^{-1})\). Further, we have shown in Theorem 4.4 that \(\hat{\xi}_{p}-\xi_{p}=O_{p}(n_{1}^{-1/2})\). Combining these results and by letting \(x\) be \(\hat{\xi}_{p}\) in the claim in (45), we have
\[\{p-\hat{G}_{1}(\xi_{p})\}-g_{1}(\xi_{p})\{\hat{\xi}_{p}-\xi_{p}\}+O_{p}(n_{1 }^{-1})=O_{p}(n_{1}^{-3/4}\log^{1/2}n_{1}),\]
which, after rearrangement, leads to the desired result:
\[\hat{\xi}_{p}-\xi_{p}=\frac{G_{1}(\xi_{p})-\hat{G}_{1}(\xi_{p})}{g_{1}(\xi_{p} )}+O_{p}(n_{1}^{-3/4}\log^{1/2}n_{1}).\]
Finally, we now present the proof of the claim in (45). The proof is similar to the proof of Lemma A.2 in Chen and Liu (2013), with some modifications. Without loss of generality, we assume that \(x\geq\xi_{p}\).
Recall that \(\hat{G}_{1}(x)\) and \(G_{1}(x)\) can be written as
\[\hat{G}_{1}(x)= n_{1}^{-1}\sum_{k=0,1}\sum_{j=1}^{n_{k}}\rho_{n,1}h_{n}(x_{kj}, \hat{\mathbf{\theta}})\mathbb{1}\left(x_{kj}\leq x\right),\] \[G_{1}(x)= n_{1}^{-1}\sum_{k=0,1}\sum_{j=1}^{n_{k}}\mathbb{E}_{k}[\rho_{n,1 }h_{n}(x_{kj},\mathbf{\theta}^{*})\mathbb{1}\left(x_{kj}\leq x\right)],\]
where
\[h_{n}(x,\mathbf{\theta})=\frac{\exp\{\mathbf{\theta}^{\top}\mathbf{q}(x)\}}{\rho_{n,0} +\rho_{n,1}\exp\{\mathbf{\theta}^{\top}\mathbf{q}(x)\}}.\]
We also define another function by replacing \(\mathbf{\theta}\) with \(\mathbf{\theta}^{*}\) in \(\hat{G}_{1}\):
\[G_{1}^{*}(x)=n_{1}^{-1}\sum_{k=0,1}\sum_{j=1}^{n_{k}}\rho_{n,1}h_{n}(x_{kj}, \mathbf{\theta}^{*})\mathbb{1}\left(x_{kj}\leq x\right).\]
We then rewrite the left-hand side of (45) as
\[\{\hat{G}_{1}(x)-\hat{G}_{1}(\xi_{p})\}-\{G_{1}(x)-G_{1}(\xi_{p})\}\] \[= \{\hat{G}_{1}(x)-\hat{G}_{1}(\xi_{p})\}-\{G_{1}^{*}(x)-G_{1}^{*}( \xi_{p})\} \tag{46}\] \[+\{G_{1}^{*}(x)-G_{1}^{*}(\xi_{p})\}-\{G_{1}(x)-G_{1}(\xi_{p})\}. \tag{47}\]
Next, we deal with the terms (46) and (47) one by one.
For the term (47), because \(\mathbb{E}[G_{1}^{*}(x)]=G_{1}(x)\) and \(G_{1}^{*}\) is also a distribution function, following the proof in Serfling (1980, Lemma 2.5.4E), we have
\[\sup_{|x-\xi_{p}|\leq O_{p}(n_{1}^{-1/2})}|\{G_{1}^{*}(x)-G_{1}^{*}(\xi_{p})\} -\{G_{1}(x)-G_{1}(\xi_{p})\}|=O_{p}(n_{1}^{-3/4}\log^{1/2}n_{1}). \tag{48}\]
For the term (46), we note that
\[\{\hat{G}_{1}(x)-\hat{G}_{1}(\xi_{p})\}-\{G_{1}^{*}(x)-G_{1}^{*}( \xi_{p})\}\] \[= n_{1}^{-1}\sum_{k,j}\left[\rho_{n,1}h_{n}(x_{kj},\hat{\mathbf{\theta }})-\rho_{n,1}h_{n}(x_{kj},\mathbf{\theta}^{*})\right]\mathbb{1}\left(\xi_{p}<x_{ kj}\leq x\right). \tag{49}\]
By the mean value theorem, the bracketed difference in (49) equals
\[\rho_{n,1}h_{n}(x_{kj},\hat{\mathbf{\theta}})-\rho_{n,1}h_{n}(x_{kj},\mathbf{ \theta}^{*})=(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})^{\top}\mathbf{q}(x_{kj})\frac{ \rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\mathbf{\theta}_{n})}{\rho_{n,0}+\rho_{n,1}\exp\{ \mathbf{\theta}_{n}^{\top}\mathbf{q}(x_{kj})\}},\]
for some \(\mathbf{\theta}_{n}\) between \(\hat{\mathbf{\theta}}\) and \(\mathbf{\theta}^{*}\). Plugging it back in (49) yields
\[\{\hat{G}_{1}(x)-\hat{G}_{1}(\xi_{p})\}-\{G_{1}^{*}(x)-G_{1}^{*}( \xi_{p})\}\] \[= (\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})^{\top}\left\{n_{1}^{-1}\sum_{ k,j}\mathbf{q}(x_{kj})\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\mathbf{\theta}_{n})}{ \rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}_{n}^{\top}\mathbf{q}(x_{kj})\}}\mathbb{1 }\left(\xi_{p}<x_{kj}\leq x\right)\right\}\] \[= (\hat{\mathbf{\theta}}-\mathbf{\theta}^{*})^{\top}O_{p}(n_{1}^{-1/4}). \tag{50}\]
Here we prove the last equality. The expression in the curly brackets is a sum of two terms indexed by \(k=0,1\). For the term with \(k=0\), because
\[\frac{\rho_{n,0}^{2}h_{n}(x_{kj},\mathbf{\theta}_{n})}{\rho_{n,0}+\rho_{n,1}\exp\{ \mathbf{\theta}_{n}^{\top}\mathbf{q}(x_{kj})\}}<\exp\{\mathbf{\theta}_{n}^{\top} \mathbf{q}(x_{kj})\},\]
we have
\[\left\|n_{1}^{-1}\sum_{j=1}^{n_{0}}\mathbf{q}(x_{0j})\frac{\rho_{ n,0}\rho_{n,1}h_{n}(x_{0j},\mathbf{\theta}_{n})}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{ \theta}_{n}^{\top}\mathbf{q}(x_{0j})\}}\mathbb{1}\left(\xi_{p}<x_{0j}\leq x \right)\right\|\] \[= \left\|n_{0}^{-1}\sum_{j=1}^{n_{0}}\mathbf{q}(x_{0j})\frac{\rho_{ n,0}^{2}h_{n}(x_{0j},\mathbf{\theta}_{n})}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{\theta}_{n}^{ \top}\mathbf{q}(x_{0j})\}}\mathbb{1}\left(\xi_{p}<x_{0j}\leq x\right)\right\|\] \[= O(1)n_{0}^{-1}\sum_{j=1}^{n_{0}}\|\mathbf{q}(x_{0j})\|\exp\{\mathbf{ \theta}_{n}^{\top}\mathbf{q}(x_{0j})\}\mathbb{1}(\xi_{p}<x_{0j}\leq x)\] \[= O(1)\operatorname{\mathbb{E}}_{0}[\|\mathbf{q}(X)\|\exp\{\mathbf{ \theta}_{n}^{\top}\mathbf{q}(X)\}\mathbb{1}\left(\xi_{p}<X\leq x\right)]+o_{p} (1),\]
by the weak law of large numbers for triangular arrays. Note that, without loss of generality, here we regard \(\mathbf{\theta}_{n}\) as a sample-independent quantity satisfying \(\|\mathbf{\theta}_{n}-\mathbf{\theta}^{*}\|\leq O_{p}(n_{1}^{-1/2})\) uniformly. Then, by the Cauchy-Schwarz inequality, uniformly over \(x\) in an \(O_{p}(n_{1}^{-1/2})\) neighbourhood of \(\xi_{p}\),
\[\operatorname{\mathbb{E}}_{0}[\|\mathbf{q}(X)\|\exp\{\mathbf{\theta}_{n}^{\top}\mathbf{q}(X)\}\mathbb{1}(\xi_{p}<X\leq x)]=\operatorname{\mathbb{E}}_{1}[\|\mathbf{q}(X)\|\exp\{(\mathbf{\theta}_{n}-\mathbf{\theta}^{*})^{\top}\mathbf{q}(X)\}\mathbb{1}(\xi_{p}<X\leq x)]\] \[\leq \sqrt{\operatorname{\mathbb{E}}_{1}\|\mathbf{q}(X)\exp\{(\mathbf{\theta}_{n}-\mathbf{\theta}^{*})^{\top}\mathbf{q}(X)\}\|^{2}}\sqrt{P(\xi_{p}<X\leq x;G_{1})}\] \[= O(1)O_{p}(n_{1}^{-1/4}),\]
where the first order assessment is uniformly over \(\|\mathbf{\theta}_{n}-\mathbf{\theta}^{*}\|\leq O_{p}(n_{1}^{-1/2})\) by Condition (iii) and applying the Cauchy-Schwarz inequality again; the second order assessment is because \(G_{1}(x)\) is a continuous distribution function.
For the term with \(k=1\), because
\[\frac{\rho_{n,0}\rho_{n,1}h_{n}(x_{kj},\mathbf{\theta}_{n})}{\rho_{n,0}+\rho_{n,1} \exp\{\mathbf{\theta}_{n}^{\top}\mathbf{q}(x_{kj})\}}<1,\]
we have
\[\left\|n_{1}^{-1}\sum_{j=1}^{n_{1}}\mathbf{q}(x_{1j})\frac{\rho_{ n,0}\rho_{n,1}h_{n}(x_{1j},\mathbf{\theta}_{n})}{\rho_{n,0}+\rho_{n,1}\exp\{\mathbf{ \theta}_{n}^{\top}\mathbf{q}(x_{1j})\}}\mathbb{1}\left(\xi_{p}<x_{1j}\leq x \right)\right\|\] \[= O(1)n_{1}^{-1}\sum_{j=1}^{n_{1}}\|\mathbf{q}(x_{1j})\|\mathbb{1 }\left(\xi_{p}<x_{1j}\leq x\right)\] \[= O(1)\operatorname{\mathbb{E}}_{1}[\|\mathbf{q}(X)\|\mathbb{1} \left(\xi_{p}<X\leq x\right)]+o_{p}(1),\]
by the weak law of large numbers for triangular arrays. Further, by the Cauchy-Schwarz inequality, uniformly over \(x-\xi_{p}=O_{p}(n_{1}^{-1/2})\) we have
\[\operatorname{\mathbb{E}}_{1}[\|\mathbf{q}(X)\|\mathbb{1}\left(\xi_{p}<X\leq x \right)]=O_{p}(n_{1}^{-1/4}).\]
Finally, by using the fact that \(\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}=O_{p}(n_{1}^{-1/2})\), together with (50), we have
\[\sup_{|x-\xi_{p}|\leq O_{p}(n_{1}^{-1/2})}|\{\hat{G}_{1}(x)-\hat{G}_{1}(\xi_{p })\}-\{G_{1}^{*}(x)-G_{1}^{*}(\xi_{p})\}|=O_{p}(n_{1}^{-3/4}). \tag{51}\]
Combining (51) and (48), the claim in (45) is proved.
Up to this point, the proof of Theorem 4.5 is complete. | 多数統計的および計量経済学アプリケーションでは、個人が集められたサンプルは、明確に共通する潜在構造を示しており、相互に関連している集団から得られます。これらの潜在構造を統合したモデルを使用することで、推論の効率性を向上させることができます。最近、多くの研究者たちが潜在構造を考慮した半パラメトリック密度比モデル (DRM) を採用して、その存在を考慮しています。DRMは、プールされたデータを使用して各集団の分布を推定し、非パラメトリック法よりも統計的により効率的な推定を行うことができます。この論文では、DRMの効率向上限界について調査します。特に、各集団のサンプルサイズが大きく異なる状況について調査します。この状況では、DRMに基づく推論は、パラメトリックモデルを仮定した状況と同様の効率性を達成します。考慮するestimandsには、モデルパラメータ、分布関数、量子が含まれます。特定の焦点に、サ |
2305.00594 | The MCC approaches the geometric mean of precision and recall as true
negatives approach infinity | The performance of a binary classifier is described by a confusion matrix
with four entries: the number of true positives (TP), true negatives (TN),
false positives (FP), and false negatives (FN).
The Matthew's Correlation Coefficient (MCC), F1, and Fowlkes--Mallows (FM)
scores are scalars that summarize a confusion matrix. Both the F1 and FM scores
are based on only three of the four entries in the confusion matrix (they
ignore TN). In contrast, the MCC takes into account all four entries of the
confusion matrix and thus can be seen as providing a more representative
picture.
However, in object detection problems, the number of true negatives is so
large that measuring it is often intractable. Thus we ask, what happens to the MCC as
the number of true negatives approaches infinity? This paper provides insight
into the relationship between the MCC and FM score by proving that the
FM-measure is equal to the limit of the MCC as the number of true negatives
approaches infinity. | Jon Crall | 2023-04-30T22:36:47 | http://arxiv.org/abs/2305.00594v2 | # The MCC approaches the geometric mean of precision and recall as true negatives approach infinity.
###### Abstract
The performance of a binary classifier is described by a confusion matrix with four entries: the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).
The Matthew's Correlation Coefficient (MCC), F1, and Fowlkes-Mallows (FM) scores are scalars that summarize a confusion matrix. Both the F1 and FM scores are based on only three of the four entries in the confusion matrix (they ignore TN). In contrast, the MCC takes into account all four entries of the confusion matrix and thus can be seen as providing a more representative picture.
However, in object detection problems, the number of true negatives is so large that measuring it is often intractable. Thus we ask, what happens to the MCC as the number of true negatives approaches infinity? This paper provides insight into the relationship between the MCC and FM score by proving that the FM-measure is equal to the limit of the MCC as the number of true negatives approaches infinity.
Confusion Matrix, Binary Classification, Fowlkes-Mallows Index, Matthew's Correlation Coefficient, F1
## 1 Introduction
Evaluation of binary classifiers is central to the quantitative analysis of machine learning models [1]. Given a finite set of examples with known real labels, the quality of a set of corresponding predicted labels can be quantified using a \(2\times 2\) confusion matrix. A confusion matrix counts the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) a model predicts with respect to the real labels. The confusion matrix is often written as:
\[\begin{bmatrix}\texttt{TP}&\texttt{FP}\\ \texttt{FN}&\texttt{TN}\end{bmatrix} \tag{1}\]
This matrix provides a holistic view of classifier quality; however, it is often desirable to summarize performance using fewer numbers. Two popular metrics defined on a confusion matrix are precision and recall.
Precision -- also known as the positive-predictive-value (PPV) -- is the fraction of positive predictions that are correct.
\[\texttt{PPV}=\frac{\texttt{TP}}{\texttt{TP}+\texttt{FP}} \tag{2}\]
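For concreteness, the following minimal sketch (not from the paper) computes the MCC and the FM score \(\sqrt{\texttt{PPV}\cdot\texttt{TPR}}\), where TPR denotes the recall defined next; the loop numerically illustrates the limit proved in this paper, using hypothetical counts.

```python
# MCC and Fowlkes-Mallows score, plus a numerical check that MCC -> FM
# as TN grows (hypothetical counts).
import math

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

def fm(tp, fp, fn):
    # geometric mean of precision TP/(TP+FP) and recall TP/(TP+FN)
    return math.sqrt((tp / (tp + fp)) * (tp / (tp + fn)))

tp, fp, fn = 80, 20, 10
print("FM =", fm(tp, fp, fn))
for tn in (10, 10**3, 10**6, 10**9):
    print(f"TN={tn}: MCC={mcc(tp, tn, fp, fn):.6f}")  # approaches FM
```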
Recall -- also known as the true positive rate (TPR), sensitivity, or probability of detection (PD) -- is the fraction of real positive cases that are correct. | 二値分類器の性能は、4つの項目で表される混同行列で記述されます。真の正例(TP)、真の負例(TN)、偽の正例(FP)、偽の負例(FN)です。マッティアス相関係数(MCC)、F1値、Fowlkes-Mallows (FM)値は混同行列を要約する値であり、F1値とFM値は混同行列の3つの項目(TNを無視)に基づいています。一方、MCCは混同行列の4つの項目を考慮し、より代表的な情報をもたらします。しかし、物体検出の問題では、真の負例の数が多いと、測定が不可能になることがあります。そこで、真の負例の数が増えていくにつれて、MCCはどうなるのかを検討します。この論文では、FM値が真の負例数が増えていくにつれてMCCの極限値となることを |
2309.08209 | Attitude Control and Low Cost Design of UAV Bicopter | This paper presents a control system for the attitude and a low-cost design of a
Bicopter. The control system uses a PID controller that receives feedback from
an IMU to calculate control inputs that adjust the Bicopter's attitude (roll,
pitch and yaw angles) which is resistant to disturbances (wind noise) on a test
bed. The control system is implemented on a hardware platform consisting of a
Bicopter, an IMU sensor, and a microcontroller with low cost design. In
mechanical design, the Bicopter is designed to more closely resemble the letter
"V" so that the distribution of the centre of mass (CoM) of the Bicopter can be
such that the servomotor torque reaction is parallel to the axis of rotation of
the Bicopter during the movement of the pitch angle attitude. In electronic
design, the Bicopter was developed using the ATmega328P microcontroller. | Fahmizal, Hanung Adi Nugroho, Adha Imam Cahyadi, Igi Ardiyanto | 2023-09-15T07:19:06 | http://arxiv.org/abs/2309.08209v1 | # Attitude Control and Low Cost Design of UAV Bicopter
###### Abstract
This paper presents a control system for the attitude and low-cost design of a Bicopter. The control system uses a PID controller that receives feedback from an IMU to calculate control inputs that adjust the Bicopter's attitude (roll, pitch and yaw angles) which is resistant to disturbances (wind noise) on a test bed. The control system is implemented on a hardware platform consisting of a Bicopter, an IMU sensor, and a microcontroller with low cost design. In mechanical design, the Bicopter is designed to more closely resemble the letter "V" so that the distribution of the centre of mass (CoM) of the Bicopter can be such that the servomotor torque reaction is parallel to the axis of rotation of the Bicopter during the movement of the pitch angle attitude. In electronic design, the Bicopter was developed using the ATmega328P microcontroller.
## I Introduction
Unmanned aerial vehicle (UAV) Bicopters are becoming increasingly popular due to their ability to perform a wide range of tasks, such as aerial photography, monitoring, and surveying [1]. The unique design of Bicopters, which combines the features of both fixed-wing and rotary-wing aircraft, makes them well suited to many applications [2, 3]. However, controlling the attitude of a Bicopter can be challenging due to the complex and nonlinear dynamics involved.
The control of attitude is a critical issue in the design and operation of UAVs. In particular, for a UAV Bicopter, which has a hybrid design combining the advantages of a helicopter and a fixed-wing aircraft, controlling the attitude is essential for stable flight and manoeuvrability. Conventional control approaches, such as proportional-integral-derivative (PID) controllers, have been widely used for Bicopter attitude control [4, 5, 6, 7, 8]. These controllers are easy to implement and have been shown to be effective in many cases.
Advanced model-based controllers, such as linear quadratic regulators (LQR), have been proposed as an alternative to PID controllers [9, 10, 11, 12, 13]. These controllers use a mathematical model of the Bicopter to predict its behavior and adjust the control inputs accordingly. While LQR controllers can be more effective than PID controllers in some cases, they are also more complex and require more computational resources.
Youming Qin et al. [4] detailed a Bicopter concept called Gemini, focusing on how the Bicopter can be put to use in enclosed environments. Their study details the full process of creating the Gemini platform, from selecting the optimal propeller through aerodynamic analysis, system design, optimisation, and implementation of the controls. Cascaded PID controllers are used in practice for Gemini's attitude controller; however, that work relies on a high-cost flight controller.
In 2022, research on a Bicopter mechanical design without servomotors was carried out by Youming Qin et al. [14]. That study replaced the servomotors with cyclic blade-pitch control, but the approach requires a magnetic encoder sensor on the cyclic blade system, again resulting in a high-cost flight controller.
This paper makes the following contributions: 1) developing and implementing a PID controller design that maintains a stable UAV Bicopter attitude resistant to disturbances (wind noise) on a test bed; 2) designing and manufacturing the mechanical (tilt mechanism) and electronic (flight controller) subsystems of the UAV Bicopter on a low-cost principle.
This paper's remaining sections are organized as follows: Section II covers the methodology of mechanical design, electronics design, attitude sensor, and attitude modelling. Section III describes the design of the attitude control using the PID controller. Experimental results are presented in Section IV to demonstrate the value and effectiveness of the proposed methodologies. Section V concludes the paper.
## II Materials and Methods
### Mechanical Design
Bicopters are a type of UAV that have two rotors that are fixed in a parallel configuration, which allows them to perform VTOL and hover like a helicopter, as well as to fly forward like a fixed-wing aircraft. Designing the mechanics of a Bicopter involves several considerations, including the size and weight of the vehicle, the choice of materials, the design of the rotors, and the placement of the motors. The size and weight of the Bicopter will determine the amount of lift required and the amount of power needed to achieve flight. The size and weight will also determine the maximum payload capacity of the Bicopter. The rotors are a critical component of the Bicopter, as they provide lift and control the vehicle's attitude.
The Bicopter is constructed with two driving rotors and two servomotors for tilting the two rotors in opposite directions. Figure 1 shows the right rotor thrust (\(F_{R}\)) and left rotor thrust (\(F_{L}\)) created by the rotor, propeller, and their components in the \(x\) and \(z\) axes. By altering the magnitude of the rotor thrusts \(F_{R}\) and \(F_{L}\), the rolling movement may be adjusted. This paper develops the mechanical design of the Bicopter using the Autodesk Inventor Student version, printed on a UP2 3D printer. The mechanical design of the UAV Bicopter consists of an arm, rotor holder and body. As shown in Fig. 1, the Bicopter is designed to more closely resemble the letter "V" so that the distribution of the centre of mass (CoM) of the Bicopter can be such that the servomotor torque reaction is parallel to the axis of rotation of the Bicopter during the movement of the pitch angle attitude.
The test bed rig is a device used as an evaluation platform for the stability of the rotational motion of roll, pitch, and yaw on a Bicopter. Without needing to fly the Bicopter, the stability of its attitude can be verified with this test bed rig. The Bicopter's attitude when disturbed can also be observed with this test bed rig. Figure 2 illustrates the design of the test bed rig that will be used for Bicopter attitude testing.
### Electronics Design
The ATmega328P serves as the primary microcontroller in the Bicopter electronics system. The actuators comprise two MG90S servomotors and two Sunnysky X2216 800 KV rotors (left and right), together with one MPU6050 IMU sensor. Figure 3 presents the printed circuit board (PCB) design for the Bicopter electronic system. This PCB is also known as the Bicopter's flight controller. The electronic system is also coupled to a personal computer (PC) via serial communication, with the graphical user interface (GUI) automatically showing sensor readings in real time as presented in Fig. 4.
### Attitude Sensor
The _motion processing unit_ (MPU) 6050 is a type of inertial measurement unit (IMU) that is commonly used in small UAVs and other electronic devices that require accurate motion sensing. It is a small and affordable sensor that combines both accelerometer and gyroscope functionality. The accelerometer measures acceleration along the \(x\), \(y\), and \(z\) axes and is used to determine the orientation of the device. It senses both static and dynamic acceleration due to gravity and motion respectively. It is able to measure acceleration in the range of \(\pm 2g\), \(\pm 4g\), \(\pm 8g\), or \(\pm 16g\).
In this paper, the IMU MPU 6050 is used in the Bicopter orientation sensor system. This sensor's configuration is as follows: it has six degrees of freedom and is made up of two
Figure 1: Mechanical design of Bicopter.
Figure 2: Test bed rig for attitude testing of Bicopter.
types of sensors, namely accelerometers and gyroscopes with data transmission protocols based on inter-integrated circuits (I2C) [15]. The IMU MPU 6050 sensor produces an angle on the x-axis called the _phi_ (\(\phi\)) or roll angle, a pitch angle on the y-axis called the _theta_ (\(\theta\)), and a yaw angle on the z-axis called the _psi_ (\(\psi\)).
Noise is an important distraction that must be considered in a measuring process. Besides that, noise can also interfere with the process in a closed-loop control system. Because of that, filtering techniques are needed to separate the actual signal from a set of noises. This paper uses a complementary filter (CF) to remove noise [16; 17]. The configuration of the CF for the pitch angle is presented in Fig. 5. The characteristics of the raw accelerometer sensor data are filtered in the low-frequency region, and the gyroscope data are in the high-frequency region [18]. Therefore, to compensate for the sensor data, apply a low-pass filter (LPF) and a high-pass filter (HPF).
Theoretically, an LPF is shown in Eq. (1).
\[V_{in}(t)-V_{out}(t)=RC\frac{dV_{out}}{dt} \tag{1}\]
Equation 1 can be discretized into Eq. (2). Furthermore, for simplicity, it is assumed that the input and output are sampled at the same time interval, namely \(\Delta_{T}\). Input \(V_{in}\) is defined by \(x_{i}\) and output \(V_{out}\) is defined by \(y_{i}\).
\[x_{i}-y_{i}=RC\frac{y_{i}-y_{i-1}}{\Delta_{T}}\] \[y_{i}=x_{i}\left(\frac{\Delta_{T}}{RC+\Delta_{T}}\right)+y_{i-1 }\left(\frac{RC}{RC+\Delta_{T}}\right)\] \[y_{i}=\alpha x_{i}+\left(1-\alpha\right)y_{i-1} \tag{2}\]
where \(\alpha=\frac{\Delta_{T}}{RC+\Delta_{T}}\), \(RC=\frac{1}{2\pi f_{c}}\), \(f_{c}\) is the cutoff frequency, \(\Delta_{T}\) is the sampling period, and the smoothing factor lies between \(0\leq\alpha\leq 1\). The HPF is then defined as in Eq. (3).
\[y_{i}=RC\left(\frac{x_{i}-x_{i-1}}{\Delta_{T}}-\frac{y_{i}-y_{i-1}}{\Delta_{T}}\right)\] \[y_{i}=\left(\frac{RC}{RC+\Delta_{T}}\right)y_{i-1}+\left(\frac{RC}{RC+\Delta_{T}}\right)\left(x_{i}-x_{i-1}\right)\] \[y_{i}=\left(1-\alpha\right)\left(y_{i-1}+x_{i}-x_{i-1}\right) \tag{3}\]
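For illustration, the complementary filter of Fig. 5 can be sketched in a few lines of Python. This is a minimal sketch, not the firmware running on the ATmega328P: the 2.8 ms sampling period matches the value reported later in the paper, while the 5 Hz cutoff frequency and the function name are our own assumptions.

```python
import math

def complementary_filter(acc, gyro_rate, dt=0.0028, fc=5.0):
    """Fuse accelerometer-derived pitch with the integrated gyro rate,
    following the structure of Fig. 5 and Eqs. (2)-(3)."""
    rc = 1.0 / (2.0 * math.pi * fc)   # RC = 1 / (2*pi*fc)
    alpha = dt / (rc + dt)            # smoothing factor from Eq. (2)
    angle = 0.0
    angles = []
    for (ax, ay, az), rate in zip(acc, gyro_rate):
        # accelerometer pitch estimate (degrees): low-frequency reference
        acc_angle = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
        # gyro path is integrated (high-passed), accel path low-passed
        angle = (1.0 - alpha) * (angle + rate * dt) + alpha * acc_angle
        angles.append(angle)
    return angles
```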
In addition to implementing the IMU MPU 6050 using CF, this paper also uses a Quaternion-based sensor fusion processing technique. Quaternions were introduced to improve computational efficiency and to avoid gimbal lock and singularity [19; 20] problems in the Euler angle representation [21; 22; 23]. Quaternions have four dimensions: one real dimension (the scalar element) and three imaginary dimensions (the vector element). Each of these imaginary dimensions has a unit value of the square root of -1, but these distinct square roots of -1 are perpendicular to one another; they are denoted \(i,j\)
Figure 4: Graphical user interface (GUI) of Bicopter.
Figure 5: Complementary filter block diagram.
and \(k\). So, a Quaternion can be represented as \(q=w+ix+jy+kz\). A Quaternion can be converted into a 3D space like Euler and the yaw-pitch-roll (YPR) representation [24], [25]. This makes it easier to imagine and describe rotations along the \(x\), \(y\), and \(z\) axes. First, by extracting the gravity \(g=\begin{bmatrix}g_{x}&g_{y}&g_{z}\end{bmatrix}\) from the Quaternion defined by Eq. (4).
\[\begin{bmatrix}g_{x}\\ g_{y}\\ g_{z}\end{bmatrix}=\begin{bmatrix}2\left(q_{x}q_{z}-q_{w}q_{y}\right)\\ 2\left(q_{w}q_{x}+q_{y}q_{z}\right)\\ q_{w}^{2}-q_{x}^{2}-q_{y}^{2}+q_{z}^{2}\end{bmatrix} \tag{4}\]
Then YPR and Euler can be obtained by conversion in Eq. (5) and Eq. (6).
\[\begin{bmatrix}yaw\\ pitch\\ roll\end{bmatrix}=\begin{bmatrix}\arctan 2\left(2q_{x}q_{y}-2q_{w}q_{z},\,2q_{w}^{2}+2q_{x}^{2}-1\right)\\ \arctan\left(\frac{g_{x}}{\sqrt{g_{y}^{2}+g_{z}^{2}}}\right)\\ \arctan\left(\frac{g_{y}}{\sqrt{g_{x}^{2}+g_{z}^{2}}}\right)\end{bmatrix} \tag{5}\]

\[\begin{bmatrix}\psi\\ \theta\\ \phi\end{bmatrix}=\begin{bmatrix}\arctan 2\left(2q_{x}q_{y}-2q_{w}q_{z},\,2q_{w}^{2}+2q_{x}^{2}-1\right)\\ -\arcsin\left(2q_{x}q_{z}+2q_{w}q_{y}\right)\\ \arctan 2\left(2q_{y}q_{z}-2q_{w}q_{x},\,2q_{w}^{2}+2q_{z}^{2}-1\right)\end{bmatrix} \tag{6}\]
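A minimal Python sketch of the quaternion-to-angle conversion in Eqs. (4)-(5) follows; the function name is our own, and a real implementation would also guard against the singular cases mentioned above.

```python
import math

def quat_to_ypr(qw, qx, qy, qz):
    """Yaw, pitch, roll (radians) from a unit quaternion via the gravity
    vector, following Eqs. (4)-(5)."""
    gx = 2.0 * (qx * qz - qw * qy)                  # Eq. (4)
    gy = 2.0 * (qw * qx + qy * qz)
    gz = qw * qw - qx * qx - qy * qy + qz * qz
    yaw = math.atan2(2 * qx * qy - 2 * qw * qz,
                     2 * qw * qw + 2 * qx * qx - 1)  # Eq. (5)
    pitch = math.atan(gx / math.sqrt(gy * gy + gz * gz))
    roll = math.atan(gy / math.sqrt(gx * gx + gz * gz))
    return yaw, pitch, roll

# identity quaternion: level attitude, all angles zero
print(quat_to_ypr(1.0, 0.0, 0.0, 0.0))
```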
### Attitude modelling
The right rotor thrust (\(F_{R}\)) and left rotor thrust (\(F_{L}\)) are generated by the propeller and rotor and their components in the \(x\) and \(z\) directions shown in Fig. 6. With the parameters described in Table 1. Using Newton's second law, the equations of forces in the \(x\), \(y\) and \(z\) directions are defined as given in Eq. (7) - (9).
\[\sum F_{x} =F_{R}\sin\gamma_{R}+F_{L}\sin\gamma_{L} \tag{7}\] \[\sum F_{y} =0\] (8) \[\sum F_{z} =F_{R}\cos\gamma_{R}+F_{L}\cos\gamma_{L} \tag{9}\]
The total lift (thrust) and moment of force from the Bicopter can be obtained from the input \(u\) which is written in Eq. (10) - (14). Where, \(C_{T}\) is the thrust coefficient of the propeller. \(\Omega_{R}\) and \(\Omega_{L}\) are the rotational speeds of the right and left rotors, \(\gamma_{R}\) and \(\gamma_{L}\) are the tilt angles of the right and left rotors.
\[u =\begin{bmatrix}u_{1}&u_{2}&u_{3}&u_{4}\end{bmatrix}^{T} \tag{10}\] \[u_{1} =C_{T}\left(\Omega_{R}^{2}\cos\gamma_{R}+\Omega_{L}^{2}\cos\gamma _{L}\right)\] (11) \[u_{2} =C_{T}\left(\Omega_{R}^{2}\cos\gamma_{R}-\Omega_{L}^{2}\cos\gamma _{L}\right)\] (12) \[u_{3} =C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}+\Omega_{L}^{2}\sin \gamma_{L}\right)\] (13) \[u_{4} =C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}-\Omega_{L}^{2}\sin \gamma_{L}\right) \tag{14}\]
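Equations (10)-(14) define a simple mixing rule from rotor speeds and tilt angles to the four control inputs; a direct transcription in Python (function and variable names are our own) could read:

```python
import math

C_T = 0.1222  # propeller thrust coefficient from Table 1

def control_inputs(omega_r, omega_l, gamma_r, gamma_l, ct=C_T):
    """Map rotor speeds and tilt angles to the control inputs u1..u4
    of Eqs. (10)-(14)."""
    fr = ct * omega_r ** 2  # right rotor thrust
    fl = ct * omega_l ** 2  # left rotor thrust
    u1 = fr * math.cos(gamma_r) + fl * math.cos(gamma_l)  # total lift
    u2 = fr * math.cos(gamma_r) - fl * math.cos(gamma_l)  # roll channel
    u3 = fr * math.sin(gamma_r) + fl * math.sin(gamma_l)  # pitch channel
    u4 = fr * math.sin(gamma_r) - fl * math.sin(gamma_l)  # yaw channel
    return u1, u2, u3, u4
```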
Bicopter dynamic movement can be divided into two subsystems, namely, the rotation subsystem (_roll_, _pitch_ and _yaw_) as the inner loop and the translation subsystem (\(x\), \(y\) position and \(z\) (altitude) position) as the outer loop. Based on the dynamic solution of the model using Newton-Euler [5], [6], [26], we get the equation of translational motion in Eq. (15) and Eq. (16) for rotational motion, where \(s=\sin\) and \(c=\cos\).
\[\ddot{x} =-\frac{1}{m}\left(s\phi s\psi+c\phi s\theta c\psi\right)u_{1}- \frac{c\theta c\psi}{m}u_{3}\] \[\ddot{y} =-\frac{1}{m}\left(-s\phi c\psi+c\phi s\theta s\psi\right)u_{1}+ \frac{c\theta s\psi}{m}u_{3}\] \[\ddot{z} =g-\frac{1}{m}\left(c\phi c\theta\right)u_{1}-\frac{s\theta}{m}u _{3} \tag{15}\]
\begin{table}
\begin{tabular}{l c c c} \hline Parameter & Symbols & Value & Unit \\ \hline Mass of the UAV Bicopter & \(m\) & 0.725 & \(kg\) \\ Gravitational acceleration & \(g\) & 9.81 & \(m.s^{-2}\) \\ Vertical distance between CoG and center of the rotor & \(h\) & 0.042 & \(m\) \\ Horizontal distance CoG and rotor center & \(L\) & 0.225 & \(m\) \\ Thrust coefficient & \(C_{T}\) & 0.1222 & - \\ The Moment of Inertia along x axis & \(I_{xx}\) & \(0.116\times 10^{-3}\) & \(kg.m^{2}\) \\ The Moment of Inertia along y axis & \(I_{yy}\) & \(0.0408\times 10^{-3}\) & \(kg.m^{2}\) \\ The Moment of Inertia along z axis & \(I_{zz}\) & \(0.105\times 10^{-3}\) & \(kg.m^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Bicopter dynamic model parameters.
Figure 6: Bicopter reference frames.
\[\ddot{\phi} = \frac{L}{I_{xx}}u_{2}\] \[\ddot{\theta} = \frac{h}{I_{yy}}u_{3}\] \[\ddot{\psi} = \frac{L}{I_{zz}}u_{4} \tag{16}\]
In this paper, the design of attitude control is the main focus. As illustrated in Fig. 8, the rolling motion occurs when there is a difference in the lift forces produced by the right and left rotors while the right and left servomotor tilt angles are zero (\(\cos(0)=1\)), so the rolling case can be written as Eq. (17). The pitching and yawing cases are presented in Eq. (18) and Eq. (19), with parameters described in Table 1.
\[\ddot{\phi} = \frac{L}{I_{xx}}C_{T}\left(\Omega_{R}^{2}-\Omega_{L}^{2}\right)=\frac{L}{I_{xx}}\left(F_{R}-F_{L}\right) \tag{17}\] \[\ddot{\theta} = \frac{h}{I_{yy}}C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}+\Omega_{L}^{2}\sin\gamma_{L}\right) \tag{18}\] \[\ddot{\psi} = \frac{L}{I_{zz}}C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}-\Omega_{L}^{2}\sin\gamma_{L}\right) \tag{19}\]
## III Attitude Control Design
The block diagram of the closed-loop control system for the attitude stability of the Bicopter is presented in Fig. 7. From this block diagram, it can be seen that there are four closed loops. The first is for the altitude control loop of the Bicopter, for the second, third and fourth loops are the attitude control of the Bicopter in the orientation of the motion angles of roll, pitch and yaw.
### Pid Attitude Roll Control
The attitude roll is a rotational movement of the Bicopter about the x-axis, which means this attitude movement will cause the translational displacement of the Bicopter on the y-axis to shift to the right and left. An illustration of the rolling motion of the Bicopter is shown in Fig. 8.
Given an input reference, the set point signal (SP), in the form of a Bicopter roll angle of 0 degrees, the deviation of the roll angle \((\phi)\) from the reference roll angle \((\phi_{r})\) is defined as the error in Eq. (20). Once the error value \((e)\) is known, the error derivative \(\left(\frac{de(t)}{dt}\right)\) can be calculated as shown in Eq. (21).
\[e_{\phi}(t)=\phi-\phi_{r} \tag{20}\]
\[\frac{de_{\phi}(t)}{dt}=\dot{\phi}-\dot{\phi_{r}} \tag{21}\]
Discrete PID control is the discrete-time form of the analog (continuous) PID control in Eq. (22), programmed and executed on a computer or microcontroller. The analog PID control must first be converted to a discrete form to implement discrete PID on a computer or microcontroller [27]. The formulation of discrete PID control can be seen in Eq. (22) - (25).
\[u(t)=K_{p}e(t)+K_{i}\int_{0}^{t}e(t)dt+K_{d}\frac{de(t)}{dt} \tag{22}\]
With \(K_{i}=\frac{1}{\tau_{i}}\) and \(K_{d}=\tau_{d}\), the integral and differential forms can be written in discrete form as in Eq. (23) and Eq. (24), so that they are obtained in the discrete PID control form in Eq. (25). \(e(k)\) is the current error, \(e(k-1)\) is the previous error and \(T\) is the sampling time.
\[\int_{0}^{t}e(t)dt\approx T\sum_{0}^{k}e(k) \tag{23}\] \[\frac{de(t)}{dt}\approx\frac{e(k)-e(k-1)}{T} \tag{24}\]
\[u(k)=K_{p}e(k)+K_{i}T\sum_{0}^{k}e(k)+\frac{K_{d}}{T}\left(e(k)-e(k-1)\right) \tag{25}\]
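For concreteness, the discrete law in Eq. (25) can be sketched as a small Python class. This is a minimal sketch, not the ATmega328P firmware; the class name is our own, and the sampling period is assumed from the value quoted in Section IV.

```python
class DiscretePID:
    """Discrete PID law of Eq. (25):
    u(k) = Kp*e(k) + Ki*T*sum(e) + (Kd/T)*(e(k) - e(k-1))."""

    def __init__(self, kp, ki, kd, t):
        self.kp, self.ki, self.kd, self.t = kp, ki, kd, t
        self.e_sum = 0.0   # running sum approximating the integral, Eq. (23)
        self.e_prev = 0.0  # previous error for the difference term, Eq. (24)

    def update(self, measured, reference):
        e = measured - reference  # error convention of Eq. (20)
        self.e_sum += e
        u = (self.kp * e
             + self.ki * self.t * self.e_sum
             + (self.kd / self.t) * (e - self.e_prev))
        self.e_prev = e
        return u

# roll-axis gains from Table 2, with an assumed 2.8 ms sampling period
roll_pid = DiscretePID(kp=3.3, ki=0.030, kd=23, t=0.0028)
```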
In the case of controlling the attitude roll of the Bicopter using PID control, the output value of the PID roll controller in Eq. (26) will be added to or subtracted from the given throttle value depending on the roll angle error condition; this condition can be explained by looking at the illustration in Fig. 8.
\[u_{\phi}(k)=K_{p\phi}\;e_{\phi}(k)+K_{i\phi}\;T\sum_{0}^{k}e_{\phi}(k)+\frac{K_{d\phi}}{T}\left(e_{\phi}(k)-e_{\phi}(k-1)\right) \tag{26}\]
### Pid Attitude Pitch Control
The attitude pitch is the rotational movement of the Bicopter about the _y-axis_, which means this attitude will cause the translational displacement of the Bicopter on the _x-axis_ to shift forwards and/or backwards. In the case of controlling the attitude pitch of the Bicopter using PID control, the output value of the PID pitch controller in Eq. (27) will be added to or subtracted from the given CenterServo value depending on the pitch angle error condition; this condition can be explained by looking at the illustration in Fig. 9.
\[u_{\theta}(k)=K_{p\theta}\;e_{\theta}(k)+K_{i\theta}\;T\sum_{0}^{k}e_{\theta}(k)+\frac{K_{d\theta}}{T}\left(e_{\theta}(k)-e_{\theta}(k-1)\right) \tag{27}\]
### Pid Attitude Yaw Control
The attitude yaw is a rotational movement of the Bicopter about the z-axis, which means that this attitude movement will cause the rotational movement of the Bicopter to rotate
clockwise (CW) or counterclockwise (CCW). In the case of controlling the attitude yaw of the Bicopter using PID control, the output value of the yaw PID controller in Eq. (28) will be added to or subtracted from the given CenterServo value depending on the yaw angle error condition; this condition can be explained by looking at the illustration in Fig. 10.
\[u_{\psi}(k)=K_{p\psi}\;e_{\psi}(k)+K_{i\psi}\;T\sum_{0}^{k}e_{\psi}(k)+\frac{K_{d\psi}}{T}\left(e_{\psi}(k)-e_{\psi}(k-1)\right) \tag{28}\]
### PID Controller Tuning
Before applying PID control parameters to control the attitude stability of the Bicopter, the tuning process of the PID attitude roll controller is carried out by simulating a dynamic model of the attitude roll of the Bicopter. Based on Eq. (16), the dynamics of the Bicopter rolling motion can be obtained in the form of a double integrator. If the moment of inertia \(I_{xx}\) is known to be \(0.116\times 10^{-3}\) and \(L\) is \(0.225\), the equation for the dynamic attitude roll transfer function can be obtained in Eq. (29).
\[\frac{\Phi}{U_{2}}=\frac{1939.7}{s^{2}} \tag{29}\]
Converting Eq. (29) to state-space form, with \(\phi=y\), \(u_{2}=u\), \(x_{1}=y\), \(x_{2}=\dot{y}\), \(\dot{x}_{1}=\dot{y}\) and \(\dot{x}_{2}=\ddot{y}\), we obtain:
\[\begin{split}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=1939.7u\end{split} \tag{30}\]
Arranging Eq. (30) in the form \(\dot{x}=Ax+Bu\), the matrices \(A\) and \(B\) are obtained as follows:

\[\dot{x}=Ax+Bu,\qquad A=\begin{bmatrix}0&1\\ 0&0\end{bmatrix},\qquad B=\begin{bmatrix}0\\ 1939.7\end{bmatrix} \tag{31}\]
It is known that the closed loop characteristic equation of the system in Eq. (31) can be obtained using the formula _det_\((sI-(A-BK))=0\), with the following description:
Figure 7: Control system block diagram for Bicopter flight stability.
\[det\left(sI-\left(A-BK\right)\right)=0\]
\[\left|s\begin{bmatrix}1&0\\ 0&1\end{bmatrix}-\left(\begin{bmatrix}0&1\\ 0&0\end{bmatrix}-\begin{bmatrix}0\\ 1939.7\end{bmatrix}\begin{bmatrix}K_{1}&K_{2}\end{bmatrix}\right)\right|=0\]
\[\left|\begin{bmatrix}s&-1\\ 1939.7K_{1}&s+1939.7K_{2}\end{bmatrix}\right|=0\]
\[s^{2}+\left(1939.7K_{2}\right)s+1939.7K_{1}=0 \tag{32}\]
It is known that the characteristic equation of a closed two-loop system in general can be defined in Eq. (33) where \(\zeta\) is the damping ratio and \(\omega_{n}\) is the natural frequency. By using the substitution method, it is possible to determine the gains of \(K_{1}\) and \(K_{2}\) according to \(\zeta\) and \(\omega_{n}\) based on the desired system performance.
\[s^{2}+2\zeta\omega_{n}s+\omega_{n}^{2}=0 \tag{33}\]
The closed-loop transfer function (CLTF) of the dynamic attitude roll plant in Eq. (29) with a proportional-differential (PD) controller is designed as shown in Fig. 11. Equation (34) gives the resulting CLTF.

\[\frac{y(s)}{r(s)}=\frac{\frac{1939.7K_{d}s+1939.7K_{p}}{s^{2}}}{1+\frac{1939.7K_{d}s+1939.7K_{p}}{s^{2}}}=\frac{1939.7K_{d}s+1939.7K_{p}}{s^{2}+1939.7K_{d}s+1939.7K_{p}} \tag{34}\]
From Eq. (32) and Eq. (34), it can be noticed that the value of \(K_{d}=K_{2}\) and \(K_{p}=K_{1}\), therefore if we want the system to have characteristics similar to Eq. (33), then we will get \(1939.7K_{d}s=2\zeta\omega_{n}s\) and \(1939.7K_{p}=\omega_{n}^{2}\). If the planned closed loop system in Eq. (34) has the characteristics of \(s^{2}+331s+1950=0\), \(K_{d}\) and \(K_{p}\) will be obtained in Eq. (35) and Eq. (36).
\[K_{d}=\frac{331}{1939.7}=0.1706 \tag{35}\]
\[K_{p}=\frac{1950}{1939.7}=1.0053 \tag{36}\]
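The gain computation in Eqs. (35)-(36) amounts to matching polynomial coefficients; a two-line check in Python reproduces the quoted values:

```python
# Coefficient matching between s^2 + 1939.7*Kd*s + 1939.7*Kp and the
# desired characteristic polynomial s^2 + 331*s + 1950 (Eqs. (32)-(36)).
b = 1939.7      # L / Ixx from Eq. (29)
kd = 331.0 / b  # Eq. (35)
kp = 1950.0 / b # Eq. (36)
print(f"Kd = {kd:.4f}, Kp = {kp:.4f}")  # Kd = 0.1706, Kp = 1.0053
```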
Figure 11: CLTF with PD controller.
Figure 8: Bicopter attitude roll condition; (a) roll angle with an error (\(+\)) value produces a translational movement on the y-axis, which is shifted to the right, (b) roll angle with an error (\(-\)) value produces a translational movement on the y-axis, which is shifted to the left.
Figure 9: Bicopter attitude pitch condition; (a) pitch angle with error (\(+\)) value produces translational movement on the x-axis, which is shifted forward, (b) pitch angle with error (\(-\)) value produces translational movement on the x-axis, which is shifted backwards.
## IV Results and Discussion
A PID controller was implemented in the experiment to maintain stable roll, pitch and yaw attitudes of the Bicopter. The test setup is shown in Fig. 12. With the help of the GUI presented in Fig. 4, the PID control parameters were found through experimental fine-tuning by comparing the response results. This tuning process produced the PID controller parameters described in Table 2.
Furthermore, the PID control is tested under disturbance by applying wind from a fan. The wind is regulated into three modes, with the wind speed measured using an anemometer.
The noise tests were carried out in three modes, namely three wind strengths. In the first experiment, shown in Fig. 13, the wind strength is set at a speed of about 8 Knots. At about the 300th data point, i.e. at time (300 x 2.8 ms = 8.4 s), the attitude pitch angle of the Bicopter increases to \(5^{\circ}\), meaning the Bicopter experiences a nose-up condition of \(5^{\circ}\). Up to 28 seconds into the experiment, the attitude pitch of the Bicopter is maintained, while the attitude roll experiences shocks with changes of \(\pm 3^{\circ}\).
In the second experiment, the wind strength is increased to 9 Knots. In Fig. 14 it can be seen that at around the 250th data point, i.e. at time (250 x 2.8 ms = 7.0 s), the attitude pitch angle of the Bicopter undergoes a nose-up process; by the 1000th data point, i.e. at time (1000 x 2.8 ms = 28.0 s), the attitude pitch angle is around \(7^{\circ}\), and the Bicopter experiences attitude roll shocks with a change of \(\pm 4^{\circ}\).
In the third experiment, the wind strength was set at around 10 Knots; as seen in Fig. 15, the attitude pitch dipped to \(11^{\circ}\) and the attitude roll shocks increased to \(\pm 6^{\circ}\). The roll and pitch attitude RMSE values measured on the test bed are presented in Table 3.
Table 3 shows the attitude RMSE of the Bicopter when tested on a test bed with three variations of static noise. The attitude RMSE measures how much the Bicopter's attitude (roll and pitch) deviates from the desired attitude; the higher the RMSE, the larger the deviation. The attitude RMSE increases with the wind power from the fan, because the wind gusts cause the Bicopter to wobble, which increases the deviation from the desired attitude. Table 3 also shows that the attitude RMSE is higher for pitch than for roll. This is because the Bicopter's rotors lie in the same plane, so differential rotor thrust provides more direct control authority over rolling than over pitching. Overall, the attitude RMSE of the Bicopter increases as the wind power from the fan increases. This information can be used to design a Bicopter that is more stable in windy conditions.
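The RMSE values in Table 3 follow from the standard root-mean-square definition; a minimal sketch (assuming a 0-degree setpoint, as in the experiments) might look like:

```python
import numpy as np

def attitude_rmse(angles_deg, setpoint_deg=0.0):
    """Root-mean-square error of an attitude trace, as reported in Table 3."""
    a = np.asarray(angles_deg, dtype=float)
    return float(np.sqrt(np.mean((a - setpoint_deg) ** 2)))
```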
In subsequent tests, a PID controller was implemented to maintain stable roll, pitch and yaw angle attitudes on the
\begin{table}
\begin{tabular}{l l l l} \hline \hline Parameter & Roll & Pitch & Yaw \\ \hline \(K_{p}\) & 3.3 & 3.3 & 6.8 \\ \(K_{i}\) & 0.030 & 0.030 & 0.045 \\ \(K_{d}\) & 23 & 23 & 0 \\ \hline \hline \end{tabular}
\end{table} TABLE 2: PID control parameters on attitude Bicopter using the test bed rig.
Figure 10: Bicopter attitude yaw condition; (a) yaw angle with error (\(+\)) value produces COW rotational movement, (b) yaw angle with error (\(-\)) value produces CW rotational movement.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Attitude} & \multicolumn{3}{c}{Wind power from the fan} \\ \cline{2-4} & 8 Knots & 9 Knots & 10 Knots \\ \hline Roll & 1.8868 & 2.7628 & 3.9183 \\ Pitch & 3.6764 & 4.2332 & 9.9868 \\ \hline \hline \end{tabular}
\end{table} TABLE 3: RMSE attitude of Bicopter when tested on a test bed with three variations of static noise.
Bicopter when flying indoors. The indoor flight test setup was carried out as shown in Fig. 18. The experimental fine-tuning of the PID controller parameters yielded the values presented in Table 4. Figure 16 shows the attitude roll and pitch responses during the flight test, and Figure 17 shows the yaw attitude response. The roll and pitch attitude RMSE values from the flight tests are presented in Table 5.
The sampling period of 2.8 ms indicates that the data were collected at a rate of 1000/2.8 ≈ 357 Hz. The attitude response shows that the Bicopter is able to track the desired attitude with a reasonable degree of accuracy.
## V Conclusion
The PID controller implemented for the attitude of the Bicopter has been tested for robustness on a test bed with various wind strengths. From the static noise test results, when the wind strength is set at 10 Knots, the RMSE value for attitude roll is 3.9183 and the RMSE value for attitude pitch is 9.9868. During the flight test, the PID controller maintained a stable attitude of the UAV Bicopter with an RMSE of 2.3728 for attitude roll and 4.4219 for attitude pitch.
The mechanical design of the UAV Bicopter was developed using the concept of a "V" shaped frame with the aim that the center of mass (CoM) distribution of the UAV Bicopter can be in a position that causes the servomotor torque reaction to be parallel to the axis of rotation of the Bicopter when the attitude pitch angle moves. The electronic design of the UAV Bicopter was developed on the principle of low cost using the ATmega328P microcontroller.
## Acknowledgment
This work was supported by the Indonesian Postgraduate Domestic Education Scholarship (BPPDN) with contract number 2974/UN1.P.IV/KPT/DSDM/2019.
| この論文は、姿勢制御と低コスト設計のシステムを提案しています。そのシステムは、IMUからのフィードバックに基づいてPID制御アルゴリズムを用いて、ロール、ピッチ、ヨーの角度を調整する制御入力を出力しています。これは、テストベッドでの風雑音などの干渉に強く、Bicopterの姿勢を安定させるためのものです。このシステムは、Bicopter、IMUセンサー、低コスト設計のマイクロコントローラを搭載したハードウェアプラットフォームで実装されています。機械設計では、BicopterはVの形状に近づけるように設計されています。これは、Bicopterの質量中心 (CoM) の分布を調整することで、ピッチ角度の姿勢変化時のサーボモータのトルク反作用を回転軸と平行にすることによります。電子設計では、ATmega328Pマイクロコントローラを用いてBicopterが開発されました。 |
2309.10446 | Vortex-core spectroscopy of $d$-wave cuprate high-temperature
superconductors | The mechanism of high-temperature superconductivity remains one of the great
challenges of contemporary physics. Here, we review efforts to image the vortex
lattice in copper oxide-based high-temperature superconductors and to measure
the characteristic electronic structure of the vortex core of a $d$-wave
superconductor using scanning tunneling spectroscopy. | Ivan Maggio-Aprile, Tejas Parasram Singar, Christophe Berthod, Tim Gazdić, Jens Bruér, Christoph Renner | 2023-09-19T09:05:06 | http://arxiv.org/abs/2309.10446v1 | # Vortex-core spectroscopy of \(d\)-wave cuprate high-temperature superconductors
###### Abstract
The mechanism of high-temperature superconductivity remains one of the great challenges of contemporary physics. Here, we review efforts to image the vortex lattice in copper oxide-based high-temperature superconductors and to measure the characteristic electronic structure of the vortex core of a \(d\)-wave superconductor using scanning tunneling spectroscopy.
## 1 Introduction
The observation of superconductivity at an unprecedentedly high temperature in Ba\({}_{x}\)La\({}_{5-x}\)Cu\({}_{5}\)O\({}_{5}\) over 35 years ago [1] has triggered considerable theoretical and experimental efforts to elucidate the underlying electron-pairing mechanism. To this day, high-temperature superconductivity (HTS) in copper oxide compounds remains a very active and challenging area of research. Among the many techniques used to investigate HTS, scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS) have made important contributions, measuring the superconducting gap amplitude and symmetry, characterizing the pseudogap phase and excitation spectrum of quasiparticles, exploring atomic-scale defects [2], as well as competing (charge-ordered) and parent (pair density wave) phases [3].
The fundamental excitations bound to magnetic vortices in type-II superconductors carry information about essential properties of the superconducting state. Their proper identification is therefore of primary interest to elucidate the mechanism driving HTS. In 1964, Caroli, de Gennes, and Matricon [4] used the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity to predict that Abrikosov vortices would host a collection of localized electron states bound to their cores. The subsequent observation of these localized states [5], and their response to disorder using STS [6], provided a spectacular verification of the BCS theory, affirming the existence of vortex-core bound states for an \(s\)-wave superconductor.
Early scanning tunneling spectroscopy maps of vortex cores in HTS were neither compatible with the discrete Caroli-de Gennes-Matricon (CdGM) bound states expected for an \(s\)-wave superconductor [4] nor with the continuum first calculated by Wang and MacDonald for a \(d\)-wave superconductor [7]. Instead of the expected zero-bias conductance peak (ZBCP) that splits with increasing distance from the core in the \(d\)-wave case, these experiments revealed low-energy (\(E<\Delta_{\rm SC}\), where \(\Delta_{\rm SC}\) is the superconducting gap) subgap states (SGSs) in YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) (Y123) [8] and a pseudogap like spectrum in Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) (Bi2212) [9]. Subsequent STS mapping with improved resolution also found the presence of SGSs in the vortex cores of Bi2212 [10; 11]. The SGSs in Bi2212 were associated with a periodic short range \(\approx 4a_{0}\times 4a_{0}\) modulation of the local density of states spanning the vortex-core region, where \(a_{0}\) is the atomic lattice parameter [12]. Whether the SGSs reported in Y123 are also associated with a periodic charge modulation remains an open question due to the extreme difficulty of obtaining atomic resolution STM data on this material.
## 2 Vortex imaging and core spectroscopy by STS
High-resolution real space imaging of the Abrikosov vortex lattice by STS was first achieved on 2H-NbSe\({}_{2}\)[5]. These images relied on plotting the conductance as a function of position while scanning the sample surface at a tunneling setpoint corresponding to the superconducting coherence peaks at the gap edges. Much more detailed information about the complexity of the vortex-core structure was subsequently obtained by measuring full \(I(V,\hat{r})\) or \(dI/dV(V,\hat{r})\) tunneling spectra on a dense grid over the sample surface, and plotting the conductance at different energies below and above the superconducting gap in an XY map [Fig. 1(a)] [13]. STM data offer much more than high-resolution images of the vortex lattice, as they provide a unique insight into the electronic structure of the vortex core. By fitting the vortex profile in a zero-bias conductance map, it is also possible to extract fundamental superconducting quantities, such as coherence length and Fermi velocity.
Tremendous progress has been made since the first observation of individual vortex cores by STS [15]. The energy resolution in the early experiments was not sufficient to resolve the
Figure 1: STS images of the vortex lattice in (a) 2H-NbSe\({}_{2}\) (0.46 \(\times\) 0.46 \(\mu\)m\({}^{2}\), 0.1 Tesla; adapted from [13]). (b) YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) (0.1 \(\times\) 0.1 \(\mu\)m\({}^{2}\), 6 Tesla; adapted from [8]), and (c) Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) (difference between STS maps at 5 Tesla and at 0 Tesla; adapted from [14]).
detailed CdGM electronic structure. Improved instrumentation and selected materials with a larger superconducting gap and a lower Fermi energy have since enabled to resolve the discrete CdGM states bound to the vortex core of an \(s\)-wave superconductor [16; 17]. However, the electronic structure of a \(d\)-wave vortex remained to be measured.
## 3 Vortex imaging and core spectroscopy in cuprate HTS
The observation of individual vortices in a \(d\)-wave HTS cuprate has remained a challenge for a long time, despite remarkable advances in scanning probe instrumentation and single crystal synthesis. This is mainly due to three factors: i) the difficulty of obtaining high-quality surfaces, which above all, exhibit homogeneous spectroscopy; ii) the absence of a salient spectral feature specific to the vortex cores in most STS data sets; and iii) a very short coherence length, which implies that vortex cores are very small and therefore easily missed by STS. These problems are exacerbated by the propensity of vortices in HTS to bind to defects, which prevents the formation of a regular lattice and necessitates disentangling the electronic structure of the defect from that of the vortex core. This last point is particularly annoying as some defects show a conductance peak at or near zero bias [18; 19], which might be mistaken for the \(d\)-wave ZBCP at the centre of the vortex.
### STS vortex mapping in YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-x}\)
YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-x}\) was the first \(d\)-wave cuprate high-temperature superconductor where vortices were successfully observed by STS [8]. A typical spectroscopic image obtained on a fully oxygenated single crystal (\(T_{c}=91\) K) in a magnetic field of 6 Tesla at 4.2 K is reproduced in Fig. 1(b). This was a remarkable result, since it was achieved on a thoroughly cleaned native surface (not cleaved), where atomic resolution topography was not possible. Nevertheless, tunneling spectroscopy was stable and very reproducible, with STS images revealing very small and slightly elongated vortex cores. The oval shape of the vortices is consistent with the ab-plane anisotropy in Y123, although the lack of atomic resolution has prevented identification of the local orientation of the \(a\) and \(b\) axes.
Tunneling spectroscopy at the centre (\(Y=0\)) of a vortex in Y123 at 4.2 K revealed two well defined low-energy conductance peaks at about \(\pm 5.5\) meV [8], which do not shift in energy with increasing distance from the vortex core. As seen in Fig. 2(a), they progressively weaken while at the same time, the superconducting coherence peaks recover over a distance of the order of the coherence length \(\xi\). Theory predicts a very different electronic vortex-core structure for a \(d\)-wave superconductor [7; 20]. Calculations find a conductance peak at zero bias at the vortex centre [Fig. 2(c)] that splits into two subgap conductance peaks, which continuously shift to higher energies with increasing distance from the core centre. This distance dependence is more like what had been observed in 2H-NbSe\({}_{2}\) [Fig. 2(b)] [13]. Thus, experiments seemed in contradiction with theory, where core spectra resembling those expected for a \(s\)-wave superconductor were measured in Y123, and spectra resembling those expected for a \(d\)-wave superconductor were measured in 2H-NbSe\({}_{2}\) (Fig. 2).
The presence of a ZBCP at the centre of a vortex in 2H-NbSe\({}_{2}\) was well understood. It is a direct consequence of the very small energy splitting (\(\Delta_{\rm SC}^{2}/E_{\rm F}\)) of the lowest CdGM bound states, too small to be resolved in the experiment presented in Fig. 2(b). Subsequent studies with improved energy resolution on different \(s\)-wave superconductors with a larger energy spacing between adjacent CdGM bound states [16; 17], confirmed the tunneling spectrum expected at the centre of a vortex with CdGM bound states shown in Fig. 2(d).
The puzzle of the peculiar, non-\(d\)-wave vortex-core spectra in Y123 was solved when a new experiment revealed that the two non-dispersing SGSs are not vortex-core states. They were found to persist throughout the sample, on and off the vortex cores, and even in the absence of any magnetic field as seen in Fig. 3(a). This observation prompted Bruer _et al._[21] to propose a phenomenological model to explain the tunneling conductance spectra measured on Y123 at 0.4 K. They were able to reproduce the zero-field tunneling conductance spectra assuming two parallel contributions [Fig. 3(b)]; one from a two-dimensional band distorted by the coupling to spin excitation
Figure 3: Phenomenological modelling of the tunneling conductance measured on Y123. (a) The solid line is the sum of the two spectra displayed in (b). It reproduces very well the experimental data measured in zero-field at 0.4 K shown as symbols. (b) Top: Vortex-core spectrum measured in a field of 6 Tesla at 0.4 K. Bottom: Calculated tunneling conductance for a \(d\)-wave superconductor with realistic band structure and coupling to spin excitations. (Adapted from [21]).
Figure 2: Tunneling conductance curves measured as a function of distance from the vortex core (\(Y=0\)) in (a) Y123 (adapted from [8]) and (b) 2H-NbSe\({}_{2}\) (adapted from [5]). Corresponding model calculations for the local density of states (LDOS) at the centre of a vortex core in a (c) \(d\)-wave and (d) \(s\)-wave superconductor (adapted from [7]).
and subject to conventional BCS \(d\)-wave pairing; and one from a non-superconducting incoherent bath.
While the phenomenological model illustrated in Fig. 3 does not explain the origin of the SGSs, it offers a scheme to eliminate their contribution from the vortex conductance maps, and to access the intrinsic vortex-core local density of states (LDOS). The assumption is that the non-superconducting contribution is affecting all the measured tunneling spectra, whether a magnetic field is applied or not. Hence, this unknown feature can be eliminated by subtracting a zero-field spectrum from every spectrum measured in an applied magnetic field. These experimental spectra can then be directly compared to theoretical ones obtained by subtracting the superconducting DOS from the vortex-core LDOS, both calculated for a superconductor with a \(d\)-wave order parameter [22]. The actual calculations are somewhat more complicated, because one needs to take into account the finite image size and resolution, and the real vortex distribution at the sample surface. The latter is crucial, because each vortex is in a slightly different local environment, meaning that it is subject to different screening currents, which modify the LDOS. The correspondence between theory and data in Fig. 4 is remarkable, including minute differences between vortex cores and between traces along the [100] and [110] directions [22]. These data and analysis were the first, although somewhat indirect, experimental evidences for the \(d\)-wave vortex-core structure. As we will see in the next section, recent STS studies in highly overdoped (OD) Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) provide much more direct evidence for the electronic structure of a \(d\)-wave vortex core [23].
### STS vortex mapping in Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\)
Detecting vortices by STS in Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) has been, and remains very challenging. This is surprising because, in contrast to Y123, reproducible atomic resolution STM topographic imaging and spectroscopy is routinely achieved on this compound. The first successful STS mapping of the vortex lattice in Bi2212 was reported on slightly under- and over-doped single crystals at 4.2 K and 6 Tesla [9]. Vortex contrast was extremely difficult to obtain, because the superconducting gap is very inhomogeneous along the sample surface -making the conductance at the coherence peak energy unsuitable to detect the vortices- and because there is no specific spectral feature associated with the vortex. The electronic structure of the vortex cores revealed in these experiments turned out to be very different from the theoretical expectations. Instead of the ZBCP expected at their centre for a \(d\)-wave superconductor [Fig. 2(b)], STS revealed a LDOS very similar to the normal state DOS measured above the superconducting transition temperature \(T_{c}\)[24]. Follow up experiments with better energy resolution later found two SGSs below the pseudogap inside the vortex core (Fig. 5) [10; 11]. These results have raised important questions about the pseudogap, its origin and its link with the superconducting state. The fact that the amplitudes of the pseudogap and the superconducting gap are similar and independent of temperature for different hole concentrations seems to favor a scenario where both have a common origin, consistent with an incoherent pairing state above \(T_{c}\)[25; 26].
The electronic structure of the vortex cores in slightly underdoped (UD) Bi2212 became even more intriguing when Hoffman _et al._[14] reported a \(\approx 4a_{0}\times 4a_{0}\) periodic charge modulation oriented along the Cu-O bonds in the vortex halo region [Fig. 1(c)]. This so-called _checkerboard_ was initially associated with some ordered electronic phase linked to the pseudogap [27]. It was later found that the checkerboard spanning the vortex halo in these compounds had both a low-energy dispersing, and a higher-energy (above \(\Delta_{\text{SC}}\)) non-dispersing character. The dispersing component was identified as a vortex enhanced quasiparticle interference (QPI) in Ca\({}_{2-x}\)Na\({}_{x}\)CuO\({}_{2}\)Cl\({}_{2}\)[28], and the non-dispersing component as a broken-spatial
Fig. 4: 90:90 nm\({}^{2}\) spectroscopic image of the vortex lattice on the (001) surface of Y123. The color scale is the ratio of the STS tunneling conductance at +5 mV and +17 mV. The four panels to the right correspond to the tunneling spectra along the indicated traces joining neighbouring vortices, divided by a tunneling spectrum away from any vortex core. (a) Experimental STS data at 6 Tesla. (b) Corresponding model calculations based on a \(d\)-wave superconducting gap. Note the excellent match between the data and model (see [22] for details).
Fig. 5: Tunneling spectroscopy of slightly UD Bi2212. (a) Tunneling spectra measured near the vortex centre (red) and outside the vortex cores (blue) in a field of 6 Tesla at 4.2 K (adapted from [10]). (b) Tunneling spectra measured as a function of increasing distance from the vortex core at 6 Tesla and 4.2 K (adapted from [10]). (c) Temperature-dependent tunneling conductance curves in the Meissner phase. The red spectrum corresponds to \(T_{c}\) (adapted from [9]).
symmetry state such as a CDW [29]. Note that in UD and nearly optimally-doped Bi2212, the dispersing checkerboard is very similar to the periodic charge modulation found throughout these samples in the Meissner phase, and away from the vortex cores in the mixed phase [27; 30; 29], while the non-dispersing component is similar to the one observed above \(T_{c}\)[31]. Angle-resolved photoemission, on the other hand, failed to find any evidence for a CDW, and explained the periodic charge modulations observed at all energies in the Meissner phase in terms of QPI [32].
A number of theoretical studies were conducted in order to explain why the ZBCP expected for a \(d\)-wave superconductor was systematically missing in the STS experiments. Since, until recently, most of the measurements were performed in UD and near optimally-doped Bi2212, the dominant suspicion was that the vortex-core signatures had in a way or the other to do with the pseudogap phase. Indeed, several theoretical explanations pointed the emergence of a competing non-superconducting static order [33; 34; 35] or the resurgence of strong dynamical pseudogap-state correlations [36] as the phenomena that remove the low-energy states from the vortex core. Other works invoked the development of a fully gapped secondary superconducting component [37; 20; 38; 39], or extrinsic effects like the anisotropy of c-axis tunneling [40].
Until 2021, all STM/STS studies of Bi2212 revealed this \(\approx 4a_{0}\times 4a_{0}\) periodic charge modulation in the vortex halo. The modulation amplitude was strongest in conductance maps measured close to the SGSs energy, suggesting a link between checkerboard and SGSs. All these results were obtained on UD to slightly overdoped (OD) Bi2212 crystals, which have a pseudogap above \(T_{c}\), further suggesting a link between the checkerboard and the pseudogap as mentioned above. A logical follow-up was to focus on heavily-OD Bi2212 (\(T_{c}\approx 52\) K), as it should behave more like a Fermi liquid, with no pseudogap or CDW reconstruction above a critical doping \(p_{c}\approx 0.19\)[42]. However, much to their surprise, Gazdic _et al._[23] observed the same vortex-core signature at 3 Tesla and 4.2 K (Fig. 6), with dispersing SGSs associated with a \(\approx 4a_{0}\times 4a_{0}\) periodic charge modulation. The high-field data on heavily-OD Bi2212 by Gazdic _et al._[23] remove any possible link between the pseudogap and SGSs or between the pseudogap and checkerboard. One notable difference to the earlier experiments was the absence of any checkerboard in zero magnetic field (except near some defects), unlike the reentrant CDW observed by Peng _et al._[43] in OD Bi2201 outside the pseudogap regime.
The pristine \(d\)-wave vortex-core structure was ultimately detected by Gazdic _et al._[23] in heavily-overdoped Bi2212 single crystals at 4.2 K and 0.16 Tesla (Fig. 7), an unprecedentedly low magnetic field for any HTS study by STS to date. The motivation for such a low field was to reduce the potential influence of neighbouring flux lines and come closer to the isolated vortex case considered in calculations. The challenge was to detect only few tiny vortices (\(\xi\approx 2\) nm) in a large field of view (vortex spacing \(\approx 120\) nm). The vortex core is identified at the centre of the 5 mV conductance map in Fig. 7(a). There is clearly no \(\approx 4a_{0}\times 4a_{0}\) charge modulation and the tunneling conductance spectrum at the centre of the vortex shows a clear peak structure at zero bias as expected for a \(d\)-wave superconductor [Fig. 7(b)]. Careful checks were performed to ensure the tunneling conductance features were due to the presence of a vortex, and not the spectral signature of a defect which can also feature a ZBCP.
The evolution of the tunneling conductance with increasing distance from the vortex centre in Fig. 7(b) closely follows the theoretical expectations for a \(d\)-wave vortex core in Fig. 7(d), with an increasingly large splitting of the peak at the Fermi level. Interestingly, the evolution of the tunneling conductance is slightly different along the nodal (110) and antinodal (100) directions, reflecting a small anisotropy in the Fermi velocity. We emphasize that this four-fold symmetric shape of the vortex-core is a consequence of the Fermi surface topology; it is not due to the \(d\)-wave symmetry of the superconducting gap (see supplemental material of reference [22]). The decay of the normalized zero-bias conductance \(f(r)\) allows one to extract the coherence length from \(f(r)=1-(1-\sigma_{0})\tanh(r/\xi)\) [pink line in Fig. 7(c)], where \(\sigma_{0}\) is the residual conductance far from the centre, \(r\) is the distance from the centre and \(\xi\) is the coherence length. The result is \(\xi\approx 2.7\) nm, implying an average perpendicular Fermi velocity of \(v_{F}\approx 167\) km/s consistent with band
Figure 6: Electronic structure of a checkerboard vortex core in heavily-OD Bi2212 (\(T_{c}\approx 52\) K) at 3 Tesla and 4.2 K. (a) STS conductance map at 5 mV revealing a vortex core and its checkerboard. (b) Tunneling conductance spectra measured along the arrow in panel (a). The red spectrum corresponds to the vortex centre. The pink and orange spectra delimit the core region where the SGSs develop and the superconducting coherence peaks are suppressed. The conductance scale corresponds to the lowest spectrum, the others are offset for clarity. (c) Fourier transform of the conductance map in (a). The red crosses correspond to the lattice Bragg peaks. (d) Momentum cuts along the orange line in (c) as a function of energy. The dashed line highlights the dispersion of the checkerboard \(q\)-vector. (adapted from [23]).
structure calculations for heavily-overdoped Bi2212. The reduced gap ratio \(2\Delta_{\rm SC}/k_{\rm B}T_{c}\) amounts to 4.5, close to 4.3, the BCS value for a weakly-coupled \(d\)-wave superconductor.
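The profile fit described above can be sketched with a standard least-squares routine; in the minimal sketch below the data points are illustrative placeholders, not the measured profile of Fig. 7(c).

```python
import numpy as np
from scipy.optimize import curve_fit

def f_profile(r, sigma0, xi):
    """Normalized zero-bias conductance across a vortex:
    f(r) = 1 - (1 - sigma0) * tanh(r / xi)."""
    return 1.0 - (1.0 - sigma0) * np.tanh(r / xi)

# r in nm; sigma: normalized zero-bias conductance along the (100) direction
# (illustrative values only)
r = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 8.0, 12.0])
sigma = np.array([1.00, 0.82, 0.67, 0.58, 0.50, 0.47, 0.46])
(sigma0, xi), _ = curve_fit(f_profile, r, sigma, p0=(0.5, 2.0))
print(f"sigma0 = {sigma0:.2f}, xi = {xi:.2f} nm")  # expect xi of a few nm
```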
## 4 Discussion and outlook
The search for \(d\)-wave vortices by scanning tunneling spectroscopy in high-temperature superconductors has come a long way since the first images reported in 1995 [8]. The most recent work by Gazdic _et al._[23] provides a number of answers to outstanding questions while raising new ones. Finding the expected \(d\)-wave core structure removes some of the unusual features previously attributed to \(d\)-wave vortex cores that have challenged new theories of the HTS ground state. The physical parameters extracted from the vortex core displayed in Fig. 7 are perfectly consistent, and place heavily-OD Bi2212 in the BCS mean-field regime. Gazdic _et al._[23] further invalidate any possible link between the dispersing checkerboard order and the pseudogap, since they observe it well into the overdoped region of the Bi2212 doping phase diagram, where no pseudogap has been measured. This result favors a QPI scenario [32] over charge order to explain the checkerboard at all energies.
Outstanding questions include the nature of the SGSs and checkerboard. While the origin of the SGSs remains unclear, there are several well established experimental facts. SGSs have been reported in Y123 [8, 44, 45] and in Bi2212 [10, 11, 46, 47, 23]. Hoogenboom _et al._[48] observed a linear correlation between the SGS energy and the superconducting gap in Bi2212 at various doping levels and in optimally-doped Y123, noticing this would make it challenging to consider them as discrete CdGM bound states, which scale as \(\Delta_{\rm SC}^{2}\).
In Bi2212, SGSs always appear alongside the periodic \(\approx 4a_{0}\times 4a_{0}\) charge modulation in the vortex halo. In heavily-OD Bi2212, whenever the ZBCP is observed, there is neither any SGSs nor any checkerboard [23]. The link between the SGSs and checkerboard in Bi2212 was highlighted several years ago by Levy _et al._[12], who noted that the checkerboard's modulation amplitude was maximal in conductance maps taken at the energy of the SGSs. This observation remains true in heavily-OD Bi2212 [23]. Some periodic charge modulations at the energy of the 6 mV SGSs in Fig. 3 have also been detected in optimally-doped Y123 in zero magnetic field at 0.4 Kelvin (Fig. 8) [49]. Although the spatial resolution is far from optimal, the \(750\times 750\) nm\({}^{2}\) conductance map in Fig. 8(a) reveals 4 twin boundaries (TBs) along the (110) or (110) crystallographic directions running diagonally through the image. Between them, one can distinguish a charge texture at \(45^{\circ}\) to the TB. This one-dimensional texture rotating by \(90^{\circ}\) across a TB is better seen in the \(180\times 180\) nm\({}^{2}\) image in Fig. 8(b). Taking an auto-correlation of the bottom right-hand region of panel (b) emphasizes stripes oriented at \(45^{\circ}\) from the twin boundaries in Fig. 8(c), i.e. running along the Cu-O bonds of the CuO\({}_{2}\) planes. Whether these charge modulations are related to the ones reported from other experimental techniques [50, 51, 52] remains an open question.
Another outstanding puzzle is why the \(d\)-wave vortex-core structure could so far only be observed in heavily overdoped Bi2212 at very low magnetic fields (Fig. 7), and why is the ZBCP replaced with SGSs and a checkerboard when increasing the magnetic field (Fig. 6). There is to the best of our knowledge no model calculation that predicts such a drastic change of the vortex-core structure for moderate field changes. It is pos
Figure 8: STS images of Y123 acquired at the SGS energy (6 mV) at 0.4 K. (a) \(750\times 750\) nm\({}^{2}\) image revealing 4 twin boundaries and alternating charge textures between them. (b) \(180\times 180\) nm\({}^{2}\) image revealing stripe-like modulations aligned with the Cu-O bonds and rotated by \(90^{\circ}\) across the twin boundary. (c) \(16\times 16\) nm\({}^{2}\) auto-correlation image of the bottom right region in panel (b) highlighting a charge stripe modulation with a period of about 5 nm on top of a weaker one of about 1 nm, both at \(45^{\circ}\) to the TB (adapted from [49]).
Figure 7: Electronic structure of a pristine \(d\)-wave vortex core in heavily-OD Bi2212 (\(T_{c}\approx 52\) K) at 0.16 Tesla and 4.2 K. (a) STS conductance map at 5 mV revealing a vortex core. There is no checkerboard; the background fluctuations correspond to inhomogeneities. (b) Tunneling conductance spectra measured along the arrow in panel (a) revealing the \(d\)-wave core structure predicted by Wang and MacDonald [7]. (c) Normalized zero-bias tunneling conductance across the vortex centre along the (100) direction (blue) and the corresponding calculated profile \(f(r)\) (pink). (d) LDOS along the (100) direction for a self-consistent isolated vortex in a tight-binding model [41] with doping \(p=0.23\) and \(d\)-wave gap \(\Delta=10\) meV. The energy resolution is 1 meV. The inset shows the zero-energy LDOS as a function of distance \(r\) from the vortex core and a fit \(\sim\exp(-r/2.3~\mathrm{nm})\).
It is possible that the modification is not due to the magnetic field, but to the local doping level: these samples are inhomogeneous, and it is not impossible that the STM tip is sensing regions only a few nanometers apart with quite different doping levels on the same sample surface. Indeed, a recent study by Datta _et al._[53] discusses the development of some "Mottness" in the vortex-core region of weakly-doped compounds, which triggers not only the vanishing of the ZBCP, but also the emergence of low-energy core states [53]. Liu _et al._[54] similarly report the vanishing of the vortex-core ZBCP in underdoped compounds. These authors further claim that the SGSs become strongly enhanced at large fields in the presence of a pair density wave (PDW). Edkins _et al._[55] claim to have identified a high-field-induced PDW state in the vortex halo of slightly UD Bi2212. All these studies rely on an electronic nematic or charge modulated background (CDW or PDW), which is assumed to form in the parent state of the pseudogap regime at high temperature (\(T>T_{c}\)), with superconductivity considered as a competing order emerging at \(T\leq T_{c}\)[56; 57]. These results do not explain why SGSs and conductance modulations are also present in the vortex halo of heavily-OD Bi2212, which does not have any pseudogap, and how the magnetic field strength might influence these features. Despite looking carefully for the PDW modulations described in these model calculations, we have so far not been able to detect any in our experiments, and this fascinating topic is the subject of continued investigations.
## 5 Acknowledgements
K. A. Müller and his team have given the community a fantastic system to explore with scanning probes. CR and IMA would like to thank all the collaborators who have made it possible to build and operate a number of scanning probe instruments in Geneva, all the crystal producers who have synthesized exceptional samples, and all the scientists who have contributed to this stimulating research over the years. CR, IMA and CB would like to dedicate this article to the late Professor Øystein Fischer, who initiated the research on HTS and scanning probes in Geneva. The exploration of vortices in high-temperature superconductors at the University of Geneva has been supported by the Swiss National Science Foundation through various funding programs (research grants, NCCR MaNEP and R'Equip) and by the DQMP (formerly DPMC).
| The mechanism of high-temperature superconductivity remains one of the major open problems of modern physics; here, scanning tunneling spectroscopy is used to image the vortex lattice in copper-oxide-based high-temperature superconductors and to measure the characteristic electronic structure of \(d\)-wave vortex cores.
|
2309.17329 | Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit
Point-Graph Networks | Pulmonary diseases rank prominently among the principal causes of death
worldwide. Curing them will require, among other things, a better understanding
of the many complex 3D tree-shaped structures within the pulmonary system, such
as airways, arteries, and veins. In theory, they can be modeled using
high-resolution image stacks. Unfortunately, standard CNN approaches operating
on dense voxel grids are prohibitively expensive. To remedy this, we introduce
a point-based approach that preserves graph connectivity of tree skeleton and
incorporates an implicit surface representation. It delivers SOTA accuracy at a
low computational cost and the resulting models have usable surfaces. Due to
the scarcity of publicly accessible data, we have also curated an extensive
dataset to evaluate our approach and will make it public. | Kangxian Xie, Jiancheng Yang, Donglai Wei, Ziqiao Weng, Pascal Fua | 2023-09-29T15:40:58 | http://arxiv.org/abs/2309.17329v2 | # Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit Point-Graph Networks
###### Abstract
Pulmonary diseases rank prominently among the principal causes of death worldwide. Curing them will require, among other things, a better understanding of the many complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. In theory, they can be modeled using high-resolution image stacks. Unfortunately, standard CNN approaches operating on dense voxel grids are prohibitively expensive. To remedy this, we introduce a point-based approach that preserves graph connectivity of tree skeleton and incorporates an implicit surface representation. It delivers SOTA accuracy at a low computational cost and the resulting models have usable surfaces. Due to the scarcity of publicly accessible data, we have also curated an extensive dataset to evaluate our approach and will make it public.
pulmonary tree labeling, graph, point cloud, implicit function, 3D deep learning.
## 1 Introduction
Pulmonary diseases [1, 2, 3] have become leading causes of global mortality [4], and pulmonary research has therefore gained increasing attention in recent years. In studies of pulmonary disease, understanding pulmonary anatomies through medical imaging is important due to the known association between pulmonary diseases and metrics inferred from lung CT images [5, 6, 7, 8, 9, 10].
The tree-shaped pulmonary structures--airways, arteries, and veins, as depicted by Fig. 1--have high branching factors and play a crucial role in the respiratory system. Multi-class semantic segmentation of the pulmonary trees, where each class represents a specific division or branch of the tree according to the medical definition of the pulmonary segments, is an effective approach to modeling their intricacies. In pulmonary tree labeling, the derived quantitative characteristics [6, 8, 11] are not only associated with lung diseases and pulmonary-related medical applications [9, 10] but are also crucial for surgical navigation [7]. This work focuses on methodologies for efficient and accurate anatomical labeling of pulmonary airways, arteries, and veins.
Among deep learning approaches, convolutional neural networks (CNNs) have become the _de facto_ standard approach to semantic segmentation [12, 13]. One of their strengths is that they yield volumes with well-defined surfaces. However, they are computationally demanding when processing large 3D volumes and often deliver unsatisfactory results when operating at a reduced resolution (Fig. 2 (a)) or on local patches (Fig. 2 (b)), leading to a lack of either detail or global context. In contrast, point-cloud representations [14, 15] have lower computational requirements while preserving global structures (Fig. 2 (c)). Besides, considering the inherent tree structure of pulmonary airways, arteries and veins, graph modeling (Fig. 2 (d)) that preserves the connectivity and structural topology is also viable [16, 17, 18]. Nevertheless, extracting usable surfaces from point clouds or graphs is non-trivial.
To be computationally efficient while enabling continuous surface definition and tree topology preservation, as illustrated in Fig. 2 (e), we introduce an approach that combines skeleton graph and point representations, with an implicit surface representation to yield a feature field. Connectivity constraints, based on the skeleton graphs, are imposed on the surfaces reconstructed from the feature field, all achieved at a low computational cost without sacrificing accuracy. The proposed _Implicit Point-Graph Network (IPGN)_ includes backbone point and graph networks, _Point-Graph Fusion_ layers for deep feature fusion and an _Implicit Point Module_ to model the implicit surface in 3D, allowing for fast classification inference at arbitrary locations. Furthermore, thanks to the flexibility of implicit representations, the IPGN trained for pulmonary tree labeling can be extended to pulmonary
Fig. 1: **Pulmonary Tree Labeling.** (a) The pulmonary tree consists of three anatomic structures (airway, artery and vein). (b) Given a binary volume representing a tree structure as input, we attempt to label each voxel into one of 19 classes based on the branching regions, _i.e._, the pulmonary segments.
segment reconstruction by simply modifying the inference method (Sec. 6).
As illustrated by Fig. 7, our approach produces high-quality dense reconstructions of pulmonary structures at an acceptable computational cost. To evaluate it quantitatively, we compiled the Pulmonary Tree Labeling (PTL) dataset illustrated in Fig. 1. It contains manual annotations for 19 different components of pulmonary airways, arteries, and veins, which will be made publicly available as a multi-class semantic segmentation benchmark for pulmonary trees\({}^{1}\). Our method achieves SOTA performance on this dataset while being the most computationally efficient.
Footnote 1: A URL with data and code will be provided in the final paper version.
## 2 Related Works
### _Pulmonary Anatomy Segmentation_
In pulmonary-related studies, image-based understanding of pulmonary anatomies is important as metrics inferred from lung imaging have shown to be related to severity, development and therapeutic outcome of pulmonary diseases [5, 6, 7, 8, 9, 10].
Previously, CNN-based methods have been tailored for comprehension of different pulmonary anatomies, such as the pulmonary lobe [19], airway [20], artery-vein [21, 22] and fissure [23]. Among the various pulmonary anatomies, tree-shaped pulmonary structures have drawn a lot of research attention. For the pulmonary airway tree, not only is proper segmentation crucial for surgical navigation [7]; segmentation-derived attributes like airway counts [6], wall thickness [11] and morphological changes [8] are also associated with lung diseases. For the pulmonary vasculature, including arteries and veins, quantitative attributes extracted from segmentation are also commonly applied in pulmonary-related medical applications like emboli detection [9] and hypertension assessment [10].
Previous works specifically on pulmonary tree segmentation either apply graph-only modeling or leverage graph-CNN multi-task learning. A Tree-Neural Network [16] leverages handcrafted features on a tailored hyper-graph for pulmonary bronchial tree segmentation to address the inherent problem of overlapping feature distributions. A recent work [18] on airway segmentation proposes graph modeling to incorporate structural context into the local CNN feature of each airway branch. Yet, both works merely provide labels for pulmonary tree branches that are pre-defined at the pixel level, and thus do not perform semantic segmentation, where defining accurate borders between neighboring branches remains a challenge. Applicable to binary or raw images of pulmonary structures, SG-Net [17] employs CNN features to detect landmarks, constructing graphs for CNN-graph multi-task learning.
Although these methods are graph-based, the graph construction procedure varies. While one treats each pre-defined branch as a node [18], disrupting the original spatial structure, another is parameter-based [16], making the quality of the constructed tree highly dependent on the parameter selection; finally, SG-Net [17] establishes its graph nodes by learned landmark prediction, whose structural quality cannot be ensured. In our setup, the skeleton graphs are based on a thinning algorithm [24], with no modeling or hyper-parameter tuning involved, and all spatial and connection relationships between tree segments are acquired directly from the original dense volume (Fig. 1). Additionally, as CNN methods incur a memory footprint that grows cubically with resolution, they are expensive when facing large 3D volumes.
### _3D Deep Learning_
Deep learning on 3D geometric data is a specialized field that focuses on extending neural networks to handle non-Euclidean domains such as point clouds, graphs (or meshes), and implicit surfaces.
Point-based methods have emerged as a novel approach for 3D geometric modeling. As sparse 3D data, point clouds can be modeled in a variety of ways, such as with multi-layer perceptrons [25, 26], convolutions [27, 28], graphs [29] and transformers [30, 31]. Point-based methods have also been validated in the medical image analysis domain [14, 15].
Since the initial introduction [32], graph learning has become a powerful tool for analyzing graph-structured data, being widely applied to medical images analysis [16, 17, 18] or bioinformatics [33]. Meshes represent a specialized form of graph, typically characterized by a collection of triangular faces to depict the object surfaces, and are extensively employed in the field of graphics. Additionally, they also maintain some applications in the domain of medical imaging [34, 35].
While point-based methods enable computation-efficient modeling on volumetric data, graph learning is lightweight and learns structural context within geometric data. Combining point and graph methods is advantageous in our task. However, extracting surfaces (or dense volumes) from sparse point-based or graph-based prediction is non-trivial, which leads us to introduce implicit surface representation to address this issue.
Deep implicit functions have been successful at modeling 3D shapes [36, 37, 38, 39, 40, 41, 42]. Typically, the implicit function predicts occupancy or signed distance at continuous locations, and is thus capable of reconstructing 3D shapes at arbitrary resolutions [43]. Moreover, implicit networks can be trained with randomly sampled points, which lowers the computational burden during training. These characteristics suggest that implicit functions have inherent advantages when reconstructing the sparse predictions of pulmonary tree labeling into dense volumes.
Fig. 2: **A Comparison of Data Representations for Pulmonary Tree Labeling.** The CNN-based methods are either low-resolution (down-sampled) or local (sliding-window). Standard sparse representations like points and graphs are global, but it is non-trivial to reconstruct a high-quality dense volume from them. Our method, which combines points, graphs, and implicit functions, produces high-quality dense reconstructions efficiently.
## 3 Problem Formulation
### Pulmonary Tree Labeling
In this study, we address the pulmonary tree labeling problem in 3D CT images. Specifically, given a binary volumetric image of a pulmonary tree, our objective is to provide accurate 19-class segmentation of the pulmonary airway, artery, and vein trees into segmental-level branches and components, demonstrated in Fig. 1 (b). Through this process, each foreground pixel will be assigned to its respective semantic class.
When evaluating the segmentation performance, we consider the presence of intersegmental veins and the potential ambiguity in their class assignment. Intersegmental veins are veins that lie along the border between two neighboring pulmonary segments [44], highlighted in Fig. 3. As pulmonary tree branches are involved in the boundary definition of pulmonary segments [43], intersegmental veins pose an inherent challenge in their class definition. To address this issue, we mask out the intersegmental veins during the evaluation and only focus on segmentation of the pulmonary airway, artery, and veins within each individual segment.
### Dataset
#### 3.2.1 Overview
From multiple medical centers, we compile the Pulmonary Tree Labeling (PTL) dataset containing annotated pulmonary tree structures for 800 subjects. For each subject, there are 3 volumetric images containing its pulmonary airway, artery, and vein, illustrated in Fig. 1 (b). In each volume, the annotations consist of 19 classes, where labels 1 to 10 represent branches located in the left lung, labels 11 to 18 represent those in the right lung, and class 19 represents extra-pulmonary structures while 0 is background.
All 3D volumes have shapes \(N\times 512\times 512\), where \(512\times 512\) represents the CT slice dimension, and \(N\) denotes the number of CT slices, ranging from 181 to 798. Z-direction spacings of these scans range from 0.5mm to 1.5mm. Manual annotations are produced by a junior radiologist and verified by a senior radiologist. During the modeling process, the images are converted to binary as input, and the original annotated image is the prediction target. The dataset is randomly split into 70% train, 10% validation, and 20% test samples.
#### 3.2.2 Skeleton Graph
We utilize the software application VesselVio [45], which applies a thinning algorithm [24], to pre-process the volumetric data and derive a skeleton graph in 3D space for each pulmonary structure, illustrated by Fig. 2 (d). The graphs consist of nodes that represent branch bifurcation points and edges that represent anatomical connections. In this manually derived graph dataset, the label for each graph node and edge is recovered from the dense volume. Therefore, the task within this graph dataset involves performing a 19-class classification for both graph nodes and edges.
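Since VesselVio is used as an off-the-shelf tool here, the sketch below only illustrates the general recipe: thin the binary volume to a skeleton, connect adjacent skeleton voxels, and treat voxels of degree other than two as graph key-points. The use of scikit-image and networkx is an assumption for illustration; the actual VesselVio pipeline differs in its details.

```python
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize  # supports 3D volumes in recent scikit-image

def skeleton_graph(volume: np.ndarray):
    """Thin a binary pulmonary-tree volume and build a voxel-level skeleton graph."""
    skel = skeletonize(volume.astype(bool))
    coords = set(map(tuple, np.argwhere(skel)))
    g = nx.Graph()
    g.add_nodes_from(coords)
    # Connect 26-adjacent skeleton voxels.
    offsets = [(i, j, k)
               for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
               if (i, j, k) != (0, 0, 0)]
    for c in coords:
        for o in offsets:
            n = (c[0] + o[0], c[1] + o[1], c[2] + o[2])
            if n in coords:
                g.add_edge(c, n)
    # Degree-1 voxels are branch tips, degree>2 voxels are bifurcation points;
    # contracting the degree-2 chains between them yields the node/edge graph.
    keypoints = [v for v in g if g.degree(v) != 2]
    return g, keypoints
```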
### Evaluation Metrics
For segmentation performance evaluation, classification accuracy and micro-averaged dice scores are used as metrics. For point-level results on dense volume, the classification accuracy measures the percentage of correctly classified points in the dense volume. The dice score, on the other hand, assesses the overlap between the predicted segment and the ground truth segment, providing a measure of similarity. For graph-level node and edge classification, the same metrics can be applied. The classification accuracy measures the percentage of correctly classified nodes and edges in the pre-processed skeleton graph dataset (Sec. 3.2.2). The dice score can also be used to assess the similarity of the predicted graph structure with the ground truth graph structure.
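For concreteness, a minimal NumPy implementation of the two metrics might look as follows; pooling all 19 foreground classes jointly for the micro-averaged dice is our reading of the setup, not a detail spelled out above.

```python
import numpy as np

def accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of correctly classified foreground elements."""
    return float((pred == gt).mean())

def micro_dice(pred: np.ndarray, gt: np.ndarray, num_classes: int = 19) -> float:
    """Micro-averaged Dice: pool TP/FP/FN over all foreground classes."""
    tp = fp = fn = 0
    for c in range(1, num_classes + 1):
        p, g = pred == c, gt == c
        tp += np.sum(p & g)
        fp += np.sum(p & ~g)
        fn += np.sum(~p & g)
    return float(2 * tp / (2 * tp + fp + fn))
```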
## 4 Methodology
Our objective is centered around the anatomical labeling of pulmonary trees. That is to say, given an input of binary volumes of a pulmonary tree--derivable from manual annotations or model predictions--we aim to execute a 19-class semantic segmentation on every non-zero voxel. However, diverging from standard methodologies reliant on CNNs, we down-sample the raw dense volume to a point cloud while concurrently extracting a skeleton graph. Our approach (Sec. 4.1) engages in representation learning individually on the sparse representations of both the point cloud and the skeleton graph, and a fusion module is employed to perform deep integration of point-graph representations (Sec. 4.2). Ultimately, to reconstruct the predictions based on sparse representations back to dense, we introduce implicit functions to facilitate efficient reconstruction (Sec. 4.3).
### Implicit Point-Graph Network Architecture
Given a binary volumetric image of a pulmonary tree (Fig. 4 (a)), a graph (Fig. 4 (b)) is constructed with VesselVio [45] from the original volume and a set of points are randomly sampled from the tree voxels to construct a point cloud (Fig. 4 (c)). While the point cloud is a sparse representation of the volume, the graph represents a skeleton of the pulmonary tree.
We first introduce the general notation rule for both point and graph elements. While the coordinates of \(M\) points and \(N\) graph nodes are represented as **P** and **G**, single point or graph element is expressed as **p** and **g**, where \(\textbf{P}=\{\textbf{p}_{1},\textbf{p}_{2},...,\textbf{p}_{m}\}\), \(\textbf{G}=\{\textbf{g}_{1},\textbf{g}_{2},...,\textbf{g}_{n}\}\). The superscript notation \(\textbf{p}^{(1)}\) represents an element's feature at the \(i\)-th network layer.
At input, the 3-dimensional \(\{x,y,z\}\) point coordinates, **P**\(\in\mathbb{R}^{M\times 3}\) and graph nodes, \(\textbf{G}\in\mathbb{R}^{N\times 3}\) are utilized as initial feature. We use a point neural network, and a graph neural
Figure 3: **Visualization of Pulmonary Tree and Pulmonary Segment Anatomy. Each pulmonary tree branch corresponds to a pulmonary segment. The intersegmental vein, which lies along the pulmonary segment border, is highlighted in red.**
network as initial feature encoders, from which we extract a 128-dimensional intermediate feature for each point and graph node, expressed as \(\textbf{P}^{(0)}\in\mathbb{R}^{M\times 128}\) and \(\textbf{G}^{(0)}\in\mathbb{R}^{N\times 128}\).
Subsequently, initial features from both branches, \(\textbf{P}^{(0)}\) and \(\textbf{G}^{(0)}\), are incorporated within one or multiple _Point-Graph Fusion_ layers, which allow for two-way feature integration based on feature propagation [46] and ball-query & grouping [26]. Let the input to a Point-Graph Fusion layer be defined as \(\textbf{P}^{(i-1)}\) and \(\textbf{G}^{(i-1)}\); the features out of the fusion layer are \(\textbf{P}^{(i)}\) and \(\textbf{G}^{(i)}\). The last Point-Graph Fusion layer outputs \(\textbf{P}^{(l)}\) and \(\textbf{G}^{(l)}\) after \(l\) Point-Graph Fusion layers of deep feature fusion. Finally, a lightweight MLP network and a GNN project the fusion features to 19-dimensional vectors for graph (Fig. 4 (d)) and point predictions (Fig. 4 (e)).
An _Implicit Point Module_ is further introduced to reconstruct the dense volumes; it consists of a feature propagation process and an MLP network. As features are extracted by the Point-Graph Network, the Implicit Point Module leverages the extracted multi-stage point features for fast dense volume segmentation. Given a query point \(\textbf{p}_{q}\) at arbitrary coordinates, the module locates \(\textbf{p}_{q}\)'s \(k\)-nearest point elements from the point cloud, \(\{\textbf{p}_{1},\textbf{p}_{2},...,\textbf{p}_{k}\}\), and extracts their multi-stage features \(\{\textbf{z}_{1},\textbf{z}_{2},...,\textbf{z}_{k}\}\) from the backbone network for feature propagation into a multi-stage representation \(\textbf{z}_{q}\) of the query point \(\textbf{p}_{q}\). After propagating the point feature \(\textbf{z}_{q}\), the MLP network \(\mathcal{H}\) is utilized to make class predictions. By applying this process to all foreground points, we can efficiently generate a dense volume reconstruction (Fig. 4 (f)).
To avoid naming ambiguity, we refer to the aforementioned complete network as the _Implicit Point-Graph Network (IPGN)_, and to the network without the implicit part as the _Point-Graph Network (PGN)_.
### Point-Graph Feature Fusion
The essence of our point-graph fusion learning approach lies in leveraging coordinate information as a basis for querying neighboring elements in the opposite branch. To achieve feature integration, we adopt ball-query & grouping for point-to-graph feature merging and feature propagation for graph-to-point feature merging.
For the ball-query & grouping method in the \(i\)-th Point-Graph Fusion layer, a graph node **g** searches for all opposite point elements within a given ball with radius \(r\), denoted \(\{\textbf{p}_{1},\textbf{p}_{2},...,\textbf{p}_{b}\}\). Then, an MLP module \(\mathcal{F}_{1}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) independently projects the point feature vectors to an updated representation of all queried points. Then, a feature-wise max-pooling layer aggregates all updated point features into the point-context representation of the node **g**, expressed as:
\[\textbf{g}^{(i)}_{bg}=\max_{j}\left(\mathcal{F}_{1}(\textbf{p}^{(i-1)}_{j})\right) \tag{1}\]
Subsequently, the ball-queried feature \(\textbf{g}^{(i)}_{bg}\) is combined with the current feature \(\textbf{g}^{(i-1)}\) before using a graph neural network \(\mathcal{G}:\mathbb{R}^{2D}\rightarrow\mathbb{R}^{D_{next}}\) to perform graph convolution for feature fusion, resulting in an updated feature representation of the node, \(\textbf{g}^{(i)}\in\mathbb{R}^{D_{next}}\), as input to the next Point-Graph Fusion layer.
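A minimal dense-tensor sketch of this ball-query & grouping step (Eq. 1) in PyTorch is given below; practical implementations would instead gather a capped neighbor list (at most 24 points in our setup) and batch over samples, but the masked max-pool captures the idea.

```python
import torch
import torch.nn as nn

def ball_query_pool(node_xyz, point_xyz, point_feat, mlp: nn.Module, r=0.1):
    """Point-to-graph fusion (Eq. 1): for every graph node, max-pool the
    MLP-projected features of all points falling inside a ball of radius r."""
    d = torch.cdist(node_xyz, point_xyz)   # (N, M) pairwise distances
    in_ball = d <= r                       # (N, M) ball-membership mask
    h = mlp(point_feat)                    # (M, D) projected features F_1(p)
    h = h.unsqueeze(0).expand(in_ball.size(0), -1, -1).clone()
    h[~in_ball] = float("-inf")            # exclude points outside each ball
    g_bq = h.max(dim=1).values             # (N, D) feature-wise max-pool
    g_bq[torch.isinf(g_bq)] = 0.0          # handle nodes whose ball is empty
    return g_bq
```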
For feature fusion from graph to point, feature propagation is utilized. In the process, each query point **p** with feature \(\textbf{p}^{(i-1)}\in\mathbb{R}^{D}\) at the \(i\)-th fusion layer locates its \(k\)-nearest graph nodes \(\{\textbf{g}_{1},\textbf{g}_{2},...,\textbf{g}_{k}\}\) in coordinate space. With these \(k\)-nearest neighbors, the query point **p** acquires a summarized graph feature \(\textbf{p}^{(i)}_{fp}\) by weighted summation (Eq. 2) of the \(k\) node features \(\{\textbf{g}^{(i-1)}_{1},\textbf{g}^{(i-1)}_{2},...,\textbf{g}^{(i-1)}_{k}\}\in\mathbb{R}^{D}\), where the weights are the normalized reciprocal distances. Let the distances between the query point and its \(k\) neighbor nodes be \(\{d_{1},d_{2},...,d_{k}\}\); the propagation can be expressed as:
Figure 4: **Overview of the proposed Implicit Point-Graph Network (IPGN) for Pulmonary Tree Labeling. The pipeline pre-processes dense volume to graph and point cloud input for feature fusion learning. The _Point-Graph Fusion_ layers enhance point features with graph context, and the _Implicit Point Module_ produces dense prediction efficiently.**
Figure 5: **Point-Graph Fusion Layer. The details of the Point-Graph Fusion layer are presented. Here, \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) are MLP networks while \(\mathcal{G}\) represents a graph neural network.**
\[\mathbf{p}_{fp}^{(i)}=\frac{\sum_{j=1}^{k}\mathbf{g}_{j}^{(i-1)}\times\frac{1}{d_{j}}}{\sum_{l=1}^{k}\frac{1}{d_{l}}} \tag{2}\]
Then, the aggregated feature for point **p**, \(\mathbf{p}_{fp}^{(i)}\in\mathbb{R}^{D}\), is concatenated with the incoming point feature \(\mathbf{p}^{(i-1)}\) to create \(\mathbf{p}_{concat}^{(i)}\in\mathbb{R}^{2D}\). Finally, an MLP module \(\mathcal{F}_{2}:\mathbb{R}^{2D}\rightarrow\mathbb{R}^{D_{next}}\) projects the concatenated point feature to the input dimension of the next layer as \(\mathbf{p}^{(i)}\in\mathbb{R}^{D_{next}}\).
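The graph-to-point direction of Eq. 2 amounts to inverse-distance-weighted \(k\)-NN interpolation, which can be sketched in a few lines of PyTorch (the small \(\epsilon\) guarding against zero distances is our addition):

```python
import torch

def feature_propagation(query_xyz, node_xyz, node_feat, k=3, eps=1e-8):
    """Graph-to-point fusion (Eq. 2): inverse-distance weighted average of the
    k nearest graph-node features at each query point."""
    dist = torch.cdist(query_xyz, node_xyz)      # (M, N)
    d, idx = dist.topk(k, dim=1, largest=False)  # k smallest distances per point
    w = 1.0 / (d + eps)                          # reciprocal-distance weights
    w = w / w.sum(dim=1, keepdim=True)           # normalize (Eq. 2 denominator)
    return (w.unsqueeze(-1) * node_feat[idx]).sum(dim=1)  # (M, D)
```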
### Implicit Dense Volume Reconstruction
To acquire dense volume segmentation results, the naive method is to sample all points from the pulmonary tree and group them into multiple non-overlapping point clouds for full inference. For example, for point clouds containing \(6k\) points each, when the total number of foreground points in the dense volume is around \(180k\), the number of forward passes required to reconstruct the dense volume would be around \(30\). However, such repeated inference is computationally inefficient because the graph input remains identical and the point-cloud context is globally unchanged across the \(30\) inferences.
To avoid repetitive computation, we propose the _Implicit Point Module_ in Fig. 6 for efficient inference at arbitrary points, enabling fast dense volume reconstruction in three steps. First, for arbitrary point coordinates \(\mathbf{p}_{q}=(x_{q},y_{q},z_{q})\in\mathbb{R}^{3}\) as input, its \(k\) nearest-neighbor points \(\{\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{k}\}\) in the point cloud (Fig. 6 (a)) are queried. Second, for the \(i\)-th nearest neighbor point \(\mathbf{p}_{i}\), its corresponding features at different stages of the network are extracted and concatenated to form a multi-stage feature vector \(\mathbf{z}_{i}=\{\mathbf{p}_{i}^{(0)}\frown\mathbf{p}_{i}^{(1)}\frown...\frown\mathbf{p}_{i}^{(l)}\}\), where \(l\) denotes the number of Point-Graph Fusion layers. A feature propagation (Fig. 6 (b)), similar to Eq. 2, is performed to aggregate \(\{\mathbf{z}_{1},\mathbf{z}_{2},...,\mathbf{z}_{k}\}\) into the feature representation \(\mathbf{z}_{q}\) for the query point. Finally, an MLP network \(\mathcal{H}\) projects the feature \(\mathbf{z}_{q}\) into a 19-dimensional vector for final classification (Fig. 6 (c)). For dense volume segmentation results, we simply sample all foreground points and query them through the module for prediction.
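Reusing the feature-propagation sketch above, the module and the resulting dense reconstruction can be outlined as follows; the chunk size is an arbitrary choice for illustration, and `head` stands for the MLP \(\mathcal{H}\):

```python
import torch

@torch.no_grad()
def implicit_query(q_xyz, cloud_xyz, multistage_feat, head, k=3):
    """Implicit Point Module: propagate concatenated multi-stage point features
    to arbitrary query coordinates, then classify with the MLP head H."""
    z_q = feature_propagation(q_xyz, cloud_xyz, multistage_feat, k=k)
    return head(z_q)  # (Q, 19) class logits

@torch.no_grad()
def dense_reconstruction(fg_xyz, cloud_xyz, multistage_feat, head, chunk=65536):
    """With a single backbone pass already done, only the lightweight implicit
    head runs repeatedly, in chunks, to label every foreground voxel."""
    out = []
    for i in range(0, fg_xyz.size(0), chunk):
        logits = implicit_query(fg_xyz[i:i + chunk], cloud_xyz, multistage_feat, head)
        out.append(logits.argmax(dim=1))
    return torch.cat(out)
```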
### Model Details
The IPGN is a customizable pipeline. For the ball query in point-to-graph fusion, we set the ball radius to \(r=0.1\), and the maximum number of queried points to 24. For feature propagation in both graph-to-point fusion and the Implicit Point Module, we set \(k=3\) for the \(k\)-nearest neighbor search. Throughout the Point-Graph Network backbone, the intermediate fusion features are 128-dimensional.
#### 4.4.1 Training
At the point branch, we experimented with two point models, PointNet++ [26] and Point Transformer [31], as initial feature encoders. For the graph encoder, we apply an 11-layer GAT [47] as the initial feature encoder. Prior to the training of the network, the feature encoders at the point and graph branches are independently trained on the corresponding point and graph segmentation tasks and kept frozen.
The Point-Graph Network and the Implicit Point Module are trained simultaneously. First, we perform a forward pass of the Point-Graph Network, generating multi-stage point and graph features as well as predictions for the \(M\) input points and \(N\) graph elements. Subsequently, another set of \(M^{\prime}\) foreground points is randomly sampled, and a forward pass of the Implicit Point Module produces \(M^{\prime}\) additional predictions. After acquiring the \(M+M^{\prime}\) point and \(N\) graph predictions, we apply the Cross-Entropy loss for training.
The two point-encoder candidates, PointNet++ and Point Transformer, are both trained for 120 epochs with a learning rate of 0.002, while the GAT graph encoder is trained for 240 epochs with a learning rate of 0.02. The IPGN pipeline is trained for 100 epochs with a learning rate of 0.01, and every 45 epochs the learning rate is halved. To improve model robustness, we employ random rotation, shift, and scaling as data augmentation during training.
#### 4.4.2 Inference
Testing on dense volume input involves a 2-step procedure. First, the input point cloud and graph are passed through the backbone Point-Graph Network to generate multi-stage features. In the second step, we sample all foreground points of the pulmonary tree structure and feed them to the Implicit Point Module in an iterative manner for predictions. With predictions for all dense volume elements, we simply reconstruct the volume by placing the predicted point labels at their 3D coordinates.
## 5 Experiments
### Experiment Setting
By default, all experiments use the dense volume as initial input. While CNNs operate naturally on dense volumes, we pre-process the dense volume into point clouds and skeleton graphs for the point and graph experiments. We present the experiment metrics at the point level and graph level. The performance at graph level represents how well structural components and connections within a graph are recognized. Moreover, point-level dense volume evaluation can be achieved using CNN methods, point-based methods with repeated inference, and graph-based models after post-processing (Sec. 5.3). Therefore, the point-level evaluations across convolution, point, and graph methods are consistent and fair.
#### 5.1.1 CNNs
In CNN experiments, given a 3D input, the task involves providing semantic segmentation prediction for the pulmonary tree in the image and all CNN methods apply a combination of dice loss and the cross-entropy loss for training. During testing, the image background is excluded from the metric computation.
Figure 6: **Implicit Point Module. For any query point, the Implicit Point Module consumes multi-stage features from a Point-Graph Network with feature propagation and a neural network to provide a label. \(\mathcal{H}\) represents an MLP.**
Among the CNN experiments, we employ 3D-Unet [48] as the basic setup. Based on the 3D-Unet, we also implement a multi-task key-point regression method [17], abbreviated as "3D-Unet + KP". More specifically, an additional regression head predicts a heatmap representing how likely each location is to be a graph node, i.e., a key-point. In these two CNN-based experiments, data are down-sampled to \(96\times 96\times 96\) due to limited GPU memory during training and validation. During testing, the input is down-sampled for inference and re-scaled to the original dimension \(N\times 512\times 512\) for evaluation.
To address the issue of high memory usage and information loss by compromised resolution, we apply a sliding window approach in another CNN experiment, in which local 3D patches with dimension \(96\times 96\times 96\) from the original image are used for training and validation purposes. During testing, the predictions obtained from the sliding-window technique were assembled back onto the original image for evaluation.
#### 5.1.2 Point Clouds
Point-based experiments treat the set of tubular voxels of a pulmonary tree as a point cloud, a sparse representation for modeling. At the output, the point-based model provides per-point classification predictions.
During training and validation, we randomly sample 6000 foreground elements as the point cloud input. During testing, all foreground points are sampled, randomly permuted, and grouped into multiple point clouds. Each point cloud containing 6000 points is then iteratively fed into the model for evaluation. Consequently, the inference process provides a prediction for every foreground element as the dense volume result. In terms of baselines, PointNet [25], PointNet++ [26] and Point Transformer [31] are tested.
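This repeated-inference baseline can be sketched as below, with `point_model` standing for any of the tested point networks; we assume the model accepts a final chunk of fewer than 6000 points.

```python
import torch

@torch.no_grad()
def pointwise_dense_predict(point_model, fg_xyz, n=6000):
    """Baseline dense inference: randomly permute all foreground voxels and run
    the point network on consecutive point clouds of (up to) n points each."""
    perm = torch.randperm(fg_xyz.size(0))
    labels = torch.empty(fg_xyz.size(0), dtype=torch.long)
    for i in range(0, fg_xyz.size(0), n):
        idx = perm[i:i + n]
        logits = point_model(fg_xyz[idx].unsqueeze(0))  # assumed (1, n, 19) output
        labels[idx] = logits.argmax(dim=-1).squeeze(0)
    return labels
```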
#### 5.1.3 Graphs
Graph experiments utilize the skeleton graph generated from the dense volume by the VesselVio software [45] as the graph structure, and evaluate networks' ability to recognize key-points and structural connections within the skeleton tree, represented by per-node and per-edge performance, respectively.
In the graph experiments, we leverage multiple graph neural networks (GNNs), such as GAT [47] and GraphSage [50], as well as a graph network with pre-trained point-based features as input. We ensure fair comparisons across all GNNs by using 14 layers in each. After acquiring node features, the features of the source node and destination node are averaged to form edge features, before an MLP network projects all features to 19 dimensions for the final predictions.
Furthermore, we implement a post-processing technique to dilate graph-based predictions to dense volume predictions. Specifically, each voxel is assigned the label of its nearest graph element. Therefore, all graph baselines also provide point-level prediction metrics.
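This nearest-element dilation is a plain nearest-neighbor lookup; one way to implement it is with a KD-tree, as in the sketch below:

```python
import numpy as np
from scipy.spatial import cKDTree

def dilate_graph_prediction(fg_xyz, skel_xyz, skel_labels):
    """Post-processing: every foreground voxel inherits the label of its
    nearest skeleton-graph element."""
    _, nearest = cKDTree(skel_xyz).query(fg_xyz, k=1)
    return skel_labels[nearest]
```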
#### 5.1.4 Ours
As discussed in depth in Section 4, we perform experiments combining point learning with graph learning. For the proposed Point-Graph Network, the input and output setups for the point branch and graph branch are identical to those of the point and graph experiments. To speed up dense reconstruction, we incorporate the Implicit Point Module for point-based prediction, evaluated at the point level.
### Model Performance Comparison
In this section, we perform a comparative analysis based on Table 1. The statistics presented in this section are the average performance over the three pulmonary structures.
At graph-level evaluation, among GNN methods with graph-only context, the GAT [47] model, with minimal tuning, outperforms most baselines by a considerable margin, displaying the advantage of the attention mechanism in the pulmonary tree setting. In addition to graph modeling with 3D coordinate features, the performances of the GAT [47] model and a hypergraph [16]
Table 1: **Performance Comparison on the PTL Dataset.** Point-level and graph-level accuracy (Acc) and Dice for airway, artery, and vein segmentation, comparing voxel-based methods (3D-Unet variants), point-based methods (PointNet, PointNet++, Point Transformer), graph-based methods (GCN, GraphSage, HyperGraph and GAT variants, with coordinate or handcrafted features), and point-graph methods (GAT with PointNet++ features, PGN, IPGN).
model with handcrafted features [16] are presented. Compared to applying coordinate features as input, GAT with handcrafted features suffers a slight drop of 0.4% in accuracy. On the other hand, applying handcrafted features to the hypergraph [51] model also translates to a performance drop. These results suggest that, in graph settings, tailor-designed features do not provide more valuable information than 3D coordinates. Further insights are provided in Sec. 5.5.1.
For graph performance in settings with both graph and point context, GAT (PointNet++ feature) has the lowest performance among all. Nevertheless, it still outperforms any other graph-context-only baseline by a margin of around 1.4% in accuracy at minimum. Such a gap in performance indicates that the integration of point-context into graph learning is beneficial. For the proposed Point-Graph Network and IPGN, their performances beat all baselines in both metrics, displaying superiority in point-graph fusion learning.
At point-level, CNN methods achieve the worst performance. Two 3D-Unet [17], [48] methods based on down-sampled input yield unsatisfying metrics, indicating that training CNN methods on reduced resolution leads to inferior modeling. As an attempt to train on the original resolution without memory restriction, the 3D-Unet [48] experiment with sliding-window strategy reports the poorest performances at both accuracy and dice, likely due to lack of global context.
Among point-based methods, the Point Transformer [31] achieves the overall best performances. In terms of graph-based methods, their point-level results generally report lower accuracy, while offering relatively high dice scores compared to point baselines despite the lack of local shape context: The highest dice score from graph methods is only 1.7% behind that of Point Transformer. In our view, such dice quality based on graph can be attributed to the accurate prediction of graph nodes, which is equivalent to border prediction as nodes represent bifurcation points in a tree. Additionally, in CNN, point-based, and the point-graph fusion methods, accuracy is generally higher than dice score. We believe that because the extra-pulmonary structure (Fig. 1, colored in light blue) possesses a large volume and has a distinct location, it could be easily recognized, boosting the accuracy over the dice score.
For settings with point and graph context, the proposed Point-Graph Network and IPGN based on pre-trained PointNet++ [26] feature reports similar performance as Point Transformer [31], showing that tree-topology context provides a similar benefit as the long-range dependency information from Point Transformer. Point-Graph Network and IPGN based on Point Transformer achieve SOTA results among all settings.
### Dense Volume Reconstruction
In this subsection, we focus on the Implicit Point Module (Fig. 6), which aims to enhance the efficiency of labeled dense volume reconstruction (Fig. 4 (f)). As discussed previously in Sec. 1, integrating the implicit module is necessary because high-performing point models like Point Transformer [31] can still be computationally expensive at test time: with repeated inference for dense prediction, their computation cost grows cubically, similar to CNNs. Experiments were conducted to compare the efficiency of the Implicit Point Module against the plain Point-Graph Network as well as various baseline methods. Reconstruction efficiency is reported as the average run-time, in seconds, for dense volume prediction across the Pulmonary Tree Labeling test set.
Regarding volume reconstruction efficiency, the test times of the convolution, graph-based, point-based, and our methods were measured\({}^{2}\). For all setups, we evaluate and present the inference time and quality for dense volume segmentation in Table 2. Apart from model inference costs, the test time is composed of different operations in different setups. For CNNs with down-sampled input, the test time includes the down-sampling and up-sampling operations. For graph baselines, the time measurement takes post-processing into account. Finally, the test time for IPGN includes a forward pass of the Point-Graph Network and the subsequent Implicit Point Module inference on all remaining points.
Footnote 2: To make it reproducible, the measurement was conducted on a free Google Colab T4 instance—CPU: Intel(R) Xeon(R) CPU @ 2.00GHz, GPU: NVIDIA Tesla T4 16Gb, memory: 12Gb.
The results demonstrated the superiority of the IPGN in dense volume reconstruction. Qualitatively, the Point-Graph Network and IPGN achieve nearly identical performance. In terms of efficiency, IPGN only requires \(1.5\) seconds on average with PointNet++ [26] as point encoder and \(2.3\) seconds with Point Transformer [31]. Although the graph-based method records a similar time cost, it fails to generate quality results. Further, all other methods are relatively inefficient. Therefore, the proposed IPGN pipeline is the all-around optimal solution for dense volume segmentation for pulmonary structures.
These findings highlight the practical utility of the proposed module, as it allows for fast and efficient point inference without compromising quality. The Implicit Point Module serves as a valuable contribution in overcoming the computational challenges associated with analyzing the pulmonary tree, enabling rapid decision-making in clinical settings.
### Qualitative Analysis with Visualization
In the qualitative analysis, we employ two concrete cases each from the airway, artery, and vein trees to demonstrate the efficacy of our method. Fig. 7 (a-f) showcases the results of full pulmonary tree segmentation using the graph-only method, the point-only method, and the Implicit Point-Graph Network, against the ground truth.
The graph-based predictions in Fig. 7 (b-d) reveal instances of incorrect graph predictions leading to anomalous outcomes, where multiple class labels appear in the same branch, thereby inducing inconsistent branch predictions. For the point-based method, due to its sparse representation in individual forward passes, point predictions often lose detail and disrupt the borders between branches (Fig. 7 (a,c,e)), resulting in volumes that lack smoothness and quality.
Conversely, the proposed IPGN offers various advantages. Firstly, our method accurately classifies distal branches, illustrated by dense volume prediction in Fig. 7 (d,f). Additionally, our method effectively defines clear and uninterrupted borders between the child branches at bifurcation points. This is
especially valuable when a parent node branches into multiple child branches belonging to different classes, as demonstrated in Fig. 7 (b,e). By integrating graph and point context into the modeling process, our method enhances the segmentation capabilities at bifurcation points and distal branches, ultimately producing smoother and more uniform branch predictions.
### Ablation Experiments
#### 5.5.1 Feature Input Selection
In this section, we examine the effect of different input features for the skeleton graph nodes. The candidates are the 3D coordinate feature and the handcrafted feature. While the coordinate feature is simply the \(\{x,y,z\}\) coordinates, the handcrafted feature follows the feature design from TNN [16], containing structural, positional, and morphological features.
We perform experiments with a GAT [47] and a 5-layer MLP network to evaluate the impact of coordinate features and handcrafted features, reported in Table 3. In the MLP experiment, the handcrafted feature achieves 81.4% node accuracy on average while the raw coordinate feature only reaches 74.4%. Conversely, in the experiment with GAT [47], applying the
| Methods | Model | Airway: Time (s) / Acc (%) / Dice (%) | Artery: Time (s) / Acc (%) / Dice (%) | Vein: Time (s) / Acc (%) / Dice (%) |
| --- | --- | --- | --- | --- |
| _Voxel_ | 3D-Unet [48] (down-sampled) | 8.16 / 62.9 / 58.5 | 8.02 / 67.1 / 61.4 | 7.46 / 63.0 / 54.0 |
| _Voxel_ | 3D-Unet [48] (sliding-window) | 9.43 / 61.0 / 39.8 | 11.88 / 51.0 / 22.5 | 12.44 / 52.6 / 28.7 |
| _Point_ | PointNet++ [26] | 4.51 / 90.1 / 82.9 | 9.18 / 89.0 / 82.5 | 9.65 / 82.5 / 75.7 |
| _Point_ | Point Transformer [31] | 10.97 / 91.1 / 87.8 | 12.94 / 90.1 / 86.5 | 23.25 / 83.4 / 77.7 |
| _Graph_ | GAT [47] | 1.15 / 86.9 / 86.3 | 2.05 / 84.0 / 83.4 | 2.24 / 76.7 / 76.0 |
| _Point + Graph_ | PGN (PointNet++ [26]) | 5.65 / 91.0 / 87.7 | 12.40 / 89.9 / 86.8 | 13.16 / **83.8** / 77.6 |
| _Point + Graph_ | IPGN (PointNet++ [26]) | **1.30** / 91.0 / 87.5 | **1.49** / 89.9 / 86.6 | **1.57** / 83.6 / 77.5 |
| _Point + Graph_ | PGN (Point Transformer [31]) | 12.10 / **91.6** / 88.4 | 24.28 / **90.7** / **87.2** | 25.81 / **83.8** / **78.6** |
| _Point + Graph_ | IPGN (Point Transformer [31]) | 2.32 / 91.5 / **88.5** | 2.29 / **90.7** / **87.2** | 2.39 / 83.7 / 78.3 |

Table 2: **Inference Speed and Segmentation Metrics.** This table compares dense volume segmentation test time and quality across voxel-based, point-based, graph-based, and point-graph fusion methods. Test times are measured in seconds, while accuracy and dice score quantify segmentation quality.
Figure 7: **Visualization of Segmentation Results.** This figure displays segmentation results using GAT [47] (graph-only context), PointNet++ [26] (point-only context) and IPGN (with GAT and PointNet++ backbones) methods along with the ground truth. For each method class, the corresponding predictions in their initial form prior to dense volume prediction are also presented.
coordinate feature achieves the overall best performance, reaching 88.17%, while the handcrafted feature trails behind.
The results indicate that the handcrafted feature provides more information and produces better performance in an MLP setting, where the neural network simply learns a non-linear projection without any global or tree-topology context. Once the connections between graph nodes are established through edges in graph modeling, the handcrafted feature completely loses its advantage over the raw coordinate feature, which implies that the handcrafted feature can be learned during graph learning and that the learned graph-based feature is more beneficial to graph segmentation quality.
#### 5.5.2 Input Selection for Implicit Point Module
In this section, we present the results of an ablation study where we explore the impact of multi-stage features using different layers of the intermediate feature vector as input to the implicit module.
In the experiment, we initially utilize the feature output of the final Point-Graph Fusion layer as the sole input to the implicit module. Motivated by the design of DenseNet [52], we then add feature outputs from progressively shallower intermediate blocks, forming multi-stage features, and report the performance with each feature addition, down to the initial point feature. Table 4 demonstrates that combining multi-stage features, along with the initial point (PointNet++ [26]) feature, enhances the predictive capabilities of our model, contributing to better performance. The best-performing configuration we observed involves using all available features, yielding results on par with the full modeling approach.
## 6 Extended Application: Reconstruction of Pulmonary Segments
The Implicit Point Module plays a vital role in defining implicit surfaces between different classes within the pulmonary tree, allowing for efficient dense reconstruction of the pulmonary tree. As the module utilizes the point-graph feature field for labeling, the input coordinates are not constrained to the tubular voxel/point. Consequently, any point within the pulmonary lobe can be assigned a class label.
Pulmonary segments are subdivisions within the pulmonary lobes and their boundaries are determined based on the 18 branches of the pulmonary trees of our interest [43]. By utilizing the extracted point-graph feature field from the airway, artery, or vein tree, our module can accurately infer information for all points within the lobe. This enables us to achieve a natural semantic reconstruction of the pulmonary segments, depicted in Fig. 8.
Compared to the ground truth, the tree-segmentation-induced pulmonary segments reconstruction achieves 80.13%, 82.15% and 76.20% prediction accuracy and 79.89%, 81.85% and 76.16% micro-average dice scores, respectively for pulmonary airway, artery, and vein. Future works can potentially produce better pulmonary segment reconstruction by integrating features from the three pulmonary trees or leveraging the explicit pulmonary lobe boundaries as additional guidelines.
## 7 Conclusion
In conclusion, we take an experimentally comprehensive deep-dive into pulmonary tree segmentation based on the compiled PTL dataset. A novel architecture Implicit Point-Graph Network (IPGN) is presented for accurate and efficient pulmonary tree segmentation. Our method leverages a dual-branch point-graph fusion model to effectively capture the complex branching structure of the respiratory system. Extensive experiment results demonstrate that by implicit modeling on point-graph features, the proposed model achieves state-of-the-art segmentation quality with minimum computation cost for practical dense volume reconstruction. The advancements made in this study could potentially enhance the diagnosis, management, and treatment of pulmonary diseases, ultimately improving patient outcomes in this critical area of healthcare.
| Pulmonary diseases are among the leading causes of death worldwide, and curing them requires, among other things, a better understanding of the complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. In theory, these structures can be modeled from high-resolution image stacks, but standard CNN approaches operating on dense voxel grids are prohibitively expensive. To address this, we introduce a point-based approach that preserves the graph connectivity of the tree skeleton and incorporates an implicit surface representation. It delivers SOTA accuracy at a low computational cost, and the resulting models have usable surfaces. Owing to the scarcity of publicly accessible data, we have also curated an extensive dataset to evaluate our approach, which we plan to make public. |
2302.14552 | Toward Robust Uncertainty Estimation with Random Activation Functions | Deep neural networks are in the limelight of machine learning with their
excellent performance in many data-driven applications. However, they can lead
to inaccurate predictions when queried in out-of-distribution data points,
which can have detrimental effects especially in sensitive domains, such as
healthcare and transportation, where erroneous predictions can be very costly
and/or dangerous. Subsequently, quantifying the uncertainty of the output of a
neural network is often leveraged to evaluate the confidence of its
predictions, and ensemble models have proved to be effective in measuring the
uncertainty by utilizing the variance of predictions over a pool of models. In
this paper, we propose a novel approach for uncertainty quantification via
ensembles, called Random Activation Functions (RAFs) Ensemble, that aims at
improving the ensemble diversity toward a more robust estimation, by
accommodating each neural network with a different (random) activation
function. Extensive empirical study demonstrates that RAFs Ensemble outperforms
state-of-the-art ensemble uncertainty quantification methods on both synthetic
and real-world datasets in a series of regression tasks. | Yana Stoyanova, Soroush Ghandi, Maryam Tavakol | 2023-02-28T13:17:56 | http://arxiv.org/abs/2302.14552v1 | # Toward Robust Uncertainty Estimation with Random Activation Functions
###### Abstract
Deep neural networks are in the limelight of machine learning with their excellent performance in many data-driven applications. However, they can lead to inaccurate predictions when queried in out-of-distribution data points, which can have detrimental effects especially in sensitive domains, such as healthcare and transportation, where erroneous predictions can be very costly and/or dangerous. Subsequently, quantifying the uncertainty of the output of a neural network is often leveraged to evaluate the confidence of its predictions, and ensemble models have proved to be effective in measuring the uncertainty by utilizing the variance of predictions over a pool of models. In this paper, we propose a novel approach for uncertainty quantification via ensembles, called _Random Activation Functions (RAFs) Ensemble_, that aims at improving the ensemble diversity toward a more robust estimation, by accommodating each neural network with a different (random) activation function. Extensive empirical study demonstrates that RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets in a series of regression tasks.
## Introduction
Recent advances in deep neural networks have demonstrated remarkable performance in a wide variety of applications, ranging from recommendation systems and improving user experience to natural language processing and speech recognition [1]. Nevertheless, blindly relying on the outcome of these models can have harmful effects, especially in high-stake domains such as healthcare and autonomous driving, as models can provide inaccurate predictions when queried in out-of-distribution data points [1]. Consequently, correctly quantifying the uncertainty of models' predictions is an admissible mechanism to distinguish where a model can or cannot be trusted, and thus, increases the transparency of models about their capabilities and limitations [1]. Uncertainty Quantification (UQ) is important for a variety of reasons. For instance, in order to preserve the model's credibility, it is essential to report and communicate the encountered uncertainties regularly [13]. Additionally, models' predictions are inevitably uncertain in most cases, which has to be addressed to increase their transparency, trustworthiness, and reliability.
In the machine learning literature, uncertainty is usually decomposed into two different types, namely aleatoric uncertainty and epistemic uncertainty [14]. _Aleatoric_ uncertainty, aka data uncertainty, refers to the inherent uncertainty that stems from the data itself, e.g., noise. On the other hand, _epistemic_ uncertainty, also called model uncertainty, is the type of uncertainty that occurs due to the lack of sufficient data. While data uncertainty _cannot_ be alleviated, model uncertainty can be addressed by e.g., acquiring more data. Let \(\mathbf{\sigma}_{a}^{2}\) and \(\mathbf{\sigma}_{e}^{2}\) denote the aleatoric and epistemic uncertainties, respectively. Since the distinction between the two is imprecise to some degree [12], we focus on the predictive (total) uncertainty, which is defined as the sum of the two
\[\mathbf{\sigma}_{p}^{2}=\mathbf{\sigma}_{a}^{2}+\mathbf{\sigma}_{e}^{2}. \tag{1}\]
Accordingly, the approaches developed for uncertainty estimation can be categorized into three groups: Bayesian UQ methods, ensemble UQ methods, and a combination of both, i.e., Bayesian ensemble UQ [1]. In this paper, we focus on ensemble UQ techniques, either Bayesian or non-Bayesian, as this group is less explored compared to the solely Bayesian techniques. An ensemble model aggregates the predictions of multiple individual base-learners (or ensemble members), which in our case are neural networks (NNs), and the empirical variance of their predictions gives an approximate measure of uncertainty. The idea behind this heuristic is highly intuitive: the more the base-learners disagree on the outcome, the more uncertain they are. Therefore, the goal of ensemble members is to have a great level of disagreement (variability) in the areas where little or no data is available, and to have a high level of agreement in regions with abundance of data [1].
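As a minimal illustration of this heuristic, the following NumPy sketch (ours, not taken from the paper's released code) turns a pool of per-member predictions into an ensemble mean and a predictive variance in the sense of Equation 1:

```python
import numpy as np

def ensemble_uncertainty(preds, aleatoric_var=0.0):
    """preds: array of shape (m, n) holding m members' predictions at n points."""
    mean = preds.mean(axis=0)               # aggregated ensemble prediction
    epistemic = preds.var(axis=0, ddof=1)   # member disagreement (epistemic part)
    return mean, epistemic + aleatoric_var  # predictive variance as in Eq. (1)
```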
In this paper, we propose a novel method, called _Random Activation Functions Ensemble (RAFs Ensemble)_, for a more robust uncertainty estimation in (deep) neural networks. RAFs Ensemble is developed on top of the Anchored Ensemble technique proposed by [1]; however, instead of initializing each NN member in the ensemble with the same activation function, the NNs in RAFs Ensemble are accommodated with different (random) activation functions in the hidden layers. This simple, yet crucial, modification greatly improves the overall diversity of the ensemble, which is one of the most important components in forming a successful ensemble. We empirically show that RAFs Ensemble provides high quality uncertainty estimates compared to five state-of-the-art ensemble methods, namely Deep Ensemble (Lakshminarayanan, Pritzel, and Blundell, 2017), Neural Tangent Kernel Gaussian Process Parameter Ensemble (He, Lakshminarayanan, and Teh, 2020), Anchored Ensemble (Pearce et al., 2018), Bootstrapped Ensemble of NNs Coupled with Random Priors (Osband, Aslanides, and Cassirer, 2018), and Hyperdeep Ensemble (Wenzel et al., 2020). The comparisons are performed in a wide range of regression tasks on both synthetic and real-world datasets in terms of negative log-likelihood and root mean squared error.
## Related Work
Uncertainty Quantification (UQ) is an active field of research and various methods have been proposed to efficiently estimate the uncertainty of machine learning models (see Abdar et al., 2021 for an extensive overview). While most research focuses on Bayesian deep learning (Srivastava et al., 2014; Blundell et al., 2015; Sensoy, Kandemir, and Kaplan, 2018; Fan et al., 2020; Jarvenpaa, Vehtari, and Marttinen, 2020; Charpentier, Zugner, and Gunnemann, 2020), deep ensemble methods, which benefit from the advantages of both deep learning and ensemble learning, have been recently leveraged for empirical uncertainty quantification (Egele et al., 2021; Hoffmann, Fortmeier, and Elster, 2021; Brown, Bhuiyan, and Talbert, 2020; Althoff, Rodrigues, and Bazame, 2021). Although Bayesian UQ methods have solid theoretical foundation, they often require significant changes to the training procedure and are computationally expensive compared to non-Bayesian techniques such as ensembles (Egele et al., 2021; Rahaman and Thiery, 2021; Lakshminarayanan, Pritzel, and Blundell, 2017).
Lakshminarayanan, Pritzel, and Blundell (2017) are among the first to challenge Bayesian UQ methods by proposing Deep Ensemble, a simple and scalable technique, that demonstrates superb empirical performance on a variety of datasets. However, one of the challenges of ensemble techniques when quantifying uncertainty is that they tend to give overconfident predictions (Amodei et al., 2016). To address this, Pearce et al. (2018) propose to also regularize the model's parameters w.r.t. the initialization values, instead of zero, leading to Anchored Ensembles, which additionally allows for performing Bayesian inference in NNs. He, Lakshminarayanan, and Teh (2020) relate Deep Ensembles to Bayesian inference using neural tangent kernels. Their method, i.e., Neural Tangent Kernel Gaussian Process Parameter Ensemble (NTKGP-param), trains all layers of a finite width NN, obtaining an exact posterior interpretation in the infinite width limit with neural tangent kernel parameterization and squared error loss. They prove that NTKGP-param is always more conservative than Deep Ensemble, yet, its advantages are generally not clear in practice.
A prominent advance to the Bayesian ensemble UQ methods is the bootstrapped ensemble of NNs coupled with random priors, proposed by (Osband, Aslanides, and Cassirer, 2018), in which, the random prior function and neural models share an input and a summed output, but the networks are the only trainable parts, while the random prior remains untrained throughout the whole process. Furthermore, Wenzel et al. (2020) exploit an additional source of randomness in ensembles by designing ensembles not only over weights, but also over hyperparameters. Their method, called Hyperdeep Ensemble, demonstrates high accuracy for a number of different classification tasks. Nevertheless, despite the recent contributions in ensemble UQ methods, the research in this direction still needs further advancement.
## Toward Robust Uncertainty Estimation
### Preliminaries
Following the notation of (Lakshminarayanan, Pritzel, and Blundell, 2017), let \(S_{train}\) be a training dataset consisting of \(n\) independently and identically drawn (i.i.d.) data points, \(S_{train}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) denotes a \(d\)-dimensional feature vector and \(y_{i}\in\mathbb{R}\) is a scalar output. Similarly, \(S_{test}\) indicates the test set. Subsequently, \(X\) represents the design matrix and \(\mathbf{y}\) indicates the output vector, where \((S_{train}.X,S_{train}.\mathbf{y})\) and \((S_{test}.X,S_{test}.\mathbf{y})\) represent the train and test sets, respectively. Without loss of generality, we consider regression tasks of the form
\[\mathbf{y}=f(X)+\epsilon,\]
where \(\epsilon\) is a normally distributed constant noise, i.e., \(\epsilon\sim\mathcal{N}(0,\mathbf{\sigma}_{a}^{2})\), and is assumed to be known. The goal is hence to quantify the predictive uncertainty \(\mathbf{\sigma}_{p}^{2}\) associated with \(S_{test}\_\mathbf{y}\), while optimizing \(f\) on the training data.
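For concreteness, here is a toy data-generation sketch under these assumptions; the function \(\mathbf{x}\sin(\mathbf{x})\), the noise level, and the sampling intervals are illustrative choices of ours (the same function reappears in the qualitative comparison later):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_a = 100, 0.5                          # known, constant noise level (assumed)
X_train = rng.uniform(-3.0, 3.0, size=(n, 1))  # i.i.d. training inputs
y_train = (X_train * np.sin(X_train)).ravel() + rng.normal(0.0, sigma_a, n)
# test inputs deliberately cover a wider interval, so that some queries are
# out-of-distribution and epistemic uncertainty becomes visible
X_test = rng.uniform(-6.0, 6.0, size=(2 * n, 1))
```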
We adapt the regularized loss function from the Anchored Ensemble technique (Pearce et al., 2018), in which, the regularization of the models' parameters are carried out w.r.t. their initialization values instead of zero. Consequentially, given \(\mathbf{\theta}_{j}\) as the parameters of the \(j_{\text{th}}\) base-learner, the objective function is as follows
\[\mathcal{L}(\mathbf{\theta}_{j})=\frac{1}{n}||\mathbf{y}-\mathbf{\hat{y}}_{j}||_{2}^{2}+ \frac{1}{n}||\Gamma^{1/2}(\mathbf{\theta}_{j}-\mathbf{\theta}_{0,j})||_{2}^{2}, \tag{2}\]
where \(\mathbf{\theta}_{0,j}\) is derived from the prior distribution, \(\mathbf{\theta}_{0,j}\sim\mathcal{N}(\mathbf{\mu}_{0},\Sigma_{0})\), and \(\Gamma\) is the regularization matrix. Furthermore, minimizing this objective allows for performing Bayesian inference in NNs. However, this technique only models the epistemic uncertainty, while aleatoric uncertainty is assumed to be constant (Pearce et al., 2018), which is a limitation, as it is not always possible to distinguish the different origins or types of uncertainty in practice (see Equation 1).
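A direct transcription of Equation 2 reads as follows; this is our own sketch, which assumes a diagonal prior covariance so that \(\Gamma\) reduces to a vector of per-parameter weights:

```python
import numpy as np

def anchored_loss(y, y_hat, theta, theta0, gamma):
    """Eq. (2): mean squared error plus regularization toward the initial
    (anchor) parameters theta0; gamma holds the diagonal of
    Gamma = sigma_a^2 * Sigma_0^{-1}."""
    n = len(y)
    fit = np.sum((y - y_hat) ** 2) / n
    anchor = np.sum(gamma * (theta - theta0) ** 2) / n
    return fit + anchor
```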
Therefore, in this paper, we aim at enhancing the performance of the ensemble toward a more robust uncertainty estimation. The literature suggests that diversifying the ensembles is effective in improving their predictive performance both theoretically and empirically (Zhou, 2012; Zhang and Ma, 2012; Hansen and Salamon, 1990; Krogh and Vedelsby, 1994). Ideally, diversity is achieved when the predictions made by each model in the ensemble are independent and uncorrelated. However, generating diverse ensemble members is not a straightforward task. The main impediment is the fact that each neural network is trained on the same training data to solve the same problem, which usually results in
a high correlation among the individual base-learners [20]. In the subsequent section, we introduce a simple technique to efficiently improve the overall diversity of the ensemble for a more reliable uncertainty quantification.
### RAFs Ensemble
In this section, we present Random Activation Functions (RAFs) Ensemble for uncertainty estimation, a methodological modification that can be applied to any ensemble method. When a (Bayesian) ensemble is leveraged to estimate the uncertainty of a deep neural network model, we propose to increase the diversity of predictions among the ensemble members using varied activation functions (AFs), in addition to the random initialization of the parameters. To do so, instead of initializing the neural networks with the same activation function, each NN is accommodated with a different (random) activation function. Subsequently, distinct activation functions account for different non-linear properties introduced to each ensemble member, thereby improving the overall diversity of the ensemble.
As mentioned previously, ensemble diversity is one of the most important building blocks when it comes to creating a successful ensemble [1]. Hence, it might be preferable to combine the predictions of top-performing base-learners with the predictions of weaker ones [21]. Otherwise, stacking only strong models will likely result in a poor ensemble, as the predictions made by the models will be highly correlated, and thus, the ensemble diversity will be greatly limited. Therefore, the choice of activation functions should be motivated purely by their variability and not by their appropriateness for the task at hand.
Let \(\mathbf{\mu}_{0}\) be the prior means, \(\Sigma_{0}\) the prior covariance, \(\hat{\mathbf{\sigma}}_{a}^{2}\) an estimate of the data noise, \(m\) the number of base-learners, and \(NN_{j}\) the \(j_{\text{th}}\) member; the entire procedure for both training and prediction is summarized in Algorithm 1. In this algorithm, a regularization matrix is first created and a set of activation functions is defined (lines 1-2). Then, the NNs in the ensemble are trained to minimize the loss function defined in Equation 2 with stochastic gradient descent, using an arbitrary optimizer and no early stopping (lines 3-11). Note that if the size of the ensemble \(m\) is smaller than or equal to the cardinality \(k\) of the AF set, then each NN is trained with a different activation function, and with random functions from the set otherwise (lines 5-9). Consequently, predictions are made with each ensemble member (lines 12-14), which are then averaged and an estimate of the predictive uncertainty is computed (lines 15-18).
```
Require: \(S_{train},S_{test}\), priors \(\mathbf{\mu}_{0}\) and \(\Sigma_{0}\), \(m\), \(\hat{\mathbf{\sigma}}_{a}^{2}\)
Ensure: estimate of predictive mean \(\hat{\mathbf{y}}\) and variance \(\hat{\mathbf{\sigma}}_{p}^{2}\)
 1: \(\Gamma\leftarrow\hat{\mathbf{\sigma}}_{a}^{2}\Sigma_{0}^{-1}\)    \(\triangleright\) Regularization matrix
 2: \(\mathbb{A}\leftarrow\{a_{1},\ldots,a_{k}\}\)    \(\triangleright\) Set of \(k\) AFs
 3: for \(j\) in \(1:m\) do    \(\triangleright\) Train the ensemble
 4:     Create \(NN_{j}\) with \(\mathbf{\theta}_{j,0}\leftarrow\mathcal{N}(\mathbf{\mu}_{0},\Sigma_{0})\)
 5:     if \(j\leq k\) then
 6:         \(\alpha_{j}=a_{j}\)
 7:     else
 8:         \(\alpha_{j}\leftarrow\) randomly selected from \(\mathbb{A}\)
 9:     end if
10:     \(NN_{j}.train(S_{train},\Gamma,\mathbf{\theta}_{j,0},\alpha_{j})\) using loss in Eq. 2
11: end for
12: for \(j\) in \(1:m\) do    \(\triangleright\) Predict with the ensemble
13:     \(\hat{\mathbf{y}}_{j}=NN_{j}.predict(S_{test}.X)\)
14: end for
15: \(\hat{\mathbf{y}}=\frac{1}{m}\sum_{j=1}^{m}\hat{\mathbf{y}}_{j}\)    \(\triangleright\) Mean predictions
16: \(\hat{\mathbf{\sigma}}_{e}^{2}=\frac{1}{m-1}\sum_{j=1}^{m}(\hat{\mathbf{y}}_{j}-\hat{\mathbf{y}})^{2}\)    \(\triangleright\) Epistemic variance
17: \(\hat{\mathbf{\sigma}}_{p}^{2}=\hat{\mathbf{\sigma}}_{e}^{2}+\hat{\mathbf{\sigma}}_{a}^{2}\)    \(\triangleright\) Total variance, Eq. 1
18: return \(\hat{\mathbf{y}},\hat{\mathbf{\sigma}}_{p}^{2}\)
```
**Algorithm 1** RAFs Ensemble
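To make Algorithm 1 concrete, the following self-contained NumPy sketch is our own rendering (not the paper's released implementation) under simplifying assumptions: one hidden layer, scalar outputs, an isotropic prior, plain full-batch gradient descent, and a reduced activation set standing in for the full set of seven; activation derivatives are taken numerically so that any function can be dropped in.

```python
import numpy as np

rng = np.random.default_rng(1)

# reduced stand-in for the paper's set of seven activation functions
AFS = [np.tanh, lambda x: x / (1.0 + np.abs(x)), lambda x: x]  # tanh, softsign, linear

def act_grad(a, z, h=1e-5):
    # derivative of an arbitrary activation via central differences
    return (a(z + h) - a(z - h)) / (2.0 * h)

class AnchoredNN:
    """One-hidden-layer regression network trained with the anchored loss (Eq. 2)."""

    def __init__(self, d, width, act, mu0=0.0, sigma0=1.0, sigma_a2=0.25):
        self.act = act
        self.gamma = sigma_a2 / sigma0 ** 2        # Gamma = sigma_a^2 * Sigma_0^{-1}
        shapes = [(d, width), (width,), (width, 1), (1,)]
        # anchor (initial) parameters drawn from the prior N(mu0, sigma0^2)
        self.params = [rng.normal(mu0, sigma0, s) for s in shapes]
        self.anchors = [p.copy() for p in self.params]

    def predict(self, X):
        W1, b1, W2, b2 = self.params
        return (self.act(X @ W1 + b1) @ W2 + b2).ravel()

    def train(self, X, y, lr=1e-2, epochs=2000):
        n = len(y)
        for _ in range(epochs):
            W1, b1, W2, b2 = self.params
            z1 = X @ W1 + b1
            h1 = self.act(z1)
            r = (h1 @ W2 + b2).ravel() - y         # residuals
            dy = (2.0 / n) * r[:, None]            # d(fit term)/d(y_hat), shape (n, 1)
            dz1 = (dy @ W2.T) * act_grad(self.act, z1)
            grads = [X.T @ dz1, dz1.sum(0), h1.T @ dy, dy.sum(0)]
            for p, p0, g in zip(self.params, self.anchors, grads):
                p -= lr * (g + (2.0 * self.gamma / n) * (p - p0))  # anchored step

def rafs_ensemble(X_train, y_train, X_test, m=5, sigma_a2=0.25):
    preds = []
    for j in range(m):                             # lines 3-11 of Algorithm 1
        act = AFS[j] if j < len(AFS) else AFS[rng.integers(len(AFS))]
        nn = AnchoredNN(X_train.shape[1], 100, act, sigma_a2=sigma_a2)
        nn.train(X_train, y_train)
        preds.append(nn.predict(X_test))           # lines 12-14
    preds = np.stack(preds)
    y_hat = preds.mean(axis=0)                     # line 15
    var_p = preds.var(axis=0, ddof=1) + sigma_a2   # lines 16-17 (Eq. 1)
    return y_hat, var_p
```

On data with a held-out region, the returned `var_p` should widen away from the training inputs, mirroring the qualitative behavior reported for RAFs Ensemble below.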
## Empirical Study
### Experimental Setups
In the experiments, the base-learners of RAFs Ensemble are multilayer perceptrons that consist of one hidden layer of 100 neurons. The ensemble size \(m\) is set to five. This is standard for the implementations of all methods in this paper, as \(m=5\) proved to be empirically sufficient for obtaining predictive uncertainty estimates in the experiments. In addition, we choose a set of seven activation functions which comprises (i) Gaussian Error Linear Unit (GELU) [1], (ii) Softsign [14], (iii) Swish [15], (iv) Scaled Exponential Linear Unit (SELU) [16], (v) hyperbolic tangent (tanh), (vi) the error activation function, and (vii) the linear (identity) activation function. Furthermore, the number of testing samples is always set to be larger than the number of training points \(n\) in order to detail the uncertainty. Moreover, to account for epistemic uncertainty, the synthetic testing feature vectors \(\mathbf{x}\in S_{test}\) range over wider intervals compared to \(\mathbf{x}\in S_{train}\), and both are sampled uniformly at random. The code is available at [https://github.com/YanasGH/RAFs_code](https://github.com/YanasGH/RAFs_code) for reproducibility.
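The seven activation functions can be written down directly. One possible NumPy/SciPy rendering is below; the exact SELU constants and the sigmoid-weighted form of Swish are standard definitions that we assume, since the paper does not spell them out:

```python
import numpy as np
from scipy.special import erf

AFS = {
    "gelu":     lambda x: 0.5 * x * (1.0 + erf(x / np.sqrt(2.0))),
    "softsign": lambda x: x / (1.0 + np.abs(x)),
    "swish":    lambda x: x / (1.0 + np.exp(-x)),  # x * sigmoid(x)
    "selu":     lambda x: 1.0507 * np.where(x > 0.0, x, 1.67326 * np.expm1(x)),
    "tanh":     np.tanh,
    "erf":      erf,                               # the "error activation function"
    "linear":   lambda x: x,
}
```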
**Baselines.**
We include five state-of-the-art methods as baselines for empirical comparison with RAFs Ensemble: (i) DE [17], (ii) AE [18], (iii) HDE [19], (iv) RP-param [20], and (v) NTKGP-param [10, 11]. The comparisons are performed on both synthetic and real-world datasets with different dimensionalities (see the Technical Appendix for a detailed overview). To ensure a fair comparison between the UQ techniques, roughly the same amount of time has been put into hyperparameter tuning for each method.
**Synthetic Data.**
We generate multiple synthetic datasets that fall into four categories: physical models (PM), many local minima (MLM), trigonometric (T), and others (O). Each set in the PM category is generated from a physical mathematical model, such that all values in \(S_{train}\) and \(S_{test}\) are achievable in the real world. Generally, the PM datasets
have complex modeling dynamics and can be characterized as having predominant epistemic uncertainty due to the considerably wider testing sampling regions by design. Similarly, the MLM data, generated from functions with many local minima, are also designed so that the model uncertainty is higher than the aleatoric one. These datasets are usually hard to approximate due to their inherent high-nonlinearity and multimodality. Another category with higher epistemic uncertainty is trigonometric, such as data generated by [1] and [13], where the training data is partitioned into two equal-sized clusters in order to detail uncertainty on out-of-distribution data (see Figure 1). In contrast, the predominant type of uncertainty in the O category is aleatoric. This category includes datasets generated from various functions such as rational and product integrand functions. It is distinguished from the rest of the categories by its high interaction effects. The dimensionality of all datasets can range from one to ten and we consider two datasets per dimension, thus, the total number of synthetic data is 20. More detail on how the data is created can be found in the Technical appendix.
**Real-world Data.** Additionally, we use five real-world datasets for evaluation: Boston housing, Abalone shells [21], Naval propulsion plant [1], Forest fire [13], and Parkinson's disease dataset [14]. To account for aleatoric uncertainty, some context factors are disregarded, such that this type of uncertainty is characteristically high (see the Technical Appendix for more details).
**Evaluation Criteria.** We employ two evaluation criteria to gauge the overall performance of the trained models, namely calibration and robustness to distribution shift. Both measures are inspired by the practical applications of NNs, as generally there is no theoretical evidence for evaluating uncertainty estimates [1]. Calibration is defined as the analytical process of adjusting the inputs with the purpose of making the model predict the actual observations as precisely as possible [1]. The quality of calibration can be measured by proper scoring rules such as negative log-likelihood (NLL). NLL is a common choice when it comes to evaluating UQ estimates, as it depends on predictive uncertainty [12]. Additionally, due to its practical applicability in a wide spectrum of regression tasks, root mean squared error (RMSE) is measured; although it does not depend on the estimated uncertainty [12], it serves as a proxy and a secondary assessor of the performance. Moreover, to measure the robustness/generalization of methods to distributional shift, we test the models in out-of-distribution settings, such as the synthetic datasets by [13, 14].
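Concretely, for a Gaussian predictive distribution \(\mathcal{N}(\hat{\mathbf{y}},\hat{\mathbf{\sigma}}_{p}^{2})\), the two criteria reduce to a few lines (our sketch):

```python
import numpy as np

def gaussian_nll(y, mean, var):
    # average negative log-likelihood of y under N(mean, var); lower is better
    return float(np.mean(0.5 * np.log(2.0 * np.pi * var) + 0.5 * (y - mean) ** 2 / var))

def rmse(y, mean):
    # does not use the uncertainty estimate; secondary criterion
    return float(np.sqrt(np.mean((y - mean) ** 2)))
```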
### Performance Results
**Qualitative Comparison.** Figure 1 shows the performance of different methods compared to a Gaussian process with a neural tangent kernel (NTKGP analytic) as a reference, on a 1D toy dataset generated from \(\mathbf{y}=\mathbf{x}\sin(\mathbf{x})+\mathbf{\epsilon}\) (dashed line). The plots demonstrate that DE, HDE, and AE provide narrow uncertainty bounds in areas where no data has been observed by the model, which translates to high confidence in OOD data. On the contrary, NTKGP-param, RP-param, and RAFs Ensemble bound their uncertainty estimates with wider intervals in areas with no data, accounting for adequate quantification of epistemic uncertainty, while also indicating robustness to OOD data. Among these methods, RAFs Ensemble provides the widest uncertainty, which is reasonable considering the amount of data that is available to the methods over each area. Moreover, this observation is quantitatively validated, as RAFs Ensemble achieves the lowest NLL compared to the other methods (see Table 1).
**Overall Performance.** We evaluate the overall performance of all methods in terms of both NLL and RMSE. The outcomes of comparing RAFs Ensemble with five baseline methods on twenty synthetic and five real-world datasets are outlined in Table 1 and Table 2. The results illustrate that our approach outperforms the competitors in most scenarios. Furthermore, Table 3 summarizes the obtained results in terms of ranking, in which the methods are ranked based on their performance for a particular dataset. The left integer corresponds to NLL, while the right one points to RMSE, and the bold values indicate the best-performing method.
HDE provides good uncertainty estimates for datasets with many local minima, despite its unimpressive overall results when compared to the other methods. However, both DE and HDE can produce uncertainty bounds that are unreasonably narrow in areas with unobserved data, as shown in Figure 1 and noted by Heiss et al. (2021).
Nonetheless, AE demonstrates good performance in the dataset categories that exhibit higher epistemic uncertainty such as the physical models. This is due to the fact that AE is designed for capturing model uncertainty, while aleatoric uncertainty is assumed to be constant. Accordingly, AE achieves inferior performance on the real-world datasets, as those generally have more data uncertainty appropriated.
On the other hand, NTKGP-param achieves its finest performance for datasets in the physical model category, which is normally associated with substantial model uncertainty. A credible rationale to explain this insight is the fact that NTKGP-param tends to be more conservative than Deep Ensemble. However, it is generally unclear in which situations this is beneficial since the ensemble members of NTKGP-param will always be misspecified in practice according to He, Lakshminarayanan, and Teh (2020).
Furthermore, RP-param manages to rank comparatively high for real-world datasets as well as trigonometric data, that contain vast amounts of aleatoric and epistemic uncertainty, respectively, indicating that it does not quantify either type of uncertainty better than the other. This observation serves as a demonstration that RP-param generalizes well for different types of datasets that exhibit broad characteristics. However, this technique fails to deliver low NLL scores on some occasions, which might be attributed to the fact that RP-param is based on bootstrapping. While bootstrapping can be a successful strategy for inducing diversity, it can sometimes harm the performance when the base-learners have multiple local optima, as is a common case with NNs Lakshminarayanan, Pritzel, and Blundell (2017).
Nevertheless, RAFs Ensemble outperforms RP-param, and every other method in the comparisons, for 13 out of 25 datasets. In terms of NLL, our approach does not rank below second place for any dataset, which is consistent with the strong results from Table 1. Meanwhile, the RMSE scores of this method are altogether satisfactory, although not as prominent as the NLL scores. In agreement with the overall outstanding results, RAFs Ensemble holds the highest NLL rank for all data from the _MLM_ and \(T\) categories, which underscores its ability to estimate epistemic uncertainty and to handle challenging multimodality. Among all categories, the real-world datasets are least favored by RAFs Ensemble, primarily due to their high level of aleatoric uncertainty. This indicates that RAFs Ensemble captures model uncertainty slightly better than aleatoric uncertainty. Nonetheless, the empirical superiority of this technique is due to the thoroughly exploited additional source of randomness via random activation functions, combined with the simplicity of the method and the Bayesian behavior resulting from the anchored loss (Equation 2). This successful combination leads to greatly improved diversity among ensemble members, which can be further confirmed by a direct comparison between RAFs Ensemble and AE.
Table 1: NLL of all methods on the synthetic and real-world datasets; bold marks the best-performing method(s). (Remaining real-world rows were lost in extraction.)

| Dataset | DE | HDE | AE | NTKGP-p. | RP-p. | RAFs |
| --- | --- | --- | --- | --- | --- | --- |
| He et al. 1D | >100 ± 0.18 | 71.31 ± 0.51 | 38.75 ± 0.12 | 4.48 ± 0.18 | 13.05 ± 0.43 | **2.21 ± 0.18** |
| Forrester et al. 1D | >100 ± 0.53 | >100 ± 0.51 | 50.82 ± 0.52 | >100 ± 0.50 | 13.7 ± 0.58 | **0.64 ± 0.74** |
| Schaffer N.4 2D | 0.29 ± 0.01 | -0.71 ± 0.01 | 2.15 ± 0.01 | -0.55 ± 0.01 | -0.36 ± 0.01 | **-0.79 ± 0.01** |
| Double pendulum 2D | 2.95 ± 0.05 | 2.18 ± 0.84 | -0.36 ± 0.05 | **-0.58 ± 0.05** | **-0.47 ± 0.05** | **-0.49 ± 0.04** |
| Rastrigin 3D | 29.24 ± 1.30 | **3.09 ± 1.15** | 35.94 ± 0.74 | 28.38 ± 0.64 | **4.35 ± 1.24** | **3.44 ± 1.05** |
| Ishigami 3D | 6.01 ± 0.08 | >100 ± 0.08 | 8.73 ± 0.08 | 1.53 ± 0.08 | **-0.01 ± 0.08** | **0.06 ± 0.07** |
| Environmental 4D | 64.72 ± 0.23 | 7.84 ± 0.13 | 1.65 ± 0.20 | 4.5 ± 0.27 | 3.94 ± 0.21 | **0.81 ± 0.17** |
| Griewank 4D | 28.29 ± 2.43 | **5.50 ± 1.62** | **4.64 ± 3.06** | 10.21 ± 2.37 | **4.29 ± 2.93** | **4.79 ± 2.40** |
| Roos & Arnold 5D | -2.02 ± 0.01 | **-2.21 ± 0.00** | -1.89 ± 0.01 | -1.71 ± 0.01 | -1.70 ± 0.01 | -2.1 ± 0.01 |
| Friedman 5D | 96.94 ± 0.41 | >100 ± 0.51 | 15.04 ± 0.50 | 41.69 ± 0.39 | 4.22 ± 0.44 | **1.78 ± 0.39** |
| Planar arm torque 6D | 9.58 ± 0.07 | 4.11 ± 0.08 | 3.07 ± 0.05 | **-0.32 ± 0.08** | -0.05 ± 0.07 | -0.16 ± 0.06 |
| Sum of powers 6D | >100 ± 0.41 | >100 ± 0.62 | 55.03 ± 0.43 | >100 ± 0.41 | 41.59 ± 0.40 | **35.22 ± 0.35** |
| Ackley 7D | 7.11 ± 0.23 | **1.38 ± 0.16** | 2.50 ± 0.36 | 3.11 ± 0.27 | 2.09 ± 0.26 | **1.16 ± 0.08** |
| Piston simulation 7D | **2.19 ± 0.00** | 14.06 ± 0.00 | 3.50 ± 2.40 | 2.87 ± 2.93 | 2.67 ± 0.42 | 3.63 ± 0.57 |
| Robot arm 8D | 10.71 ± 0.03 | 6.87 ± 0.01 | 7.11 ± 0.01 | **0.27 ± 0.03** | 0.80 ± 0.06 | **0.25 ± 0.02** |
| Borehole 8D | >100 ± 1.01 | >100 ± 1.01 | **4.89 ± 1.87** | **5.48 ± 3.54** | **4.06 ± 1.20** | **4.36 ± 1.26** |
| Styblinski-Tang 9D | >100 ± 3.05 | >100 ± 0.00 | 40.80 ± 5.33 | >100 ± 3.03 | **15.82 ± 6.31** | **25.23 ± 4.12** |
| PUMA560 9D | 6.59 ± 0.15 | **1.62 ± 0.14** | 4.24 ± 0.14 | 5.93 ± 0.08 | 6.40 ± 0.14 | 2.14 ± 0.13 |
| Adapted Welch 10D | >100 ± 0.81 | >100 ± 0.75 | >100 ± 0.055 | >100 ± 0.75 | >100 ± 0.57 | **78.53 ± 0.67** |
| Wing weight 10D | >100 ± 0.00 | 27.31 ± 4.37 | **5.46 ± 3.46** | 67.30 ± 0.53 | **5.54 ± 4.15** | **5.39 ± 1.69** |
| Boston housing | 74.54 ± 1.06 | >100 ± 1.04 | 71.53 ± 1.06 | 70.82 ± 1.06 | >100 ± 1.10 | **40.67 ± 1.00** |
| Abalone | >100 ± 0.10 | >10… |  |  |  |  |
Note that even though RAFs Ensemble does not provide results as prominent with respect to RMSE in higher dimensional datasets as it does in datasets of lower dimensions, it still achieves results better than or on par with the state-of-the-art methods. In addition, RAFs Ensemble can be deployed in both complex and straightforward settings. On a related note, while DE struggles when dealing with high multimodality and RP-param underperforms when the dataset has interaction effects (from the "others" category), RAFs excels in both such settings.
**Scalability to higher dimensions and larger networks.** To test the scalability of RAFs Ensemble, we compare it with the strongest baseline, RP-param, on two additional real-world datasets, i.e., a 65-dimensional dataset with around 20k samples and a 40-dimensional dataset with almost 40k samples. The former is the superconductivity dataset, where the goal is to predict the critical temperature of superconductors [1]. The latter summarizes features about articles, where the target is the number of shares in social networks [15]. Both methods utilize the same neural architecture for their base-learners, that is, two hidden layers of 128 neurons each, which is more complex than in the previous experiments. These experiments are conclusive in favor of our approach: RAFs Ensemble scores NLL of 5.49 and 25.89 on the first and second dataset, respectively, while RP-param scores NLL of over 100 on both datasets.
**Confidence vs. Error.** We further analyze the relation between the RMSE and the precision thresholds in order to examine the confidence of each method in the prediction task. Figure 2 displays the confidence versus error plots for one synthetic and one real-world dataset, i.e., Friedman and Abalone (see the Technical Appendix for more detail). In this figure, for each precision threshold \(\tau\), the RMSE is plotted for examples where the predicted precision \(\mathbf{\sigma}_{p}^{-2}\) is larger than the threshold \(\tau\), demonstrating confidence.
Table 2: RMSE of all methods on the synthetic and real-world datasets; bold marks the best-performing method(s). (Remaining rows were lost in extraction.)

| Dataset | DE | HDE | AE | NTKGP-p. | RP-p. | RAFs |
| --- | --- | --- | --- | --- | --- | --- |
| He et al. 1D | 3.71 ± 0.18 | 5.70 ± 0.51 | **3.15 ± 0.12** | 3.64 ± 0.18 | 5.24 ± 0.43 | 3.80 ± 0.18 |
| Forrester et al. 1D | 5.00 ± 0.53 | 4.12 ± 0.51 | 4.09 ± 0.52 | 6.05 ± 0.50 | 5.70 ± 0.58 | **2.8 ± 0.74** |
| Schaffer N.4 2D | **0.23 ± 0.01** | 0.34 ± 0.01 | 0.30 ± 0.01 | **0.24 ± 0.01** | 0.31 ± 0.01 | 0.27 ± 0.01 |
| Double pendulum 2D | **0.46 ± 0.05** | 2.22 ± 0.84 | 0.71 ± 0.05 | **0.51 ± 0.05** | 0.74 ± 0.05 | **0.58 ± 0.04** |
| Rastrigin 3D | 18.41 ± 1.30 | **10.96 ± 1.15** | 25.58 ± 0.74 | 18.10 ± 0.64 | **12.87 ± 1.24** | **14.85 ± 1.05** |
| Ishigami 3D | **0.69 ± 0.08** | 1.05 ± 0.08 | 0.88 ± 0.08 | **0.69 ± 0.08** | **0.58 ± 0.08** | **0.57 ± 0.07** |
| Environmental 4D | **2.04 ± 0.23** | 2.51 ± 0.13 | **1.83 ± 0.20** | **2.34 ± 0.27** | **2.03 ± 0.21** | **1.68 ± 0.17** |
| Griewank 4D | 83.97 ± 2.43 | **45.68 ± 1.62** | **42.12 ± 3.06** | 78.47 ± 2.37 | **38.62 ± 2.93** | 78.79 ± 2.40 |
| Roos & Arnold 5D | 0.07 ± 0.01 | **0.01 ± 0.00** | 0.07 ± 0.01 | 0.09 ± 0.01 | 0.08 ± 0.01 | 0.08 ± 0.01 |
| Friedman 5D | **3.17 ± 0.41** | **3.63 ± 0.51** | **2.95 ± 0.50** | **3.39 ± 0.39** | **2.74 ± 0.44** | **3.1 ± 0.39** |
| Planar arm torque 6D | **0.65 ± 0.07** | **0.62 ± 0.08** | **0.71 ± 0.05** | **0.71 ± 0.08** | 1.08 ± 0.07 | **0.74 ± 0.06** |
| Sum of powers 6D | **22.81 ± 0.41** | **21.19 ± 0.62** | **21.87 ± 0.43** | **22.79 ± 0.41** | **22.22 ± 0.40** | **22.24 ± 0.35** |
| Ackley 7D | 8.92 ± 0.23 | 2.43 ± 0.16 | 7.28 ± 0.36 | 8.58 ± 0.27 | 4.03 ± 0.26 | **1.33 ± 0.08** |
| Piston simulation 7D | **0.02 ± 0.00** | 0.04 ± 0.00 | 29.1 ± 2.40 | ~100 ± 2.93 | 5.78 ± 0.42 | 7.40 ± 0.57 |
| Robot arm 8D | 0.92 ± 0.03 | **0.80 ± 0.01** | 0.88 ± 0.01 | 0.93 ± 0.03 | 1.09 ± 0.06 | **0.83 ± 0.02** |
| Borehole 8D | **32.11 ± 1.01** | **32.12 ± 1.01** | 48.75 ± 1.87 | >100 ± 3.54 | 38.60 ± 1.20 | 41.35 ± 1.26 |
| Styblinski-Tang 9D | >100 ± 3.05 | >100 ± 0.00 | **>100 ± 5.33** | >100 ± 3.03 | **>100 ± 6.31** | >100 ± 4.12 |
| PUMA560 9D | 3.93 ± 0.15 | **3.23 ± 0.14** | **3.40 ± 0.14** | 3.93 ± 0.08 | **3.24 ± 0.14** | **3.4 ± 0.13** |
| Adapted Welch 10D | **99.51 ± 0.81** | **99.4 ± 0.75** | >100 ± 0.55 | **99.79 ± 0.75** | >100 ± 0.57 | **100.00 ± 0.67** |
| Wing weight 10D | >100 ± 0.00 | **58.16 ± 4.37** | **63.1 ± 4.36** | >100 ± 0.53 | **63.35 ± 4.15** | >100 ± 1.69 |
| Boston housing | **11.28 ± 1.06** | **11.36 ± 1.04** | **11.42 ± 1.06** | **11.28 ± 1.06** | **11.56 ± 1.10** | **11.31 ± 1.00** |
| Abalone | **2.06 ± 0.10** | **2.09 ± 0.10** | **2.08 ± 0.10** | **2.05 ± 0.10** | **2.09 ± 0.10** |  |
| Naval propulsion | **0.02 ± 0.00** | **0.02 ± 0.00** | 38.86 ± 0.60 | 62.61 ± 1.51 | 9.40 ± 0.16 | 3.45 ± 0.08 |
| Forest fire | 1.97 ± 0.05 | **1.87 ± 0.02** | 6.43… |  |  |  |
In general, reliable estimates are expected to have decreasing error when the confidence is increasing. For the Friedman dataset, it is clear that RAFs Ensemble delivers well-calibrated estimates, especially in contrast with DE, NTKGP-param, and HDE (Figure 2(a)). However, for the Abalone data, RP-param demonstrates the most reliable behavior, although RAFs Ensemble meets its performance at the last precision threshold (Figure 2(b)). Overall, our approach sustains lower error over most precision thresholds compared to the majority of the other methods, and this contrast in performance is emphasized as the predictions get more confident.
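The curves in Figure 2 can be reproduced from a predictive mean and variance as follows; this is our sketch, and the quantile-based threshold grid is an assumption, since the paper does not specify how the thresholds are chosen:

```python
import numpy as np

def confidence_error_curve(y, mean, var_p, n_thresholds=10):
    """RMSE over the subset of points whose predicted precision 1/var_p exceeds
    each threshold tau; well-calibrated estimates show RMSE falling as tau grows."""
    precision = 1.0 / var_p
    taus = np.quantile(precision, np.linspace(0.0, 0.9, n_thresholds))
    curve = []
    for tau in taus:
        keep = precision >= tau  # confident predictions only
        curve.append((float(tau), float(np.sqrt(np.mean((y[keep] - mean[keep]) ** 2)))))
    return curve
```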
**Ablation.** We study the effect of the number of base-learners in the ensemble on the quality of UQ, which also measures the sensitivity of the results to the cardinality \(k\) of the set of AFs. We conduct an experiment on two different datasets, one synthetic (PUMA560) and one real-world (Abalone), where the results in terms of NLL are represented in Figure 3(b). Note that Figure 3(b) is shown in log-scale for better visibility. According to the theory, in the limit of an infinite number of ensemble members, the ensemble error converges to zero (Hansen and Salamon 1990). However, practically speaking, five NNs in the ensemble provide optimal results regarding the trade-off between empirical performance and computational time (Lakshminarayanan, Pritzel, and Blundell 2017), which is also the case in our experiments. This is further confirmed by the plot in Figure 3(a). In addition, for the PUMA560 dataset, it seems that RAFs Ensemble's performance is not impacted negatively by the number of NNs in the ensemble. Moreover, an interesting observation is the steep trough at seven NNs (equal to \(k\)) in Figure 3(b), which is an indication that there might be a correlation between \(k\) and the performance in some cases. A plausible reason for this is the fact that the additional source of randomness is exploited to the fullest when every member receives a different activation function.
To further confirm the effectiveness of the random activation functions, we evaluate the performance of RAFs Ensemble (of five NNs) in terms of NLL w.r.t. different cardinalities \(k\) of the set of AFs. The dataset used for this experiment is the superconductivity data. As the results in Figure 4 clearly suggest, by increasing the cardinality \(k\), NLL has a decreasing pattern, which shows that having more random AFs significantly improves the performance of the ensemble.
Moreover, we combine our approach with RP-param instead of AE to show that RAFs can be methodologically applied to any ensemble technique. We evaluate the performance of this combination on the Parkinson's dataset, using the same network architecture for a fair comparison. The obtained results demonstrate that applying RAFs to RP-param reduces the original NLL score of \(>100\) to 48.66, which is in line with the results we get when comparing AE with RAFs Ensemble and is further proof that the methodology indeed improves performance.
## Conclusions
We introduced a novel method, Random Activation Functions Ensemble, for a more robust uncertainty estimation in approaches based on neural networks, in which each network in the ensemble is accommodated with a different (random) activation function to increase the diversity of the ensemble. The empirical study illustrates that our approach achieves excellent results in quantifying both epistemic and aleatoric uncertainty compared to five state-of-the-art ensemble uncertainty quantification methods on a series of regression tasks across 25 datasets, proving that there does not have to be a trade-off between simplicity and strong empirical performance. Furthermore, properties of datasets such as dimensionality or complexity of modeling dynamics do not appear to affect RAFs Ensemble negatively, and the method also demonstrates robustness in out-of-distribution settings.
Figure 4: The effect of cardinality \(k\) on NLL for Superconductivity data.
Figure 3: The effect of number of NNs in the ensemble in terms of NLL, including the 95% confidence interval.
Figure 2: Confidence versus error of estimations. | Deep neural networks are in the limelight of machine learning, delivering excellent performance in many data-driven applications. However, they can produce inaccurate predictions when queried at out-of-distribution data points, which can have detrimental effects, especially in domains such as healthcare and transportation, where erroneous predictions are very costly and dangerous. Quantifying the uncertainty of a neural network's output is therefore often used to evaluate the confidence of its predictions, and ensemble models, which measure predictive uncertainty via the variance of predictions over a pool of models, have proved to be effective. In this paper, we propose Random Activation Functions (RAFs) Ensemble, a novel approach for uncertainty quantification via ensembles, in which each neural network is accommodated with a different (random) activation…
2309.06224 | Hyperbolic groups satisfy the Boone-Higman conjecture | The 1973 Boone-Higman conjecture predicts that every finitely generated group
with solvable word problem embeds in a finitely presented simple group. In this
paper, we show that hyperbolic groups satisfy this conjecture, that is, each
hyperbolic group embeds in some finitely presented simple group. This shows
that the conjecture holds in the "generic" case for finitely presented groups.
Our key tool is a new family of groups, which we call "rational similarity
groups (RSGs)", that is interesting in its own right. We prove that every
hyperbolic group embeds in a full, contracting RSG, and every full, contracting
RSG embeds in a finitely presented simple group, thus establishing the result.
Another consequence of our work is that all contracting self-similar groups
satisfy the Boone-Higman conjecture. | James Belk, Collin Bleak, Francesco Matucci, Matthew C. B. Zaremsky | 2023-09-12T13:42:24 | http://arxiv.org/abs/2309.06224v2 | # Hyperbolic groups satisfy the Boone-Higman conjecture
###### Abstract.
The 1973 Boone-Higman conjecture predicts that every finitely generated group with solvable word problem embeds in a finitely presented simple group. In this paper, we show that hyperbolic groups satisfy this conjecture, that is, each hyperbolic group embeds in some finitely presented simple group. This shows that the conjecture holds in the "generic" case for finitely presented groups. Our key tool is a new family of groups, which we call _rational similarity groups (RSGs)_, that is interesting in its own right. We prove that every hyperbolic group embeds in a full, contracting RSG, and every full, contracting RSG embeds in a finitely presented simple group, thus establishing the result. Another consequence of our work is that all contracting self-similar groups satisfy the Boone-Higman conjecture.
Key words and phrases: Boone-Higman conjecture, simple group, word problem, finite presentability, hyperbolic group, horofunction boundary, Thompson group, Rover-Nekrashevych group, subshift of finite type, rational group, self-similar group, full group
* 2.7 Classification of full RSGs
* 3 Finite presentability of full, contracting RSGs
* 3.1 Nuclear generators
* 3.2 An infinite presentation
* 3.3 A finite presentation
* 4 Embedding hyperbolic groups into RSGs
* 4.1 The horofunction boundary and the tree of atoms
* 4.2 Types of atoms
* 4.3 Atoms in \(G*\mathbb{Z}\)
* 4.4 The contracting lemma
* 4.5 Proof that \(G\) has finite nucleus
* 5 Boone-Higman embeddings
## 1. Introduction
The Boone-Higman conjecture, posed by William Boone and Graham Higman in 1973 [1, 2], predicts the following group theoretic condition equivalent to a finitely generated group having solvable word problem:
**Boone-Higman conjecture**.: _A finitely generated group has solvable word problem if and only if it embeds as a subgroup of some finitely presented simple group._
Recall that a group has **solvable word problem** if there exists an algorithm to determine whether a given word in the generating set represents the identity element of the group. Historically, the word problem for groups was one of the foundational issues in the development of combinatorial and geometric group theory, being first studied by Max Dehn in the early 20th century, along with the conjugacy and isomorphism problems; see [1]. A primary motivation for the Boone-Higman conjecture is that it would establish a very straightforward, purely group theoretic, equivalent condition for finitely generated groups to have solvable word problem, namely, being a subgroup of a finitely presented simple group. We will often refer to an injective homomorphism from a group to a finitely presented simple group as a **Boone-Higman embedding**.
The "if" direction of the Boone-Higman conjecture is straightforward. First, any finitely generated subgroup of a finitely generated group with solvable word problem itself has solvable word problem. Moreover, finitely presented simple groups are easily seen to have solvable word
problem, as was first observed by Alexander Kuznetsov in 1958 [11]. The "only if" direction, however, has been open for fifty years. Boone and Higman did prove a weaker version of this direction, namely, every finitely presented group with solvable word problem embeds in a simple subgroup of a finitely presented group [1]. This was improved by Thompson to include that the simple subgroup can be taken to be finitely generated [12]. Also, it suffices to prove the conjecture in the finitely presented case, since every finitely generated group with solvable word problem embeds in a finitely presented group with solvable word problem [10]. The Boone-Higman conjecture is a relative of the famous Higman embedding theorem, which states that a finitely generated group is computably presented if and only if it embeds in a finitely presented group [13]. Both the Boone-Higman conjecture and the Higman embedding theorem are thus of the form "a finitely generated group has a certain algorithmic property if and only if it embeds in a certain kind of group." The survey [1] contains more history and background around the Boone-Higman conjecture.
Perhaps the most robust partial result toward the Boone-Higman conjecture, historically speaking, is that the groups \(\operatorname{GL}_{n}(\mathbb{Z})\) (which have solvable word problem) satisfy the conjecture - this follows from work of Elizabeth Scott [14]. Her proof involves (what would in modern terminology be called) finding a faithful self-similar action of \(\operatorname{GL}_{n}(\mathbb{Z})\) on an appropriate tree and then taking what is now known as the Rover-Nekrashevych group. As a consequence, all groups embeddable into some \(\operatorname{GL}_{n}(\mathbb{Z})\) also satisfy the conjecture, such as right-angled Artin groups, Coxeter groups, and virtually nilpotent groups. See [1] for a more comprehensive list of prominent groups that are known to satisfy the Boone-Higman conjecture. To be clear, whenever we say that a finitely generated group "satisfies the Boone-Higman conjecture," we are implicitly asserting that the group is known to have solvable word problem, and that moreover it has been proven to embed in a finitely presented simple group.
### Statement of results and background
The main result of this paper is the following, which establishes that hyperbolic groups satisfy the Boone-Higman conjecture.
**Theorem A** (Boone-Higman for hyperbolic groups).: _Every hyperbolic group embeds as a subgroup of a finitely presented simple group._
Recall that a geodesic metric space is **hyperbolic** if there exists \(\delta>0\) such that for any triangle of geodesic paths, each side lies in the \(\delta\)-neighborhood of the other two. A finitely generated group \(G\)
is called **hyperbolic** if some (equivalently any) Cayley graph of \(G\) with respect to a finite generating set is hyperbolic under the path metric. Hyperbolic groups are among the most prominent objects of study in geometric group theory. They were first introduced by Gromov in [10], although in some sense they were already present in Dehn's early 20th century work, since it turns out a group is hyperbolic if and only if it admits a so-called Dehn presentation [1, Theorem III.\(\Gamma.2.6\)]. Hyperbolic groups are always finitely presented, and indeed have type \(\mathrm{F}_{\infty}\); see Proposition III.\(\Gamma.2.2\) and Corollary III.\(\Gamma.2.26\) of [1]. Recall that a group has type \(\mathrm{F}_{n}\) if it has a classifying space with finite \(n\)-skeleton, and type \(\mathrm{F}_{\infty}\) means type \(\mathrm{F}_{n}\) for all \(n\). Since coming to prominence thanks to Gromov's work, hyperbolic groups have inspired an incredible amount of research. Prominent examples of hyperbolic groups include free groups, free products of finite groups, groups acting properly discontinuously cocompactly by isometries on a hyperbolic metric space (e.g. fundamental groups of compact hyperbolic manifolds), many Coxeter groups, and groups satisfying certain small cancellation conditions. The class of hyperbolic groups is also closed under taking free products. Prominent examples of groups that are not hyperbolic include \(\mathbb{Z}^{2}\), Baumslag-Solitar groups \(BS(m,n)\), and groups containing any of these as subgroups. See, e.g. [1, Chapter III] for an overview of hyperbolic groups.
Hyperbolic groups are characterized by having word problem solvable in linear time (see, e.g. [1, Section III.\(\Gamma\)]), and so are a natural test case for the Boone-Higman conjecture. Moreover, they are ubiquitous among finitely presented groups; indeed, a "random" finitely presented group is hyperbolic with probability approaching \(1\)[1, 12, 13]. Our result therefore establishes that the Boone-Higman conjecture holds in the "generic" case for finitely presented groups. While several examples of hyperbolic groups were previously known to embed into finitely presented simple groups, such as all virtually special hyperbolic groups, to the best of our knowledge this result is new outside these examples. In particular it is new for any non-linear hyperbolic groups. Non-linear hyperbolic groups exist, for example the first construction was given by Michael Kapovich in [14, Section 8], and see [15] for some more recent examples.
The finitely presented groups that we use to find Boone-Higman embeddings of hyperbolic groups come from the extended family of "Thompson-like" groups. This is a loosely defined class of groups that generalize the classical Thompson groups \(F\), \(T\), and \(V\), introduced by Richard Thompson; see [15] for background on Thompson's groups.
Groups that are considered Thompson-like often have a combination of good finiteness properties like finite presentability together with good simplicity properties like having simple commutator subgroup, and so are good candidates for targets of Boone-Higman embeddings. In this paper, the finitely presented simple Thompson-like groups that we use are so called twisted Brin-Thompson groups, introduced by the first and fourth authors in [1] following the construction of the Brin-Thompson groups by Matthew Brin in [1]. Thanks to a criterion due to the fourth author in [13] (see Theorem 5.3), we do not need to work directly with twisted Brin-Thompson groups here, but rather embed hyperbolic groups into a new family of finitely presented groups that (may or may not be simple but) admit certain actions, ensuring their embeddability into finitely presented simple twisted Brin-Thompson groups.
This new family of groups is the main focus of this paper. We call them "rational similarity groups", or "RSGs" for short. They arise as natural subgroups of the rational group of a subshift of finite type, and many existing examples of interesting groups turn out to be RSGs. Besides consisting of elements that act rationally on the subshift, the key property of an RSG is that it can realize every "canonical similarity" between compatible cones in the subshift. See Definition 2.32 for the formal definition of an RSG. An important family of examples of RSGs is the Rover-Nekrashevych groups of self-similar groups, as in Example 2.35, and to some extent RSGs can be viewed as an "asynchronous" analog of Rover-Nekrashevych groups. However, we should emphasize that the Rover-Nekrashevych groups are only a very special case of RSGs, as will be made clear at various points.
### Outline of the proof
Let us give a few more details of how we prove Theorem A. One of the most difficult steps is the following finite presentability result, which is Theorem 3.1.
**Theorem B**.: _Every full, contracting RSG is finitely presented._
See Definition 2.24 and Definition 2.41 for the definitions of full and contracting. To prove Theorem B, we write down an explicit infinite presentation, and use a family of words that we call "normalish forms" (Definition 3.5) to prove that it presents the group. Then we reduce the presentation to an explicit finite one. Crucial to all of this is that a certain group \(V_{\Gamma,E}\) is finitely presented (Corollary 2.21); this is already interesting in its own right, and generalizes work of Matui from [16].
Combining Theorem B with the aforementioned Theorem 5.3, we conclude:
**Theorem C**.: _Every contracting RSG embeds into a finitely presented simple group, and hence satisfies the Boone-Higman conjecture._
One consequence of this is the following, which is Corollary 5.2.
**Corollary D**.: _Every contracting self-similar group embeds into a finitely presented simple group, and hence satisfies the Boone-Higman conjecture._
Finally, in order to prove Theorem A, it remains to prove the following, which is Theorem 4.1.
**Theorem E**.: _Every hyperbolic group embeds in a full, contracting RSG._
We prove Theorem E by using a group denoted \([[\,G\mid\partial_{h}G\,]]\), for \(G\) a given hyperbolic group. This group arises from the action of \(G\) on its horofunction boundary \(\partial_{h}G\), as defined by Gromov. The horofunction boundary of a hyperbolic group is a totally disconnected, compact metrizable space that maps onto the more familiar Gromov boundary \(\partial G\) by a finite-to-one map [20, 19]. The group \([[\,G\mid\partial_{h}G\,]]\) consists of all homeomorphisms of \(\partial_{h}G\) that locally agree with elements of the hyperbolic group. That is, a homeomorphism \(f\colon\partial_{h}G\to\partial_{h}G\) lies in \([[\,G\mid\partial_{h}G\,]]\) if and only if there exists a partition of \(\partial_{h}G\) into finitely many clopen sets such that \(f\) agrees with some element of \(G\) on each set of the partition.
These groups \([[\,G\mid\partial_{h}G\,]]\) are "Thompson-like" in the sense that their action shares many properties with the usual action of Thompson's group \(V\) or a Rover-Nekrashevych group on the Cantor set. Indeed, in the framework established by Matui and Matsumoto [13, 14, 15, 16], the groups \([[\,G\mid\partial_{h}G\,]]\) can be viewed as topological full groups of certain étale groupoids. For most hyperbolic groups \(G\), this groupoid is purely infinite and minimal. Then \([[\,G\mid\partial_{h}G\,]]\) is the topological full group associated to the étale groupoid of all germs of elements of \(G\) acting on \(\partial_{h}G\). Note that the construction of \([[\,G\mid X\,]]\) makes sense whenever \(G\) is a group acting on a topological space \(X\), though the resulting group is not always Thompson-like. For example, if the action of \(G\) on \(X\) preserves some measure, then the action of \([[\,G\mid X\,]]\) on \(X\) will too.
In [1], the first three authors described an assignment of addresses to \(\partial_{h}G\) with respect to which the action of \(G\), for most hyperbolic groups \(G\), is faithful and asynchronous rational, as defined by Grigorchuk, Nekrashevych, and Sushchanskii [11]. Specifically, \(\partial_{h}G\)
can be viewed as a clopen subset of a subshift of finite type, and \(G\) acts by finite-state transducers. It follows that the action of \([[\,G\mid\partial_{h}G\,]]\) on \(\partial_{h}G\) is faithful and rational. The group \([[\,G\mid\partial_{h}G\,]]\) also contains the topological full group \(V_{\Gamma,E}\) for the associated clopen subset \(E\) of a subshift \(\Sigma_{\Gamma}\), which ensures that it is an RSG. Finally, we prove that it is contracting; the proof of this involves some intricate geometric arguments, and is the part of the overall argument that utilizes hyperbolicity most essentially; see Lemma 4.19 for the culmination of these technical arguments.
All of these steps work for hyperbolic groups \(G\) outside some low-complexity cases; in particular they work whenever \(G\) has a proper \(\mathbb{Z}\) free factor, and so every hyperbolic group embeds into a hyperbolic group \(G\) for which \([[\,G\mid\partial_{h}G\,]]\) is a full, contracting RSG, thus establishing Theorem E.
### Open questions
We end the introduction with some questions that naturally arise.
**Question 1.1**.: Is every non-elementary hyperbolic group isomorphic to a contracting RSG?
We show in Section 4 that every hyperbolic group \(G\) that acts faithfully on its Gromov boundary is isomorphic to an RSG. Moreover, if \(G\) has \(\mathbb{Z}\) as a proper free factor then the RSG is contracting. The main impediment to obtaining the contracting property outside this case is that we do not know whether \(\Sigma_{\Gamma}\) always has an irreducible core.
**Question 1.2**.: Do full, contracting RSGs have type \(\mathrm{F}_{\infty}\)? Even for Rover-Nekrashevych groups of contracting self-similar groups, this remains open, and was conjectured to be true by Nekrashevych; see [20].
A first step toward answering Question 1.2 would be finding a topological proof of finite presentability of full, contracting RSGs. An attempt that the authors made, following a variant of the "standard" approach to deducing type \(\mathrm{F}_{\infty}\) for Thompson-like groups (see, e.g. [1, 10]), was unsuccessful, so it seems that some new ideas are needed.
One could also ask whether finite presentability of the full group of a hyperbolic group \(G\) acting on its horofunction boundary could be deduced in some manner making direct use of the finite presentability of \(G\), rather than the contracting property. More generally, one can ask:
**Question 1.3**.: Does every finitely presented subgroup of the rational group \(\mathcal{R}_{\Gamma,E}\) embed in a finitely presented subgroup whose action on
some orbit in \(E\) is oligomorphic and has finitely generated stabilizers of finite subsets? (And hence embed in a finitely presented (simple) twisted Brin-Thompson group?)
Scott in [13, Theorem 2] showed that (what is now called the) Rover-Nekrashevych group \(V_{d}(G)\) of a finitely presented self-similar group \(G\) is finitely presented, and Skipper, Witzel, and the fourth author extended this result to higher finiteness properties in [14, Theorem 4.15]. We can ask the following analogous question about RSGs.
**Question 1.4**.: If \(G\) is a finitely presented RSG, then must the full closure of \(G\) also be finitely presented?
The following more ambitious question is a concrete approach to trying to prove the full Boone-Higman conjecture:
**Question 1.5**.: Does every finitely presented group with solvable word problem embed in a finitely presented oligomorphic group with finitely generated stabilizers of finite subsets?
If the answer to this question is "Yes", then we would have that every finitely presented group with solvable word problem embeds in a finitely presented (simple) twisted Brin-Thompson group, thus proving the Boone-Higman conjecture. In a similar vein, one can wonder whether twisted Brin-Thompson groups might always serve as the target of a Boone-Higman embedding:
**Question 1.6**.: Does every finitely presented simple group embed in a finitely presented (simple) twisted Brin-Thompson group?
Finally, one could ask whether the techniques developed here extend to prove the Boone-Higman conjecture for other classes of groups related to hyperbolic groups, such as CAT(0) groups, automatic groups, relatively hyperbolic groups with nice peripheral subgroups, and so forth. A starting point would have to be an interesting action on a Cantor space (e.g. the horofunction boundary in our case), which is not an issue in and of itself. However, for our techniques to work it turns out that the group of germs at any point must be virtually cyclic (see Proposition 5.5), and this is an impediment for several potential approaches. Let us also mention another obvious family of groups related to hyperbolic groups, namely acylindrically hyperbolic groups. There exist acylindrically hyperbolic groups with unsolvable word problem [16, Theorem 7.7] (in fact, one can just take \(G*\mathbb{Z}\) for any \(G\) with unsolvable word problem), so they do not all embed into finitely presented simple groups, and it is not clear whether assuming acylindrical hyperbolicity should help with finding Boone-Higman embeddings.
This paper is organized as follows. We begin by defining rational similarity groups (RSGs) in Section 2. In Section 3 we prove that full, contracting RSGs are finitely presented. In Section 4 we show that every hyperbolic group embeds into a full, contracting RSG. Finally, in Section 5 we prove that full, contracting RSGs embed into finitely presented simple groups, and hence so do all hyperbolic groups.
### Acknowledgments
Thanks are due to Martin Bridson, Matt Brin, Francesco Fournier-Facio, Anthony Genevois, James Hyde, Ilya Kapovich, Davide Perego, Shayo Olukoya, Rachel Skipper, Owen Tanner, Slobodan Tanushevski, Matteo Tarocchi, and Xiaolei Wu for a variety of helpful discussions and pointers to references. We would particularly like to thank James Hyde for getting all of us interested in the Boone-Higman conjecture in the first place, and for contributing some of the ideas in the proof of Proposition 5.8. The first and second authors would like to thank the Università degli Studi di Milano-Bicocca (FA project 2020-ATE-0006 "Strutture Algebriche") for travel support, as well as Anna Maria Savi and Patrizio Matucci for graciously hosting us in Florence. The third author is a member of the Gruppo Nazionale per le Strutture Algebriche, Geometriche e le loro Applicazioni (GNSAGA) of the Istituto Nazionale di Alta Matematica (INdAM) and gratefully acknowledges the support of the Fundação para a Ciência e a Tecnologia (CEMAT-Ciências FCT projects UIDB/04621/2020 and UIDP/04621/2020) and of the Università degli Studi di Milano-Bicocca (FA project 2020-ATE-0006 "Strutture Algebriche"). The fourth author is supported by grant #635763 from the Simons Foundation.
## 2. Rational similarity groups (RSGs)
In this section we construct a new family of groups called rational similarity groups (RSGs), see Definition 2.32. These are certain groups of homeomorphisms, with two key defining properties: they consist of "rational" homeomorphisms of a subshift of finite type \(\Sigma_{\Gamma}\), and they can achieve any "canonical similarity" between compatible cones in \(\Sigma_{\Gamma}\). Rational similarity groups seem to be as fundamental to the study of asynchronous rational groups as self-similar groups are to the study of synchronous rational groups, in that they isolate a unifying property common to most examples of interest in the literature. This will become clear in the course of constructing the groups and detailing many examples.
In Subsection 2.1 we establish notation and terminology for subshifts of finite type, and in Subsection 2.2 we develop a theory of rational homeomorphisms of such subshifts. In Subsections 2.3 and 2.4 we recall the Thompson group \(V_{\Gamma,E}\) associated to a subshift and summarize its known properties. We introduce the class of RSGs in Subsection 2.5 and discuss several existing examples in the literature. Finally, we introduce the nucleus for an RSG together with the property of an RSG being contracting in Subsection 2.6, and in Subsection 2.7 we use nuclei to prove a classification theorem for full RSGs.
### Subshifts of finite type
In this subsection we briefly recall the definition of a subshift of finite type and establish relevant notation.
Throughout this subsection, let \(\Gamma\) be a finite directed graph. Note that we allow \(\Gamma\) to have both loops and parallel edges. If \(e\) is an edge of \(\Gamma\), we let \(o(e)\) denote the origin of \(e\), and \(t(e)\) denote the terminus of \(e\). A **directed path** in \(\Gamma\) is a (finite or infinite) sequence \(\{e_{i}\}\) of edges such that \(t(e_{i})=o(e_{i+1})\) for each \(i\). A finite directed path \(e_{1}\cdots e_{n}\) has an origin and terminus defined by
\[o(e_{1}\cdots e_{n})=o(e_{1})\qquad\text{and}\qquad t(e_{1}\cdots e_{n})=t(e_ {n}).\]
An infinite directed path \(e_{1}e_{2}\cdots\) has an origin \(o(e_{1})\) but no terminus. We will often just say "path" with "directed" being implicitly understood.
We will often think of the set of edges of \(\Gamma\) as a finite alphabet, with finite directed paths being words. In particular, an initial segment of a path \(\alpha\) will be called a **prefix** of \(\alpha\) and any final segment of a path \(\alpha\) will be called a **suffix** of \(\alpha\). For convenience, we also regard each node \(v\) in \(\Gamma\) as a path of length \(0\) with \(o(v)=t(v)=v\), where \(v\) is considered to be a prefix of every path that has origin \(v\). Finally, there is a **null path**\(\varnothing\) of length \(0\), which is a prefix of every path.
**Definition 2.1** (Subshift of finite type).: The **subshift of finite type** associated to \(\Gamma\) is the set \(\Sigma_{\Gamma}\) of all infinite directed paths \(e_{1}e_{2}\cdots\) in \(\Gamma\).
This coincides with the terminology in [10], though "subshift of finite type" often refers to a slightly more general kind of sequence space (cf. [11, Definition 2.1.1]). In that context, the subshifts \(\Sigma_{\Gamma}\) defined above are sometimes called "edge shifts" or "topological Markov shifts".
If \(\mathcal{E}\) is the set of edges of \(\Gamma\), then we can view \(\Sigma_{\Gamma}\) as a closed subspace of the infinite product space \(\mathcal{E}^{\mathbb{N}}\), where \(\mathcal{E}\) has the discrete topology. With respect to this topology, \(\Sigma_{\Gamma}\) is compact, totally disconnected, and metrizable. Indeed, there is a **standard ultrametric** on \(\Sigma_{\Gamma}\), for which the distance between two distinct infinite paths \(\omega\) and \(\omega^{\prime}\) is \(1/2^{n}\), where \(n\) is the length of the greatest common prefix of \(\omega\) and \(\omega^{\prime}\).
**Example 2.2**.: If \(\Gamma\) has a single node and \(n\geq 2\) (loop) edges, then \(\Sigma_{\Gamma}\) is the usual \(n\)-ary Cantor space \(C_{n}=\{0,\ldots,n-1\}^{\mathbb{N}}\).
If \(\alpha\) is a finite path in \(\Gamma\), the associated **cone** is the set \(\mathfrak{C}_{\alpha}\subseteq\Sigma_{\Gamma}\) of all infinite directed paths that have \(\alpha\) as a prefix. Note that if \(v\) is a node in \(\Gamma\), then \(\mathfrak{C}_{v}\) is the set of all infinite directed paths with origin \(v\). In addition, the cone \(\mathfrak{C}_{\varnothing}\) is the entire subshift \(\Sigma_{\Gamma}\). Each cone is a clopen subset of \(\Sigma_{\Gamma}\), and these form a basis for the topology. In particular, a subset of \(\Sigma_{\Gamma}\) is clopen if and only if it is a disjoint union of finitely many cones.
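To make these definitions concrete, here is a small Python sketch (our own illustration; the graph and all function names are ours) for a two-node graph: it checks which words of edges are directed paths, tests cone membership, and evaluates the standard ultrametric on finite truncations of infinite paths.

```python
# Illustrative sketch (not from the paper): a subshift of finite type given
# by a finite directed graph, with paths, cones, and the standard ultrametric.

EDGES = {                                  # edge name -> (origin, terminus)
    "a1": ("v", "v"), "a2": ("v", "v"),    # two loops at the node v
    "b1": ("w", "w"), "b2": ("w", "w"),    # two loops at the node w
    "x":  ("v", "w"), "y":  ("w", "v"),    # one edge in each direction
}

def is_path(word):
    """A word of edges is a directed path iff each terminus matches the
    origin of the next edge."""
    return all(EDGES[a][1] == EDGES[b][0] for a, b in zip(word, word[1:]))

def in_cone(alpha, omega):
    """A (truncated) infinite path omega lies in the cone C_alpha iff
    alpha is a prefix of omega."""
    return omega[:len(alpha)] == alpha

def ultrametric(w1, w2):
    """Distance 1/2^n between distinct paths, where n is the length of
    their greatest common prefix."""
    n = 0
    while n < min(len(w1), len(w2)) and w1[n] == w2[n]:
        n += 1
    return 2.0 ** (-n)

assert is_path(["a1", "x", "b2", "y"]) and not is_path(["x", "a1"])
assert in_cone(["a1", "x"], ["a1", "x", "b1", "b2"])
assert ultrametric(["a1", "x", "b1"], ["a1", "x", "b2"]) == 2 ** -2
```

The same two-node graph will reappear in Example 2.29 below.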
**Remark 2.3**.: If \(\Sigma_{\Gamma}\) is a subshift of finite type, then the set of non-null finite paths in \(\Gamma\) has the structure of a forest of finitely many rooted trees, with one tree for each node of \(\Gamma\). The subshift \(\Sigma_{\Gamma}\) is precisely the Gromov boundary of this forest, i.e. the space of leaves at infinity.
**Remark 2.4**.: We will largely be concerned with subshifts \(\Sigma_{\Gamma}\) with the following two properties:
1. The subshift \(\Sigma_{\Gamma}\) has no isolated points; and
2. There are no empty cones, i.e. \(\mathfrak{C}_{\alpha}\neq\emptyset\) for every finite directed path \(\alpha\).
If the first property holds, then the topological space \(\Sigma_{\Gamma}\) is a Cantor space. Together, these properties are equivalent to saying that for every node \(v\) of \(\Gamma\) there is a node \(w\) of \(\Gamma\) such that there is a directed path from \(v\) to \(w\) and \(w\) has at least two outgoing edges.
If \(\alpha\) and \(\beta\) are directed paths with \(t(\alpha)=o(\beta)\), let \(\alpha\cdot\beta\) denote the concatenation of \(\alpha\) and \(\beta\), where \(v\cdot\alpha=\alpha\) for any directed path \(\alpha\) with origin \(v\) and \(\varnothing\cdot\alpha=\alpha\) for every directed path \(\alpha\). If \(\alpha\) is any finite path, observe that
\[\mathfrak{C}_{\alpha}=\{\alpha\cdot\omega\mid\omega\in\mathfrak{C}_{t(\alpha) }\}.\]
This holds for \(\alpha=\varnothing\) as well if we adopt the convention that \(t(\varnothing)=\varnothing\). The **canonical similarity**
\[L_{\alpha}\colon\mathfrak{C}_{t(\alpha)}\to\mathfrak{C}_{\alpha}\]
is the homeomorphism defined by \(L_{\alpha}(\omega)=\alpha\cdot\omega\). More generally, if \(\alpha\) and \(\beta\) are finite paths with \(t(\alpha)=t(\beta)\), the homeomorphism \(L_{\beta}\circ L_{\alpha}^{-1}\) is the **canonical similarity**\(\mathfrak{C}_{\alpha}\to\mathfrak{C}_{\beta}\), sending \(\alpha\cdot\omega\) to \(\beta\cdot\omega\) for all \(\omega\in\mathfrak{C}_{\alpha}\).
### Rational homeomorphisms
In this subsection we define rational homeomorphisms on subshifts of finite type, and we briefly develop the corresponding theory. Rational homeomorphisms on full shifts were defined by Grigorchuk, Nekrashevych, and Sushchanskii in [10].
Throughout this subsection, let \(\Sigma_{\Gamma}\) be a subshift of finite type without isolated points or empty cones. A **nondegenerate map** on \(\Sigma_{\Gamma}\) is a map \(f\colon E\to\Sigma_{\Gamma}\), where \(E\) is any nonempty clopen subset of \(\Sigma_{\Gamma}\), with the property that no cone in \(E\) maps to a single point. For example, since \(\Sigma_{\Gamma}\) has no isolated points and hence no one-point cones, any injective map on \(\Sigma_{\Gamma}\) is nondegenerate.
If \(E\) is a nonempty clopen subset of \(\Sigma_{\Gamma}\), let \(\operatorname{Cones}(E)\) denote the set of all finite paths \(\alpha\) for which \(\mathfrak{C}_{\alpha}\subseteq E\). For a nondegenerate map \(f\colon E\to\Sigma_{\Gamma}\), let
\[\overline{f}\colon\operatorname{Cones}(E)\to\operatorname{Cones}(\Sigma_{ \Gamma})\]
be the function that sends each \(\alpha\in\operatorname{Cones}(E)\) to \(\beta\), where \(\mathfrak{C}_{\beta}\) is the smallest cone in \(\Sigma_{\Gamma}\) that contains \(f(\mathfrak{C}_{\alpha})\). Equivalently, \(\overline{f}(\alpha)\) is the greatest common prefix of all points in \(f(\mathfrak{C}_{\alpha})\). Note that this is well-defined since \(f(\mathfrak{C}_{\alpha})\) has at least two points.
**Definition 2.5** (Local Action).: Let \(f\colon E\to\Sigma_{\Gamma}\) be a nondegenerate map on \(\Sigma_{\Gamma}\). If \(\alpha\in\operatorname{Cones}(E)\), the **local action** of \(f\) at \(\alpha\) is the map \(f|_{\alpha}\colon\mathfrak{C}_{t(\alpha)}\to\mathfrak{C}_{t(\overline{f}( \alpha))}\) defined by
\[f(\alpha\cdot\omega)=\overline{f}(\alpha)\cdot f|_{\alpha}(\omega)\]
for all \(\omega\in\mathfrak{C}_{t(\alpha)}\).
That is, \(f|_{\alpha}\) is the unique map satisfying \(f\circ L_{\alpha}=L_{\overline{f}(\alpha)}\circ f|_{\alpha}\), where \(L_{\alpha}\) and \(L_{\overline{f}(\alpha)}\) are canonical similarities. Note then that two local actions \(f|_{\alpha}\) and \(g|_{\beta}\) are equal if and only if there are canonical similarities \(\phi\colon\mathfrak{C}_{\alpha}\to\mathfrak{C}_{\beta}\) and \(\psi\colon\mathfrak{C}_{\overline{f}(\alpha)}\to\mathfrak{C}_{\overline{g}(\beta)}\) such that \(g\circ\phi=\psi\circ f\) on \(\mathfrak{C}_{\alpha}\). Moreover, note that if \(\alpha\) and \(\beta\) are two finite directed paths with \(f|_{\alpha}=g|_{\beta}\), then necessarily \(t(\alpha)=t(\beta)\). This is because the functions \(f|_{\alpha}\) and \(g|_{\beta}\) have the same domain, which consists of infinite directed paths starting at the same node of \(\Gamma\).
**Definition 2.6** (Rational map).: A nondegenerate map \(f\colon E\to\Sigma_{\Gamma}\) is **rational** if it only has finitely many distinct local actions.
**Remark 2.7**.: The local actions \(f|_{\alpha}\) correspond to what Grigorchuk, Nekrashevych, and Sushchanskii refer to as "restrictions" in [10]. In the context of self-similar groups, they are also often referred to as "states", since they correspond to the states of a transducer, and rational maps are said to be "finite-state".
Though we do not use transducers here, there is no particular obstacle to writing a rational map on a subshift of finite type as a transducer. The input and output alphabet would be the set of edges of \(\Gamma\), and the transducer would have the property that it can only input or output valid words in the subshift.
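As a concrete illustration of local actions and rationality (our own example, not taken from the paper), consider the binary adding machine \(a\) on the full \(2\)-shift, defined by \(a(0\cdot\omega)=1\cdot\omega\) and \(a(1\cdot\omega)=0\cdot a(\omega)\). The sketch below tabulates local actions on finite truncations and finds exactly two distinct ones, namely \(a\) itself (at the cones \(\mathfrak{C}_{1^{n}}\)) and the identity (at every cone whose address contains a \(0\)), so \(a\) is rational; in the language of Subsection 2.6 below, its nucleus is \(\{a,\mathrm{id}\}\).

```python
# Illustrative sketch (ours): the binary adding machine is rational because
# it has exactly two distinct local actions, itself and the identity.
from itertools import product

def odometer(w):
    """Binary adding machine on a finite word: a(0w) = 1w, a(1w) = 0 a(w)."""
    if not w:
        return []
    return [1] + list(w[1:]) if w[0] == 0 else [0] + odometer(w[1:])

def common_prefix(words):
    """Greatest common prefix of a nonempty list of words."""
    p = list(words[0])
    for w in words[1:]:
        n = 0
        while n < min(len(p), len(w)) and p[n] == w[n]:
            n += 1
        p = p[:n]
    return p

def local_action(alpha, depth=6):
    """Tabulate a|_alpha on all words omega of the given length: compute
    a(alpha.omega) and strip the common prefix, which plays the role of
    bar-a(alpha)."""
    omegas = list(product([0, 1], repeat=depth))
    images = [odometer(list(alpha) + list(om)) for om in omegas]
    k = len(common_prefix(images))
    return frozenset((om, tuple(im[k:])) for om, im in zip(omegas, images))

actions = {local_action(a) for n in range(1, 9)
           for a in product([0, 1], repeat=n)}
print(len(actions))   # prints 2: the identity and the odometer itself
```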
Let us collect some lemmas about local actions, which among other things will establish that the property of being rational is closed under compositions and inverses.
**Lemma 2.8**.: _Let \(f\colon E\to\Sigma_{\Gamma}\) be a nondegenerate map. Then for each \(\beta\in\operatorname{Cones}(\Sigma_{\Gamma})\), the set \(\overline{f}^{-1}(\beta)\) is finite. If \(f\) is rational and injective with exactly \(k\) distinct local actions, then more precisely we have \(\big{|}\overline{f}^{-1}(\beta)\big{|}\leq k\)._
Proof.: Let \(n\) be the length of \(\beta\). Under the standard ultrametric, the cone \(\mathfrak{C}_{\beta}\) has diameter \(2^{-n}\), and any subset of \(\mathfrak{C}_{\beta}\) of diameter \(2^{-(n+1)}\) or less is contained in a proper subcone of \(\mathfrak{C}_{\beta}\). Since \(f\) is uniformly continuous by virtue of \(E\) being compact, there exists \(\delta>0\) so that
\[d(p,q)<\delta\quad\Rightarrow\quad d\big{(}f(p),f(q)\big{)}<2^{-(n+1)}\]
for all \(p,q\in E\). It follows that any \(\alpha\in\overline{f}^{-1}(\beta)\) must have \(\operatorname{diam}(\mathfrak{C}_{\alpha})\geq\delta\), and there are only finitely many such cones.
Now suppose that \(f\) is rational and injective with exactly \(k\) distinct local actions, and suppose to the contrary that \(\big{|}\overline{f}^{-1}(\beta)\big{|}>k\). By the pigeonhole principle, there must exist distinct \(\alpha,\alpha^{\prime}\in\overline{f}^{-1}(\beta)\) so that \(f|_{\alpha}=f|_{\alpha^{\prime}}\). Since \(\Sigma_{\Gamma}\) has no isolated points, if \(v\) is a node of \(\Gamma\) such that \(\Gamma\) admits an infinite path starting from \(v\), then \(\Gamma\) must admit at least two (and in fact, infinitely many) distinct infinite paths starting at \(v\). Since \(t(\alpha)=t(\alpha^{\prime})\) is a node of \(\Gamma\) admitting an infinite path \(\omega\) based at \(t(\alpha)\), it must admit another distinct infinite path \(\omega^{\prime}\) based at \(t(\alpha)\), and it follows that either \(\alpha\cdot\omega\neq\alpha^{\prime}\cdot\omega\) or \(\alpha\cdot\omega^{\prime}\neq\alpha^{\prime}\cdot\omega^{\prime}\). Assume without meaningful loss of generality that \(\alpha\cdot\omega\neq\alpha^{\prime}\cdot\omega\). But now we
have
\[f(\alpha\cdot\omega)=\beta\cdot f|_{\alpha}(\omega)=\beta\cdot f|_{\alpha^{\prime}} (\omega)=f(\alpha^{\prime}\cdot\omega)\]
and so \(f\) is not injective, a contradiction.
**Lemma 2.9**.: _Let \(f\colon E\to\Sigma_{\Gamma}\) be a nondegenerate map. Then for finite paths \(\alpha,\beta\) in \(\Gamma\) for which \(\alpha\cdot\beta\) is defined, we have \((f|_{\alpha})|_{\beta}=f|_{\alpha\cdot\beta}\)._
Proof.: Note that, since \(t(\beta)=t(\alpha\cdot\beta)\), the maps \((f|_{\alpha})|_{\beta}\) and \(f|_{\alpha\cdot\beta}\) have the same domain \(\mathfrak{C}_{t(\beta)}\). Furthermore, if \(\omega\in\mathfrak{C}_{t(\beta)}\), then
\[\overline{f}(\alpha\cdot\beta)\cdot f|_{\alpha\cdot\beta}(\omega )=f(\alpha\cdot\beta\cdot\omega)\\ =\overline{f}(\alpha)\cdot f|_{\alpha}(\beta\cdot\omega)= \overline{f}(\alpha)\cdot\overline{f|_{\alpha}}(\beta)\cdot(f|_{\alpha})|_{ \beta}(\omega),\]
so it suffices to prove that \(\overline{f}(\alpha\cdot\beta)=\overline{f}(\alpha)\cdot\overline{f|_{\alpha} }(\beta)\).
Recall that \(\overline{f|_{\alpha}}(\beta)\) is the greatest common prefix of \(f|_{\alpha}(\mathfrak{C}_{\beta})\), or equivalently the greatest common prefix of all words \(f|_{\alpha}(\beta\cdot\omega)\) for \(\omega\in\mathfrak{C}_{t(\beta)}\). Then \(\overline{f}(\alpha)\cdot\overline{f|_{\alpha}}(\beta)\) is the greatest common prefix of all words \(\overline{f}(\alpha)\cdot f|_{\alpha}(\beta\cdot\omega)\) for \(\omega\in\mathfrak{C}_{t(\beta)}\). But this is precisely the set of words \(f(\alpha\cdot\beta\cdot\omega)\) for \(\omega\in\mathfrak{C}_{t(\beta)}\), and the greatest common prefix of this set is \(\overline{f}(\alpha\cdot\beta)\). We conclude that \(\overline{f}(\alpha\cdot\beta)=\overline{f}(\alpha)\cdot\overline{f|_{\alpha} }(\beta)\), and hence \((f|_{\alpha})|_{\beta}=f|_{\alpha\cdot\beta}\).
**Lemma 2.10**.: _Let \(E,E^{\prime}\subseteq\Sigma_{\Gamma}\) be clopen, let \(f\colon E^{\prime}\to\Sigma_{\Gamma}\) and \(g\colon E\to E^{\prime}\) be nondegenerate maps, and suppose \(f\circ g\) is nondegenerate. Then for any \(\alpha\in\operatorname{Cones}(E)\),_
\[(f\circ g)|_{\alpha}=(f|_{\overline{g}(\alpha)}\circ g|_{\alpha})|_{t(\alpha)}.\]
Proof.: Let \(h=f|_{\overline{g}(\alpha)}\circ g|_{\alpha}\). We must prove that \((f\circ g)|_{\alpha}=h|_{t(\alpha)}\). Since \(t(t(\alpha))=t(\alpha)\), both of these maps have the same domain \(\mathfrak{C}_{t(\alpha)}\). Furthermore, if \(\omega\in\mathfrak{C}_{t(\alpha)}\), then
\[\overline{f\circ g}(\alpha)\cdot(f\circ g)|_{\alpha}(\omega)=(f \circ g)(\alpha\cdot\omega)=f\big{(}\overline{g}(\alpha)\cdot g|_{\alpha}( \omega)\big{)}\\ =(\overline{f}\circ\overline{g})(\alpha)\cdot h(\omega)=( \overline{f}\circ\overline{g})(\alpha)\cdot h(t(\alpha)\cdot\omega)\\ =(\overline{f}\circ\overline{g})(\alpha)\cdot\overline{h}(t(\alpha ))\cdot h|_{t(\alpha)}(\omega)\]
so it suffices to prove that \(\overline{f\circ g}(\alpha)=(\overline{f}\circ\overline{g})(\alpha)\cdot \overline{h}(t(\alpha))\).
Recall that \(\overline{h}(t(\alpha))\) is the greatest common prefix of the words \(h(\omega)\) for \(\omega\in\mathfrak{C}_{t(\alpha)}\). Then \((\overline{f}\circ\overline{g})(\alpha)\cdot\overline{h}(t(\alpha))\) is the greatest common prefix of the words \((\overline{f}\circ\overline{g})(\alpha)\cdot h(\omega)\) for \(\omega\in\mathfrak{C}_{t(\alpha)}\). By the calculation above, this is precisely the set of words \((f\circ g)(\alpha\cdot\omega)\) for \(\omega\in\mathfrak{C}_{t(\alpha)}\), and the greatest common prefix of this set is \(\overline{f\circ g}(\alpha)\). We conclude that \(\overline{f\circ g}(\alpha)=(\overline{f}\circ\overline{g})(\alpha)\cdot \overline{h}(t(\alpha))\), and therefore \((f\circ g)|_{\alpha}=h|_{t(\alpha)}\).
**Lemma 2.11**.: _Let \(E,E^{\prime}\subseteq\Sigma_{\Gamma}\) be clopen. If \(f\colon E\to E^{\prime}\) is a rational homeomorphism, then \(f^{-1}\) is rational._
Proof.: For simplicity, we suppose instead that \(f^{-1}\) is rational, and we prove that \(f\) is rational. Put an equivalence relation on \(\operatorname{Cones}(E)\) by \(\alpha\sim\beta\) if \(f^{-1}|_{\overline{f}(\alpha)}=f^{-1}|_{\overline{f}(\beta)}\). Since \(f^{-1}\) is rational, there are only finitely many equivalence classes. Let \(\mathcal{C}\) be such an equivalence class. It suffices to prove that \(f\) has only finitely many distinct local actions at elements of \(\mathcal{C}\).
Fix a \(\gamma\in\mathcal{C}\). By the first part of Lemma 2.8, there are only finitely many \(\beta\in\mathcal{C}\) so that \(\overline{f}(\beta)=\overline{f}(\gamma)\). Therefore, it suffices to prove that for every \(\alpha\in\mathcal{C}\) there exists \(\beta\in\mathcal{C}\) such that \(f|_{\alpha}=f|_{\beta}\) and \(\overline{f}(\beta)=\overline{f}(\gamma)\).
Let \(\alpha\in\mathcal{C}\). Since \(f^{-1}|_{\overline{f}(\alpha)}=f^{-1}|_{\overline{f}(\gamma)}\), there are canonical similarities \(\psi\colon\mathfrak{C}_{\overline{f}(\alpha)}\to\mathfrak{C}_{\overline{f}(\gamma)}\) and \(\phi\colon\mathfrak{C}_{\alpha^{\prime}}\to\mathfrak{C}_{\gamma^{\prime}}\) satisfying \(\phi\circ f^{-1}=f^{-1}\circ\psi\) on \(\mathfrak{C}_{\overline{f}(\alpha)}\), where \(\mathfrak{C}_{\alpha^{\prime}}\) and \(\mathfrak{C}_{\gamma^{\prime}}\) are the smallest cones that contain \(f^{-1}(\mathfrak{C}_{\overline{f}(\alpha)})\) and \(f^{-1}(\mathfrak{C}_{\overline{f}(\gamma)})\), respectively. Since \(f(\mathfrak{C}_{\alpha})\subseteq\mathfrak{C}_{\overline{f}(\alpha)}\), we know that \(\mathfrak{C}_{\alpha}\subseteq f^{-1}(\mathfrak{C}_{\overline{f}(\alpha)})\subseteq\mathfrak{C}_{\alpha^{\prime}}\), so \(\alpha^{\prime}\) is a prefix of \(\alpha\). Thus, \(\phi(\mathfrak{C}_{\alpha})\) must be some cone \(\mathfrak{C}_{\beta}\subseteq\mathfrak{C}_{\gamma^{\prime}}\). Since \(f(\mathfrak{C}_{\beta})=\psi(f(\mathfrak{C}_{\alpha}))\) and \(f(\mathfrak{C}_{\alpha})\) is not contained in any proper subcone of \(\mathfrak{C}_{\overline{f}(\alpha)}\), the set \(f(\mathfrak{C}_{\beta})\) is not contained in any proper subcone of \(\mathfrak{C}_{\overline{f}(\gamma)}\), and therefore \(\overline{f}(\beta)=\overline{f}(\gamma)\). Moreover, \(\phi\) restricts to a canonical similarity \(\mathfrak{C}_{\alpha}\to\mathfrak{C}_{\beta}\) and \(\psi\) is a canonical similarity \(\mathfrak{C}_{\overline{f}(\alpha)}\to\mathfrak{C}_{\overline{f}(\beta)}\) with \(f\circ\phi=\psi\circ f\) on \(\mathfrak{C}_{\alpha}\), which proves that \(f|_{\alpha}=f|_{\beta}\). This finishes the proof that \(f\) is rational.
We have proven the following.
**Proposition 2.12**.: _Let \(\,\Sigma_{\Gamma}\) be a subshift of finite type without isolated points or empty cones, and let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set. Then the set \(\mathcal{R}_{\Gamma,E}\) of rational homeomorphisms \(E\to E\) forms a group under composition._
The group \(\mathcal{R}_{\Gamma,E}\) is the **rational group** associated to \(E\). In the case where \(E\) is the whole subshift \(\Sigma_{\Gamma}\), we write \(\mathcal{R}_{\Gamma}\) for \(\mathcal{R}_{\Gamma,E}\). A **rational representation** of a group \(G\) is any homomorphism \(G\to\mathcal{R}_{\Gamma,E}\).
**Remark 2.13**.: If \(\Gamma\) consists of a single node and \(n\geq 2\) (loop) edges, then the corresponding subshift \(\Sigma_{\Gamma}\) is known as the **full shift** with an alphabet of size \(n\), and is the usual \(n\)-ary Cantor space \(\mathfrak{C}_{n}\). In this case, the rational group \(\mathcal{R}_{\Gamma}\) is the same as the rational group \(\mathcal{R}_{n}\) defined by Grigorchuk, Nekrashevych, and Sushchanskii [12].
### Thompson groups on subshifts
Matsumoto introduced the topological full group associated to a subshift of finite type \(\Sigma_{\Gamma}\) in 2015 [16]. Here we give a brief definition of Matsumoto's groups that does not use the language of etale groupoids. Specifically, we define one group \(V_{\Gamma,E}\) for each clopen set \(E\subseteq\Sigma_{\Gamma}\), which we refer to as the "Thompson group associated to \(E\)". In the case where \(E=\Sigma_{\Gamma}\), we write \(V_{\Gamma}\) instead of \(V_{\Gamma,E}\). To define the groups \(V_{\Gamma,E}\), recall that any two cones \(\mathfrak{C}_{\alpha},\mathfrak{C}_{\beta}\subseteq E\) with \(t(\alpha)=t(\beta)\) have a corresponding canonical similarity \(\alpha\cdot\omega\mapsto\beta\cdot\omega\).
**Definition 2.14** (Thompson group).: Let \(\Sigma_{\Gamma}\) be a subshift of finite type and \(E\subseteq\Sigma_{\Gamma}\) a nonempty clopen set. The **Thompson group associated to \(E\)** is the group of all homeomorphisms \(f\colon E\to E\) satisfying the following property: there exist two partitions \(\mathfrak{C}_{\alpha_{1}},\ldots,\mathfrak{C}_{\alpha_{n}}\) and \(\mathfrak{C}_{\beta_{1}},\ldots,\mathfrak{C}_{\beta_{n}}\) of \(E\) into cones, such that \(t(\alpha_{i})=t(\beta_{i})\) for each \(i\), and \(f\) maps each \(\mathfrak{C}_{\alpha_{i}}\) to \(\mathfrak{C}_{\beta_{i}}\) by the canonical similarity.
It is not difficult to prove that the set \(V_{\Gamma,E}\) of all such homeomorphisms really does form a group.
**Example 2.15** (Higman-Thompson groups).: If \(\Sigma_{\Gamma}\) is the full shift on an alphabet of size \(n\) (see Remark 2.13), then \(V_{\Gamma}\) is the well-known Higman-Thompson group \(V_{n,1}\). More generally, if \(E\) is any nonempty clopen subset of \(\Sigma_{\Gamma}\), then \(V_{\Gamma,E}\) is isomorphic to one of the Higman-Thompson groups \(V_{n,r}\).
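To make Definition 2.14 concrete, here is a small sketch (our own illustration) of a prototypical element of the Higman-Thompson group \(V_{2,1}\): the prefix exchange determined by the domain partition \(\mathfrak{C}_{0},\mathfrak{C}_{10},\mathfrak{C}_{11}\) and the range partition \(\mathfrak{C}_{11},\mathfrak{C}_{0},\mathfrak{C}_{10}\) (the terminus condition is vacuous for a one-node graph), acting on finite truncations of binary sequences.

```python
# Illustrative sketch (ours): an element of Thompson's group V, given by the
# cone partitions {C_0, C_10, C_11} -> {C_11, C_0, C_10}, with each cone
# mapped by the canonical similarity, i.e. by replacing the address prefix.
from itertools import product

RULES = [("0", "11"), ("10", "0"), ("11", "10")]

def apply_element(word):
    """Replace the unique matching domain prefix by the target prefix."""
    for src, dst in RULES:
        if word.startswith(src):
            return dst + word[len(src):]
    raise ValueError("word too short to determine its cone")

# Distinct inputs land on distinct outputs, as expected of a homeomorphism:
words = ["".join(w) for w in product("01", repeat=8)]
assert len({apply_element(w) for w in words}) == len(words)
```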
In the case where \(\Sigma_{\Gamma}\) has no isolated points or empty cones, the following proposition shows that the Thompson group \(V_{\Gamma,E}\) is a subgroup of the corresponding rational group \(\mathcal{R}_{\Gamma,E}\).
**Proposition 2.16**.: _Suppose \(\Sigma_{\Gamma}\) has no isolated points or empty cones, and let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set. Then \(V_{\Gamma,E}\) is precisely the group of all \(f\in\mathcal{R}_{\Gamma,E}\) with the property that \(f|_{\alpha}\) is the identity on \(\mathfrak{C}_{t(\alpha)}\) for all but finitely many \(\alpha\in\operatorname{Cones}(E)\)._
Proof.: Note that \(f|_{\alpha}\) is the identity on \(\mathfrak{C}_{t(\alpha)}\) if and only if \(f\) maps \(\mathfrak{C}_{\alpha}\) to some cone \(\mathfrak{C}_{\beta}\) by a canonical similarity. If \(f\in V_{\Gamma,E}\) has domain
partition \(\mathfrak{C}_{\alpha_{1}},\dots,\mathfrak{C}_{\alpha_{n}}\), then \(f\) acts as a canonical similarity on any cone that is contained in one of the \(\mathfrak{C}_{\alpha_{i}}\), and all but finitely many cones have this property. Conversely, if \(f\) acts as a canonical similarity on all but finitely many cones, then by compactness we can find a finite cover of \(E\) by cones on which \(f\) acts as a canonical similarity, and any such cover has a subcover whose cones are disjoint.
Matui proved that \(V_{\Gamma,E}\) is finitely presented for a certain class of subshifts \(\Sigma_{\Gamma}\).
**Definition 2.17** (Irreducible).: A subshift \(\Sigma_{\Gamma}\) is **irreducible** if the following conditions are satisfied:
1. \(\Gamma\) is **strongly connected**, i.e. for any two nodes \(v,w\) of \(\Gamma\) there is a directed path with origin \(v\) and terminus \(w\).
2. \(\Gamma\) is not a directed cycle.
Note that if \(\Sigma_{\Gamma}\) is irreducible, then it has no isolated points, and is therefore a Cantor space. Also, \(\Sigma_{\Gamma}\) has no empty cones, so the rational group \(\mathcal{R}_{\Gamma,E}\) is defined for any nonempty clopen \(E\subseteq\Sigma_{\Gamma}\). The following is proven in [14, Theorem 6.21].
**Theorem 2.18** (Matui).: _If \(\Sigma_{\Gamma}\) is irreducible and \(E\subseteq\Sigma_{\Gamma}\) is a nonempty clopen set, then \(V_{\Gamma,E}\) is finitely presented. Indeed, \(V_{\Gamma,E}\) has type \(\mathrm{F}_{\infty}\). _
We will need a slight generalization of Matui's theorem.
**Definition 2.19** (Irreducible core).: A subshift \(\Sigma_{\Gamma}\) has an **irreducible core** if there exists an induced subgraph \(\Gamma_{0}\) of \(\Gamma\) (the **core**) such that:
1. \(\Sigma_{\Gamma_{0}}\) is irreducible.
2. For every node \(v\) of \(\Gamma\), there is a directed path in \(\Gamma\) from \(v\) to a node of \(\Gamma_{0}\).
3. There exists \(N\geq 0\) so that every directed path in \(\Gamma\) of length \(N\) (with any origin) has terminus in \(\Gamma_{0}\).
Note that this is still enough to ensure that \(\Sigma_{\Gamma}\) has no isolated points or empty cones.
Given clopen subsets \(E,E^{\prime}\subseteq\Sigma_{\Gamma}\), a homeomorphism \(h\colon E\to E^{\prime}\) is **Thompson-like** if there exists a partition of \(E\) into cones \(\mathfrak{C}_{\alpha_{1}},\dots,\mathfrak{C}_{\alpha_{n}}\) and a partition of \(E^{\prime}\) into cones \(\mathfrak{C}_{\beta_{1}},\dots,\mathfrak{C}_{\beta_{n}}\) such that \(h\) maps each \(\mathfrak{C}_{\alpha_{i}}\) to \(\mathfrak{C}_{\beta_{i}}\) by a canonical similarity. For example, \(V_{\Gamma,E}\) is the group of all Thompson-like homeomorphisms from \(E\) to itself.
**Proposition 2.20** (Pushing into the core).: _Suppose that \(\Sigma_{\Gamma}\) has an irreducible core \(\Gamma_{0}\). Let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set. Then there exists a clopen set \(E_{0}\subseteq\Sigma_{\Gamma_{0}}\) and a Thompson-like homeomorphism \(h\colon E\to E_{0}\) such that \(hV_{\Gamma,E}h^{-1}=V_{\Gamma_{0},E_{0}}\)._
Proof.: Let \(N\geq 0\) be such that every directed path in \(\Gamma\) of length \(N\) has terminus in \(\Gamma_{0}\). Note that every finite directed path of length greater than \(N\) also has terminus in \(\Gamma_{0}\), and therefore every directed edge in \(\Gamma\) whose origin lies in \(\Gamma_{0}\) also has its terminus in \(\Gamma_{0}\). In particular \(\Sigma_{\Gamma_{0}}\) is precisely the union of the cones \(\mathfrak{C}_{v}\) in \(\Sigma_{\Gamma}\) as \(v\) ranges over all nodes in \(\Gamma_{0}\).
Let \(\alpha_{1},\ldots,\alpha_{k}\) be all the elements of \(\operatorname{Cones}(E)\) of length \(N\). Then \(\mathfrak{C}_{\alpha_{1}},\ldots,\mathfrak{C}_{\alpha_{k}}\) is a partition of \(E\) into cones, and each \(t(\alpha_{i})\) is a node in \(\Gamma_{0}\). Since \(\Gamma_{0}\) is irreducible, it is not difficult to find disjoint cones \(\mathfrak{C}_{\beta_{1}},\ldots,\mathfrak{C}_{\beta_{k}}\) in \(\Sigma_{\Gamma_{0}}\) such that \(t(\beta_{i})=t(\alpha_{i})\) for all \(i\). Let \(E_{0}=\mathfrak{C}_{\beta_{1}}\cup\cdots\cup\mathfrak{C}_{\beta_{k}}\). Then the homeomorphism \(h\colon E\to E_{0}\) that maps each \(\mathfrak{C}_{\alpha_{i}}\) to \(\mathfrak{C}_{\beta_{i}}\) by the canonical similarity has the desired properties.
Now our generalization of Matui's theorem is immediate.
**Corollary 2.21**.: _If \(\Sigma_{\Gamma}\) has an irreducible core and \(E\subseteq\Sigma_{\Gamma}\) is a nonempty clopen set, then \(V_{\Gamma,E}\) is finitely presented. Indeed, it has type \(\mathrm{F}_{\infty}\). _
**Remark 2.22**.: If \(\Gamma\) and \(\Gamma^{\prime}\) both have irreducible cores, then \(\mathcal{R}_{\Gamma,E}\cong\mathcal{R}_{\Gamma^{\prime},E^{\prime}}\) for any \(E\) and \(E^{\prime}\); in particular all such groups are isomorphic to \(\mathcal{R}_{2}\), the rational group on the usual binary Cantor space. This follows from [1, Proposition 2.22] together with the proof of [1, Theorem 2.16]. We should also mention that this group is simple, and is not finitely generated [1]. Note, however, that these isomorphisms do not map \(V_{\Gamma,E}\) to \(V_{\Gamma^{\prime},E^{\prime}}\).
**Example 2.23**.: The groups \(V_{\Gamma,E}\) are much less well-behaved when \(\Gamma\) does not have an irreducible core. For example, as observed by Matteo Tarocchi [13], if \(\Gamma\) is the graph with nodes \(u,v_{1},\ldots,v_{n}\), a loop at each node, and a directed edge from each \(v_{i}\) to \(u\), then one can check that \(V_{\Gamma}\) is isomorphic to the Houghton group \(H_{n}\), which has type \(\mathrm{F}_{n-1}\) but not type \(\mathrm{F}_{n}\) [1]. In particular, none of these examples have type \(\mathrm{F}_{\infty}\), in contrast to Corollary 2.21. It would be interesting to determine the finiteness properties of \(V_{\Gamma,E}\) for all finite directed graphs \(\Gamma\).
### Full groups, flexible groups, and classes
In this subsection we define full groups and flexible groups, and we discuss the abelian group of classes associated to any full flexible group. The "full and flexible" terminology was introduced in [1], and the group of classes is essentially due to Matui [14, Section 6.2], though our description follows [1, Section 3].
**Definition 2.24** (Full group).: For \(X\) a topological space and \(G\) a group of homeomorphisms of \(X\), say that a homeomorphism \(h\) of
\(X\) **locally agrees with \(G\)** if for all \(x\in X\) there exists an open neighborhood \(U\) of \(x\) and an element \(g\in G\) such that \(h(u)=g(u)\) for all \(u\in U\). The **full group induced by \(G\)**, denoted \([[\,G\mid X\,]]\), is the group of all homeomorphisms that locally agree with \(G\). Call \(G\)**full** if \([[\,G\mid X\,]]=G\).
For example, the Thompson group \(V_{\Gamma,E}\) is full for any \(\Sigma_{\Gamma}\) and any nonempty clopen set \(E\).
This terminology and notation is inspired by the theory of etale groupoids, where \([[\mathfrak{G}]]\) denotes the topological full group of an etale groupoid \(\mathfrak{G}\) (see [11, 12, 13, 14, 15, 16]). In particular, if \(X\) is a Cantor space then \([[\,G\mid X\,]]\) is the topological full group of the etale groupoid of all germs of elements of \(G\). Note that if \(X\) is compact, then \(h\) locally agrees with \(G\) if and only if there exists a finite covering of \(X\) by (basic, if desired) open sets \(U_{1},\ldots,U_{n}\) and elements \(g_{1},\ldots,g_{n}\in G\) such that \(h(u)=g_{i}(u)\) for all \(i\) and all \(u\in U_{i}\).
**Definition 2.25** (Flexible group).: If \(X\) is a Cantor space, a group \(G\) of homeomorphisms of \(X\) is **flexible** if, for every pair \(E_{1},E_{2}\) of proper, nonempty clopen subsets of \(X\), there exists \(g\in G\) such that \(g(E_{1})\subseteq E_{2}\).
**Remark 2.26**.: Flexible groups are closely related to the vigorous groups defined by the second author, Elliott, and Hyde [BEH], and are precisely the CO-transitive groups of homeomorphisms of Cantor spaces as defined by Kim, Koberda, and Lodha [17, 18]. Moreover, the class of full flexible groups is the same as the class of topological full groups of essentially principal, minimal, purely infinite etale groupoids with unit space a Cantor space (see [13]), and Matui has proven that all such groups have simple commutator subgroup.
If \(G\) is a full flexible group of homeomorphism of a Cantor space \(X\) and \(E\) is a proper, nonempty clopen subset of \(X\), the **class of \(E\)**, denoted \(\operatorname{class}(E)\), is the collection of all clopen sets in the same \(G\)-orbit as \(E\). Let \(\operatorname{Classes}(G)\) be the collection of such classes. There is a natural binary operation \(+\) on \(\operatorname{Classes}(G)\) defined by
\[\operatorname{class}(E_{1})+\operatorname{class}(E_{2})=\operatorname{class}( E_{1}\cup E_{2})\]
for every pair \(E_{1},E_{2}\) of disjoint, nonempty clopen sets whose union \(E_{1}\cup E_{2}\) is not all of \(X\). Under this operation, \(\operatorname{Classes}(G)\) forms an abelian group (see [13, Section 6.2] or [BEH, Section 3]); the key thing is that, since \(G\) is flexible, up to choosing different representatives the sum \(\operatorname{class}(E_{1})+\operatorname{class}(E_{2})\) makes sense for any proper, nonempty \(E_{1}\) and \(E_{2}\). The group \(\operatorname{Classes}(G)\) is precisely the 0th homology group of
the associated etale groupoid as defined by Crainic and Moerdijk [10] and described concretely by Matui [11].
By convention, we can also assign a class to the whole space \(X\), namely the sum \(\operatorname{class}(E_{1})+\operatorname{class}(E_{2})\) for any partition \(\{E_{1},E_{2}\}\) of \(X\) into nonempty clopen sets. It is easy to show that this does not depend on the chosen partition.
Matui describes the group \(\operatorname{Classes}(V_{\Gamma,E})\) in the irreducible case, where it has an especially nice group presentation. See [11, Section 6.1] or [11, Theorem 4.14], where it is proven that \(\operatorname{Classes}(V_{\Gamma,E})\) is isomorphic to the \(K_{0}\) group for the associated reduced groupoid \(C^{*}\)-algebra. These \(C^{*}\)-algebras are in the family of Cuntz-Krieger algebras [10], and their K-theory was computed by Cuntz [12].
**Theorem 2.27** (Matui).: _If \(\,\Sigma_{\Gamma}\) is irreducible and \(E\subseteq\Sigma_{\Gamma}\) is a nonempty clopen set, then \(V_{\Gamma,E}\) is flexible. If \(v\) is a node of \(\,\Gamma\), then all cones \(\mathfrak{C}_{\alpha}\) in \(E\) with \(t(\alpha)=v\) lie in the same class \([v]\), and such classes generate \(\operatorname{Classes}(V_{\Gamma,E})\). Indeed, \(\operatorname{Classes}(V_{\Gamma,E})\) has a presentation with these generators and one relation_
\[[v]=[w_{1}]+\dots+[w_{n}]\]
_for each node \(v\) of \(\,\Gamma\), where \(w_{1},\dots,w_{n}\) are the termini of all edges \(e_{1},\dots,e_{n}\) in \(\,\Gamma\) that have origin \(v\). _
In the above theorem, the relation \([v]=[w_{1}]+\dots+[w_{n}]\) comes from the decomposition of a cone \(\mathfrak{C}_{\alpha}\) with \(t(\alpha)=v\) into the disjoint union \(\mathfrak{C}_{\alpha e_{1}}\cup\dots\cup\mathfrak{C}_{\alpha e_{n}}\), where each \(t(\alpha e_{i})=w_{i}\). Note that \(e_{1},\dots,e_{n}\) are distinct but \(w_{1},\dots,w_{n}\) need not be.
**Example 2.28**.: In the Higman-Thompson case where \(\Gamma\) is a single node with \(n\geq 2\) loops (see Example 2.15) the group \(\operatorname{Classes}(V_{\Gamma,E})\) is isomorphic to \(\mathbb{Z}/(n-1)\mathbb{Z}\). In this case, each \(V_{\Gamma,E}\) is isomorphic to one of the Higman-Thompson groups \(V_{n,r}\) for \(1\leq r\leq n\), where \(r\) depends on the class of \(E\).
**Example 2.29**.: If \(\Gamma\) has two nodes \(v\) and \(w\), each with two loops, and with one edge going from \(v\) to \(w\) and one edge going from \(w\) to \(v\), then
\[\operatorname{Classes}(V_{\Gamma,E})\cong\langle[v],[w]\mid[v]=2[v]+[w],[w]=[v ]+2[w]\rangle\cong\mathbb{Z}.\]
In particular \(\operatorname{Classes}(V_{\Gamma,E})\) can be infinite.
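Theorem 2.27 also makes \(\operatorname{Classes}(V_{\Gamma,E})\) computable in examples: write one relation per node and read the abelian group off the Smith normal form of the resulting integer relation matrix. Below is a short sketch of this computation (our own illustration; the function name is ours, and it assumes the sympy library is available), run on Example 2.28 with \(n=4\) and on the two-node graph of Example 2.29.

```python
# Illustrative sketch (ours): computing Classes(V_{Gamma,E}) from the
# presentation of Theorem 2.27 via the Smith normal form over Z.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def classes_relation_matrix(nodes, edges):
    """One relation per node v: [v] = sum of [t(e)] over edges e leaving v,
    encoded as the integer row [v] - sum [t(e)] in the basis `nodes`."""
    idx = {v: i for i, v in enumerate(nodes)}
    rows = []
    for v in nodes:
        row = [0] * len(nodes)
        row[idx[v]] += 1
        for origin, terminus in edges:
            if origin == v:
                row[idx[terminus]] -= 1
        rows.append(row)
    return Matrix(rows)

# Example 2.28 with n = 4: one node, four loops.
M1 = classes_relation_matrix(["v"], [("v", "v")] * 4)
print(smith_normal_form(M1, domain=ZZ))   # entry 3 (up to sign): Z/3Z

# Example 2.29: two nodes with two loops each and one edge each way.
E2 = [("v", "v")] * 2 + [("w", "w")] * 2 + [("v", "w"), ("w", "v")]
M2 = classes_relation_matrix(["v", "w"], E2)
print(smith_normal_form(M2, domain=ZZ))   # diag(1, 0): cokernel is Z
```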
**Corollary 2.30** (Irreducible core case).: _If \(\,\Sigma_{\Gamma}\) has an irreducible core \(\,\Gamma_{0}\) and \(E\subseteq\Sigma_{\Gamma}\) is a nonempty clopen set, then \(V_{\Gamma,E}\) is flexible. Moreover, if \(\mathfrak{C}_{\alpha}\) and \(\,\mathfrak{C}_{\beta}\) are cones in \(E\) and \(t(\alpha)=t(\beta)\), then \(\operatorname{class}(\mathfrak{C}_{\alpha})=\operatorname{class}(\mathfrak{C }_{\beta})\)._
Proof.: By Proposition 2.20 there is a Thompson-like homeomorphism \(h\colon E\to E_{0}\), where \(E_{0}\) is a clopen subset of \(\Sigma_{\Gamma_{0}}\). Then \(h\) conjugates
\(V_{\Gamma,E}\) to \(V_{\Gamma_{0},E_{0}}\), and the latter is flexible by Theorem 2.27, so \(V_{\Gamma,E}\) is flexible as well. Furthermore, observe that \(h\) induces an isomorphism \(\operatorname{Classes}(V_{\Gamma,E})\to\operatorname{Classes}(V_{\Gamma_{0},E_{0}})\).
Now suppose \(\mathfrak{C}_{\alpha}\) and \(\mathfrak{C}_{\beta}\) are cones in \(E\) with \(t(\alpha)=t(\beta)\), and let \(f\colon\mathfrak{C}_{\alpha}\to\mathfrak{C}_{\beta}\) be the canonical similarity. Then there exists a decomposition \(\mathfrak{C}_{\alpha_{1}},\ldots,\mathfrak{C}_{\alpha_{n}}\) of \(\mathfrak{C}_{\alpha}\) into cones so that \(h\) acts as a canonical similarity on both \(\mathfrak{C}_{\alpha_{i}}\) and \(\mathfrak{C}_{\overline{f}(\alpha_{i})}\) for each \(i\). Since \(t(\overline{f}(\alpha_{i}))=t(\alpha_{i})\), we have \(t(\overline{hf}(\alpha_{i}))=t(\overline{h}(\alpha_{i}))\) for each \(i\), so \(\operatorname{class}(h(\mathfrak{C}_{\alpha_{i}}))=\operatorname{class}(h(\mathfrak{C}_{\overline{f}(\alpha_{i})}))\). Since \(h\) induces an isomorphism on classes, it follows that \(\operatorname{class}(\mathfrak{C}_{\alpha_{i}})=\operatorname{class}(\mathfrak{C}_{\overline{f}(\alpha_{i})})\) for each \(i\), so
\[\operatorname{class}(\mathfrak{C}_{\alpha})=\sum_{i=1}^{n}\operatorname{class }(\mathfrak{C}_{\alpha_{i}})=\sum_{i=1}^{n}\operatorname{class}(\mathfrak{C} _{\overline{f}(\alpha_{i})})=\operatorname{class}(\mathfrak{C}_{\beta}).\qed\]
We will often make use of the following corollary.
**Corollary 2.31** (Mapping cones with \(V_{\Gamma,E}\)).: _Suppose \(\Sigma_{\Gamma}\) has an irreducible core, let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set, and let \(\mathfrak{C}_{\alpha_{1}},\ldots,\mathfrak{C}_{\alpha_{n}}\) and \(\mathfrak{C}_{\beta_{1}},\ldots,\mathfrak{C}_{\beta_{n}}\) be collections of pairwise disjoint cones in \(E\). If \(t(\alpha_{i})=t(\beta_{i})\) for each \(i\) and neither of the unions \(\bigcup_{i=1}^{n}\mathfrak{C}_{\alpha_{i}}\) and \(\bigcup_{i=1}^{n}\mathfrak{C}_{\beta_{i}}\) is equal to \(E\), then there exists \(f\in V_{\Gamma,E}\) that maps each \(\mathfrak{C}_{\alpha_{i}}\) to \(\mathfrak{C}_{\beta_{i}}\) by a canonical similarity._
Proof.: Let \(E_{\alpha}\) and \(E_{\beta}\) denote the two unions, neither of which is equal to \(E\). Since \(t(\alpha_{i})=t(\beta_{i})\) for each \(i\), it follows from Corollary 2.30 that \(\operatorname{class}(\mathfrak{C}_{\alpha_{i}})=\operatorname{class}( \mathfrak{C}_{\beta_{i}})\) for each \(i\). Then \(\operatorname{class}(E_{\alpha})=\operatorname{class}(E_{\beta})\), so there exists \(g\in V_{\Gamma,E}\) that maps \(E_{\alpha}\) to \(E_{\beta}\), and in particular \(g\) maps \(E\setminus E_{\alpha}\) to \(E\setminus E_{\beta}\). Then the homeomorphism \(f\colon E\to E\) that maps each \(\mathfrak{C}_{\alpha_{i}}\) to \(\mathfrak{C}_{\beta_{i}}\) by a canonical similarity and agrees with \(g\) on \(E\setminus E_{\alpha}\) is the desired element of \(V_{\Gamma,E}\).
### Rational similarity groups
In this subsection we introduce the class of rational similarity groups, which will be our main focus for the remainder of the paper.
**Definition 2.32**.: Let \(\Sigma_{\Gamma}\) be a subshift of finite type with no isolated points or empty cones, let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set, and let \(\mathcal{R}_{\Gamma,E}\) be the associated rational group. A subgroup \(G\leq\mathcal{R}_{\Gamma,E}\) will be called a **rational similarity group (RSG)** if, for every pair of cones \(\mathfrak{C}_{\alpha},\mathfrak{C}_{\beta}\subsetneq E\) with \(t(\alpha)=t(\beta)\), there exists \(g\in G\) that maps \(\mathfrak{C}_{\alpha}\) to \(\mathfrak{C}_{\beta}\) by the canonical similarity.
It follows from the results in [1] that every hyperbolic group \(G\) that acts faithfully on its horofunction boundary \(\partial_{h}G\) is an RSG (see Section 4), and our embedding of \(G\) into a finitely presented simple
group will depend on the corresponding full closure \([[G\mid\partial_{h}G]]\). The following proposition describes the close relationship between full RSGs and Thompson groups.
**Proposition 2.33** (Full RSGs and \(V_{\Gamma,E}\)).: _Suppose \(\Sigma_{\Gamma}\) has an irreducible core. Then \(V_{\Gamma,E}\) is an RSG, as is any subgroup of \(\mathcal{R}_{\Gamma,E}\) that contains \(V_{\Gamma,E}\). Conversely, every full RSG in \(\mathcal{R}_{\Gamma,E}\) contains \(V_{\Gamma,E}\)._
Proof.: If \(G\) contains \(V_{\Gamma,E}\), then by Corollary 2.31 all canonical similarities can be achieved using elements of \(G\). Conversely, if \(G\) is a full RSG then since every element of \(V_{\Gamma,E}\) locally agrees with a canonical similarity at every point, and since \(G\) is full, every element of \(V_{\Gamma,E}\) lies in \(G\).
**Remark 2.34**.: Without the assumption of an irreducible core, there do exist examples of \(\Gamma\) where \(V_{\Gamma,E}\) is not an RSG. For instance, take \(\Gamma\) to be the graph with two nodes \(v\) and \(w\), one loop \(a\) at \(v\), three loops \(b_{1},b_{2},b_{3}\) at \(w\), and one edge \(x\) from \(v\) to \(w\). Then no element of \(V_{\Gamma}\) can realize the canonical similarity from \(\mathfrak{C}_{a}\) to \(\mathfrak{C}_{aa}\), since the complement of \(\mathfrak{C}_{a}\), namely \(\mathfrak{C}_{x}\), can only be partitioned into an odd number of cones, whereas the complement of \(\mathfrak{C}_{aa}\), namely \(\mathfrak{C}_{ax}\cup\mathfrak{C}_{x}\), can only be partitioned into an even number of cones.
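The parity invariant behind this remark is easy to probe by machine. The sketch below (our own illustration, with our own names) randomly refines partitions of \(\mathfrak{C}_{x}\) and of \(\mathfrak{C}_{ax}\cup\mathfrak{C}_{x}\) into cones: every cone inside these sets has terminus \(w\) and therefore splits into exactly three subcones, so the number of cones always stays odd in the first case and even in the second.

```python
# Illustrative sketch (ours) of the parity obstruction in Remark 2.34.
import random

OUT = {"v": ["a", "x"], "w": ["b1", "b2", "b3"]}   # outgoing edges per node
TERM = {"a": "v", "x": "w", "b1": "w", "b2": "w", "b3": "w"}

def refine(partition, steps):
    """Randomly refine a partition into cones (each cone is its address,
    a tuple of edges) by replacing a cone with its immediate subcones."""
    part = list(partition)
    for _ in range(steps):
        alpha = part.pop(random.randrange(len(part)))
        node = TERM[alpha[-1]]                     # terminus of the cone
        part += [alpha + (e,) for e in OUT[node]]
    return part

for _ in range(200):
    # The complement of C_a is C_x: always an odd number of cones ...
    assert len(refine([("x",)], random.randrange(10))) % 2 == 1
    # ... while the complement of C_aa is C_ax u C_x: always even.
    assert len(refine([("a", "x"), ("x",)], random.randrange(10))) % 2 == 0
```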
In the irreducible core case, Proposition 2.33 makes it easy to exhibit a variety of other existing examples of RSGs, beyond the \(V_{\Gamma,E}\). As a rudimentary example, note that given any subgroup \(H\leq\mathcal{R}_{\Gamma,E}\) one can form an RSG by simply taking \(G=\langle H,V_{\Gamma,E}\rangle\). Let us look at some more interesting, explicit examples from the literature.
**Example 2.35** (Rover-Nekrashevych groups).: Let \(\Gamma\) consist of a single node and \(n\geq 2\) (loop) edges, so \(\Sigma_{\Gamma}=\mathfrak{C}_{n}\) is the usual \(n\)-ary Cantor space that is the boundary of the rooted \(n\)-regular tree \(T_{n}\). Note that \(\operatorname{Aut}(T_{n})\) is isomorphic to the wreath product \(\operatorname{Aut}(T_{n})\wr S_{n}\). Call a subgroup \(G\leq\operatorname{Aut}(T_{n})\)**self-similar** if its image in \(\operatorname{Aut}(T_{n})\wr S_{n}\) under this isomorphism lies in \(G\wr S_{n}\). For \(g\in G\), if the image of \(g\) under this isomorphism is \(((g_{1},\dots,g_{n}),\sigma)\), call the \(g_{i}\) the **level-1 states** of \(g\). By the **states** of \(g\) we mean all elements obtainable by a finite sequence of taking level-1 states. Thus, \(G\) is self-similar if and only if all the states of any \(g\in G\) themselves lie in \(G\). We call a self-similar group **finite-state** if each of its elements has only finitely many states. See [20] for a wealth of information about self-similar groups.
Given a self-similar group \(G\), the **Rover-Nekrashevych group** \(V_{n}(G)\) is the group of homeomorphisms \(h\) of \(\mathfrak{C}_{n}\) for which there exists a partition of \(\mathfrak{C}_{n}\) into cones such that the image of any cone in the partition under \(h\) is some cone in \(\mathfrak{C}_{n}\), and the local action of \(h\) at each cone in the
partition is an element of \(G\). These groups were essentially introduced by Scott in [10] in very different language. Subsequently, Rover studied the special case when \(G\) is the Grigorchuk group in [11], and Nekrashevych considered the groups in full generality and with modern terminology in [12]. Note that for any element \(h\) of \(V_{n}(G)\), all but finitely many local actions of \(h\) lie in \(G\), so if \(G\) is finite-state then \(V_{n}(G)\) is a subgroup of the rational group \(\mathcal{R}_{n}=\mathcal{R}_{\Gamma}\). Also, clearly \(V_{n}\leq V_{n}(G)\), so we conclude that any Rover-Nekrashevych group of a finite-state self-similar group is a full RSG.
**Example 2.36** ((Twisted) Brin-Thompson groups).: In [1], Brin defined a class of higher-dimensional Thompson groups \(nV\), which we refer to as the Brin-Thompson groups. The first and second authors proved in [1] that these groups embed in \(\mathcal{R}_{2^{n}}\), and indeed they are full RSGs. In [1], the first and fourth authors generalized these to the class of twisted Brin-Thompson groups \(SV_{G}\), and when \(S\) is finite these are again full RSGs.
**Example 2.37** (Circular Thompson groups \(T_{n,r}\)).: Every Higman-Thompson group \(V_{n,r}\cong V_{\Gamma,E}\) (as in Example 2.15) has an associated circular group \(T_{n,r}\leq V_{n,r}\), which is the subgroup consisting of all elements that preserve a certain circular order on \(E\) (see [1]). These groups are RSGs, but they are not full as groups of homeomorphisms of the Cantor space \(E\). Indeed, the full closure of \(T_{n,r}\) is precisely \(V_{n,r}\). Note that \(T_{n,r}\) is full when viewed as a group of homeomorphisms of a circle.
**Example 2.38** (Thompson's group \(F\)).: Thompson's group \(V\) (\(=V_{2,1}\)) is the same as \(V_{\Gamma}\), where \(\Gamma\) is a graph with one node and two loops. Thompson's group \(F\) is the subgroup of \(V\) that preserves the canonical linear order on \(\Sigma_{\Gamma}\). This group is not an RSG, since cones that contain the maximum or minimum points under the linear order cannot be mapped to other cones.
However, it is possible to realize \(F\) as an RSG using a slightly different subshift. Specifically, let \(\Gamma^{\prime}\) be the graph with three nodes \(u\), \(v\), and \(w\) (corresponding to left cones, interior cones, and right cones, respectively), with loops at \(u\) and \(w\), two loops at \(v\), and one directed edge from each of \(u\) and \(w\) to \(v\). Then \(F\) embeds into \(V_{\Gamma^{\prime}}\) as an RSG, though it is not full.
**Example 2.39** (Lodha-Moore group).: The Lodha-Moore group originated in [13, 14], as the first example of a non-amenable type \(\mathrm{F}_{\infty}\) group with no non-abelian free subgroups. The original construction is analogous to Thompson's group \(F\), but one can also consider a \(V\)-like
version. Let \(\Gamma\) have one node and two (loop) edges, so \(\Sigma_{\Gamma}=\mathfrak{C}_{2}\) is the usual binary Cantor space \(\{0,1\}^{\mathbb{N}}\). Let \(\lambda\) be the homeomorphism of \(\Sigma_{\Gamma}\) defined by
\[\lambda(00\cdot\omega)=0\cdot\lambda(\omega),\quad\lambda(01\cdot\omega)=10\cdot\lambda^{-1}(\omega),\quad\lambda(1\cdot\omega)=11\cdot\lambda(\omega)\]
for all \(\omega\in\Sigma_{\Gamma}\). One can check that \(\lambda\) is a rational map. For any finite binary word \(\alpha\), let \(\lambda_{\alpha}\) be the homeomorphism of \(\Sigma_{\Gamma}\) that sends \(\alpha\cdot\omega\) to \(\alpha\cdot\lambda(\omega)\) for all \(\omega\) and fixes all points outside \(\mathfrak{C}_{\alpha}\). The (\(V\)-like) Lodha-Moore group \(LM_{V}\) is then the group generated by \(V_{\Gamma}\) together with all the \(\lambda_{\alpha}\). This is a full RSG. Note that, just like the original Lodha-Moore group, \(LM_{V}\) also has type \(\mathrm{F}_{\infty}\), as recently shown by Farley [Far]. As with \(F\) and \(T\) (see Examples 2.37 and 2.38), the Lodha-Moore groups for an interval and a circle can also be realized as RSGs, though they are not full. See [1] for more on the circle version.
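The recursion defining \(\lambda\) translates directly into a two-state asynchronous transducer with states \(\lambda\) and \(\lambda^{-1}\), which makes the rationality of \(\lambda\) concrete. Here is a minimal Python sketch of that transducer (our own illustration, with our own names), together with a round-trip check on finite truncations.

```python
# Illustrative sketch (ours): the Lodha-Moore generator lambda as a
# two-state asynchronous transducer; its local actions are lambda itself
# and lambda^{-1}, so lambda is a rational map.
import random

def run(bits, state="fwd"):
    """Apply lambda (state='fwd') or lambda^{-1} (state='bwd') to a finite
    truncation; the output may be shorter or longer than the input."""
    out, i = [], 0
    while i < len(bits):
        if state == "fwd":                    # rules for lambda
            if bits[i] == 1:                  # 1.w  -> 11.lambda(w)
                out += [1, 1]; i += 1
            elif i + 1 == len(bits):
                break                         # not enough input to decide
            elif bits[i + 1] == 0:            # 00.w -> 0.lambda(w)
                out += [0]; i += 2
            else:                             # 01.w -> 10.lambda^{-1}(w)
                out += [1, 0]; i += 2; state = "bwd"
        else:                                 # rules for lambda^{-1}
            if bits[i] == 0:                  # 0.w  -> 00.lambda^{-1}(w)
                out += [0, 0]; i += 1
            elif i + 1 == len(bits):
                break
            elif bits[i + 1] == 0:            # 10.w -> 01.lambda(w)
                out += [0, 1]; i += 2; state = "fwd"
            else:                             # 11.w -> 1.lambda^{-1}(w)
                out += [1]; i += 2
    return out

for _ in range(1000):                         # round-trip sanity check
    w = [random.randint(0, 1) for _ in range(40)]
    v = run(run(w, "fwd"), "bwd")
    assert v == w[:len(v)]                    # lambda^{-1}(lambda(w)) = w
```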
**Example 2.40** (Automorphisms of \(V_{n,r}\)).: For a Higman-Thompson group \(V_{n,r}\cong V_{\Gamma,E}\) as in Example 2.15, the second author, Cameron, Maissel, Navas, and Olukoya proved that the automorphism group \(\mathrm{Aut}(V_{n,r})\) embeds into \(\mathcal{R}_{\Gamma,E}\) [BCM\({}^{+}\)]. The group \(\mathrm{Aut}(V_{n,r})\) contains \(V_{\Gamma,E}\), and is therefore an RSG. A feature of interest here is that the group \(\mathrm{Aut}(\Sigma_{\Gamma},\sigma)\) of automorphisms of the one-sided subshift \(\Sigma_{\Gamma}\) (with shift operator \(\sigma\)) embeds as a subgroup \(\mathcal{H}_{n}\) of the outer automorphism group \(\mathrm{Out}(V_{n,r})\) (see [1] for information on \(\mathrm{Aut}(\Sigma_{\Gamma})\) and [1] for the embedding mentioned here). However, the group \(\mathrm{Aut}(V_{n,r})\) is not full since the image of an element \(f\in\mathrm{Aut}(V_{n,r})\) in \(\mathrm{Out}(V_{n,r})\) can be determined from the restriction of \(f\) to any cone.
### Nuclei and the contracting property
In this short subsection we define the crucial notion of a subgroup of \(\mathcal{R}_{\Gamma,E}\) being contracting. Let \(\Sigma_{\Gamma}\) be a subshift of finite type without isolated points or empty cones, and let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set.
If \(f\in\mathcal{R}_{\Gamma,E}\), then \(f\) has only finitely many distinct local actions \(f|_{\alpha}\). The **nucleus** of \(f\), denoted \(\mathcal{N}_{f}\), is the set of all local actions that occur infinitely often. That is, \(f|_{\alpha}\in\mathcal{N}_{f}\) if \(f|_{\alpha}=f|_{\beta}\) for infinitely many \(\beta\in\mathrm{Cones}(E)\). Note that \(f|_{\alpha}\notin\mathcal{N}_{f}\) holds for only finitely many \(\alpha\in\mathrm{Cones}(E)\), so in particular there exists \(N\in\mathbb{N}\) so that \(f|_{\alpha}\in\mathcal{N}_{f}\) for all \(\alpha\in\mathrm{Cones}(E)\) for which the path \(\alpha\) has length \(N\) or greater.
**Definition 2.41** (Nucleus, contracting).: Given a subgroup \(G\leq\mathcal{R}_{\Gamma,E}\), define the **nucleus**\(\mathcal{N}_{G}\) of \(G\) to be the union of all \(\mathcal{N}_{g}\) for \(g\in G\). Thus, \(\mathcal{N}_{G}\) is the smallest set of maps such that for all \(g\in G\) we have \(g|_{\alpha}\in\mathcal{N}_{G}\) for all but finitely many \(\alpha\in\mathrm{Cones}(E)\). We say that \(G\leq\mathcal{R}_{\Gamma,E}\) is **contracting** if \(\Sigma_{\Gamma}\) has an irreducible core and \(\mathcal{N}_{G}\) is finite.
This generalizes the notion of contracting for self-similar groups introduced by Nekrashevych [20], and in the special case of Rover-Nekrashevych groups (see Example 2.35) it is equivalent to the underlying self-similar group being contracting. A related notion of contracting for asynchronous transducers appears in [10] as well. Their use case applies to maps arising as products of local maps coming from a fixed asynchronous transducer, as opposed to the whole collection of small-scale local maps from a particular group of homeomorphisms, but the concepts otherwise align.
Note that the elements of \(\mathcal{N}_{G}\) themselves are not necessarily in \(G\). Indeed, the domain of an element of \(\mathcal{N}_{G}\) can be either all of \(\Sigma_{\Gamma}\) or a cone \(\mathfrak{C}_{v}\) for some node \(v\) in \(\Gamma\), as opposed to the clopen set \(E\). Moreover, the image of an element of \(\mathcal{N}_{G}\) is a clopen subset of \(\Sigma_{\Gamma}\), but need not be a cone. Note also that the elements of \(\mathcal{N}_{G}\) are always injective open maps, since the elements of \(G\) are homeomorphisms.
**Remark 2.42**.: Our definition of \(G\leq\mathcal{R}_{\Gamma,E}\) being contracting includes both that \(\Sigma_{\Gamma}\) has an irreducible core, and that \(\mathcal{N}_{G}\) is finite. It is worth mentioning that these are independent properties. For example, in the \(V\)-like Lodha-Moore group from Example 2.39 there is an irreducible core but an infinite nucleus, and in the Houghton groups from Example 2.23 there is no irreducible core, but the nucleus is finite.
### Classification of full RSGs
Throughout this subsection, let \(\Sigma_{\Gamma}\) be a subshift of finite type with irreducible core \(\Gamma_{0}\).
**Definition 2.43** (Nucleus of injections).: Let \(\mathcal{N}\) be a set of nondegenerate maps on \(\Sigma_{\Gamma}\). We say that \(\mathcal{N}\) is a **nucleus of injections** if it has the following properties:
**(MapNuc):**: Each element of \(\mathcal{N}\) is a map \(\mathfrak{C}_{v}\to\mathfrak{C}_{w}\) for some nodes \(v,w\) of \(\Gamma_{0}\).
**(IdNuc):**: For each node \(v\) of \(\Gamma_{0}\), the identity map on \(\mathfrak{C}_{v}\) is an element of \(\mathcal{N}\). (Equivalently, the nucleus of the identity map on \(\Sigma_{\Gamma}\) is contained in \(\mathcal{N}\).)
**(LocNuc):**: For each \(p\in\mathcal{N}\), every local action \(p|_{\alpha}\) belongs to \(\mathcal{N}\).
**(RecurNuc):**: For each \(p\in\mathcal{N}\), there exists \(q\in\mathcal{N}\) such that \(p\in\mathcal{N}_{q}\).
**(InvNuc):**: Each \(p\in\mathcal{N}\) is an injective open map, and satisfies \(\mathcal{N}_{p^{-1}}\subseteq\mathcal{N}\).
**(ProdNuc):**: If \(p\colon\mathfrak{C}_{v}\to\mathfrak{C}_{w}\) and \(q\colon\mathfrak{C}_{u}\to\mathfrak{C}_{v}\) are elements of \(\mathcal{N}\), then \(\mathcal{N}_{pq}\subseteq\mathcal{N}\).
Note that (MapNuc) follows from (RecurNuc) together with Lemma 2.8, but we include it for simplicity. In the case where \(\mathcal{N}\) is finite, condition
(RecurNuc) is equivalent to saying that for each \(p\in\mathcal{N}\), there exists \(q\in\mathcal{N}\) and an edge \(e\) in \(\Gamma\) so that \(q|_{e}=p\) (as in the nucleus of a contracting self-similar group), but for infinite \(\mathcal{N}\) these conditions are not equivalent.
**Proposition 2.44**.: _If \(G\leq\mathcal{R}_{\Gamma,E}\) is an RSG, then the nucleus \(\mathcal{N}\) for \(G\) is a nucleus of injections._
Proof.: Let \(G\leq\mathcal{R}_{\Gamma,E}\) be an RSG.
For (MapNuc), let
\[\mathrm{Cones}_{0}(E)=\{\alpha\in\mathrm{Cones}(E)\mid t(\alpha)\text{ is a node in }\Gamma_{0}\}\]
and observe that \(\mathrm{Cones}(E)\setminus\mathrm{Cones}_{0}(E)\) is a finite set. If \(g\in G\), then it follows from Lemma 2.8 that both \(\alpha\) and \(\overline{g}(\alpha)\) lie in \(\mathrm{Cones}_{0}(E)\) for all but finitely many \(\alpha\), so any element of \(\mathcal{N}\) must have this property.
For (IdNuc), since \(\mathrm{id}_{E}\in G\) and every node of \(\Gamma_{0}\) is the terminus of an arbitrarily long directed path in \(\mathrm{Cones}(E)\), we know that \(\mathrm{id}_{\mathfrak{C}_{v}}\in\mathcal{N}\) for all nodes \(v\) of \(\Gamma_{0}\), so (IdNuc) holds.
For (LocNuc), if \(p\in\mathcal{N}_{g}\) for some \(g\in G\), then \(p=g|_{\beta}\) for infinitely many different \(\beta\). If \(p|_{\alpha}\) is a local action of \(p\), then by Lemma 2.9 we have \(p|_{\alpha}=g|_{\beta\cdot\alpha}\) for all such \(\beta\), so \(p|_{\alpha}\in\mathcal{N}_{g}\).
For (RecurNuc), let \(p\in\mathcal{N}_{g}\) for some \(g\in G\). Then \(g|_{\alpha}=p\) for infinitely many different \(\alpha\). Hence there must exist infinitely many different \(\beta\) with the property that \(g|_{\beta\cdot\gamma}=p\) for infinitely many different suffixes \(\gamma\). Since \(g|_{\beta\cdot\gamma}=(g|_{\beta})|_{\gamma}\) by Lemma 2.9, we conclude that \(p\in\mathcal{N}_{g|_{\beta}}\) for infinitely many different \(\beta\). Since \(g\) is rational, one such \(g|_{\beta}\) must lie in \(\mathcal{N}_{g}\), which gives \(p\in\mathcal{N}_{g|_{\beta}}\) and \(g|_{\beta}\in\mathcal{N}\).
For (InvNuc), let \(p\in\mathcal{N}_{g}\) for some \(g\in G\), so \(g|_{\beta}=p\) for infinitely many different \(\beta\). Since \(g\) is a homeomorphism, \(p\) must be an injective open map. Let \(\mathfrak{C}_{\alpha}\) be any cone contained in the image of \(p\). Then for any \(\beta\) with \(g|_{\beta}=p\), we have \(g^{-1}|_{\overline{g}(\beta)}(g|_{\beta})=(g^{-1}g)|_{\beta}=1\) by Lemma 2.10 and so \(g^{-1}|_{\overline{g}(\beta)}=(g|_{\beta})^{-1}\) (with domain the image of \(p\)), which implies that \(g^{-1}|_{\overline{g}(\beta)\cdot\alpha}=(g^{-1}|_{\overline{g}(\beta)})|_{\alpha}=p^{-1}|_{\alpha}\) by Lemma 2.9. Since there are infinitely many different \(\overline{g}(\beta)\) by Lemma 2.8, we conclude that \(p^{-1}|_{\alpha}\) is in \(\mathcal{N}_{g^{-1}}\) and hence in \(\mathcal{N}\).
Finally, for (ProdNuc), let \(p\in\mathcal{N}_{g}\) and \(q\in\mathcal{N}_{h}\) for some \(g,h\in G\), where \(p\colon\mathfrak{C}_{v}\to\mathfrak{C}_{w}\) and \(q\colon\mathfrak{C}_{u}\to\mathfrak{C}_{v}\) for some nodes \(u,v,w\) in \(\Gamma_{0}\). Then there exists \(\beta\) so that \(g|_{\beta}=p\) and \(\mathfrak{C}_{\beta}\) is a proper subset of \(E\), and by Lemma 2.8 there exists \(\gamma\) so that \(h|_{\gamma}=q\) and \(\mathfrak{C}_{\overline{h}(\gamma)}\) is a proper subset of \(E\). Since \(t(\beta)=t\big{(}\overline{h}(\gamma)\big{)}=v\) and \(G\) is an RSG, there exists \(f\in G\) that maps \(\mathfrak{C}_{\overline{h}(\gamma)}\) to \(\mathfrak{C}_{\beta}\) by the canonical similarity. Then \(gfh\in G\) and \((gfh)|_{\gamma\cdot\alpha}=(pq)|_{\alpha}\) for all \(\alpha\) by Lemmas 2.10 and 2.9, so for all but finitely many \(\alpha\) we have that \((pq)|_{\alpha}\) is in \(\mathcal{N}_{gfh}\) and hence in \(\mathcal{N}\).
**Remark 2.45**.: It is not true that the nucleus for an arbitrary subgroup \(G\leq\mathcal{R}_{\Gamma,E}\) is a nucleus of injections. Though (MapNuc), (IdNuc), (LocNuc), (RecurNuc), and (InvNuc) always hold, there are groups \(G\) whose nucleus does not satisfy (ProdNuc). For example, if \(g\) and \(h\) are the automorphisms of the infinite, rooted binary tree that satisfy the wreath recursion
\[g=(0\;1)(h,h)\qquad\text{and}\qquad h=(g,g)\]
then the group \(\langle g\rangle\cong\mathbb{Z}_{2}\) has nucleus \(\{1,g,h\}\), and this does not satisfy (ProdNuc) since \((gh)|_{\alpha}=gh\) for all \(\alpha\).
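To make the failure of (ProdNuc) concrete, here is a short check, using the usual convention that \(g=\sigma(g_{0},g_{1})\) means \(g(x\cdot\omega)=\sigma(x)\cdot g_{x}(\omega)\). The recursions above give
\[gh=(0\;1)(hg,hg)\qquad\text{and}\qquad hg=(0\;1)(gh,gh),\]
and since such recursions determine an automorphism level by level, \(gh=hg=w\), where \(w=(0\;1)(w,w)\) is the automorphism that flips every letter of an infinite binary string. Hence \((gh)|_{\alpha}=w=gh\) for every \(\alpha\), and \(w\) is none of \(1\), \(g\), or \(h\) (it has nontrivial root permutation, unlike \(h\), and \(w|_{0}=w\neq h=g|_{0}\)), so no local action of the product \(gh\) ever lands in the nucleus.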
The following theorem serves as a classification of full RSGs. (We remind the reader that throughout this subsection we are assuming \(\Gamma\) has irreducible core \(\Gamma_{0}\).)
**Theorem 2.46**.: _Let \(\mathcal{N}\) be a nucleus of injections over \(\Sigma_{\Gamma}\), and let \(E\subseteq\Sigma_{\Gamma}\) be a nonempty clopen set. Then_
\[G=\{f\in\mathcal{R}_{\Gamma,E}\mid\mathcal{N}_{f}\subseteq\mathcal{N}\}\]
_is a full RSG with nucleus \(\mathcal{N}\), and every full RSG has this form._
Proof.: We prove first that \(G\) contains \(V_{\Gamma,E}\), and in particular \(G\) is nonempty. Let \(f\in V_{\Gamma,E}\). Then there exists a partition \(\mathfrak{C}_{\alpha_{1}},\ldots,\mathfrak{C}_{\alpha_{n}}\) of \(E\) into cones so that \(f\) acts as a canonical similarity on each cone. Subdividing further, we may assume that each \(t(\alpha_{i})\) is in \(\Gamma_{0}\). Then for each cone \(\mathfrak{C}_{\beta}\) contained in any of the \(\mathfrak{C}_{\alpha_{i}}\), the local action \(f|_{\beta}\) is the identity map on \(\mathfrak{C}_{v}\) for some node \(v\) of \(\Gamma_{0}\). By property (IdNuc), it follows that \(f\in G\), and therefore \(G\) contains \(V_{\Gamma,E}\).
We claim that \(G\) is a subgroup of \(\mathcal{R}_{\Gamma,E}\). We first prove that \(G\) is closed under inversion. Let \(g\in G\). We need to prove that \(g^{-1}|_{\alpha}\in\mathcal{N}\) for all but finitely many \(\alpha\). First note that for \(\alpha,\beta\) such that \(\mathfrak{C}_{\alpha}\subseteq g(\mathfrak{C}_{\beta})\), say with \(\alpha=\overline{g}(\beta)\cdot\gamma\), by Lemma 2.9, we have \(g^{-1}|_{\alpha}=(g^{-1}|_{\overline{g}(\beta)})|_{\gamma}=(g|_{\beta})^{-1}|_{\gamma}\), where the last equality follows from the fact that \(g^{-1}|_{\overline{g}(\beta)}\) and \((g|_{\beta})^{-1}\) agree on \(\mathfrak{C}_{\gamma}\). Now observe that for all but finitely many \(\alpha\) there exists \(\beta\) such that \(\mathfrak{C}_{\alpha}\subseteq g(\mathfrak{C}_{\beta})\) and \(g|_{\beta}\in\mathcal{N}\). By the above, for such an \(\alpha\) and \(\beta\), say \(\alpha=\overline{g}(\beta)\cdot\gamma\), we have that \(g^{-1}|_{\alpha}\) equals the local action at \(\gamma\) of the inverse of an element of \(\mathcal{N}\). For all but finitely many \(\gamma\) this lies in \(\mathcal{N}\) by property (InvNuc). Hence, considering decompositions \(\alpha=\overline{g}(\beta)\cdot\gamma\) where \(\beta\) has minimal length satisfying \(g|_{\beta}\in\mathcal{N}\), we see that for all but finitely many \(\alpha\) there is such a decomposition satisfying \((g|_{\beta})^{-1}|_{\gamma}\in\mathcal{N}\). We conclude that \(g^{-1}|_{\alpha}\in\mathcal{N}\) for all but finitely many \(\alpha\), so \(g^{-1}\in G\).
Next we prove \(G\) is closed under composition. Let \(g,h\in G\), and we need to prove that \((g\circ h)|_{\alpha}\in\mathcal{N}\) for all but finitely many \(\alpha\). First note that for any \(\beta\) we have \((g\circ h)|_{\beta}=(g|_{\overline{h}(\beta)}\circ h|_{\beta})|_{t(\beta)}\) by Lemma 2.10.
Now for any cone \(\mathfrak{C}_{\gamma}\subseteq\mathfrak{C}_{t(\beta)}\), Lemma 2.9 says that the local action of this at \(\gamma\) equals \((g|_{\overline{h}(\beta)}\circ h|_{\beta})|_{\gamma}\). Since \(\overline{h}\) is finite-to-one by Lemma 2.8, for all but finitely many \(\beta\) this is the local action at \(\gamma\) of a product of two elements of \(\mathcal{N}\). Property (ProdNuc) implies this is itself an element of \(\mathcal{N}\) for all but finitely many \(\gamma\). In particular, \((g\circ h)|_{\beta\cdot\gamma}\) lies in \(\mathcal{N}\) for all but finitely many \(\beta\) and \(\gamma\). We conclude that \(g\circ h\in G\), so \(G\) is a subgroup of \(\mathcal{R}_{\Gamma,E}\). Since \(G\) contains \(V_{\Gamma,E}\), it is an RSG by Proposition 2.33.
To prove that \(G\) is full, suppose \(h\in\operatorname{Homeo}(E)\) locally agrees with \(G\). This implies that there exists a partition of \(E\) into cones \(\mathfrak{C}_{\alpha_{1}},\ldots,\mathfrak{C}_{\alpha_{n}}\) and elements \(g_{1},\ldots,g_{n}\in G\) such that \(h(\alpha_{i}\cdot\omega)=g_{i}(\alpha_{i}\cdot\omega)\) for all \(i\) and all \(\omega\in\mathfrak{C}_{t(\alpha_{i})}\). Now for any cone \(\mathfrak{C}_{\alpha}\) contained in one of the \(\mathfrak{C}_{\alpha_{i}}\) (and only finitely many cones are not), we have \(h|_{\alpha}=g_{i}|_{\alpha}\), so all but finitely many local actions of \(h\) lie in \(\mathcal{N}\). Hence \(h\in G\), so \(G\) is full.
As for the nucleus \(\mathcal{N}_{G}\) of \(G\), clearly \(\mathcal{N}_{G}\subseteq\mathcal{N}\). For the opposite inclusion, if \(p\in\mathcal{N}\) then by (RecurNuc) there exists \(q\in\mathcal{N}\) so that \(p\in\mathcal{N}_{q}\). By (MapNuc), we know that \(q\colon\mathfrak{C}_{v}\to\mathfrak{C}_{w}\) for some nodes \(v,w\) in \(\Gamma_{0}\). Since \(\Gamma_{0}\) is irreducible, we can find two disjoint cones \(\mathfrak{C}_{\alpha},\mathfrak{C}_{\beta}\subseteq E\) such that \(t(\alpha)=v\) and \(t(\beta)=w\). Let \(g\colon E\to E\) be the homeomorphism that maps \(\mathfrak{C}_{\alpha}\) into \(\mathfrak{C}_{\beta}\) by \(L_{\beta}\circ q\circ L_{\alpha}^{-1}\) (where \(L_{\alpha}\) and \(L_{\beta}\) are canonical similarities), maps \(g(\mathfrak{C}_{\alpha})\) back to \(\mathfrak{C}_{\alpha}\) by the inverse of this mapping, and is the identity elsewhere. Then it is easy to check that \(g\in G\) and \(g|_{\alpha}=q\). It follows that \(\mathcal{N}_{q}\subseteq\mathcal{N}_{g}\), so \(p\in\mathcal{N}_{g}\), and therefore \(p\in\mathcal{N}_{G}\).
Finally, by Proposition 2.44 the nucleus for any RSG is a nucleus of injections, so suppose \(H\leq\mathcal{R}_{\Gamma,E}\) is any full RSG with nucleus \(\mathcal{N}\). We claim that \(H=G\). Clearly \(H\leq G\), so we must show that \(G\leq H\). Since \(H\) is full, it suffices to prove that every element of \(G\) locally agrees with \(H\), so let \(g\in G\) and \(\omega\in E\). We can choose a neighborhood of \(\omega\) of the form \(\mathfrak{C}_{\alpha}\) such that \(g|_{\alpha}\in\mathcal{N}\), and by Lemma 2.8 we can arrange for both \(\mathfrak{C}_{\alpha}\) and \(\mathfrak{C}_{\overline{g}(\alpha)}\) to be properly contained in \(E\). Let \(p=g|_{\alpha}\). Since \(p\in\mathcal{N}\), it is in the nucleus of \(H\), so there exists \(h\in H\) such that \(p\in\mathcal{N}_{h}\). That is, \(h|_{\beta}=p\) for infinitely many \(\beta\). By Lemma 2.8 we can choose one such \(\beta\) so that neither \(\mathfrak{C}_{\beta}\) nor \(\mathfrak{C}_{\overline{h}(\beta)}\) is all of \(E\). Since \(t(\beta)=t(\alpha)\) and \(t\big{(}\overline{h}(\beta)\big{)}=t\big{(}\overline{g}(\alpha)\big{)}\), by Corollary 2.31 there exist elements \(f,f^{\prime}\in V_{\Gamma,E}\) so that \(f\) maps \(\mathfrak{C}_{\alpha}\) to \(\mathfrak{C}_{\beta}\) by the canonical similarity, and \(f^{\prime}\) maps \(\mathfrak{C}_{\overline{h}(\beta)}\) to \(\mathfrak{C}_{\overline{g}(\alpha)}\) by the canonical similarity. Then \(g\) agrees with \(f^{\prime}hf\) on \(\mathfrak{C}_{\alpha}\), and \(f^{\prime}hf\in H\) since \(H\) is an RSG. We conclude that \(g\) locally agrees with \(H\), and thus \(g\in H\), which proves that \(H=G\).
The following corollary gives a nice alternative characterization for the elements of a full RSG.
**Corollary 2.47**.: _Let \(G\leq\mathcal{R}_{\Gamma,E}\) be a full RSG with nucleus \(\mathcal{N}\). Then a homeomorphism \(g\colon E\to E\) lies in \(G\) if and only if there exists a finite partition of \(E\) into cones \(\mathfrak{C}_{\alpha_{i}}\) such that each \(g|_{\alpha_{i}}\in\mathcal{N}\)._
Proof.: If such a partition exists, then clearly \(g\in G\) by Theorem 2.46. Now suppose \(g\in G\). Since \(g|_{\alpha}\in\mathcal{N}\) for all but finitely many \(\alpha\), there exists \(M\) such that \(g|_{\alpha}\in\mathcal{N}\) for all paths \(\alpha\) of length \(M\). Now if \(\{\alpha_{1},\ldots,\alpha_{m}\}\) is the set of all paths of length \(M\) in \(\operatorname{Cones}(E)\), then the cones \(\mathfrak{C}_{\alpha_{i}}\) form the desired partition.
We can use Theorem 2.46 to build some new explicit examples of RSGs, for which the nuclei contain non-invertible elements.
**Example 2.48**.: Let \(\Gamma\) have a single node and three (loop) edges, called \(0\), \(1\), and \(2\). Note that \(\Sigma_{\Gamma}\) is the usual \(3\)-ary Cantor space \(\{0,1,2\}^{\mathbb{N}}\). Let \(f\colon\Sigma_{\Gamma}\to\Sigma_{\Gamma}\) be the rational injection defined by
\[f(0\cdot\omega)=0\cdot f(\omega),\qquad f(1\cdot\omega)=02\cdot\omega,\qquad f (2\cdot\omega)=1\cdot\omega\]
for all \(\omega\in\Sigma_{\Gamma}\). The following facts are easy to verify:
* \(f|_{0}=f\), and \(f|_{1}=f|_{2}=1\).
* \(\operatorname{im}(f)=\mathfrak{C}_{0}\cup\mathfrak{C}_{1}\), with \(f^{-1}|_{0}=f\) and \(f^{-1}|_{1}=1\).
* \(f^{2}\) is the mapping \(\omega\mapsto 0\cdot\omega\), so in particular \(f^{2}|_{\varnothing}=1\).
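To verify the last of these, for example (a routine check from the defining recursion): \(f^{2}(1\cdot\omega)=f(02\cdot\omega)=0\cdot f(2\cdot\omega)=01\cdot\omega\) and \(f^{2}(2\cdot\omega)=f(1\cdot\omega)=02\cdot\omega\), so \(f^{2}\) agrees with \(\omega\mapsto 0\cdot\omega\) on \(\mathfrak{C}_{1}\) and \(\mathfrak{C}_{2}\), while \(f^{2}(0\cdot\omega)=0\cdot f^{2}(\omega)\); the claim then follows for every point of \(\Sigma_{\Gamma}\) by induction on the number of leading \(0\)s, the point \(000\cdots\) being sent to itself by both maps.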
Let \(\mathcal{N}=\{1,f\}\). Then \(\mathcal{N}\) is a nucleus of injections, so by Theorem 2.46 we get a full, contracting RSG.
**Example 2.49**.: Let \(\Gamma\) have a single node and two (loop) edges, called \(0\) and \(1\), so \(\Sigma_{\Gamma}\) is the usual \(2\)-ary Cantor space \(\{0,1\}^{\mathbb{N}}\). Let \(f\colon\Sigma_{\Gamma}\to\Sigma_{\Gamma}\) be the rational injection defined by
\[f(0\cdot\omega)=0\cdot f(\omega),\qquad f(10\cdot\omega)=011\cdot\omega, \qquad f(11\cdot\omega)=10\cdot\omega\]
for all \(\omega\in\Sigma_{\Gamma}\). The following facts are easy to verify:
* \(f|_{0}=f\), and \(f|_{10}=f|_{11}=1\).
* \(\operatorname{im}(f)=\mathfrak{C}_{0}\cup\mathfrak{C}_{10}\), with \(f^{-1}|_{0}=f\) and \(f^{-1}|_{10}=1\).
* \(f^{2}\) is the mapping \(\omega\mapsto 0\cdot\omega\), so in particular \(f^{2}|_{\varnothing}=1\).
Let \(\mathcal{N}=\{1,f\}\). Then \(\mathcal{N}\) is a nucleus of injections, so by Theorem 2.46 we get a full, contracting RSG.
## 3. Finite presentability of full, contracting RSGs
This section comprises a proof of the following:
**Theorem 3.1**.: _Every full, contracting RSG is finitely presented._
In the special case where the RSG is the Röver–Nekrashevych group of a contracting self-similar group (see Example 2.35), Nekrashevych proved finite presentability in [20]. Our general situation is much more complicated since \(\Gamma\) may have more than one node, we do not have a self-similar group to work with, and the elements of \(\mathcal{N}\) might not even be invertible. Thus, a few aspects of our proof are inspired by the proof in [20], but in general many new ideas and approaches are needed.
Throughout this section, we fix a full, contracting RSG \(G\leq\mathcal{R}_{\Gamma,E}\), with nucleus \(\mathcal{N}\). We introduce both a finite and an infinite generating set for \(G\) in Subsection 3.1, as well as a special class of words in the infinite generating set that we call "normalish forms". In Subsection 3.2 we construct an infinite presentation for \(G\), and in Subsection 3.3 we prove that only finitely many relations are needed.
### Nuclear generators
Let \(\mathbb{N}\mathcal{N}\) be the free commutative monoid generated by the elements of \(\mathcal{N}\). We think of elements of \(\mathbb{N}\mathcal{N}\) as formal sums \(P=p_{1}+\cdots+p_{k}\) of elements of \(\mathcal{N}\). If \(p\colon\mathfrak{C}_{v}\to\mathfrak{C}_{w}\) is an element of \(\mathcal{N}\), let
\[\partial_{1}(p)=\operatorname{class}(p(\mathfrak{C}_{v}))-\operatorname{class }(\mathfrak{C}_{v}).\]
This induces a monoid homomorphism \(\partial_{1}\colon\mathbb{N}\mathcal{N}\to\operatorname{Classes}(\Gamma)\). Elements of \(\ker(\partial_{1})\) are called **cycles** in \(\mathbb{N}\mathcal{N}\). (This homomorphism \(\partial_{1}\) is closely related to the first boundary homomorphism for étale groupoid homology defined by Matui [16].)
**Lemma 3.2**.: _The monoid \(\ker(\partial_{1})\) of cycles in \(\mathbb{N}\mathcal{N}\) is finitely generated._
Proof.: Note that for \(P,P^{\prime}\in\mathbb{N}\mathcal{N}\), if \(P\) and \(P+P^{\prime}\) are both cycles then \(P^{\prime}\) is a cycle. This means that \(\ker(\partial_{1})\) is a subtractive submonoid of \(\mathbb{N}\mathcal{N}\) in the sense of [1], and hence is finitely generated by [1, Proposition 7.1].
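As a toy illustration of subtractivity (our example, independent of [1]): in the free commutative monoid \(\mathbb{N}^{2}\), the kernel of the homomorphism \((a,b)\mapsto a-b\in\mathbb{Z}\) is the diagonal \(\{(n,n)\mid n\in\mathbb{N}\}\); it is subtractive, since if \((m,m)\) and \((m,m)+(a,b)\) both lie on the diagonal then so does \((a,b)\), and it is generated by the single element \((1,1)\). The cycles in \(\mathbb{N}\mathcal{N}\) form a subtractive submonoid for exactly this formal reason.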
Fix some finite generating set for \(\ker(\partial_{1})\). We will use these generators to construct a finite generating set for \(G\).
First we need some definitions. We will use the term **code** to mean any set \(\{\alpha_{1},\ldots,\alpha_{k}\}\) in \(\operatorname{Cones}(E)\) whose corresponding cones \(\mathfrak{C}_{\alpha_{i}}\) are pairwise disjoint. A code is **complete** if \(\bigcup_{i=1}^{k}\mathfrak{C}_{\alpha_{i}}=E\), and **incomplete** otherwise. If \(P=p_{1}+\cdots+p_{k}\) is a generator for \(\ker(\partial_{1})\), where \(p_{i}\colon\mathfrak{C}_{v_{i}}\to\mathfrak{C}_{w_{i}}\), a **domain code** for \(P\) is a code \(\{\nu_{1},\ldots,\nu_{k}\}\) such that \(t(\nu_{i})=v_{i}\) for each \(i\), and **range code** is defined analogously.
Now, for each generator \(P=p_{1}+\cdots+p_{k}\), we choose a **model nuclear generator**\(h_{P}\in G\) as follows. Choose domain and range codes \(\{\nu_{1},\ldots,\nu_{k}\}\) and \(\{\xi_{1},\ldots,\xi_{k}\}\) for \(P\), both incomplete, and let
\(D=\bigcup_{i=1}^{k}\mathfrak{C}_{\nu_{i}}\) and \(R=\bigcup_{i=1}^{k}\mathfrak{C}_{\xi_{i}}\). Let \(g\colon D\to R\) be the map that sends each \(\mathfrak{C}_{\nu_{i}}\) to \(\mathfrak{C}_{\xi_{i}}\) with local action \(p_{i}\). Since \(P\in\ker(\partial_{1})\), the image \(g(D)\) has the same class as \(D\), so by Corollary 2.31 we can choose some \(s\in V_{\Gamma,E}\) that maps \(g(D)\) onto \(D\). Now we define \(h_{P}\) to be the homeomorphism of \(E\) that agrees with \(s\circ g\) on \(D\) and is the identity elsewhere. Note that \(h_{P}\in G\) since \(G\) is full.
The main goal of this subsection is to prove the following proposition.
**Proposition 3.3**.: _The group \(G\) is finitely generated, with generating set consisting of the model nuclear generators \(h_{P}\) together with a finite generating set for \(V_{\Gamma,E}\)._
We begin by defining some conjugates of the model nuclear generators that will be useful. If \(f\in V_{\Gamma,E}\), we say that \(f\)**maps** a code \(\{\alpha_{1},\dots,\alpha_{k}\}\) to a code \(\{\beta_{1},\dots,\beta_{k}\}\) if \(f\) maps each cone \(\mathfrak{C}_{\alpha_{i}}\) to \(\mathfrak{C}_{\beta_{i}}\) by the canonical similarity.
**Definition 3.4** (Nuclear generators).: A **nuclear generator** is a conjugate
\[h=ch_{P}c^{-1}\]
where \(h_{P}\) is a model nuclear generator with domain code \(\{\nu_{1},\dots,\nu_{k}\}\) and \(c\) is an element of \(V_{\Gamma,E}\) that maps this to some code \(\{\alpha_{1},\dots,\alpha_{k}\}\).
Note that \(h\) is supported on the union \(\bigcup_{i=1}^{k}\mathfrak{C}_{\alpha_{i}}\). This is the **domain of support** for \(h\). The code \(\{\alpha_{1},\dots,\alpha_{k}\}\) is the **nuclear code** for \(h\), and the corresponding **nuclear states** are the summands \(p_{i}\) of \(P\). We think of the nuclear code and corresponding nuclear states as part of the definition of a nuclear generator.
A **rectifier** for a nuclear generator \(h\) is an element \(r\in V_{\Gamma,E}\) such that \((rh)|_{\alpha_{i}}=p_{i}\) for each \(i\). Here \(h=ch_{P}c^{-1}\) and \(P=p_{1}+\dots+p_{k}\) as above. For example, if \(s\) is the element used to construct \(h_{P}\), then \(s^{-1}\) is a rectifier for \(h_{P}\), and more generally \(s^{-1}c^{-1}\) is a rectifier for any nuclear generator \(ch_{P}c^{-1}\).
**Definition 3.5** (Normalish form).: A **normalish form** for an element \(g\in G\) is a word for \(g\) of the form
\[fh_{1}\cdots h_{n}\]
where \(f\in V_{\Gamma,E}\) and \(h_{1},\dots,h_{n}\) are nuclear generators whose domains of support are pairwise disjoint.
As we will see, every element of \(G\) has a normalish form, but this form is not unique. The **nuclear code** for a normalish form is the union of the nuclear codes for the \(h_{i}\).
**Lemma 3.6** (Recognizing normalish forms).: _Let \(g\in G\), and let \(h_{1}\cdots h_{n}\) be a normalish form with nuclear code \(\{\alpha_{1},\ldots,\alpha_{k}\}\) and corresponding nuclear states \(p_{1},\ldots,p_{k}\). Then the following are equivalent:_
* _There exists_ \(f\in V_{\Gamma,E}\) _such that_ \(g=fh_{1}\cdots h_{n}\)_._
* _There exist_ \(r_{1},\ldots,r_{k}\in V_{\Gamma,E}\) _such that_ \((r_{i}g)|_{\alpha_{i}}=p_{i}\) _for each_ \(i\)_, and_ \(g\) _agrees with some element of_ \(V_{\Gamma,E}\) _on the complement of_ \(\bigcup_{i=1}^{k}\mathfrak{C}_{\alpha_{i}}\)_._
Proof.: If (i) holds then \(g\) agrees with \(f\) on the complement of \(\bigcup_{i=1}^{k}\mathfrak{C}_{\alpha_{i}}\). Furthermore, for each \(i\) there exists \(j\) so that \(\alpha_{i}\) is in the nuclear code for \(h_{j}\). If \(r\in V_{\Gamma,E}\) is a rectifier for \(h_{j}\), then \(rf^{-1}g\) agrees with \(rh_{j}\) on \(\mathfrak{C}_{\alpha_{i}}\), so \((rf^{-1}g)|_{\alpha_{i}}=(rh_{j})|_{\alpha_{i}}=p_{i}\); taking \(r_{i}=rf^{-1}\) proves (ii).
Now suppose (ii) holds, and let \(h=h_{1}\cdots h_{n}\). We must prove that \(gh^{-1}\in V_{\Gamma,E}\). Let \(E^{\prime}=\bigcup_{i=1}^{k}\mathfrak{C}_{\alpha_{i}}\). Then \(h(E^{\prime})=E^{\prime}\), so \(E\) is the disjoint union
\[(E\setminus E^{\prime})\cup\bigcup_{i=1}^{k}h(\mathfrak{C}_{\alpha_{i}}).\]
Thus it suffices to prove that \(gh^{-1}\) agrees with some element of \(V_{\Gamma,E}\) on each of these sets. Since \(h\) is the identity on \(E\setminus E^{\prime}\) and \(g\) agrees with some element of \(V_{\Gamma,E}\) on \(E\setminus E^{\prime}\), we already know that this holds for \(E\setminus E^{\prime}\).
Now consider one of the sets \(h(\mathfrak{C}_{\alpha_{i}})\). We know that \(\alpha_{i}\) is part of the nuclear code for some \(h_{j}\), and if \(r\) is a rectifier for \(h_{j}\) then \((r_{i}g)|_{\alpha_{i}}=p_{i}=(rh_{j})|_{\alpha_{i}}\). It follows that \((r_{i}g)(rh_{j})^{-1}=r_{i}gh_{j}^{-1}r^{-1}\) agrees with the canonical similarity \(\mathfrak{C}_{\overline{rh_{j}}(\alpha_{i})}\to\mathfrak{C}_{\overline{r_{i} g}(\alpha_{i})}\) on \(rh_{j}(\mathfrak{C}_{\alpha_{i}})\). Since \(r,r_{i}\in V_{\Gamma,E}\), it follows that \(gh_{j}^{-1}\) agrees with some element of \(V_{\Gamma,E}\) on \(h_{j}(\mathfrak{C}_{\alpha_{i}})\). But \(h\) agrees with \(h_{j}\) on \(\mathfrak{C}_{\alpha_{i}}\), so it follows that \(gh^{-1}\) agrees with some element of \(V_{\Gamma,E}\) on \(h(\mathfrak{C}_{\alpha_{i}})\). We conclude that \(gh^{-1}\in V_{\Gamma,E}\), so (i) holds.
**Proposition 3.7** (Existence of normalish forms).: _Let \(g\in G\), let \(A\) be a code, and suppose that:_
* _Each local action_ \(g|_{\alpha}\;(\alpha\in A)\) _lies in_ \(\mathcal{N}\)_._
* _If_ \(A\) _is incomplete, then_ \(g\) _agrees with some element of_ \(V_{\Gamma,E}\) _on the complement of_ \(\bigcup_{\alpha\in A}\mathfrak{C}_{\alpha}\)_._
* _If_ \(A\) _is complete, then_ \(\sum_{\alpha\in A}g|_{\alpha}\) _is not a generator for_ \(\ker(\partial_{1})\)_._
_Then there exists a normalish form for \(g\) whose nuclear code is \(A\)._
Proof.: Let \(E^{\prime}=\bigcup_{\alpha\in A}\mathfrak{C}_{\alpha}\). Since \(g\) agrees with an element of \(V_{\Gamma,E}\) on the complement of \(E^{\prime}\), we know that
\[\operatorname{class}(g(E^{\prime}))=\operatorname{class}(E)-\operatorname{class}(g(E\setminus E^{\prime}))=\operatorname{class}(E)-\operatorname{class}(E\setminus E^{\prime})=\operatorname{class}(E^{\prime})\]
so the sum \(P=\sum_{\alpha\in A}g|_{\alpha}\) lies in \(\ker(\partial_{1})\). Thus \(P=P_{1}+\dots+P_{n}\) for some generators \(P_{i}\) of \(\ker(\partial_{1})\), so we can partition \(A\) into sets \(A_{1},\dots,A_{n}\) such that \(\sum_{\alpha\in A_{i}}g|_{\alpha}=P_{i}\) for each \(i\).
Let \(C_{i}=\bigcup_{\alpha\in A_{i}}\mathfrak{C}_{\alpha}\) for each \(i\). By condition (iii), either \(n>1\), or \(n=1\) and \(C_{1}\neq E\), and therefore in either case each \(C_{i}\) is a proper subset of \(E\). By Corollary 2.31, we can construct nuclear generators \(h_{1},\dots,h_{n}\) such that each \(h_{i}\) has nuclear code \(A_{i}\) and nuclear states \(\{g|_{\alpha}\}_{\alpha\in A_{i}}\). Then by Lemma 3.6 there exists \(f\in V_{\Gamma,E}\) such that \(fh_{1}\cdots h_{n}\) is the desired normalish form for \(g\).
**Corollary 3.8**.: _Every element of \(G\) has a normalish form._
Proof.: Let \(g\in G\). By Corollary 2.47, there exists a finite partition of \(E\) into cones \(\{\mathfrak{C}_{\alpha}\}_{\alpha\in A}\) such that each \(g|_{\alpha}\in\mathcal{N}\). Refining this partition further, we can make \(|A|\) large enough so that the element \(\sum_{\alpha\in A}g|_{\alpha}\) of \(\ker(\partial_{1})\) is not one of the generators for \(\ker(\partial_{1})\). Then by Proposition 3.7, there exists a normalish form for \(g\) with nuclear code \(A\).
Proof of Proposition 3.3.: By Corollary 3.8, every element of \(G\) has a normalish form. Since every nuclear generator is a conjugate of a model nuclear generator by an element of \(V_{\Gamma,E}\), we see that \(G\) is generated by the model nuclear generators together with the elements of \(V_{\Gamma,E}\). But \(V_{\Gamma,E}\) is finitely generated by Corollary 2.21, so \(G\) is finitely generated.
### An infinite presentation
In this subsection we describe an infinite presentation for \(G\) with respect to the nuclear generators and the elements of \(V_{\Gamma,E}\). We will use this presentation in Subsection 3.3 to derive a finite presentation for \(G\).
First we need some terminology. We say that a code \(B\) is a **refinement** of a code \(A\) if \(\bigcup_{\alpha\in A}\mathfrak{C}_{\alpha}=\bigcup_{\beta\in B}\mathfrak{C}_{\beta}\) and for every \(\beta\in B\) there exists \(\alpha\in A\) with \(\mathfrak{C}_{\beta}\subseteq\mathfrak{C}_{\alpha}\). We say that \(B\) is an **elementary refinement** of \(A\) if there exists \(\alpha\in A\) such that
\[B=\big{(}A\setminus\{\alpha\}\big{)}\cup\{\beta_{1},\dots,\beta_{k}\}\]
where \(\mathfrak{C}_{\beta_{1}},\dots,\mathfrak{C}_{\beta_{k}}\) are the maximal proper subcones of \(\mathfrak{C}_{\alpha}\). Note that any refinement can be realized by a sequence of elementary refinements.
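For a concrete instance, consider the graph of Example 2.49 (one node, loop edges \(0\) and \(1\)): the code \(A=\{0,1\}\) has the elementary refinement \(B=\{0,10,11\}\), obtained by replacing \(\mathfrak{C}_{1}\) with its maximal proper subcones \(\mathfrak{C}_{10}\) and \(\mathfrak{C}_{11}\), and a further elementary refinement at \(0\) produces the refinement \(\{00,01,10,11\}\) of \(A\), illustrating how an arbitrary refinement factors as a sequence of elementary ones.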
Next, if \(h\) and \(h^{\prime}\) are nuclear generators with nuclear codes \(A\) and \(A^{\prime}\), respectively, we say that an element \(c\in V_{\Gamma,E}\) is a **rigid conjugator**
from \(h\) to \(h^{\prime}\) if \(c\) maps \(A\) to \(A^{\prime}\) in a way that preserves the corresponding nuclear states. Note then that \(h^{\prime}=chc^{-1}\).
Finally, we say that a normalish form \(fh_{1}\cdots h_{n}\) is **exact** if \(f=1\).
**Definition 3.9** (An infinite set of relations for \(G\)).: Let \(\operatorname{Rel_{inf}}\) be the following set of relations in \(G\), with respect to the generating set consisting of all elements of \(V_{\Gamma,E}\) and all nuclear generators.
**(VRel):**: All relations in \(V_{\Gamma,E}\).
**(VNuc):**: All true relations of the form \(h=f\), where \(h\) is a nuclear generator and \(f\in V_{\Gamma,E}\).
**(DisjComm):**: All relations of the form \([h_{1},h_{2}]=1\), where \(h_{1}\) and \(h_{2}\) are nuclear generators whose domains of support are disjoint.
**(Conj):**: All relations \(h^{\prime}=chc^{-1}\), where \(h\) and \(h^{\prime}\) are nuclear generators and \(c\in V_{\Gamma,E}\) is a rigid conjugator from \(h\) to \(h^{\prime}\).
**(Refine):**: For every nuclear generator \(h\) and every elementary refinement \(B\) of the nuclear code for \(h\), a relation of the form \(h=fh_{1}\cdots h_{n}\), where the right side is a normalish form for \(h\) with nuclear code \(B\).
**(Inv):**: For every nuclear generator \(h\), a relation \(h^{-1}=fh_{1}\cdots h_{n}\), where the right side is some normalish form for \(h^{-1}\).
**(Prod):**: For every exact normalish form \(h_{1}\cdots h_{m}\), say with nuclear code \(A\), and every nuclear generator \(h\) whose nuclear code \(B\) is a subset of \(A\), a relation \(h_{1}\cdots h_{m}h^{-1}=fh_{1}^{\prime}\cdots h_{n}^{\prime}\), where the right side is a normalish form whose nuclear code contains \(A\setminus B\).
Note that (VRel), (VNuc), and (Conj) hold in \(G\) by definition, and (DisjComm) clearly holds as well. The relations (Refine), (Inv), and (Prod) involve some arbitrary choices for the normalish forms on the right-hand sides of the relations, and these relations will hold in \(G\) as long as normalish forms satisfying the given conditions exist. The following proposition verifies this.
**Proposition 3.10**.: _The normalish forms required for the right-hand sides of the relations (Refine), (Inv), and (Prod) exist._
Proof.: For (Inv), any normalish form for \(h^{-1}\) suffices, and such a normalish form always exists by Corollary 3.8.
For (Refine), suppose \(h\) is a nuclear generator with nuclear code \(A\), and let \(B\) be an elementary refinement of \(A\). We must show that there exists a normalish form for \(h\) with nuclear code \(B\). Let \(r\) be a rectifier for \(h\). Then \((rh)|_{\alpha}\in\mathcal{N}\) for all \(\alpha\in A\), and it follows that \((rh)|_{\beta}\in\mathcal{N}\) for all \(\beta\in B\). Since \(rh\) agrees with \(r\) on the complement of \(\bigcup_{\beta\in B}\mathfrak{C}_{\beta}\), by Proposition 3.7 there exists a normalish form \(fh_{1}\cdots h_{n}\) for \(rh\) whose
nuclear code is \(B\). Then \(f^{\prime}h_{1}\cdots h_{n}\) is a normalish form for \(h\) with nuclear code \(B\), where \(f^{\prime}=r^{-1}f\).
For (Prod), let \(h_{1}\cdots h_{m}\) be an exact normalish form for some \(g\in G\) with nuclear code \(A\), and let \(h\) be a nuclear generator whose nuclear code \(B\) is a subset of \(A\). We must prove that there exists a normalish form for \(gh^{-1}\) whose nuclear code contains \(A\setminus B\). If \(B=A\) then \(A\setminus B=\emptyset\), so the claim is just that \(h_{1}\cdots h_{m}h^{-1}\) has a normalish form, and this is immediate from Corollary 3.8. Now suppose \(B\) is a proper subset of \(A\). For each \(i\), let \(A_{i}\) be the nuclear code for \(h_{i}\), and let \(r_{i}\) be a rectifier for \(h_{i}\).
Next observe that there exists a unique complete code \(\{\nu_{1},\ldots,\nu_{M}\}\) of minimum size such that each \(t(\nu_{j})\) is a node in \(\Gamma_{0}\). In particular, any cone \(\mathfrak{C}_{\nu}\subseteq E\) with \(t(\nu)\) a node in \(\Gamma_{0}\) is contained in some \(\mathfrak{C}_{\nu_{j}}\). Choose an incomplete code \(C=\{\xi_{ij}\mid 1\leq i\leq m,\,1\leq j\leq M\}\) such that each \(t(\xi_{ij})=t(\nu_{j})\), and for each \(1\leq i\leq m\) let \(k_{i}\colon E\to E\) be the (non-surjective) map that sends each \(\mathfrak{C}_{\nu_{j}}\) to \(\mathfrak{C}_{\xi_{ij}}\) by the canonical similarity. Note that, for any \(g^{\prime}\) in \(G\) and any \(\alpha\), if the local action \(g^{\prime}|_{\alpha}\) lies in \(\mathcal{N}\) then \(t(\overline{g^{\prime}}(\alpha))\) is a node in \(\Gamma_{0}\). Hence, in this case, \(\overline{g^{\prime}}(\alpha)\) has some \(\nu_{j}\) as a prefix, and so each \(k_{i}\) acts as a canonical similarity on \(\mathfrak{C}_{\overline{g^{\prime}}(\alpha)}\). In particular \((k_{i}\circ g^{\prime})|_{\alpha}=g^{\prime}|_{\alpha}\) for each \(i\).
Now let \(r\) be any element of \(V_{\Gamma,E}\) that agrees with \(k_{i}\circ r_{i}\) on each \(\bigcup_{\alpha\in A_{i}\setminus B}h_{i}(\mathfrak{C}_{\alpha})\). Such an element exists since \(A\setminus B\) and \(C\) are both incomplete codes. Note that \(r\) satisfies \((rg)|_{\alpha}=(rh_{i})|_{\alpha}=(r_{i}h_{i})|_{\alpha}\) for each \(i\) and each \(\alpha\in A_{i}\setminus B\). In particular, \((rg)|_{\alpha}\in\mathcal{N}\) for all \(\alpha\in A\setminus B\), and hence \((rgh^{-1})|_{\alpha}\in\mathcal{N}\) for all \(\alpha\in A\setminus B\). Let \(B^{\prime}\) be a refinement of \(B\) so that \((rgh^{-1})|_{\beta}\in\mathcal{N}\) for all \(\beta\in B^{\prime}\), and let \(A^{\prime}=(A\setminus B)\cup B^{\prime}\). Then \((rgh^{-1})|_{\alpha}\in\mathcal{N}\) for all \(\alpha\in A^{\prime}\). By refining \(B^{\prime}\) further we can make sure that \(\sum_{\alpha\in A^{\prime}}(rgh^{-1})|_{\alpha}\) is not a generator for \(\ker(\partial_{1})\), so by Proposition 3.7 there exists a normalish form \(fh_{1}^{\prime}\cdots h_{n}^{\prime}\) for \(rgh^{-1}\) with nuclear code \(A^{\prime}\). Then \((r^{-1}f)h_{1}^{\prime}\cdots h_{n}^{\prime}\) is a normalish form for \(gh^{-1}\) whose nuclear code \(A^{\prime}\) contains \(A\setminus B\).
Our goal for the remainder of this subsection is to prove that the relations in \(\operatorname{Rel}_{\inf}\) give a presentation for \(G\). If \(w\) and \(w^{\prime}\) are words in the nuclear generators and the elements of \(V_{\Gamma,E}\), we will write \(w\equiv_{\inf}w^{\prime}\) if the relation \(w=w^{\prime}\) follows from the relations in \(\operatorname{Rel}_{\inf}\).
**Lemma 3.11** (Refining a normalish form).: _Let \(w\) be a normalish form with nuclear code \(A\), and let \(B\) be any refinement of \(A\). Then there exists a normalish form \(w^{\prime}\) with nuclear code \(B\) such that \(w\equiv_{\inf}w^{\prime}\)._
Proof.: It suffices to consider the case where \(B\) is an elementary refinement of \(A\), say
\[B=\big{(}A\setminus\{\alpha_{0}\}\big{)}\cup\{\beta_{1},\dots,\beta_{k}\}.\]
Suppose \(w\) is \(fh_{1}\cdots h_{n}\). By (DisjComm), we may permute \(h_{1},\dots,h_{n}\) freely, so suppose \(\alpha_{0}\) is part of the nuclear code \(A_{1}\) for \(h_{1}\). Let \(B_{1}=\big{(}A_{1}\setminus\{\alpha_{0}\}\big{)}\cup\{\beta_{1},\dots,\beta_{k}\}\). Then \(B_{1}\) is an elementary refinement of \(A_{1}\), so by (Refine) we have \(h_{1}\equiv_{\inf}f^{\prime}h_{1}^{\prime}\cdots h_{m}^{\prime}\) for some normalish form \(f^{\prime}h_{1}^{\prime}\cdots h_{m}^{\prime}\) with nuclear code \(B_{1}\). Then
\[w\equiv_{\inf}f^{\prime\prime}h_{1}^{\prime}\cdots h_{m}^{\prime}h_{2}\cdots h _{n},\]
where \(f^{\prime\prime}=ff^{\prime}\), and the right side is the desired normalish form \(w^{\prime}\) with nuclear code \(B\).
We will say that a normalish form is **complete** if its nuclear code is complete, and **incomplete** otherwise.
**Lemma 3.12** (Completing a normalish form).: _If \(w\) is any normalish form, there exists a complete normalish form \(w^{\prime}\) such that \(w\equiv_{\inf}w^{\prime}\)._
Proof.: Observe that if \(v\) is any node of \(\Gamma_{0}\) and \(\mathfrak{C}_{\beta}\) is a cone with \(t(\beta)=v\), then the identity element of \(G\) can be viewed as a nuclear generator \(e_{\beta}\) with nuclear code \(\{\beta\}\). It follows from (VNuc) that \(e_{\beta}\equiv_{\inf}1\).
Now suppose \(w=fh_{1}\cdots h_{n}\) is an incomplete normalish form. Let \(A\) be its nuclear code, and let \(E^{\prime}=\bigcup_{\alpha\in A}\mathfrak{C}_{\alpha}\). We can partition \(E\setminus E^{\prime}\) into finitely many cones \(\mathfrak{C}_{\beta_{1}},\dots,\mathfrak{C}_{\beta_{k}}\) such that each \(t(\beta_{i})\) is a node of \(\Gamma_{0}\). Then \(w\equiv_{\inf}fh_{1}\cdots h_{n}e_{\beta_{1}}\cdots e_{\beta_{k}}\), where the right side is a complete normalish form \(w^{\prime}\) with nuclear code \(A\cup\{\beta_{1},\dots,\beta_{k}\}\).
**Lemma 3.13** (Checking for the identity).: _If \(w\) is a normalish form for the identity, then \(w\equiv_{\inf}1\)._
Proof.: Suppose \(w=fh_{1}\cdots h_{n}\) is a normalish form for the identity, and let \(C_{i}\) be the domain of support for each \(h_{i}\). Then each \(h_{i}\) agrees with \(f^{-1}\) on \(C_{i}\) and is the identity elsewhere, so each \(h_{i}\) is equal to some element \(f_{i}\in V_{\Gamma,E}\). By (VNuc), it follows that \(h_{i}\equiv_{\inf}f_{i}\) for each \(i\), so \(w\equiv_{\inf}ff_{1}\cdots f_{n}\). Since \(ff_{1}\cdots f_{n}=1\), it follows from (VRel) that \(w\equiv_{\inf}1\).
**Lemma 3.14** (Multiplying by an element of \(V_{\Gamma,E}\)).: _For any normalish form \(w\) and any \(f\in V_{\Gamma,E}\), there exists a normalish form \(w^{\prime}\) such that \(wf\equiv_{\inf}w^{\prime}\)._
Proof.: Let \(A\) be the nuclear code for \(w\). We can choose a refinement \(B\) of \(A\) such that \(f^{-1}\) acts as a canonical similarity on \(\mathfrak{C}_{\beta}\) for each \(\beta\in B\). By Lemma 3.11, there exists a normalish form
with nuclear code \(B\) such that \(w\equiv_{\inf}w_{B}\). By (Conj), there exists for each \(h_{i}\) a nuclear generator \(h^{\prime}_{i}\) such that \(f^{-1}h_{i}f\equiv_{\inf}h^{\prime}_{i}\). Then \(w_{B}f=f^{\prime}h_{1}\cdots h_{n}f\equiv_{\inf}f^{\prime}fh^{\prime}_{1}\cdots h^{\prime}_{n}\), and combining \(f^{\prime}f\) using a relation from (VRel) gives the desired normalish form \(w^{\prime}\).
**Lemma 3.15** (Multiplying by a nuclear generator).: _If \(w\) is a normalish form and \(h\) is a nuclear generator, then there exists a normalish form \(w^{\prime}\) such that \(wh\equiv_{\inf}w^{\prime}\)._
Proof.: Suppose \(w=fh_{1}\cdots h_{n}\). By Lemma 3.12, we may assume that \(w\) is complete. By (Inv) and Lemma 3.12 we can write \(h^{-1}\) as a complete normalish form, say \(f^{\prime}h^{\prime}_{1}\cdots h^{\prime}_{m}\). Since our two normalish forms are complete, their nuclear codes have a common refinement, so by Lemma 3.11 we may assume that \(w\) and \(f^{\prime}h^{\prime}_{1}\cdots h^{\prime}_{m}\) have the same nuclear code \(A\). After applying (DisjComm), our goal is to put
\[wh\equiv_{\inf}fh_{1}\cdots h_{n}(h^{\prime}_{1})^{-1}\cdots(h^{\prime}_{m})^{-1}(f^{\prime})^{-1}\]
into normalish form. The nuclear code \(B_{1}\) of \(h^{\prime}_{1}\) is a subset of \(A\). Thus, using (Prod) and (VRel), we can put the subword \(fh_{1}\cdots h_{n}(h^{\prime}_{1})^{-1}\) into normalish form while keeping \(A\setminus B_{1}\) a subset of the overall nuclear code. The nuclear code \(B_{2}\) of \(h^{\prime}_{2}\) is a subset of \(A\setminus B_{1}\), so we can once again apply (Prod) and (VRel), and continue in this way (since the domains of support of the \(h^{\prime}_{i}\) are pairwise disjoint) until the entire word equals some normalish form times \((f^{\prime})^{-1}\). Now we conclude using Lemma 3.14.
**Proposition 3.16**.: _The relations of \(\operatorname{Rel}_{\inf}\) define an infinite presentation for \(G\)._
Proof.: Let \(w\) be any word for the identity. Applying (Inv) to any inverse nuclear generators occurring in \(w\), we obtain a word \(w^{\prime}\) without any inverse nuclear generators such that \(w\equiv_{\inf}w^{\prime}\). Applying Lemma 3.14 and Lemma 3.15 inductively, we obtain a normalish form \(w^{\prime\prime}\) such that \(w^{\prime}\equiv_{\inf}w^{\prime\prime}\). Now since \(w^{\prime\prime}\) is a normalish form for the identity, Lemma 3.13 implies that \(w^{\prime\prime}\equiv_{\inf}1\), and hence \(w\equiv_{\inf}1\).
### A finite presentation
In this subsection we use the infinite presentation from Subsection 3.2 to derive a finite presentation for \(G\), using the finite generating set consisting of the model nuclear generators \(h_{P}\) and a finite generating set for \(V_{\Gamma,E}\) (see Proposition 3.3).
First, if \(h\) is a nuclear generator, define the **rigid centralizer** of \(h\) to be the group of all rigid conjugators from \(h\) to itself.
**Proposition 3.17**.: _The rigid centralizer of any nuclear generator is finitely generated._
Proof.: Let \(h\) be a nuclear generator with nuclear code \(A\) and domain of support \(E^{\prime}\). By definition, any rigid conjugator from \(h\) to \(h\) must permute the cones \(\{\mathfrak{C}_{\alpha}\}_{\alpha\in A}\) by canonical similarities. In particular, since \(A\) is finite, the group \(\operatorname{Fix}(E^{\prime})\) of elements of \(V_{\Gamma,E}\) that fix \(E^{\prime}\) pointwise has finite index in the rigid centralizer. But \(\operatorname{Fix}(E^{\prime})\) is isomorphic to \(V_{\Gamma,E\setminus E^{\prime}}\), and is therefore finitely generated by Corollary 2.21.
Next, we say that two exact normalish forms \(h_{1}\cdots h_{n}\) and \(h_{1}^{\prime}\cdots h_{n}^{\prime}\) are **rigidly conjugate** if there exists a single element \(c\in V_{\Gamma,E}\) that is a rigid conjugator from \(h_{i}\) to \(h_{i}^{\prime}\) for each \(i\).
Note that every nuclear generator is rigidly conjugate to a model nuclear generator, so there are only finitely many rigid conjugacy classes of nuclear generators. The following proposition generalizes this.
**Proposition 3.18** (Criterion for rigid conjugacy).: _Two exact normalish forms \(h_{1}\cdots h_{n}\) and \(h_{1}^{\prime}\cdots h_{n}^{\prime}\) are rigidly conjugate if and only if they are either both complete or both incomplete, and \(h_{i}^{\prime}\) is rigidly conjugate to \(h_{i}\) for each \(i\)._
Proof.: The forward direction is trivial. For the converse, suppose the given normalish forms satisfy the given conditions. For each \(i\), let \(c_{i}\) be a rigid conjugator from \(h_{i}\) to \(h_{i}^{\prime}\). Since the nuclear codes for our normalish forms are either both complete or both incomplete, there exists \(c\in V_{\Gamma,E}\) such that \(c\) agrees with each \(c_{i}\) on the domain of support of \(h_{i}\). Then \(c\) is the desired rigid conjugator from \(h_{1}\cdots h_{n}\) to \(h_{1}^{\prime}\cdots h_{n}^{\prime}\).
**Corollary 3.19**.: _For any \(N\in\mathbb{N}\), there are only finitely many rigid conjugacy classes of exact normalish forms \(h_{1}\cdots h_{n}\) with \(n\leq N\)._
Proof.: Any nuclear generator is rigidly conjugate to a model nuclear generator, so there are only finitely many rigid conjugacy classes of nuclear generators. The result follows easily from this and Proposition 3.18.
Now, fix an \(N\in\mathbb{N}\) so that each generator for \(\ker(\partial_{1})\) has at most \(N\) summands, and choose representatives for the rigid conjugacy classes of exact normalish forms \(h_{1}\cdots h_{n}\) with \(n\leq N\). We refer to these finitely many representatives as **model normalish forms**.
We are now ready to state the relations in our finite presentation for \(G\). For convenience, we actually state another infinite presentation, with generators consisting of the elements of \(V_{\Gamma,E}\) together with all nuclear generators. However, the presentation we give will easily reduce to a finite presentation through Tietze transformations.
**Definition 3.20** (Defining relations for \(G\)).: Let \(\operatorname{Rel}_{\operatorname{fin}}\) be the following set of relations.
**(VRel):**: All the relations in \(V_{\Gamma,E}\).
**(Model):**: For each nuclear generator \(h\) that is not a model nuclear generator, one relation of the form \(h=ch_{P}c^{-1}\), where \(h_{P}\) is a model nuclear generator and \(c\) is a rigid conjugator from \(h_{P}\) to \(h\).
**(VNucFin):**: Every true relation of the form \(h_{P}=f\), where \(h_{P}\) is a model nuclear generator and \(f\in V_{\Gamma,E}\).
**(CentConjFin):**: For each model nuclear generator \(h_{P}\), relations \(c_{i}h_{P}c_{i}^{-1}=h_{P}\), where \(\{c_{1},\ldots,c_{n}\}\) is some finite generating set for the rigid centralizer of \(h_{P}\) (see Proposition 3.17).
**(DisjCommFin):**: For each pair \(h_{P},h_{P^{\prime}}\) of model nuclear generators, one relation \([h_{P},h]=1\), where \(h\) is some nuclear generator that is rigidly conjugate to \(h_{P^{\prime}}\), and such that \(h_{P}h\) is an incomplete normalish form.
**(RefineFin):**: For each model nuclear generator \(h_{P}\) and each elementary refinement \(B\) of the nuclear code for \(h_{P}\), one corresponding relation \(h_{P}=fh_{1}\cdots h_{n}\) of type (Refine).
**(InvFin):**: For each model nuclear generator \(h_{P}\), one corresponding relation \(h_{P}^{-1}=fh_{1}\cdots h_{n}\) of type (Inv).
**(ProdFin):**: For each model normalish form \(h_{1}\cdots h_{m}\), say with nuclear code \(A\), and each nuclear generator \(h\) whose nuclear code \(B\) is a subset of \(A\), one corresponding relation \(h_{1}\cdots h_{m}h^{-1}=fh_{1}^{\prime}\cdots h_{n}^{\prime}\) of type (Prod).
Note that all of these sets of relations are finite except for (VRel) and (Model), which we plan to simplify with Tietze transformations. In particular, we should point out that for (ProdFin), even though \(h\) is not necessarily a model nuclear generator, nonetheless there can be only finitely many such \(h\) whose nuclear code is a subset of \(A\). Note also that all of the relations in \(\operatorname{Rel}_{\operatorname{fin}}\) are contained in \(\operatorname{Rel}_{\operatorname{inf}}\), with (Model) and (CentConjFin) both being subsets of (Conj).
Our goal is to prove that all the relations in \(\operatorname{Rel}_{\operatorname{inf}}\) follow from those in \(\operatorname{Rel}_{\operatorname{fin}}\), and therefore \(\operatorname{Rel}_{\operatorname{fin}}\) is a set of defining relations for \(G\). We will then use Tietze transformations to deduce that \(G\) is finitely presented. For the remainder of this section, if \(w\) and \(w^{\prime}\) are words, write \(w\equiv_{\operatorname{fin}}w^{\prime}\) if the relations of \(\operatorname{Rel}_{\operatorname{fin}}\) imply that \(w=w^{\prime}\).
**Lemma 3.21**.: _The relations in (VNuc) follow from those in \(\operatorname{Rel}_{\operatorname{fin}}\)._
Proof.: Let \(h\) be a nuclear generator, and suppose \(h=f\) for some \(f\in V_{\Gamma,E}\). By (Model), we know that \(h\equiv_{\operatorname{fin}}ch_{P}c^{-1}\) for some model nuclear generator \(h_{P}\) and some \(c\in V_{\Gamma,E}\). Then \(h_{P}\) also lies in \(V_{\Gamma,E}\), so
by (VNucFin), \(h_{P}\equiv_{\rm fin}f^{\prime}\) for some \(f^{\prime}\in V_{\Gamma,E}\). Then \(h\equiv_{\rm fin}cf^{\prime}c^{-1}\), and by (VRel) we have \(cf^{\prime}c^{-1}\equiv_{\rm fin}f\).
**Lemma 3.22**.: _The relations in_ (Conj) _follow from those in_ Rel\({}_{\rm fin}\)_._
Proof.: Let \(h_{2}=ch_{1}c^{-1}\) be a relation in (Conj), where \(h_{1},h_{2}\) are nuclear generators and \(c\) is a rigid conjugator from \(h_{1}\) to \(h_{2}\). Then \(h_{1}\) and \(h_{2}\) have the same nuclear states, so they are rigidly conjugate to the same model nuclear generator \(h_{P}\). By (Model), we know that \(h_{1}\equiv_{\rm fin}d_{1}h_{P}d_{1}^{-1}\) and \(h_{2}\equiv_{\rm fin}d_{2}h_{P}d_{2}^{-1}\) for some rigid conjugators \(d_{1}\) and \(d_{2}\). (In the special case where \(h_{i}=h_{P}\), there is no such relation in (Model), but in this case we can take \(d_{i}=1\).) Then
\[h_{P}=d_{2}^{-1}h_{2}d_{2}=d_{2}^{-1}ch_{1}c^{-1}d_{2}=d_{2}^{-1}cd_{1}h_{P}d_ {1}^{-1}c^{-1}d_{2},\]
so \(d_{2}^{-1}cd_{1}\) is in the rigid centralizer of \(h_{P}\). By (CentConjFin), we know that \([h_{P},c_{i}]\equiv_{\rm fin}1\) for each generator \(c_{i}\) of this rigid centralizer, and by (VRel) it follows that \([h_{P},d_{2}^{-1}cd_{1}]\equiv_{\rm fin}1\), so \(d_{2}h_{P}d_{2}^{-1}\equiv_{\rm fin}cd_{1}h_{P}d_{1}^{-1}c^{-1}\). Then
\[h_{2}\equiv_{\rm fin}d_{2}h_{P}d_{2}^{-1}\equiv_{\rm fin}cd_{1}h_{P}d_{1}^{-1} c^{-1}\equiv_{\rm fin}ch_{1}c^{-1}\]
as desired.
**Lemma 3.23**.: _We can choose the relations in_ (Refine) _to follow from those in_ Rel\({}_{\rm fin}\)_._
Proof.: Let \(h\) be a nuclear generator with nuclear code \(A\), and let \(B\) be an elementary refinement of \(A\). From (Model), we have \(h\equiv_{\rm fin}ch_{P}c^{-1}\), where \(h_{P}\) is a model nuclear generator and \(c\) is a rigid conjugator from \(h_{P}\) to \(h\). Let \(A^{\prime}\) be the nuclear code for \(h_{P}\), so \(c\) maps \(A^{\prime}\) to \(A\). Let \(B^{\prime}=\{\overline{c^{-1}}(\beta)\ |\ \beta\in B\}\), so \(B^{\prime}\) is an elementary refinement of \(A^{\prime}\), and \(c\) maps \(B^{\prime}\) to \(B\). From (RefineFin), we have a relation \(h_{P}\equiv_{\rm fin}f^{\prime}w^{\prime}\), where \(w^{\prime}\) is an exact normalish form with nuclear code \(B^{\prime}\). Then \(c\) is a rigid conjugator from \(w^{\prime}\) to some exact normalish form \(w\) with nuclear code \(B\). It follows from (Conj) and Lemma 3.22 that \(w\equiv_{\rm fin}cw^{\prime}c^{-1}\), and hence
\[h\equiv_{\rm fin}ch_{P}c^{-1}\equiv_{\rm fin}cf^{\prime}w^{\prime}c^{-1}\equiv _{\rm fin}cf^{\prime}c^{-1}w.\]
Combining \(cf^{\prime}c^{-1}\) using (VRel), the right side becomes a normalish form with nuclear code \(B\), as desired.
**Lemma 3.24**.: _The relations in_ (DisjComm) _follow from those in_ Rel\({}_{\rm fin}\)_._
Proof.: Let \([h_{1},h_{2}]=1\) be a relation in (DisjComm), where \(h_{1}\) and \(h_{2}\) are nuclear generators whose domains of support \(E_{1},E_{2}\) are disjoint.
Suppose first that \(E_{1}\cup E_{2}\neq E\). For \(i=1,2\), let \(h_{P_{i}}\) be a model nuclear generator that is rigidly conjugate to \(h_{i}\). By (DisjCommFin), we know that \([h_{P_{1}},h]\equiv_{\rm fin}1\), where \(h\) is some nuclear generator that is
rigidly conjugate to \(h_{P_{2}}\), and such that \(h_{P_{1}}h\) is an incomplete normalish form. Since \(h_{1}h_{2}\) is an incomplete normalish form, by Proposition 3.18 there exists \(c\in V_{\Gamma,E}\) that rigidly conjugates \(h_{P_{1}}h\) to \(h_{1}h_{2}\). Then \(h_{1}\equiv_{\mathrm{fin}}ch_{P_{1}}c^{-1}\) and \(h_{2}\equiv_{\mathrm{fin}}chc^{-1}\) by (Conj) and Lemma 3.22, so it follows that \([h_{1},h_{2}]\equiv_{\mathrm{fin}}1\).
All that remains is the case where \(E_{1}\cup E_{2}=E\). In this case, let \(A\) be the nuclear code for \(h_{2}\), and let \(B\) be an elementary refinement of \(A\). By (Refine) and Lemma 3.23, we have \(h_{2}\equiv_{\mathrm{fin}}f^{\prime}h_{1}^{\prime}\cdots h_{n}^{\prime}\), where the right side is some normalish form with nuclear code \(B\). Note that \(f^{\prime}\) must be supported on \(E_{2}\), so \([h_{1},f^{\prime}]\equiv_{\mathrm{fin}}1\) by (Conj) and Lemma 3.22. If \(n\geq 2\), then \([h_{1},h_{i}^{\prime}]\equiv_{\mathrm{fin}}1\) for each \(i\) by the argument above, and therefore \([h_{1},h_{2}]\equiv_{\mathrm{fin}}1\). Now suppose \(n=1\), so it suffices to prove that \([h_{1},h_{1}^{\prime}]\equiv_{\mathrm{fin}}1\). Note that the nuclear code for \(h_{1}^{\prime}\) is strictly larger than the nuclear code for \(h_{2}\). Since there is a maximum size for the nuclear code of a nuclear generator, we can continue this process until (Refine) gives us a normalish form with two or more nuclear generators, at which point we have reduced to the \(n\geq 2\) case above, so we are done.
**Lemma 3.25** (Refining again).: _Let \(w\) be a normalish form with nuclear code \(A\), and let \(B\) be any refinement of \(A\). Then there exists a normalish form \(w^{\prime}\) with nuclear code \(B\) such that \(w\equiv_{\mathrm{fin}}w^{\prime}\)._
Proof.: By Lemmas 3.23 and 3.24, the relations (Refine) and (DisjComm) follow from those in \(\mathrm{Rel}_{\mathrm{fin}}\). Thus, applying the proof of Lemma 3.11 verbatim with \(\equiv_{\mathrm{fin}}\) in place of \(\equiv_{\mathrm{inf}}\) yields the desired \(w^{\prime}\).
**Lemma 3.26**.: _We can choose the relations in (Inv) to follow from those in \(\mathrm{Rel}_{\mathrm{fin}}\)._
Proof.: Let \(h\) be a nuclear generator. From (Model), we have \(h\equiv_{\mathrm{fin}}ch_{P}c^{-1}\) for some model nuclear generator \(h_{P}\), where \(c\) is a rigid conjugator from \(h_{P}\) to \(h\). From (InvFin), we have \(h_{P}^{-1}\equiv_{\mathrm{fin}}w\), where \(w\) is some normalish form with nuclear code \(A\). Let \(B\) be a refinement of \(A\) such that \(c\) acts as a canonical similarity on \(\mathfrak{C}_{\beta}\) for each \(\beta\in B\). By Lemma 3.25, there exists a normalish form \(w^{\prime}\) with nuclear code \(B\) such that \(w\equiv_{\mathrm{fin}}w^{\prime}\). Then \(c\) rigidly conjugates \(w^{\prime}\) to some normalish form \(w^{\prime\prime}\). By (Conj) and Lemma 3.22 we know that \(cw^{\prime}c^{-1}\equiv_{\mathrm{fin}}w^{\prime\prime}\), so \(h^{-1}\equiv_{\mathrm{fin}}ch_{P}^{-1}c^{-1}\equiv_{\mathrm{fin}}cwc^{-1}\equiv_{\mathrm{fin}}cw^{\prime}c^{-1}\equiv_{\mathrm{fin}}w^{\prime\prime}\).
**Lemma 3.27**.: _We can choose the relations in (Prod) to follow from those in \(\mathrm{Rel}_{\mathrm{fin}}\)._
Proof.: Let \(w\) be an exact normalish form \(h_{1}\cdots h_{m}\) with nuclear code \(A\), and let \(h\) be a nuclear generator whose nuclear code \(B\) is a subset of \(A\). Suppose first that \(B\) intersects the nuclear code of each \(h_{i}\). Since
\(|B|\leq N\) by the definition of \(N\) (see the text after Corollary 3.19), it follows that \(m\leq N\), so \(w\) is rigidly conjugate to one of our model normalish forms \(w_{0}\), say with nuclear code \(A_{0}\). Let \(c\) be a rigid conjugator from \(w\) to \(w_{0}\), and note that \(c\) also rigidly conjugates \(h\) to some nuclear generator \(h_{0}\) whose nuclear code \(B_{0}\) is contained in \(A_{0}\). By (Conj) and Lemma 3.22, we know that \(w_{0}h_{0}^{-1}\equiv_{\mathrm{fin}}cwh^{-1}c^{-1}\). By (ProdFin), we have \(w_{0}h_{0}^{-1}\equiv_{\mathrm{fin}}f_{0}w_{0}^{\prime}\), where \(f_{0}\in V_{\Gamma,E}\) and \(w_{0}^{\prime}\) is some exact normalish form whose nuclear code contains \(A_{0}\setminus B_{0}\).
Now observe that \(c^{-1}\) acts as a canonical similarity on \(\mathfrak{C}_{\alpha}\) for each \(\alpha\in A_{0}\setminus B_{0}\). Let \(A_{1}\) be a refinement of the nuclear code of \(w_{0}^{\prime}\) that contains \(A_{0}\setminus B_{0}\) and such that \(c^{-1}\) acts as a canonical similarity on \(\mathfrak{C}_{\alpha}\) for each \(\alpha\in A_{1}\). By Lemma 3.25, there exists \(f_{1}\in V_{\Gamma,E}\) and an exact normalish form \(w_{1}\) with nuclear code \(A_{1}\) so that \(f_{0}w_{0}^{\prime}\equiv_{\mathrm{fin}}f_{1}w_{1}\). Then \(c^{-1}\) rigidly conjugates \(w_{1}\) to some exact normalish form \(w_{2}\) whose nuclear code contains \(A\setminus B\), and it follows from (Conj) and Lemma 3.22 that \(w_{2}\equiv_{\mathrm{fin}}c^{-1}w_{1}c\).
\[wh^{-1}\equiv_{\mathrm{fin}}c^{-1}w_{0}h_{0}^{-1}c\equiv_{\mathrm{fin}}c^{-1} f_{0}w_{0}^{\prime}c\equiv_{\mathrm{fin}}c^{-1}f_{1}w_{1}c\equiv_{\mathrm{fin}}c^{-1} f_{1}cw_{2}\]
and combining the \(c^{-1}f_{1}c\) on the right using (VRel) yields the desired normalish form.
All that remains is the case where \(B\) does not intersect the nuclear code of each \(h_{i}\) in the original normalish form \(h_{1}\cdots h_{m}\). By (DisjComm) and Lemma 3.24, we can permute the \(h_{i}\) freely, so we may assume that \(B\) intersects the nuclear codes of \(h_{1},\ldots,h_{j}\) for some \(j\) and does not intersect the nuclear codes of \(h_{j+1},\ldots,h_{m}\). By (DisjComm) and Lemma 3.24, we know that
\[h_{1}\cdots h_{m}h^{-1}\equiv_{\mathrm{fin}}h_{1}\cdots h_{j}h^{-1}h_{j+1} \cdots h_{m}.\]
Let \(A_{1}^{\prime}\) and \(A_{2}^{\prime}\) be the nuclear codes for \(h_{1}\cdots h_{j}\) and \(h_{j+1}\cdots h_{m}\), respectively, and note that \(B\subseteq A_{1}^{\prime}\). By the argument above, there exists a normalish form \(w^{\prime\prime}\) whose nuclear code \(A^{\prime\prime}\) contains \(A_{1}^{\prime}\setminus B\) such that \(h_{1}\cdots h_{j}h^{-1}\equiv_{\mathrm{fin}}w^{\prime\prime}\). Then \(wh^{-1}\equiv_{\mathrm{fin}}w^{\prime\prime}h_{j+1}\cdots h_{m}\), and the word on the right is a normalish form whose nuclear code contains \((A_{1}^{\prime}\setminus B)\cup A_{2}^{\prime}=A\setminus B\), as desired.
Proof of Theorem 3.1.: By Lemmas 3.21, 3.22, 3.23, 3.24, 3.26, and 3.27, all of the relations \(\mathrm{Rel}_{\mathrm{inf}}\) listed in Definition 3.9 follow from the relations \(\mathrm{Rel}_{\mathrm{fin}}\) listed in Definition 3.20. By Proposition 3.16, the relations in \(\mathrm{Rel}_{\mathrm{inf}}\) give a presentation for \(G\), and therefore the relations in \(\mathrm{Rel}_{\mathrm{fin}}\) do as well.
The relations in \(\mathrm{Rel}_{\mathrm{fin}}\) use an infinite generating set consisting of all the elements of \(V_{\Gamma,E}\) together with all of the nuclear generators. Furthermore, the relations (VRel) and (Model) are infinite families. However, for each relation \(h=ch_{P}c^{-1}\) in (Model), we can use a Tietze
transformation to remove this relation as well as the generator \(h\), leaving us with only the finitely many model nuclear generators. Furthermore, since \(V_{\Gamma,E}\) is finitely presented by Corollary 2.21, we can use Tietze transformations to remove all but finitely many of the generators from \(V_{\Gamma,E}\) and all but finitely many of the relations from (VRel), which leaves us with a finite presentation for \(G\).
We would like to reiterate Question 1.2, which asks whether all full, contracting RSGs have type \(\mathrm{F}_{\infty}\), which is stronger than being finitely presented. Note that this is true for the special case of \(V_{\Gamma,E}\), by Corollary 2.21. In terms of a topological proof, for \(V_{\Gamma,E}\) one can mimic the "standard" approach to proving type \(\mathrm{F}_{\infty}\) for certain Thompson-like groups. As soon as \(\mathcal{N}\) contains non-identity elements, however, several steps of the general proof outline break down and do not have clear alternatives. Even for contracting Röver–Nekrashevych groups, it remains an open question whether they always have type \(\mathrm{F}_{\infty}\).
## 4. Embedding hyperbolic groups into RSGs
The main result of [1] is that every hyperbolic group admits a faithful rational representation [1, Theorem 1], and the main result of this section is the following improvement:
**Theorem 4.1**.: _Every hyperbolic group embeds into a full, contracting RSG._
The key improvement here is "local to global": roughly speaking, in [1] it was proved that each element individually has only finitely many local actions, and here we prove that there is a common finite set of local actions containing all but finitely many of each element's local actions. In what follows we will often be discussing a group \(G\) with some fixed finite generating set and associated word metric \(d\). In such cases we will often identify \(G\) with its Cayley graph, and so for example refer to geodesics in \(G\). In this case we write \(C(x)\) for the **cone** on \(x\in G\), namely \(C(x)\coloneqq\{y\in G\mid d(1,y)=d(1,x)+d(x,y)\}\). For \(g\in G\) we also write \(|g|\) for the word length of \(g\), i.e. \(|g|\coloneqq d(1,g)\). If \(G\) happens to be hyperbolic, then we will also assume a constant of hyperbolicity \(\delta>0\).
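For example, take \(G=\mathbb{Z}\) with generating set \(\{\pm 1\}\) and write the identity as \(0\). Then \(C(3)=\{y\in\mathbb{Z}\mid|y|=3+|y-3|\}=\{y\in\mathbb{Z}\mid y\geq 3\}\); in general, \(C(x)\) consists of exactly those points \(y\) such that \(x\) lies on a geodesic from the identity to \(y\).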
In Subsections 4.1 and 4.2, we briefly recall the definition of the horofunction boundary, as well as the tree of atoms and associated machinery from [1]. In Subsection 4.3 we prove that the action of \(G\) on its horofunction boundary is particularly well-behaved when \(G\) has \(\mathbb{Z}\) as a proper free factor. Finally, in Subsections 4.4 and 4.5 we prove that the image of a hyperbolic group in the rational group
is contracting as long as the action on the horofunction boundary is well-behaved.
### The horofunction boundary and the tree of atoms
Gromov defined a compact boundary \(\partial_{h}X\) for any metric space \(X\), known as the horofunction boundary (see [1, Section 7.5] or [1, Chapter II.8]). If \(G\) is a group and \(d\) is the word metric on \(G\) with respect to some finite generating set, then \(\partial_{h}G\) is a compact, totally disconnected, metrizable space. We will define the horofunction boundary in this more restricted context.
Let \(F(G,\mathbb{Z})\) be the abelian group of all integer-valued functions on \(G\), and let \(\overline{F}(G,\mathbb{Z})\) be the quotient of \(F(G,\mathbb{Z})\) by the subgroup of all constant functions. Viewing \(F(G,\mathbb{Z})=\mathbb{Z}^{G}\) as a topological space with the product topology, we also get a (quotient) topology on \(\overline{F}(G,\mathbb{Z})\). For each \(x\in G\), let \(d_{x}\colon G\to\mathbb{Z}\) be the function \(d_{x}(y)\coloneqq d(x,y)\), and let \(\overline{d}_{x}\) denote the image of this function in \(\overline{F}(G,\mathbb{Z})\). Then the mapping \(x\mapsto\overline{d}_{x}\) defines a topological embedding \(i\colon G\to\overline{F}(G,\mathbb{Z})\).
**Definition 4.2** (Horofunction boundary).: The **horofunction boundary**\(\partial_{h}G\) of \(G\) is the set of all limit points of \(i(G)\) in \(\overline{F}(G,\mathbb{Z})\).
A function \(h\colon G\to\mathbb{Z}\) whose image in \(\overline{F}(G,\mathbb{Z})\) lies in \(\partial_{h}G\) is known as a **horofunction**. This terminology comes from hyperbolic geometry, where each point on the boundary of hyperbolic \(n\)-space has associated horofunctions whose level sets are horospheres. The horofunctions associated to Gromov's horofunction boundary are similar to, but distinct from, the horofunctions for \(\delta\)-hyperbolic spaces introduced by Coornaert and Papadopoulos [10].
It is a fact that \(\partial_{h}G\) is always compact and totally disconnected [1, Proposition 1.28], and \(G\) acts on \(\partial_{h}G\) by homeomorphisms. Unlike some other boundaries, the horofunction boundary is not a quasi-isometry invariant, and indeed the homeomorphism type of \(\partial_{h}G\) can depend on the finite generating set chosen for \(G\). The horofunction boundary is convenient for us primarily because it is totally disconnected, and as a result can sometimes be identified with a clopen subset of a subshift of finite type.
As described in [1], the horofunction boundary of a group can be realized as the space of ends of a certain infinite, rooted tree called the tree of atoms, which we will now define. Let \(B_{n}\) denote the \(n\)-ball in \(G\), that is, the ball of radius \(n\) centered at the identity, and define an equivalence relation \(\sim\) on \(G\) by \(x\sim y\) if \(\overline{d}_{x}\) and \(\overline{d}_{y}\) agree on \(B_{n}\). That is, \(x\sim y\) if \(d_{x}-d_{y}\) is constant on \(B_{n}\). It turns out that there are only finitely many equivalence classes [1, Proposition 3.3], and these are the **atoms for \(B_{n}\)**. Note that any atom for \(B_{n+1}\) must be contained in an atom for \(B_{n}\).
**Definition 4.3** (Tree of atoms).: The **tree of atoms**\(\mathcal{A}(G)\) of \(G\) is the tree with a vertex for each infinite atom of each \(B_{n}\) (\(n\geq 0\)), and with an edge from an atom of \(B_{n}\) to an atom of \(B_{n+1}\) whenever the latter is contained in the former.
For example, the root of \(\mathcal{A}(G)\) is the unique atom of \(B_{0}\), namely all of \(G\) (assuming \(G\) is infinite). We denote by \(\mathcal{A}_{n}(G)\) the vertices of \(\mathcal{A}(G)\) representing the infinite atoms of \(B_{n}\). If \(A\) is an atom for \(B_{n}\), we denote by \(\overline{d}_{A}\) the function on \(B_{n}\) (up to an additive constant) that is equal to \(\overline{d}_{x}\) for all \(x\in A\).
Each infinite atom \(A\in\mathcal{A}_{n}(G)\) has a **shadow**
\[\partial A=\{\overline{h}\in\partial_{h}G\mid\overline{h}\text{ agrees with }\overline{d}_{A}\text{ on }B_{n}\},\]
which is a clopen subset of \(\partial_{h}G\). These form a basis for the topology on \(\partial_{h}G\), and indeed \(\partial_{h}G\) is homeomorphic to the space of ends of \(\mathcal{A}(G)\), or equivalently the space of infinite descending paths in \(\mathcal{A}(G)\)[1, Theorem 3.6].
Figure 1. (a) The tree of atoms for \(\mathbb{Z}^{2}\) and the horofunction boundary. (b) The atoms for \(B_{3}\subseteq\mathbb{Z}^{2}\).

**Example 4.4**.: Figure 1(a) shows the tree of atoms for \(\mathbb{Z}^{2}\) as well as the horofunction boundary \(\partial_{h}\mathbb{Z}^{2}\) with respect to the generating set \(\{(1,0),(0,1)\}\). The set \(\mathcal{A}_{0}(\mathbb{Z}^{2})\) consists of the single root atom \(\mathbb{Z}^{2}\), which we have placed in the center of the figure. For \(n\geq 1\), the atoms of \(B_{n}\) are the sets \(X\times Y\), where each of \(X\) and \(Y\) is one of \(\{k\mid k\leq-n\}\), \(\{k\mid k\geq n\}\), or a singleton set \(\{k\}\) for \(-n<k<n\). For example, the atoms for \(B_{3}\) are shown in Figure 1(b). Note that \(8n\) of the atoms for \(B_{n}\) are infinite, and thus appear in the tree of atoms. Of these \(8n\), four of them have three children each, while the remaining atoms have one child each. Note that the elements of \(\mathcal{A}(\mathbb{Z}^{2})\) happen to be in one-to-one correspondence with the elements of \(\mathbb{Z}^{2}\), but this is just an artifact of this example.
As shown in Figure 1(a), the horofunction boundary in this case is homeomorphic to the complement of \(\mathbb{Z}^{2}\) in \(\widehat{\mathbb{Z}}^{2}\), where \(\widehat{\mathbb{Z}}=\mathbb{Z}\cup\{\pm\infty\}\) is the two-point compactification of \(\mathbb{Z}\). For example, each point \((+\infty,n)\in\partial_{h}\mathbb{Z}^{2}\) corresponds to the horofunction \((x,y)\mapsto-x+|y-n|\), and the point \((-\infty,+\infty)\in\partial_{h}\mathbb{Z}^{2}\) corresponds to the horofunction \((x,y)\mapsto x-y\).
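The atom structure in Example 4.4 is small enough to verify by machine. The following sketch (ours, using nothing beyond the definitions above) groups points of \(\mathbb{Z}^{2}\) by the class of \(d_{x}\) restricted to \(B_{3}\):

```python
from itertools import product

def d(p, q):
    # word metric on Z^2 for the generating set {(1,0),(0,1)}: the L1 distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def signature(x, n, ball):
    # the restriction of d_x to B_n, normalized to vanish at the identity;
    # two points have equal signatures iff d_x - d_y is constant on B_n
    base = d(x, (0, 0))
    return tuple(d(x, p) - base for p in ball)

def atoms(n, box):
    ball = [p for p in product(range(-n, n + 1), repeat=2) if d(p, (0, 0)) <= n]
    classes = {}
    for x in product(range(-box, box + 1), repeat=2):
        classes.setdefault(signature(x, n, ball), []).append(x)
    return classes

# every atom for B_3 meets the box [-12,12]^2, so this finds all of them
print(len(atoms(3, 12)))  # 49 = (2*3+1)**2 atoms, of which 8*3 = 24 are infinite
```

Every atom for \(B_{3}\) meets the sampled box, so the count recovers all \((2n+1)^{2}=49\) atoms for \(n=3\), of which \(8n=24\) are infinite.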
**Example 4.5**.: If \(F_{n}\) is a free group with a basis as generating set, then the atoms for \(F_{n}\) are precisely the same as the cones, the tree of atoms is isomorphic to the Cayley graph of \(F_{n}\), and the horofunction boundary is homeomorphic to the Gromov boundary \(\partial F_{n}\), i.e. the space of ends of the Cayley graph. In general, Webster and Winchester have proven that if \(G\) is a hyperbolic group then the Gromov boundary \(\partial G\) is a quotient of \(\partial_{h}G\), with the quotient map \(\partial_{h}G\twoheadrightarrow\partial G\) being finite-to-one [20]. Note that \(\partial_{h}G\) is always totally disconnected, whereas \(\partial G\) is often a connected space such as a sphere.
**Remark 4.6**.: Even though the group \(G\) acts on \(\partial_{h}G\) by homeomorphisms, there is no natural action of \(G\) on the tree of atoms or on the atoms themselves; indeed, the image of an atom under translation by an element of \(G\) might not even be an atom.
**Remark 4.7**.: Note that each function \(d_{x}\) is \(1\)-Lipschitz, meaning that \(d_{x}(y)-d_{x}(z)\in\{-1,0,1\}\) for every adjacent pair of vertices \(y\) and \(z\) in \(B_{n}\). As a result, we can visualize \(\overline{d}_{x}\) as a **vector field** on \(B_{n}\), i.e. an assignment of directions to some subset of the edges of \(B_{n}\). In particular, if \(e\) is an edge connecting \(y\) and \(z\), we direct \(e\) from \(y\) to \(z\) if \(d_{x}(y)-d_{x}(z)=1\), and from \(z\) to \(y\) if \(d_{x}(y)-d_{x}(z)=-1\), leaving \(e\) undirected if \(d_{x}(y)=d_{x}(z)\). Intuitively, all of the edges of \(B_{n}\) are directed to point towards \(x\), whenever possible. Note that two \(1\)-Lipschitz functions induce the same vector field if and only if they differ by a constant, so such a vector field precisely determines \(\overline{d}_{x}\). Clearly there are finitely many such vector fields on \(B_{n}\), which is why \(B_{n}\) has only finitely many different atoms. We will not make use of vector fields in this paper, but see [BB] for more on this viewpoint, including pictures.
### 4.2. Types of atoms
Let \(G\) be a group with a fixed finite generating set and associated word metric \(d\), and let \(\mathcal{A}(G)\) be the resulting tree of atoms for \(G\). The following definition is taken from [1, Definition 3.7].
**Definition 4.8** (Morphisms of subtrees, same type).: Given two infinite atoms \(A_{1}\in\mathcal{A}_{m}(G)\) and \(A_{2}\in\mathcal{A}_{n}(G)\) we say that an element \(g\in G\) is a **morphism** from \(A_{1}\) to \(A_{2}\) if the following hold:
(i) \(gA_{1}=A_{2}\).
(ii) \(g(A_{1}\cap B_{m+k})=A_{2}\cap B_{n+k}\) for all \(k\geq 0\).
(iii) For each \(k>0\) and each \(A_{1}^{\prime}\in\mathcal{A}_{m+k}(A_{1})\) there exists \(A_{2}^{\prime}\in\mathcal{A}_{n+k}(A_{2})\) such that \(gA_{1}^{\prime}=A_{2}^{\prime}\).
If such a morphism exists, we say that \(A_{1}\) and \(A_{2}\) have **the same type**.
Of the three conditions above, condition (i) is the most fundamental, and we are not aware of any groups where condition (ii) does not follow from condition (i). Condition (iii) says that \(g\) induces an isomorphism from the subtree of descendant atoms of \(A_{1}\) to the subtree of descendant atoms of \(A_{2}\). There are examples where a group \(G\) has two atoms \(A\in\mathcal{A}_{n}(G)\) and \(A^{\prime}\in\mathcal{A}_{n+1}(G)\) that are equal as subsets of \(G\), but these two atoms nonetheless have different types, for instance because \(A^{\prime}\) is the only child of \(A\), but \(A^{\prime}\) has multiple children.
**Remark 4.9**.: Note that there can be only finitely many morphisms between a given pair of atoms, and the set of morphisms is closed under compositions, inverses, and restrictions to child atoms. In [1], these facts were used to give the tree of atoms of a hyperbolic group the structure of a "self-similar tree", but we will not need that terminology here.
Having the same type is an equivalence relation on atoms, and the corresponding equivalence classes are the **types** of atoms for \(G\). The following proposition is fundamental to our work on hyperbolic groups.
**Proposition 4.10**.: _[_1_, Corollary 3.28]_ _If \(G\) is a hyperbolic group, then \(\mathcal{A}(G)\) has only finitely many different types of atoms._
This phenomenon is not unique to hyperbolic groups. For example, \(\mathbb{Z}^{2}\) has exactly nine different types of infinite atoms with respect to the generating set \(\{(1,0),(0,1)\}\), including the type of the root atom (see Example 4.4). As with a group that has finitely many cone types, a group with finitely many different types of atoms must have a rational growth series. Indeed, it is conceivable that a group has finitely many types of atoms if and only if it has finitely many cone types, though neither direction is obvious.
**Definition 4.11** (Type graph).: If \(G\) is a group with finitely many types of (infinite) atoms, the corresponding **type graph** is the finite directed graph \(\Gamma\) with one node for each type, and with \(n\) directed edges from \(v\) to \(w\) if each atom of type \(v\) has \(n\) children of type \(w\). The **root node** of \(\Gamma\) is the node corresponding to the type of the root atom \(G\in\mathcal{A}_{0}(G)\).
For example, Figure 2 shows the type graphs for \(\mathbb{Z}^{2}\) and the free group \(F_{2}\) with respect to the usual generating sets.
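To illustrate how the type graph encodes the tree of atoms, the following sketch (ours; exactly which corner type feeds which edge types is a labeling assumption consistent with Example 4.4) counts directed paths from the root of the \(\mathbb{Z}^{2}\) type graph, recovering \(|\mathcal{A}_{n}(\mathbb{Z}^{2})|=8n\):

```python
# Nodes: the root R, four corner types C0..C3, and four edge types E0..E3.
edges = {"R": [f"C{i}" for i in range(4)] + [f"E{i}" for i in range(4)]}
for i in range(4):
    edges[f"C{i}"] = [f"C{i}", f"E{i}", f"E{(i + 1) % 4}"]  # corner: itself + 2 edges
    edges[f"E{i}"] = [f"E{i}"]                              # edge type: one child

def atoms_at_level(n):
    # number of directed paths of length n from the root = |A_n(Z^2)|
    counts = {"R": 1}
    for _ in range(n):
        step = {}
        for node, c in counts.items():
            for child in edges[node]:
                step[child] = step.get(child, 0) + c
        counts = step
    return sum(counts.values())

print([atoms_at_level(n) for n in range(1, 6)])  # [8, 16, 24, 32, 40], i.e. 8n
```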
If \(G\) has finitely many types of atoms and \(\Gamma\) is the associated type graph, then it is possible to identify \(\partial_{h}G\) with the cone \(\mathfrak{C}_{v}\subseteq\Sigma_{\Gamma}\), where \(v\) is the root node of \(\Gamma\). This identification takes the form of a homeomorphism \(\varphi\colon\mathfrak{C}_{v}\to\partial_{h}G\), which has the following properties:
1. For each path \(\alpha\in\operatorname{Cones}(v)\), the homeomorphism \(\varphi\) maps the cone \(\mathfrak{C}_{\alpha}\) to the shadow of some atom \(A_{\alpha}\in\mathcal{A}_{n}(G)\) with type corresponding to the node \(t(\alpha)\), where \(n\) is equal to the length of \(\alpha\). This determines an isomorphism of trees \(\operatorname{Cones}(v)\to\mathcal{A}(G)\).
2. If \(\alpha,\beta\in\operatorname{Cones}(v)\) and \(t(\alpha)=t(\beta)\), then the canonical similarity \(\mathfrak{C}_{\alpha}\to\mathfrak{C}_{\beta}\) induces a morphism from \(A_{\alpha}\) to \(A_{\beta}\). That is, there exists a morphism \(g\) from \(A_{\alpha}\) to \(A_{\beta}\) so that \[\varphi(\beta\cdot\omega)=g\,\varphi(\alpha\cdot\omega)\] for all \(\omega\in\mathfrak{C}_{t(\alpha)}\).
We will refer to such a \(\varphi\) as a **system of addresses** for \(\partial_{h}G\). If \(\varphi\) maps a sequence \(\omega\in\mathfrak{C}_{v}\) to a point \(\vec{h}\in\partial_{h}G\), we will say that \(\omega\) is the **address** of \(\vec{h}\). Similarly, if \(\alpha\in\operatorname{Cones}(\mathfrak{C}_{v})\), we will call \(\alpha\) the **address** of the atom \(A_{\alpha}\). Note that \(\mathfrak{C}_{v}\) has no empty cones, since every infinite atom in \(\mathcal{A}_{n}\) contains at least one infinite atom in \(\mathcal{A}_{n+1}\).
Figure 2. Type graphs for \(\mathbb{Z}^{2}\) and \(F_{2}\) with respect to the usual generating sets, where the black dot is the root node.
**Remark 4.12**.: There is not a single canonical choice for a system of addresses \(\varphi\). Instead, the construction of \(\varphi\) involves the choice of a "rigid structure" on the tree \(\mathcal{A}(G)\), as described in [1, Section 2.3]. (See also [1, Proposition 2.21], for which the construction of the isomorphism \(\Phi\) does not actually require the tree to be "branching".) However, it follows from the proof of [1, Proposition 2.18] that there are only finitely many different choices for \(\varphi\).
**Proposition 4.13**.: _Suppose \(G\) has finitely many types of atoms, and let \(\varphi\colon\mathfrak{C}_{v}\to\partial_{h}G\) be a system of addresses as above, so \(\varphi\) together with the action of \(G\) on \(\partial_{h}G\) induce an action of \(G\) on \(\mathfrak{C}_{v}\). If this action is rational then the image of \(G\) in \(\mathcal{R}_{\Gamma,\mathfrak{C}_{v}}\) is an RSG._
Proof.: We just need to show that for any \(\alpha,\beta\in\mathrm{Cones}(v)\) with \(t(\alpha)=t(\beta)\), there exists an element of \(G\) mapping \(\mathfrak{C}_{\alpha}\) to \(\mathfrak{C}_{\beta}\) by the canonical similarity. Indeed, this follows immediately from the fact that canonical similarities on cones in \(\mathfrak{C}_{v}\) correspond to morphisms on shadows of atoms in \(\partial_{h}G\).
The following theorem is a restatement of the main result from [1].
**Theorem 4.14** ([1]).: _Let \(G\) be a hyperbolic group, and let \(\varphi\colon\mathfrak{C}_{v}\to\partial_{h}G\) be a system of addresses for \(\partial_{h}G\). If \(\partial_{h}G\) has no isolated points, then the induced action of \(G\) on \(\mathfrak{C}_{v}\) is by rational homeomorphisms._
Between Proposition 4.13 and Theorem 4.14, the action of \(G\) on \(\partial_{h}G\) so far gives us a map from \(G\) to an RSG, which constitutes progress toward Theorem 4.1, that every hyperbolic group embeds in a full, contracting RSG. Obviously embedding into a contracting RSG would be enough to embed into a full, contracting RSG, so achieving "full" is not an issue. However, there are the following obstacles to further progress:
(i) If \(G\) has non-trivial finite normal subgroups, then the action of \(G\) on \(\partial_{h}G\) might not be faithful (see Remark 4.16).
(ii) We do not know whether \(\partial_{h}G\) has no isolated points, or whether the type graph \(\Gamma\) for \(G\) has an irreducible core.
We conjecture that the properties in (ii) hold for every non-elementary hyperbolic group, i.e. every hyperbolic group that is not virtually cyclic. However, in the present paper we will sidestep these difficulties by first embedding \(G\) into the free product \(G\ast\mathbb{Z}\). We prove in the next subsection that \(G\ast\mathbb{Z}\) always has the properties required by (i) and (ii) above, and therefore \(G\ast\mathbb{Z}\) is isomorphic to an RSG. Following this, it will just remain to prove the contracting property.
### 4.3. Atoms in \(G*\mathbb{Z}\)
In this subsection we consider the horofunction boundary for a free product \(G*\mathbb{Z}\), where \(G\) is any non-trivial group. We always assume that the generating set for \(G*\mathbb{Z}\) consists of the generators for \(G\) together with a generator \(t\) for the \(\mathbb{Z}\) factor. The main result is the following.
**Theorem 4.15**.: _If \(G\) is a non-trivial group, then \(\partial_{h}(G*\mathbb{Z})\) has no isolated points, and \(G*\mathbb{Z}\) acts faithfully on \(\partial_{h}(G*\mathbb{Z})\). Furthermore, if \(G*\mathbb{Z}\) has finitely many types of atoms, then the associated type graph has an irreducible core._
**Remark 4.16**.: It is not true in general that the horofunction boundary \(\partial_{h}G\) of an infinite group \(G\) has no isolated points, or that \(G\) acts faithfully on \(\partial_{h}G\). For example, the horofunction boundary \(\partial_{h}\mathbb{Z}\) is a two-point space, and \(\mathbb{Z}\) acts trivially on \(\partial_{h}\mathbb{Z}\). More generally, for all \(n\geq 1\) the horofunction boundary of \(\mathbb{Z}^{n}\) has isolated points, though the action is faithful for \(n\geq 2\).
There are also non-elementary hyperbolic groups \(G\) for which the action of \(G\) on \(\partial_{h}G\) is not faithful. For example, if \(G\) has a non-trivial finite normal subgroup \(N\) and \(S\) is any generating set for \(G\) that is closed under multiplication by elements of \(N\), then the corresponding horofunction boundary \(\partial_{h}G\) is naturally homeomorphic to \(\partial_{h}(G/N)\), and in particular \(N\) acts trivially on \(\partial_{h}G\). Note that if no such \(N\) exists then the action of \(G\) on \(\partial_{h}G\) is automatically faithful, since in this case \(G\) acts faithfully on its Gromov boundary \(\partial G\), which is a quotient of \(\partial_{h}G\) (see [10]).
We do not know whether the horofunction boundary of a non-elementary hyperbolic group can have isolated points, nor do we know whether the subshift of finite type associated to a non-elementary hyperbolic group always has an irreducible core.
The proof of Theorem 4.15 occupies the remainder of this subsection. Let \(G\) be a non-trivial group, and for each \(n\) let \(B_{n}\) denote the \(n\)-ball in \(G*\mathbb{Z}\). If \(w\) is an element of \(G*\mathbb{Z}\) that ends with \(t\) (in the sense that any minimum-length word for \(w\) ends with \(t\)), then the cone for \(w\) is the set
\[C(w)=\{wh\mid h\in G*\mathbb{Z}\text{ and }h\text{ does not begin with }t^{-1}\}.\]
By symmetry, a similar description holds for \(C(w)\) if \(w\) ends with \(t^{-1}\).
**Lemma 4.17**.: _If \(w\) is any element of \(G*\mathbb{Z}\) that ends in \(t\), then \(C(w)\) is an atom in \(\mathcal{A}_{|w|}(G*\mathbb{Z})\). Moreover, any two such atoms have the same type._
Proof.: Let \(n=|w|\), and let \(A\) be the (_a priori_ finite or infinite) atom for \(B_{n}\) that contains \(w\). We claim that \(A=C(w)\). First, observe from the geometry of \(G*\mathbb{Z}\) that any geodesic from a point in \(C(w)\) to a point in \(B_{n}\) must pass through \(w\). It follows that \(\overline{d}_{x}=\overline{d}_{w}=\overline{d}_{A}\) for all \(x\in C(w)\), so \(C(w)\subseteq A\). Conversely, if \(x\in A\), then since \(w,wt^{-1}\in B_{n}\) and \(\overline{d}_{x}\) agrees with \(\overline{d}_{w}\) on \(B_{n}\), the vertex \(x\) must be farther from \(wt^{-1}\) than from \(w\), and therefore \(x\in C(w)\). We conclude that \(A=C(w)\), so \(C(w)\in\mathcal{A}_{n}(G*\mathbb{Z})\).
Now suppose \(C(w)\) and \(C(w^{\prime})\) are two such atoms, with \(n=|w|\) and \(n^{\prime}=|w^{\prime}|\). We claim that \(w^{\prime}w^{-1}\) is a morphism from \(C(w)\) to \(C(w^{\prime})\). Clearly \(w^{\prime}w^{-1}\) maps \(C(w)\) to \(C(w^{\prime})\), and indeed maps \(C(w)\cap B_{n+k}\) to \(C(w^{\prime})\cap B_{n^{\prime}+k}\) for all \(k\). We claim that \(w^{\prime}w^{-1}\) maps each infinite atom contained in \(C(w)\) to an atom of the appropriate level contained in \(C(w^{\prime})\). To see this, observe that if \(A\in\mathcal{A}_{n+k}(G*\mathbb{Z})\) is any atom contained in \(C(w)\), then

\[\overline{d}_{A}(p)=\overline{d}_{A}(w)+d(w,p)\]

for all \(p\in B_{n+k}\setminus C(w)\). In particular, \(A\) is completely determined by the restriction of \(\overline{d}_{A}\) to \(B_{n+k}\cap C(w)\). Similarly, any atom of \(\mathcal{A}_{n^{\prime}+k}(G*\mathbb{Z})\) contained in \(C(w^{\prime})\) is determined by the restriction of its distance function to \(C(w^{\prime})\cap B_{n^{\prime}+k}\). Thus \(w^{\prime}w^{-1}\) maps \(A\) to the atom \(A^{\prime}\in\mathcal{A}_{n^{\prime}+k}(G*\mathbb{Z})\) that is contained in \(C(w^{\prime})\) and satisfies

\[\overline{d}_{A^{\prime}}(p)=\overline{d}_{A}\big{(}w(w^{\prime})^{-1}p\big{)}\]

for all \(p\in C(w^{\prime})\cap B_{n^{\prime}+k}\). We conclude that \(w^{\prime}w^{-1}\) is a morphism from \(C(w)\) to \(C(w^{\prime})\), so these two atoms have the same type.
**Lemma 4.18**.: _Let \(G\) be a non-trivial group, and let \(A\in\mathcal{A}_{n}(G*\mathbb{Z})\) for some \(n\geq 1\), where \(G*\mathbb{Z}=G*\langle t\rangle\). Then \(C(t)\) has a descendant of the same type as \(A\), and \(A\) has a descendant of the same type as \(C(t)\)._
Proof.: Note first that if \(w\) is any element of \(G*\mathbb{Z}\) that ends in \(t^{-1}\), then \(C(w)\) is an atom by the same argument as in Lemma 4.17. Moreover, any two such atoms have the same type.
For the first statement, observe that \(C(t^{-1})\in\mathcal{A}_{1}(G*\mathbb{Z})\), so every element of \(\mathcal{A}_{n}(G*\mathbb{Z})\) for \(n\geq 1\) is either contained in \(C(t^{-1})\) or disjoint from it. Since \(C(t)\) has a descendant of the same type as \(C(t^{-1})\), namely \(C(tgt^{-1})\) for any non-trivial \(g\in G\), any descendant of \(C(t^{-1})\) has the same type as some descendant of \(C(t)\). Suppose then that \(A\in\mathcal{A}_{n}(G*\mathbb{Z})\) is an atom that is disjoint from \(C(t^{-1})\), so \(tA\subseteq C(t)\). We claim that \(tA\) is an atom with the same type as \(A\).
To see this, note that \(t\) maps the complement of \(C(t^{-1})\) isometrically to \(C(t)\). Moreover, \(t\) maps \(B_{n}\setminus C(t^{-1})\) to \(B_{n+1}\cap C(t)\) for each \(n\). If \(a\in A\), it follows that

\[\overline{d}_{ta}(p)=\overline{d}_{a}(t^{-1}p)\]

for all \(p\in C(t)\cap B_{n+1}\). Since every geodesic from \(ta\) to \((G*\mathbb{Z})\setminus C(t)\) passes through \(t\), we also know that

\[\overline{d}_{ta}(p)=\overline{d}_{ta}(t)+d(p,t)\]

for all \(p\in B_{n+1}\setminus C(t)\). It follows that \(tA\) is contained in a single atom \(A^{\prime}\in\mathcal{A}_{n+1}(G*\mathbb{Z})\), with

\[\overline{d}_{A^{\prime}}(p)=\begin{cases}\overline{d}_{A}(t^{-1}p)&\text{if }p\in C(t)\cap B_{n+1},\\ \overline{d}_{A}(1)+d(p,t)&\text{if }p\in B_{n+1}\setminus C(t).\end{cases}\]

Furthermore, if \(x\) is any point in \(A^{\prime}\), then \(x\) must lie in \(C(t)\) since \(\overline{d}_{A^{\prime}}(t)<\overline{d}_{A^{\prime}}(1)\). In this case, it is easy to see that \(\overline{d}_{t^{-1}x}\) agrees with \(\overline{d}_{A}\) on \(B_{n}\), and therefore \(x\in tA\). This proves that \(tA\) is an atom. Moreover, the same argument applies to any descendant of \(A\), so \(t\) maps the descendants of \(A\) to descendants of \(tA\). Since \((tA)\cap B_{n+k+1}=t(A\cap B_{n+k})\) for all \(k\geq 0\), we conclude that \(t\) is a morphism from \(A\) to \(tA\), so \(tA\) has the same type as \(A\).
For the other direction, let \(A\in\mathcal{A}_{n}(G*\mathbb{Z})\) for some \(n\geq 1\), and let \(a\) be a point of \(A\). If \(a\) does not end with \(t^{-1}\), then any geodesic from any point in \(C(at)\) to \(B_{n}\) must pass through \(a\), and it follows easily that \(C(at)\subseteq A\). If \(a\) ends in \(t^{-1}\), we can fix a non-trivial element \(f\in G\). Then any geodesic from any point in \(C(aft)\) to \(B_{n}\) must pass through \(a\), so again \(C(aft)\subseteq A\). In either case, this gives a descendant of \(A\) with the same type as \(C(t)\).
Proof of Theorem 4.15.: We claim first that \(\partial_{h}(G*\mathbb{Z})\) has no isolated points. By Lemma 4.18, every infinite atom in \(G*\mathbb{Z}\) contains an atom with the same type as \(C(t)\). But \(C(t)\) contains at least one pair of disjoint infinite atoms, e.g. \(C(t^{2})\) and \(C(tgt)\) for some non-trivial element \(g\in G\). It follows that every infinite atom in \(G*\mathbb{Z}\) contains at least one disjoint pair of infinite atoms. In \(\partial_{h}(G*\mathbb{Z})\), this means that every basic open set contains at least one pair of disjoint basic open sets, which proves that \(\partial_{h}(G*\mathbb{Z})\) has no isolated points.
To prove \(G*\mathbb{Z}\) acts faithfully on \(\partial_{h}(G*\mathbb{Z})\), let \(w\) be any non-trivial element of \(G*\mathbb{Z}\). Let \(v\) be any element of \(G*\mathbb{Z}\) that ends with \(t\), and does not have a minimum-length word that starts with the same generator (or inverse generator) as a minimum-length word for \(w\) or \(w^{-1}\). Then \(v\) and \(wv\) both end with \(t\), and \(C(v)\) and \(C(wv)\) are disjoint. Hence \(w\) maps the infinite atom \(C(v)\) to the infinite atom \(C(wv)\), which is disjoint from \(C(v)\), so \(w\) acts non-trivially on the horofunction boundary.
Finally, suppose \(G*\mathbb{Z}\) has finitely many types of (infinite) atoms. By Lemma 4.18, every node of the type graph \(\Gamma\) has a directed path to the type of \(C(t)\), and there is a directed path from the type of \(C(t)\) to every other node in \(\Gamma\) except for the basepoint (i.e. the type of \(G*\mathbb{Z}\) itself). It follows that the induced subgraph \(\Gamma_{0}\) on all nodes but the basepoint is strongly connected. Furthermore, \(\Gamma_{0}\) cannot be a directed cycle since \(\partial_{h}(G*\mathbb{Z})\) is infinite, so we conclude that \(\Sigma_{\Gamma}\) has an irreducible core.
### 4.4. The contracting lemma
In this subsection we prove a geometric lemma about atoms in hyperbolic groups that is the main content of the proof that hyperbolic groups are contracting. Throughout this subsection, let \(G\) be an infinite \(\delta\)-hyperbolic group, and let \(B_{n}\) denote the \(n\)-ball in \(G\).
For any \(x\in G\setminus B_{n-1}\), let \(N(x,B_{n})\) be the set of **nearest neighbors** of \(x\) in \(B_{n}\). This can be described in the following equivalent ways:
(i) It is the set of all \(p\in B_{n}\) so that \(d(x,p)\leq d(x,q)\) for all \(q\in B_{n}\).
(ii) It is the set of all points at which geodesics from \(1\) to \(x\) intersect the \(n\)-sphere \(S_{n}\coloneqq B_{n}\setminus B_{n-1}\).
(iii) It is the set of points in \(B_{n}\) at which \(\overline{d}_{x}\) attains its minimum value.
It follows from (iii) that \(N(x,B_{n})\) depends only on which atom for \(B_{n}\) contains \(x\). Thus, if \(A\in\mathcal{A}_{n}(G)\), we can define
\[N(A)\coloneqq N(x,B_{n})\]
where \(x\) is any point in \(A\).
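Although this subsection concerns hyperbolic groups, the definition of \(N(x,B_{n})\) makes sense for any group, and a quick computation (ours) in \(\mathbb{Z}^{2}\) illustrates the fact that it depends only on the atom containing \(x\):

```python
from itertools import product

def d(p, q):
    # word metric on Z^2 with the standard generators: the L1 distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def nearest(x, n):
    # N(x, B_n): the points of the n-ball closest to x
    ball = [p for p in product(range(-n, n + 1), repeat=2) if d(p, (0, 0)) <= n]
    m = min(d(x, p) for p in ball)
    return sorted(p for p in ball if d(x, p) == m)

# (7,0) and (30,0) lie in the same atom {k >= 3} x {0} for B_3, and
# accordingly they have the same nearest-neighbor set, namely {(3,0)}
print(nearest((7, 0), 3) == nearest((30, 0), 3))  # True
```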
Our goal in this subsection is to prove the following lemma.
**Lemma 4.19** (The contracting lemma).: _Let \(g\in G\), and let \(A\in\mathcal{A}_{n}(G)\) with \(n>2|g|+39\delta+13\). Then there exists \(A^{\prime}\in\mathcal{A}(G)\) such that \(gA\subseteq A^{\prime}\) and \(N(A^{\prime})\) lies within the \((18\delta+6)\)-neighborhood of \(g\,N(A)\)._
The important thing here is that \(18\delta+6\) is a constant. We will show in Subsection 4.5 that the local action \(g|_{A}\) depends only on the \(G\)-orbit of the pair \(\big{(}g\,N(A),N(A^{\prime})\big{)}\) together with a finite amount of additional data, from which it will follow that \(G\) has finite nucleus.
The proof of Lemma 4.19 involves some technical results from [1]. If \(x\in G\setminus B_{n-1}\), the **visible set**\(V(x,B_{n})\) is the set of all points \(p\in B_{n}\) such that every geodesic \([p,x]\) intersects \(B_{n}\) only at \(p\). Again, it is possible to prove that \(V(x,B_{n})\) depends only on which atom for \(B_{n}\) contains \(x\)[1, Proposition 2.15]. In particular, if \(A\in\mathcal{A}_{n}(G)\) we can define
\[V(A)\coloneqq V(x,B_{n})\]
where \(x\) is any point of \(A\). The following proposition lists some further properties of \(N(A)\) and \(V(A)\).
**Proposition 4.20**.: _Let \(x\in G\setminus B_{n-1}\). Then:_
(i) _\(N(x,B_{n})\) has diameter at most \(2\delta\)._
(ii) _\(N(x,B_{n})\subseteq V(x,B_{n})\subseteq S_{n}\), and \(V(x,B_{n})\) is contained in a \((4\delta+2)\)-neighborhood of each point in \(N(x,B_{n})\)._
(iii) _If \(b\in B_{n}\), then there exists a geodesic \([b,x]\) that contains a point of \(V(x,B_{n})\)._
(iv) _If \(A\in\mathcal{A}_{n}(G)\) and \(U\) is a subset of \(B_{n}\) that contains \(V(x,B_{n})\cup V(A)\), then \(x\in A\) if and only if \(\overline{d}_{x}\) agrees with \(\overline{d}_{A}\) on \(U\)._
Proof.: Statement (i) follows from the standard fact that any two geodesics with the same endpoints synchronously \(2\delta\)-fellow travel. Statement (ii) follows from Propositions 3.19 and 3.21 of [1], while statements (iii) and (iv) are Propositions 3.16 and 3.18 in [1], respectively.
We now prove two lemmas that we will need for the proof of Lemma 4.19. For \(\overline{f}\in\overline{F}(G,\mathbb{Z})\) and \(S\subseteq G\), write
\[\|\overline{f}\|_{S}\coloneqq\frac{1}{2}\sup\bigl{\{}|f(s)-f(s^{\prime})| \;\big{|}\;s,s^{\prime}\in S\bigr{\}}\]
where \(f\) is any representative for \(\overline{f}\) in \(F(G,\mathbb{Z})\).
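For instance (a toy computation of ours, writing \(1\) for the identity \(0\in\mathbb{Z}\)), take \(G=\mathbb{Z}\), \(g=3\), and \(S=\{-2,5\}\), and let \(f=d_{1}-d_{g}\). Then

\[f(-2)=2-5=-3,\qquad f(5)=5-2=3,\]

so \(\bigl{\|}\overline{d}_{1}-\overline{d}_{g}\bigr{\|}_{S}=\tfrac{1}{2}\,|f(5)-f(-2)|=3=|g|\). If instead \(S\) lies entirely in a cone beyond \(g\), then \(f\) is constant on \(S\) and the seminorm vanishes. Lemma 4.22 below interpolates between these extremes: when \(S\) is far from the identity and has controlled diameter, the seminorm is at most \(6\delta+2\).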
**Lemma 4.21**.: _Let \(g\in G\), let \(\epsilon\in\mathbb{N}\), and let \(A\in\mathcal{A}_{n}(G)\), where_
\[n\geq 2|g|+\delta+2\epsilon. \tag{1}\]
_Let \(U\) be the \((6\delta+2+2\epsilon)\)-neighborhood of \(g\,N(A)\). If \(\;\bigl{\|}\overline{d}_{1}-\overline{d}_{g}\bigr{\|}_{U}\leq\epsilon\), then there exists \(A^{\prime}\in\mathcal{A}(G)\) such that \(gA\subseteq A^{\prime}\) and \(N(A^{\prime})\) is contained in \(U\)._
Proof.: Note that by Proposition 4.20(ii), the set \(g\,V(A)\) is contained in the \((4\delta+2)\)-neighborhood of \(g\,N(A)\), which means that \(g\,V(A)\subseteq U\). Hence, \(g\,N(A)\subseteq g\,V(A)\subseteq U\cap gS_{n}\).
Now, since \(\bigl{\|}\overline{d}_{1}-\overline{d}_{g}\bigr{\|}_{U}\leq\epsilon\), there exists \(C\in\mathbb{Z}\) so that
\[0\leq d(u,1)-d(u,g)+C\leq 2\epsilon \tag{2}\]
for all \(u\in U\). In particular,
\[U\cap gS_{n}\subseteq\bigcup_{k\leq i\leq k+2\epsilon}S_{i}\qquad\text{and} \qquad U\cap S_{k}\subseteq gB_{n} \tag{3}\]
where \(k\coloneqq n-C\). Note that our hypothesis on \(n\) ensures that \(k\) is a positive integer. Indeed, since \(|d(u,1)-d(u,g)|\leq|g|\), it follows from (2) that \(C\leq|g|+2\epsilon\), so by (1),
\[k=n-C\geq n-|g|-2\epsilon\geq|g|+\delta \tag{4}\]
and hence \(k>0\).
Now consider a point \(x\in gA\). We claim that \(x\notin B_{k-1}\) and \(V(x,B_{k})\subseteq U\). For the first statement, we know from (1) that \(|g|\leq n\), so \(g^{-1}\in B_{n}\). Since \(g^{-1}x\in A\), it follows from Proposition 4.20(iii) that some geodesic \([g^{-1},g^{-1}x]\) passes through \(V(A)\). Hence the geodesic \([1,x]\coloneqq g[g^{-1},g^{-1}x]\) passes through a point \(y\) of \(g\,V(A)\). Then \(y\in U\cap gS_{n}\), so by (3) we have that \(k\leq|y|\leq k+2\epsilon\). It follows that \(|x|\geq|y|\geq k\), so \(x\notin B_{k-1}\).
To prove that \(V(x,B_{k})\subseteq U\), let \([g,x],[1,x],[1,g]\) be a geodesic triangle. Since \(g^{-1}x\in A\), the geodesic \([1,g^{-1}x]\coloneqq g^{-1}[g,x]\) passes through \(N(A)\), so \([g,x]\) passes through \(g\,N(A)\) at some point \(q\). Then \(q\in U\cap gS_{n}\), so by (3) we have
\[k\leq|q|\leq k+2\epsilon. \tag{5}\]
Combining this with (4), we deduce that \(|q|\geq|g|+\delta\). In particular, \(q\) does not lie within \(\delta\) of any interior point of \([1,g]\). Also, \(d(q,g)=n>\delta\), so by the thin triangle condition, it follows that \(q\) lies within \(\delta\) of some point \(x^{\prime}\in[1,x]\). Let \(q^{\prime}\) be the point at which \([1,x]\) passes through \(S_{k}\), so \(q^{\prime}\in N(x,B_{k})\). Since \(d(q,x^{\prime})\leq\delta\) and \(\big{|}|q|-k\big{|}\leq 2\epsilon\) by (5), it follows from the triangle inequality that \(\big{|}|x^{\prime}|-k\big{|}\leq\delta+2\epsilon\). In particular, \(d(x^{\prime},q^{\prime})\leq\delta+2\epsilon\), so \(d(q,q^{\prime})\leq 2\delta+2\epsilon\), again by the triangle inequality. But Proposition 4.20(ii) tells us that \(V(x,B_{k})\) lies within a \((4\delta+2)\)-neighborhood of \(q^{\prime}\), so \(V(x,B_{k})\) lies within a \((6\delta+2+2\epsilon)\)-neighborhood of \(q\). Since \(q\in g\,N(A)\), we conclude that \(V(x,B_{k})\subseteq U\).
We are now ready to prove the desired statements. Since \(A\) is infinite, there is an infinite atom \(A^{\prime}\in\mathcal{A}_{k}(G)\) that intersects \(gA\). Note that \(V(A^{\prime})\subseteq U\), since \(A^{\prime}\) contains a point of \(gA\) and \(V(x,B_{k})\subseteq U\) for every \(x\in gA\). In particular, \(N(A^{\prime})\subseteq U\). We claim that \(gA\subseteq A^{\prime}\). Let \(x\in gA\), and fix a point \(p\in A^{\prime}\cap gA\). Since \(x,p\in gA\), we know that \(\overline{d}_{x}\) agrees with \(\overline{d}_{p}\) on \(gB_{n}\). Since \(p\in A^{\prime}\), we know that \(\overline{d}_{p}\) agrees with \(\overline{d}_{A^{\prime}}\) on \(B_{k}\). But \(U\cap S_{k}\) is contained in \(gB_{n}\) by (3) and is also contained in \(B_{k}\), so \(\overline{d}_{x}\) agrees with \(\overline{d}_{A^{\prime}}\) on \(U\cap S_{k}\). Since \(V(A^{\prime})\subseteq U\cap S_{k}\) and \(V(x,B_{k})\subseteq U\cap S_{k}\), it follows from Proposition 4.20(iv) that \(x\in A^{\prime}\), and therefore \(gA\subseteq A^{\prime}\).
**Lemma 4.22**.: _Let \(p\in B_{n}\), and let \(S\) be a subset of \(G\setminus B_{n}\). If_
\[\operatorname{diam}(S)<2n-2|p|-4\delta\]
_then \(\left\|\overline{d}_{p}-\overline{d}_{1}\right\|_{S}\leq 6\delta+2\)._
Proof.: Note that \(d_{1}\) is just word length. It suffices to prove that
\[\left|\left(d(x,p)-|x|\right)-\left(d(y,p)-|y|\right)\right|\leq 2(6\delta+2)\]
for all \(x,y\in S\). Let \(x,y\in S\), and fix geodesics \([x,y]\), \([x,p]\), and \([y,p]\). Let \(m_{xy}\) be the point on the geodesic \([x,y]\) which is farthest from \(x\) but still within \(\delta\) of \([x,p]\). By the thin triangle condition, \(m_{xy}\) must also lie within \(\delta+1\) of \([y,p]\), so there exist points \(m_{xp}\in[x,p]\) and \(m_{yp}\in[y,p]\) so that \(d(m_{xy},m_{xp})\leq\delta\) and \(d(m_{xy},m_{yp})\leq\delta+1\). Then
\[\left|\left(d(x,p)-d(y,p)\right)-\left(d(x,m_{xy})-d(y,m_{xy}) \right)\right|\] \[=\left|d(x,m_{xp})+d(m_{xp},p)-d(y,m_{yp})-d(m_{yp},p)-d(x,m_{xy} )+d(y,m_{xy})\right|\] \[\leq|d(x,m_{xp})-d(x,m_{xy})|+|d(y,m_{yp})-d(y,m_{xy})|+|d(m_{xp},p)-d(m_{yp},p)|\] \[\leq d(m_{xp},m_{xy})+d(m_{yp},m_{xy})+d(m_{xp},m_{yp})\] \[\leq\delta+(\delta+1)+(2\delta+1)=4\delta+2.\]
Now fix geodesics \([x,1]\), \([y,1]\), and \([1,p]\). We know that either \(d(x,m_{xy})\leq\frac{1}{2}d(x,y)\) or \(d(y,m_{xy})\leq\frac{1}{2}d(x,y)\). Let us assume the first case, the other being analogous. We see that
\[d(x,m_{xy})+|m_{xy}|\geq|x|\geq n+1\]
from which it follows that
\[|m_{xy}|\geq n+1-\frac{d(x,y)}{2}\geq n+1-\frac{\operatorname{diam}(S)}{2}\]
so we have
\[d(m_{xy},m_{yp})+|m_{yp}|\geq|m_{xy}|\geq n+1-\frac{\operatorname{diam}(S)}{2}\]
and since \(d(m_{xy},m_{yp})\leq\delta+1\) we get
\[|m_{yp}|\geq n+1-\frac{\operatorname{diam}(S)}{2}-(\delta+1)>|p|+\delta\]
and therefore \(m_{yp}\) does not lie within \(\delta\) of \([1,p]\). Applying the thin triangle condition to \([1,p]\cup[y,p]\cup[y,1]\), there exists a point \(m_{y}\in[y,1]\) such that \(d(m_{yp},m_{y})\leq\delta\). A similar argument for the triangle \([1,p]\cup[x,p]\cup[x,1]\) yields a point \(m_{x}\in[x,1]\) such that \(d(m_{xp},m_{x})\leq\delta\). Then \(d(m_{xy},m_{x})\leq 2\delta\) and \(d(m_{xy},m_{y})\leq 2\delta+1\), so by the same reasoning as above
\[\left|\left(|x|-|y|\right)-\left(d(x,m_{xy})-d(y,m_{xy})\right)\right|\] \[\leq d(m_{x},m_{xy})+d(m_{y},m_{xy})+d(m_{x},m_{y})\] \[\leq 2\delta+(2\delta+1)+(4\delta+1)=8\delta+2.\]
By the triangle inequality, it follows that
\[\big{|}\big{(}d(x,p)-|x|\big{)}-\big{(}d(y,p)-|y|\big{)}\big{|}\\ =\big{|}\big{(}d(x,p)-d(y,p)\big{)}-\big{(}|x|-|y|\big{)}\big{|}\\ \leq(4\delta+2)+(8\delta+2)=12\delta+4\]
as desired.
We are now ready to prove the contracting lemma.
Proof of Lemma 4.19.: Let \(g\in G\), let \(A\in\mathcal{A}_{n}(G)\) with \(n>2|g|+39\delta+13\), and let \(U\) be the \((18\delta+6)\)-neighborhood of \(g\,N(A)\). We must prove that there exists \(A^{\prime}\in\mathcal{A}(G)\) such that \(gA\subseteq A^{\prime}\) and \(N(A^{\prime})\subseteq U\).
By Proposition 4.20(i), the diameter of \(g\,N(A)\) is at most \(2\delta\), so
\[\operatorname{diam}(U)\leq 2\delta+2(18\delta+6)=38\delta+12.\]
Since \(N(A)\subseteq S_{n}\), we know that \(g\,N(A)\) is disjoint from \(B_{n-|g|-1}\), so \(U\) is disjoint from \(B_{n^{\prime}}\), where \(n^{\prime}=n-|g|-1-(18\delta+6)\). We now have
\[2n^{\prime}-2|g|-4\delta \geq 2\big{(}n-|g|-1-(18\delta+6)\big{)}-2|g|-4\delta\] \[=2n-4|g|-40\delta-14\] \[>2\big{(}2|g|+39\delta+13\big{)}-4|g|-40\delta-14\] \[=38\delta+12\geq\operatorname{diam}(U).\]
By Lemma 4.22, it follows that \(\big{\|}\overline{d}_{1}-\overline{d}_{g}\big{\|}_{U}\leq 6\delta+2\). Set \(\epsilon:=6\delta+2\). Then \(\big{\|}\overline{d}_{1}-\overline{d}_{g}\big{\|}_{U}\leq\epsilon\) and
\[2|g|+\delta+2\epsilon=2|g|+13\delta+4<n.\]
Since \(18\delta+6=6\delta+2+2\epsilon\), it follows from Lemma 4.21 that there exists \(A^{\prime}\in\mathcal{A}(G)\) such that \(gA\subseteq A^{\prime}\) and \(N(A^{\prime})\subseteq U\).
### 4.5. Proof that \(G\) has finite nucleus
The goal of this subsection is to prove the following theorem.
**Theorem 4.23**.: _Let \(G\) be a hyperbolic group for which \(\partial_{h}G\) has no isolated points, and let \(\Gamma\) be the type graph for \(G\) with root node \(r\). Then the image of \(G\) in \(\mathcal{R}_{\Gamma,\mathfrak{C}_{r}}\) has finite nucleus._
We will make use of the contracting lemma (Lemma 4.19) proven in the last subsection, and we will also need some additional technical results from [1].
Let \(B_{n}\) denote the \(n\)-ball in \(G\). If \(A\in\mathcal{A}_{n}(G)\), let \(N(A)\) be the set of nearest neighbors of \(A\) in \(B_{n}\), and let
\[\widehat{N}(A)\coloneqq S_{n}\cap B_{4\delta+2}(N(A))\]
where \(B_{4\delta+2}(N(A))\) denotes the \((4\delta+2)\)-neighborhood of \(N(A)\). The following proposition is a version of the main technical result in [1].
**Proposition 4.24**.: _Let \(A,A^{\prime}\in\mathcal{A}(G)\), let \(g\in G\), and suppose that_
1. \(g\,\widehat{N}(A)=\widehat{N}(A^{\prime})\)_,_
2. \(g\,\overline{d}_{A}\) _agrees with_ \(\overline{d}_{A^{\prime}}\) _on_ \(\widehat{N}(A^{\prime})\)_, and_
3. \(g\,C(x)=C(gx)\) _for all_ \(x\in\widehat{N}(A)\)_._
_Then \(g\) is a morphism from \(A\) to \(A^{\prime}\)._
Proof.: By [1, Proposition 3.21], the set \(\widehat{N}(A)\) contains the proximal set for \(A\) as defined in [1, Definition 3.20]. Thus the given proposition follows from [1, Proposition 3.27].
Now fix a system of addresses \(\varphi\colon\mathfrak{C}_{r}\to\partial_{h}G\), and let \(A_{\alpha}\) denote the atom associated to each \(\alpha\in\operatorname{Cones}(\mathfrak{C}_{r})\). If \(g\in G\), let \(g|_{r}\) denote the homeomorphism of \(\mathfrak{C}_{r}\) induced by \(g\) via \(\varphi\). We write \(g|_{\alpha}\coloneqq(g|_{r})|_{\alpha}\) and \(\overline{g}(\alpha)\coloneqq\overline{g|_{r}}(\alpha)\) for each \(\alpha\in\operatorname{Cones}(\mathfrak{C}_{r})\). Set
\[G^{\varphi}=\big{\{}g|_{r}\bigm{|}g\in G\big{\}}\]
and let \(\mathcal{N}_{G}\) denote the nucleus of \(G^{\varphi}\). Note that \(G^{\varphi}\) may be a proper quotient of \(G\) if the action of \(G\) on \(\partial_{h}G\) is not faithful.
If \(A_{\alpha}\) and \(A_{\beta}\) are atoms with \(t(\alpha)=t(\beta)\), then our system of addresses determines a canonical morphism \(g\) from \(A_{\alpha}\) to \(A_{\beta}\), which has the property that \(g|_{\alpha}\) is the identity on \(\mathfrak{C}_{t(\alpha)}\). If \(g\) is an arbitrary morphism from \(A_{\alpha}\) to \(A_{\beta}\), then \(g|_{\alpha}\) need not be the identity, but it is still a self-homeomorphism of \(\mathfrak{C}_{t(\alpha)}\).
**Proposition 4.25**.: _Let \(v\) be a node in \(\Gamma\), let \(A_{\alpha},A_{\beta}\in\mathcal{A}(G)\) be atoms of type \(v\), and let_
\[\operatorname{Mor}(v,G)=\big{\{}g|_{\alpha}\bigm{|}g\text{ is a morphism from }A_{\alpha}\text{ to }A_{\beta}\big{\}}.\]
_Then \(\operatorname{Mor}(v,G)\) does not depend on the chosen atoms \(A_{\alpha}\) and \(A_{\beta}\), and is a finite subgroup of \(\mathcal{R}_{\Gamma,\mathfrak{C}_{v}}\)._

Proof.: Note that each \(g|_{\alpha}\) is a homeomorphism of \(\mathfrak{C}_{v}\), and it is apparent from the definition that morphisms are closed under composition and inverses. That is, if \(g\) is a morphism from \(A_{\alpha}\) to \(A_{\beta}\) and \(h\) is a morphism from \(A_{\beta}\) to \(A_{\gamma}\), then \(g^{-1}\) is a morphism from \(A_{\beta}\) to \(A_{\alpha}\) and \(hg\) is a morphism from \(A_{\alpha}\) to \(A_{\gamma}\). Furthermore, since each \(g|_{\alpha}\) and \(h|_{\beta}\) are homeomorphisms of \(\mathfrak{C}_{v}\), we have \((hg)|_{\alpha}=h|_{\beta}\circ g|_{\alpha}\). It follows easily that \(\operatorname{Mor}(v,G)\) is a subgroup of \(\mathcal{R}_{\Gamma,\mathfrak{C}_{v}}\) and it does not depend on the chosen atoms.

To prove that \(\operatorname{Mor}(v,G)\) is finite, observe that if \(A_{\alpha}\) is an atom of type \(v\) and \(k\in\mathbb{N}\) is such that \(A_{\alpha}\cap S_{k}\neq\emptyset\), then any self-morphism of \(A_{\alpha}\) must permute the points of \(A_{\alpha}\cap S_{k}\). It follows that there are only finitely many morphisms from \(A_{\alpha}\) to \(A_{\alpha}\), so \(\operatorname{Mor}(v,G)\) is finite.
Two local actions \(g|_{\alpha},h|_{\beta}\colon\mathfrak{C}_{v}\to\mathfrak{C}_{w}\) are **morphism-equivalent** if there exist \(k\in\operatorname{Mor}(v,G)\) and \(\ell\in\operatorname{Mor}(w,G)\) such that
\[h|_{\beta}=\ell\circ(g|_{\alpha})\circ k.\]
Morphism-equivalence is the same as the notion of "equivalent restrictions" on the self-similar tree of atoms given in [1]. Clearly each morphism-equivalence class is finite, so if we want to prove that the nucleus \(\mathcal{N}_{G}\) of \(G^{\varphi}\) is finite it suffices to prove that \(\mathcal{N}_{G}\) has only finitely many morphism-equivalence classes.
The proof of rationality in [1, Section 3.6] hinges on a notion of a "mapping triple", which is an ordered triple \((g,A,A^{\prime})\) satisfying certain properties. Our proof here will use a different notion of a mapping triple, which we now define.
**Definition 4.26** (Mapping triple).: A **mapping triple** is an ordered triple \((g,A_{\alpha},A_{\beta})\) such that
1. \(g\in G\),
2. \(A_{\alpha}\in\mathcal{A}_{n}(G)\) for some \(n>2|g|+39\delta+13\), and
3. \(A_{\beta}\in\mathcal{A}(G)\) is an atom for which \(gA_{\alpha}\subseteq A_{\beta}\) and \(N(A_{\beta})\) is contained in the \((18\delta+6)\)-neighborhood of \(g\,N(A_{\alpha})\).
By the contracting lemma (Lemma 4.19), for any \(g\in G\) and any \(A_{\alpha}\in\mathcal{A}_{n}(G)\) with \(n>2|g|+39\delta+13\), there exists \(A_{\beta}\in\mathcal{A}(G)\) such that \((g,A_{\alpha},A_{\beta})\) is a mapping triple. In particular, for all \(g\in G\), for all but finitely many \(\alpha\), there exists a mapping triple of the form \((g,A_{\alpha},A_{\beta})\).
We say that two mapping triples \((g,A_{\alpha},A_{\beta})\) and \((h,A_{\zeta},A_{\eta})\) are **equivalent** if there exists a morphism \(\ell\) from \(A_{\beta}\) to \(A_{\eta}\) such that \(h^{-1}\ell g\) is a morphism from \(A_{\alpha}\) to \(A_{\zeta}\).
**Lemma 4.27**.: _If the mapping triples \((g,A_{\alpha},A_{\beta})\) and \((h,A_{\zeta},A_{\eta})\) are equivalent then \(g|_{\alpha}\) and \(h|_{\zeta}\) are morphism-equivalent._
Proof.: Choose a morphism \(\ell\) from \(A_{\beta}\) to \(A_{\eta}\) such that \(k\coloneqq h^{-1}\ell g\) is a morphism from \(A_{\alpha}\) to \(A_{\zeta}\). Since \(hkA_{\alpha}=hA_{\zeta}\), we have that \(\overline{hk}(\alpha)=\overline{h}(\zeta)=\overline{h}(\overline{k}(\alpha))\), so \((hk)|_{\alpha}=h|_{\zeta}\circ k|_{\alpha}\) by Lemma 2.10.
Let \(\beta^{\prime}=\overline{g}(\alpha)\) and \(\eta^{\prime}=\overline{h}(\zeta)\), so \(A_{\beta^{\prime}}\) is the smallest atom in \(A_{\beta}\) that contains \(gA_{\alpha}\), and \(A_{\eta^{\prime}}\) is the smallest atom in \(A_{\eta}\) that contains \(hA_{\zeta}\). Since \(\ell\) is a morphism, it maps the tree of descendants of \(A_{\beta}\) isomorphically to the tree of descendants of \(A_{\eta}\), and since \(\ell gA_{\alpha}=hkA_{\alpha}=hA_{\zeta}\), it follows that \(\ell A_{\beta^{\prime}}=A_{\eta^{\prime}}\). Indeed, \(\ell\) must be a morphism from \(A_{\beta^{\prime}}\) to \(A_{\eta^{\prime}}\). Now we compute
\[\overline{\ell g}(\alpha)=\overline{hk}(\alpha)=\overline{h}(\zeta)=\eta^{\prime }=\overline{\ell}(\beta^{\prime})=\overline{\ell}(\overline{g}(\alpha)),\]
so \((\ell g)|_{\alpha}=\ell|_{\beta^{\prime}}\circ g|_{\alpha}\) by Lemma 2.10. Since \(hk=\ell g\), we conclude that \(h|_{\zeta}\circ k|_{\alpha}=\ell|_{\beta^{\prime}}\circ g|_{\alpha}\). But \(k|_{\alpha}\in\operatorname{Mor}(t(\alpha),G)\) and \(\ell|_{\beta^{\prime}}\in\operatorname{Mor}(t(\beta^{\prime}),G)\), so \(g|_{\alpha}\) and \(h|_{\zeta}\) are morphism-equivalent.
Now we are poised to prove Theorem 4.23, that the image \(G^{\varphi}\) of \(G\) in \(\mathcal{R}_{\Gamma,\mathfrak{C}_{r}}\) has finite nucleus.
Proof of Theorem 4.23.: The proof closely follows the proof of Theorem 3.10 given in Subsection 3.6 of [2]. That proof established that each \(g\in G\) individually has only finitely many local actions, and here we need to prove that all the \(g\in G\) collectively have a finite set of local actions containing all but finitely many of each element's local actions.
Define the **signature** of a mapping triple \((g,A_{\alpha},A_{\beta})\) to be the following information:
1. The sets \(g\,\widehat{N}(A_{\alpha})\) and \(\widehat{N}(A_{\beta})\),
2. The functions \(g\,\overline{d}_{A_{\alpha}}\) on \(g\,\widehat{N}(A_{\alpha})\) and \(\overline{d}_{A_{\beta}}\) on \(\widehat{N}(A_{\beta})\), and
3. The set \(g\,C(g^{-1}p)\) for each \(p\in g\,\widehat{N}(A_{\alpha})\), and the cone \(C(q)\) for each \(q\in\widehat{N}(A_{\beta})\).
We say that two mapping triples \((g,A_{\alpha},A_{\beta})\) and \((h,A_{\zeta},A_{\eta})\) have **equivalent signatures** if there exists \(\ell\in G\) so that
1. \(\ell\) maps \(g\,\widehat{N}(A_{\alpha})\) to \(h\,\widehat{N}(A_{\zeta})\) and \(\widehat{N}(A_{\beta})\) to \(\widehat{N}(A_{\eta})\),
2. \(\ell g\,\overline{d}_{A_{\alpha}}\) agrees with \(h\,\overline{d}_{A_{\zeta}}\) on \(h\,\widehat{N}(A_{\zeta})\), and \(\ell\,\overline{d}_{A_{\beta}}\) agrees with \(\overline{d}_{A_{\eta}}\) on \(\widehat{N}(A_{\eta})\), and
3. \(\ell g\,C(g^{-1}p)=h\,C(h^{-1}\ell p)\) for all \(p\in g\,\widehat{N}(A_{\alpha})\), and \(\ell\,C(q)=C(\ell q)\) for all \(q\in\widehat{N}(A_{\beta})\).
It follows immediately from Proposition 4.24 that if two mapping triples \((g,A_{\alpha},A_{\beta})\) and \((h,A_{\zeta},A_{\eta})\) have equivalent signatures via an element \(\ell\in G\), then \(\ell\) is a morphism from \(A_{\beta}\) to \(A_{\eta}\) and \(k\coloneqq h^{-1}\ell g\) is a morphism from \(A_{\alpha}\) to \(A_{\zeta}\), so the two mapping triples are equivalent. Then by Lemma 4.27, it follows that the local actions \(g|_{\alpha}\) and \(h|_{\zeta}\) are morphism-equivalent. To summarize, equivalent signatures implies equivalent mapping triples implies equivalent local actions.
As observed previously, the contracting lemma (Lemma 4.19) tells us that, for any \(g\in G\) and any \(A_{\alpha}\in\mathcal{A}_{n}(G)\) with \(n>2|g|+39\delta+13\), there exists \(A_{\beta}\in\mathcal{A}(G)\) such that \((g,A_{\alpha},A_{\beta})\) is a mapping triple. In particular, for every \(p\in\mathcal{N}_{G}\) there exists a mapping triple \((g,A_{\alpha},A_{\beta})\) such that \(g|_{\alpha}=p\). Therefore, to prove that \(\mathcal{N}_{G}\) is finite, it suffices to prove that there are only finitely many equivalence classes of signatures.
Consider a mapping triple \((g,A_{\alpha},A_{\beta})\). By Proposition 4.20(i), the sets \(N(A_{\alpha})\) and \(N(A_{\beta})\) have diameter at most \(2\delta\), and the same holds for \(g\,N(A_{\alpha})\). By the definition of a mapping triple, \(N(A_{\beta})\) is contained in the \((18\delta+6)\)-neighborhood of \(g\,N(A_{\alpha})\), so it follows that \(N(A_{\beta})\cup g\,N(A_{\alpha})\) has diameter at most \((18\delta+6)+2(2\delta)=22\delta+6\). Then \(\widehat{N}(A_{\beta})\cup g\,\widehat{N}(A_{\alpha})\) has diameter at most \((22\delta+6)+2(4\delta+2)=30\delta+10\). In particular, there are only finitely many \(G\)-orbits of pairs \(\big{(}g\,\widehat{N}(A_{\alpha}),\widehat{N}(A_{\beta})\big{)}\). Since a hyperbolic group has only finitely many cone types [10], every such \(G\)-orbit corresponds to finitely many equivalence classes of signatures, so we conclude that there are only finitely many equivalence classes of signatures.
Finally, we can prove the main result of this section, that every hyperbolic group embeds into a full, contracting RSG.
Proof of Theorem 4.1.: Let \(G\) be a non-trivial hyperbolic group. Since \(G\) embeds into the hyperbolic group \(G\ast\mathbb{Z}\), we can assume without loss of generality that our group \(G\) has \(\mathbb{Z}\) as a proper free factor. By Proposition 4.10, the tree of atoms of any hyperbolic group has finitely many types, so by Theorem 4.15 we conclude that \(\partial_{h}(G)\) has no isolated points, the action of \(G\) on \(\partial_{h}(G)\) is faithful, and the associated type graph has an irreducible core. By Theorem 4.14, the action is also rational, i.e. there exists a subshift of finite type \(\Sigma_{\Gamma}\) and a node \(r\) of \(\Gamma\) such that, identifying \(\partial_{h}G\) with \(\mathfrak{C}_{r}\), we have that \(G\) embeds as a subgroup of \(\mathcal{R}_{\Gamma,\partial_{h}G}\). By Proposition 4.13 this embedded copy of \(G\) is an RSG. It is also contracting, by Theorem 4.23. Now \(G\) embeds into \([[\,G\mid\partial_{h}G\,]]\), and this is a full, contracting RSG by Proposition 2.44 and Theorem 2.46.
## 5. Boone-Higman embeddings
Now we are poised to prove our main result, Theorem A from the introduction, that hyperbolic groups satisfy the Boone-Higman conjecture. The last remaining key step is the following.
**Proposition 5.1**.: _Every full, contracting RSG embeds into a finitely presented simple group._
Outside the application to hyperbolic groups, Proposition 5.1 also yields the following, which is immediate from the fact that Röver-Nekrashevych groups are full RSGs (see Example 2.35).
**Corollary 5.2**.: _Every contracting Röver-Nekrashevych group embeds into a finitely presented simple group. Hence, all contracting self-similar groups and all contracting Röver-Nekrashevych groups satisfy the Boone-Higman conjecture. \(\square\)_
Before embarking on the proof of Proposition 5.1, let us explain where our finitely presented simple groups come from. (First we should mention that full, contracting RSGs are not always themselves simple, even up to finite index; for example, one can check that \([[\,F_{2}\mid\partial_{h}F_{2}\,]]\) has abelianization \(\mathbb{Z}^{2}\).) The source of our examples is the class of so-called twisted Brin-Thompson groups, introduced by the first and fourth authors in [1]. Given a group \(G\) acting faithfully on a set \(S\), consider the Cantor space \(\mathfrak{C}^{S}\), where \(\mathfrak{C}\) is the usual Cantor set \(\{0,1\}^{\mathbb{N}}\), and \(\mathfrak{C}^{S}\) is given the product topology. Thompson's group \(V=V_{2}\) acts on \(\mathfrak{C}\), so there is an action of the (restricted, permutational) wreath product \(V\wr_{S}G\) on \(\mathfrak{C}^{S}\), with each copy of \(V\) acting on the appropriate coordinate, and \(G\) permuting the coordinates. Now the **twisted Brin-Thompson group** \(SV_{G}\) is the full group induced by \(V\wr_{S}G\), i.e.
\[SV_{G}\coloneqq[[\,V\wr_{S}G\mid\mathfrak{C}^{S}\,]].\]
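To make this action concrete, here is a minimal computational model (ours, not the authors' construction) with \(S=\mathbb{Z}\) and \(G=\mathbb{Z}\) acting on itself by translation. It models only elements of the wreath product \(V\wr_{S}G\), approximating points of \(\mathfrak{C}^{S}\) coordinatewise by finite binary words:

```python
# One element v of Thompson's group V, written as a complete prefix exchange.
v = {"0": "10", "10": "0", "11": "11"}

def apply_v(word):
    for src, dst in v.items():
        if word.startswith(src):
            return dst + word[len(src):]
    raise ValueError("approximation too short to determine a prefix")

def act(shift, support, point):
    # restricted wreath action: apply v in finitely many coordinates,
    # then let the G-element `shift` translate the coordinates
    moved = {c: apply_v(w) if c in support else w for c, w in point.items()}
    return {c + shift: w for c, w in moved.items()}

p = {0: "0110", 1: "1101"}
print(act(1, {0}, p))  # {1: '10110', 2: '1101'}
```

Passing to the full group \([[\,V\wr_{S}G\mid\mathfrak{C}^{S}\,]]\) then allows such maps to be mixed on disjoint clopen pieces, which this sketch does not capture.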
The group \(SV_{G}\) is always simple, but it may or may not have good finiteness properties. We should mention that twisted Brin-Thompson groups admit no isometric action with loxodromic elements on any hyperbolic metric space, by a result of Balasubramanya-Fournier-Facio-Genevois [1], so it is interesting that in some sense our embedding of hyperbolic groups into finitely presented simple groups serves to completely eliminate this feature of hyperbolicity.
The following, from [10], is an improved version of a result from [1], which essentially says that groups admitting certain actions satisfy the Boone-Higman conjecture. Following Cameron [12], a group of permutations of a set \(S\) is **oligomorphic** if for every \(n\geq 1\) the induced action on \(S^{n}\) has finitely many orbits.
**Theorem 5.3**.: _[_10_, Theorem 4.2]_ _Let \(G\) be a finitely presented, oligomorphic group of permutations of a set \(S\) such that the stabilizer in \(G\) of any finite subset of \(S\) is finitely generated. Then the twisted Brin-Thompson group \(SV_{G}\) is a finitely presented simple group into which \(G\) embeds._
Our goal is to verify the conditions of this theorem for full, contracting RSGs. We begin by observing that full groups often have oligomorphic actions. For the following proposition, an action of a group \(G\) on a set \(S\) is **highly transitive** if \(G\) acts transitively on \(n\)-tuples of distinct elements of \(S\) for all \(n\geq 1\).
**Proposition 5.4**.: _Let \(X\) be a Hausdorff space that has a basis of clopen sets, and let \(G\) be any full group of homeomorphisms of \(X\). Then \(G\) acts highly transitively (and hence oligomorphically) on each of its orbits._
Proof.: Let \(S\) be a \(G\)-orbit in \(X\), and let \(S^{\prime}=\{s_{1},\ldots,s_{n}\}\) be a finite subset of \(S\). Choose \(g_{1},\ldots,g_{n}\in G\) so that \(g_{i}(s_{1})=s_{i}\) for each \(i\), where \(g_{1}\) is the identity. Since \(X\) is Hausdorff and has a basis of clopen sets, we can choose a clopen neighborhood \(E\) of \(s_{1}\) so that the clopen sets \(g_{1}(E),\ldots,g_{n}(E)\) are pairwise disjoint. For each \(i\neq j\), let \(g_{ij}\) be the homeomorphism that maps \(g_{i}(E)\) to \(g_{j}(E)\) by \(g_{j}g_{i}^{-1}\), maps \(g_{j}(E)\) to \(g_{i}(E)\) by \(g_{i}g_{j}^{-1}\), and is the identity elsewhere. Since \(G\) is full, each \(g_{ij}\) lies in \(G\), and these generate a subgroup of \(G\) that can permute the elements of \(S^{\prime}\) in any desired fashion. It follows easily that the action is highly transitive.
Since every compact, totally disconnected metrizable space has a basis of clopen sets, it follows from Proposition 5.4 that every full RSG acts highly transitively on each of its orbits.
Our next goal is to explore point stabilizers. We say that a point \(\omega\) in a subshift \(\Sigma_{\Gamma}\) is **rational** if it is eventually repeating, i.e. if there exist finite words \(\sigma\) and \(\tau\) such that \(\omega=\sigma\cdot\tau\cdot\tau\cdots\), written \(\omega=\sigma\cdot\tau^{\infty}\) (so \(\sigma\) is some prefix and \(\tau\) repeats forever). It is easy to see that the set of rational points in any clopen set \(E\subseteq\Sigma_{\Gamma}\) is stabilized by the action of the rational group \(\mathcal{R}_{\Gamma,E}\).
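For example, an element carrying \(\mathfrak{C}_{\sigma}\) to \(\mathfrak{C}_{\sigma\cdot\tau}\) by the canonical similarity fixes the rational point \(\sigma\cdot\tau^{\infty}\), a fact used in the proof of Proposition 5.5 below. A minimal sketch (ours, with hypothetical words \(\sigma,\tau\) over the edge alphabet of \(\Gamma\)):

```python
# sigma and tau are hypothetical words over the edge alphabet of Gamma
sigma, tau = "a", "bc"

def f(word):
    # the canonical similarity C_sigma -> C_{sigma.tau}: insert tau after sigma
    assert word.startswith(sigma)
    return sigma + tau + word[len(sigma):]

w = sigma + tau * 5        # a finite approximation of sigma.tau^infty
print(f(w).startswith(w))  # True: f fixes the point sigma.tau^infty
```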
If \(G\leq\operatorname{Homeo}(X)\) is a group of homeomorphisms of a space \(X\), the stabilizer \(\operatorname{Stab}_{G}(x)\) of a point \(x\in X\) has a normal subgroup \(\operatorname{Fix}_{G}^{0}(x)\) consisting of all elements that are the identity in some neighborhood of \(x\). The quotient
\[[G]_{x}\coloneqq\operatorname{Stab}_{G}(x)\big{/}\!\operatorname{Fix}_{G}^{0}(x)\]
is called the **group of germs** of \(G\) at \(x\). If \(g\in\operatorname{Stab}_{G}(x)\), its image \([g]_{x}\) in \([G]_{x}\) is the **germ** of \(g\) at \(x\).
**Proposition 5.5**.: _Let \(G\leq\mathcal{R}_{\Gamma,E}\) be an RSG with a finite nucleus, and let \(\omega\in E\) be a rational point. Then the group of germs \([G]_{\omega}\) is virtually infinite cyclic._
Proof.: Recall that \(\omega=\sigma\cdot\tau^{\infty}\), and let us assume \(\tau\) is chosen to not be a power of a shorter word. Clearly \(t(\sigma)=t(\sigma\cdot\tau)\). Since \(G\) is an RSG, there exists \(f\in G\) that takes \(\mathfrak{C}_{\sigma}\) to \(\mathfrak{C}_{\sigma\cdot\tau}\) by the canonical similarity. Note that \(f\) fixes \(\omega\). Now we claim that the cyclic group \(\langle[f]_{\omega}\rangle\) has finite index in \([G]_{\omega}\).
Consider the function \(\mathcal{N}\to\mathcal{N}\) that sends \(p\) to \(p|_{\tau}\). Since \(\mathcal{N}\) is finite, every element of \(\mathcal{N}\) is either periodic or pre-periodic under this function.
Let \(\mathcal{N}_{\tau}\) denote the set of periodic points, and let \(M\) be the least common multiple of their periods. Now take an arbitrary \(h\in\operatorname{Stab}_{G}(\omega)\). For all sufficiently large \(i\), the local action of \(h\) at \(\sigma\cdot\tau^{i}\) lies in \(\mathcal{N}\). By Lemma 2.9, in fact for all sufficiently large \(i\), the local action of \(h\) at \(\sigma\cdot\tau^{i}\) lies in \(\mathcal{N}_{\tau}\). This implies that the sequence \(i\mapsto h|_{\sigma\cdot\tau^{iM}}\) is eventually constant. We therefore get a well defined function \(\lambda\colon\operatorname{Stab}_{G}(\omega)\to\mathcal{N}_{\tau}\) sending \(h\) to the eventually constant value of this sequence. Note that if \(h\) and \(h^{\prime}\) agree in an open neighborhood of \(\omega\) then they have the same \(\lambda\) value, so \(\lambda\) induces a well defined function, which we will also call \(\lambda\), from \([G]_{\omega}\) to \(\mathcal{N}_{\tau}\).
We now claim that every fiber of \(\lambda\) lies in a right coset of \(\langle[f]_{\omega}\rangle\). Since \(\mathcal{N}_{\tau}\) is finite, this will prove that \(\langle[f]_{\omega}\rangle\) has finite index in \([G]_{\omega}\) as desired. Let \(p\in\mathcal{N}_{\tau}\), and let \(h,h^{\prime}\in\operatorname{Stab}_{G}(\omega)\) with \(\lambda(h)=\lambda(h^{\prime})=p\). Then we can fix a sufficiently large \(i\) so that \(h|_{\sigma\cdot\tau^{iM}}=h^{\prime}|_{\sigma\cdot\tau^{iM}}=p\). Let \(\zeta\coloneqq\overline{h}(\sigma\cdot\tau^{iM})\) and \(\zeta^{\prime}\coloneqq\overline{h^{\prime}}(\sigma\cdot\tau^{iM})\). Thus we have \(\omega=h(\omega)=h(\sigma\cdot\tau^{iM}\cdot\tau^{\infty})=\zeta\cdot p(\tau^{\infty})\) and \(\omega=h^{\prime}(\omega)=h^{\prime}(\sigma\cdot\tau^{iM}\cdot\tau^{\infty})=\zeta^{\prime}\cdot p(\tau^{\infty})\), so we conclude that \(\omega=\zeta\cdot p(\tau^{\infty})=\zeta^{\prime}\cdot p(\tau^{\infty})\). Since \(\omega=\sigma\cdot\tau^{\infty}\), and \(\zeta\) and \(\zeta^{\prime}\) are both prefixes of \(\omega\), up to increasing \(i\) as needed to ensure that \(\sigma\) is a prefix of both \(\zeta\) and \(\zeta^{\prime}\), this means that \(\zeta=\sigma\cdot\tau^{j}\cdot\alpha\) and \(\zeta^{\prime}=\sigma\cdot\tau^{j^{\prime}}\cdot\alpha^{\prime}\) for some \(j,j^{\prime}\geq 0\) and some \(\alpha,\alpha^{\prime}\) proper prefixes of \(\tau\). Writing \(\tau=\alpha\cdot\beta=\alpha^{\prime}\cdot\beta^{\prime}\), so that \(\beta\) and \(\beta^{\prime}\) are nonempty suffixes of \(\tau\), we see that \(p(\tau^{\infty})=\beta\cdot\tau^{\infty}=\beta^{\prime}\cdot\tau^{\infty}\), and since \(\tau\) is not a proper power this implies \(\beta=\beta^{\prime}\); this then implies that \(\alpha=\alpha^{\prime}\). Without loss of generality, say \(j\leq j^{\prime}\). Then \(h^{\prime}\) agrees with \(f^{j^{\prime}-j}h\) on the cone \(\mathfrak{C}_{\sigma\cdot\tau^{iM}}\), so \([h]_{\omega}\) and \([h^{\prime}]_{\omega}\) differ by left multiplication by an element of \(\langle[f]_{\omega}\rangle\), and we are done.
**Remark 5.6**.: If \(G\leq\mathcal{R}_{\Gamma,E}\) is an RSG with finite nucleus, then a similar argument shows that the group of germs \([G]_{\omega}\) at any irrational point \(\omega\in E\) is finite.
**Remark 5.7**.: If \(G\) is a hyperbolic group, it is well-known that the stabilizer of any point in the Gromov boundary \(\partial G\) is virtually cyclic, and it follows that the same holds for stabilizers in \(\partial_{h}G\). By Proposition 5.5, this was a necessary condition for \(G\) to have finite nucleus. Indeed, given a group \(G\) acting on a Cantor space \(X\), if there exist points in \(X\) whose stabilizers are not virtually cyclic, then by Proposition 5.5 there is no way to assign addresses to points in \(X\) so that the corresponding action is rational with finite nucleus.
The argument in the following proposition follows the arguments given by the first author, James Hyde, and the third author in [BHM], as well as some unpublished simplifications partially due to James Hyde.
**Proposition 5.8**.: _Let \(G\leq\mathcal{R}_{\Gamma,E}\) be a full, contracting RSG, and let \(S^{\prime}\) be a finite set of rational points in \(E\). Then the stabilizer \(\operatorname{Stab}_{G}(S^{\prime})\) is finitely generated._
Proof.: Let \(\operatorname{Fix}_{G}(S^{\prime})\) be the subgroup of \(\operatorname{Stab}_{G}(S^{\prime})\) consisting of elements that fix \(S^{\prime}\) pointwise. Since \(\operatorname{Fix}_{G}(S^{\prime})\) has finite index in \(\operatorname{Stab}_{G}(S^{\prime})\), it suffices to prove that \(\operatorname{Fix}_{G}(S^{\prime})\) is finitely generated. This group fits into an exact sequence
\[\operatorname{Fix}_{G}^{0}(S^{\prime})\hookrightarrow\operatorname{Fix}_{G}(S ^{\prime})\to\prod_{\omega\in S^{\prime}}[G]_{\omega}\]
where \(\operatorname{Fix}_{G}^{0}(S^{\prime})\) is the group of elements that are the identity in some neighborhood of \(S^{\prime}\). Each \([G]_{\omega}\) is virtually cyclic by Proposition 5.5, so the product \(\prod_{\omega\in S^{\prime}}[G]_{\omega}\) is virtually free abelian, and hence the image of \(\operatorname{Fix}_{G}(S^{\prime})\) must be finitely generated. Therefore, it suffices to prove that \(\operatorname{Fix}_{G}^{0}(S^{\prime})\) is contained in a finitely generated subgroup of \(\operatorname{Fix}_{G}(S^{\prime})\).
Let \(S^{\prime}=\{\omega_{1},\ldots,\omega_{n}\}\), and write each \(\omega_{i}\) as \(\sigma_{i}\cdot\tau_{i}^{\infty}\), where the prefixes \(\sigma_{i}\) are long enough so that the cones \(\mathfrak{C}_{\sigma_{1}},\ldots,\mathfrak{C}_{\sigma_{n}}\) are pairwise disjoint subsets of \(E\) whose union \(U=\bigcup_{i=1}^{n}\mathfrak{C}_{\sigma_{i}}\) is not all of \(E\). The cones \(\mathfrak{C}_{\sigma_{i}\cdot\tau_{i}}\) are also pairwise disjoint and have proper union, call it \(U^{\prime}\), and clearly \(t(\sigma_{i}\cdot\tau_{i})=t(\sigma_{i})\) for all \(i\). Since \(\Sigma_{\Gamma}\) has an irreducible core, by Corollary 2.31 there exists \(f\in V_{\Gamma,E}\) such that \(f\) acts on each \(\mathfrak{C}_{\sigma_{i}}\) by the canonical similarity \(\mathfrak{C}_{\sigma_{i}}\mapsto\mathfrak{C}_{\sigma_{i}\cdot\tau_{i}}\), for all \(1\leq i\leq n\). In particular \(f\) fixes \(S^{\prime}\). Also note that \(U\supset f(U)\supset f^{2}(U)\supset\cdots\), and any open neighborhood of \(S^{\prime}\) contains \(f^{k}(U)\) for all sufficiently large \(k\). This implies that \(\operatorname{Fix}_{G}^{0}(S^{\prime})\) equals the union of the conjugates \(f^{i}\operatorname{Fix}_{G}(U)f^{-i}\) for \(i\geq 0\). Now it suffices to show that \(\operatorname{Fix}_{G}(U)\) is finitely generated, since then \(\operatorname{Fix}_{G}^{0}(S^{\prime})\) will be contained in the finitely generated group generated by \(\operatorname{Fix}_{G}(U)\) and \(f\). But \(\operatorname{Fix}_{G}(U)\) is isomorphic to the full RSG in \(\mathcal{R}_{\Gamma,E\setminus U}\) with the same nucleus as \(G\), as described in Theorem 2.46. In particular \(\operatorname{Fix}_{G}(U)\) is finitely generated by Theorem 3.1, so it follows that \(\operatorname{Stab}_{G}(S^{\prime})\) is finitely generated.
Now we can prove the main result of this section.
Proof of Proposition 5.1.: Let \(G\leq\mathcal{R}_{\Gamma,E}\) be a full, contracting RSG. Since \(\Sigma_{\Gamma}\) has an irreducible core, we know that \(G\) is finitely presented by Theorem 3.1. Let \(S\) be the \(G\)-orbit of a rational point \(\omega\) in \(E\). Note that \(S\) is dense in \(E\), so \(G\) acts faithfully on \(S\). This action is oligomorphic by Proposition 5.4, and the stabilizer in \(G\) of any finite subset of \(S\) is finitely generated by Proposition 5.8. By Theorem 5.3, the twisted Brin-Thompson group \(SV_{G}\) is finitely presented, and it is simple, as all twisted Brin-Thompson groups are. Since \(G\) embeds in \(SV_{G}\), we are done.
Finally we can prove Theorem A, that hyperbolic groups embed in finitely presented simple groups, and thus satisfy the Boone-Higman conjecture.
Proof of Theorem A.: Let \(G\) be a hyperbolic group. By Theorem 4.1 there exists a full, contracting RSG into which \(G\) embeds as a subgroup. This embeds in a finitely presented simple group by Proposition 5.1.
The 1973 Boone-Higman conjecture predicts that every finitely generated group with solvable word problem embeds in a finitely presented simple group. In this paper, we show that hyperbolic groups satisfy this conjecture: every hyperbolic group embeds in a finitely presented simple group. This establishes the conjecture for a "generic" class of finitely generated groups. Our key tool is a new family of groups we call rational similarity groups (RSGs), which are of independent interest. We prove that every hyperbolic group embeds in a full, contracting RSG, and that every full, contracting RSG embeds in a finitely presented simple group, thereby establishing the conjecture. A further consequence of our work is that every contracting self-similar group satisfies the Boone-Higman conjecture. |
2309.11758 | SAM-OCTA: A Fine-Tuning Strategy for Applying Foundation Model to OCTA
Image Segmentation Tasks | In the analysis of optical coherence tomography angiography (OCTA) images,
the operation of segmenting specific targets is necessary. Existing methods
typically train on supervised datasets with limited samples (approximately a
few hundred), which can lead to overfitting. To address this, the low-rank
adaptation technique is adopted for foundation model fine-tuning, and
corresponding prompt point generation strategies are proposed to process various
segmentation tasks on OCTA datasets. This method is named SAM-OCTA and has been
experimented on the publicly available OCTA-500 dataset. While achieving
state-of-the-art performance metrics, this method accomplishes local vessel
segmentation as well as effective artery-vein segmentation, which was not
well-solved in previous works. The code is available at:
https://github.com/ShellRedia/SAM-OCTA. | Chengliang Wang, Xinrun Chen, Haojian Ning, Shiying Li | 2023-09-21T03:41:08 | http://arxiv.org/abs/2309.11758v1 | # SAM-OCTA: A Fine-Tuning Strategy for Applying Foundation Model to OCTA Image Segmentation Tasks
###### Abstract
In the analysis of optical coherence tomography angiography (OCTA) images, the operation of segmenting specific targets is necessary. Existing methods typically train on supervised datasets with limited samples (approximately a few hundred), which can lead to overfitting. To address this, the low-rank adaptation technique is adopted for foundation model fine-tuning, and corresponding prompt point generation strategies are proposed to process various segmentation tasks on OCTA datasets. This method is named SAM-OCTA and has been evaluated on the publicly available OCTA-500 dataset. While achieving state-of-the-art performance metrics, this method accomplishes local vessel segmentation as well as effective artery-vein segmentation, which was not well-solved in previous works. The code is available at: [https://github.com/ShellRedia/SAM-OCTA](https://github.com/ShellRedia/SAM-OCTA).
Chengliang Wang, Xinrun Chen, Haojian Ning
College of Computer Science, Chongqing University, Chongqing, China
Shiying Li
Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, China
**Index Terms**: OCTA, Image Segmentation, Prompting
## 1 Introduction
Optical coherence tomography angiography (OCTA) is an innovative and non-invasive imaging technique that enables the visualization of retinal microvasculature with high resolution and without needing dye injection [1]. It is a valuable tool for disease staging and preclinical diagnosis [2].
Certain specific retinal structures, such as retinal vessels (RV) and the avascular zone (FAZ) of the macula, usually need to be segmented from the raw data of OCTA for further analysis [2, 3]. Researchers have been actively exploring deep learning-based methods for image quality assessment and segmentation to address these challenges and enhance the accuracy and efficiency of OCTA image analysis. Most deep learning segmentation methods related to OCTA are based on self-designed neural networks and modules. This requires training the model from scratch, which can lead to overfitting issues. Foundational models, trained on large-scale data, can be applied to various scenarios [4].
Segment Anything Model (SAM) was introduced as a foundational model for addressing natural image tasks. This benchmark model demonstrated, for the first time, the promising wide applicability to various image segmentation tasks without the need for prior re-training [5]. However, medical images differ significantly from natural images in terms of quality, noise, resolution, and other factors, which can affect SAM's segmentation performance. Thus, further research and optimization efforts are required to fully harness the potential of SAM in medical image segmentation [6].
We find that adopting a fine-tuning approach to SAM and introducing prompt information can enhance and guide the model's segmentation, aiming to improve some complex OCTA segmentation cases. We call our method SAM-OCTA and summarize the contributions as follows:
(1) Applying Low-Rank Adaptation (LoRA) technology for fine-tuning the SAM model enables it to perform effective segmentation of specific targets within OCTA images.
(2) A strategy for generating prompt points has been proposed, which enhances the segmentation performance of FAZ and artery-vein tasks within OCTA samples.
## 2 Related Work
### OCTA Segmentation Models
As a typical architecture for deep image processing, the vision transformer (ViT) is frequently used for segmentation tasks in OCTA [7]. In OCTA images, the distribution of RV is extensive, and it requires the models to effectively utilize the global information in the images. The TCU-Net, OCT2Former, and StruNet methods have improved ViT, achieving continuous RV segmentation and addressing issues such as vessel discontinuities or missing segments [8, 9, 10]. Other methods, from the perspectives of efficiency, denoising, and the utilization of three-dimensional data, have designed a series of techniques and strategies, achieving promising segmentation results on OCTA datasets [2, 11, 12, 13, 14]. The above-mentioned methods have demonstrated that existing deep networks are capable of achieving precise segmentation of RV and FAZ.
### SAM and Related Fine-tuning Approaches
The SAM is a foundational vision model for general image segmentation. With the ability to segment diverse objects,
parts, and visual structures in various scenarios, SAM takes prompts in the form of points, bounding boxes, or coarse masks as input. Its remarkable zero-shot segmentation capabilities enable its easy transfer to numerous applications through simple prompting [5]. Although SAM has established an efficient data engine for model training, there are relatively few cases collected for medical applications or other rare image scenarios. Therefore, some fine-tuning methods have been applied to SAM to improve its performance in certain segmentation failure cases [15, 16]. The common characteristic of these fine-tuning methods is that they introduce additional network layers on top of the pre-trained SAM. By adding a small number of trainable parameters, fine-tuning becomes feasible through training on the new dataset. The advantage of fine-tuning methods lies in their ability to preserve SAM's strong zero-shot capabilities and flexibility.
## 3 Method
In this paper, we fine-tuned the pre-trained SAM using OCTA datasets and corresponding annotations. The process is shown in Figure 1. SAM consists of three parts: an image encoder, a flexible prompt encoder, and a fast mask decoder [5].
### Fine-tuning of Image Encoder
The image encoder utilizes a ViT pre-trained with the masked autoencoder method. The ViT model comes in three variants: vit-b, vit-l, and vit-h, which can only process fixed-size inputs (e.g. \(1024*1024*3\)). To support input images of different resolutions, scaling and padding operations are employed. In this study, we used the image encoder from the "vit-h" model for the fine-tuning process.
As shown in Figure 2, OCTA data is inherently in 3D format, but most datasets provide en-face 2D projection forms. En-face projection is obtained through layer-wise segmentation based on vascular anatomical structures. As SAM requires three-channel images as input, in this work, we stack projection layers in different depths of OCTA images to adapt to this input format. The benefit of this approach is that it preserves the vascular structure information in the OCTA images while fully utilizing SAM's feature-extracting capabilities. Fine-tuning aims to retain SAM's powerful image-understanding capabilities while enhancing its performance on OCTA. The approach used in this paper involves utilizing the LoRA technique [17], which introduces additional linear network layers in each transformer block of the image encoder, similar in form to a ResNet block. During the training process, the weights of the SAM are frozen, and only the newly introduced parameters are updated.
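A minimal sketch of the LoRA idea used here, following the LoRA technique [17]: the pre-trained projection is frozen and only a low-rank residual update is trained. The rank, scaling, and exactly which projections inside each SAM transformer block are wrapped are not specified in this excerpt, so the names and defaults below are assumptions:

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha_lora / r) * B(A(x)). Only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha_lora: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pre-trained SAM weights frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # update starts at zero: no initial change
        self.scale = alpha_lora / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

In practice such a wrapper would replace, for instance, the attention projections of each transformer block in the image encoder, so that only the small A and B matrices are updated during fine-tuning.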
### Prompt Points Generation Strategy
The prompt encoder handles two types of prompts: sparse prompts (points, boxes, text) and dense prompts (masks). In our work, we chose points as the prompt for OCTA segmentation. For each sample, assuming there are \(n\) prompt points, the input can be represented as \((x_{1},y_{1},1),(x_{2},y_{2},1),\ldots,(x_{n},y_{n},0)\), where \(x\) and \(y\) denote the coordinates of prompt points in the image. The values "1" and "0" indicate positive (foreground) and negative (background) points, respectively. SAM's prompt encoder embeds this input and, thanks to its pre-training, integrates it appropriately with the information from the input image.
The prompt point generation strategy has two modes: the global mode and the local mode. The global mode is applied to all OCTA segmentation tasks, while the local mode is specific to artery/vein segmentation. Segmenting individual vessels, as a local segmentation task, has not been attempted in previous works. With the prompt encoder, more accurate regional vessel segmentation can be achieved on OCTA datasets. For this task, the first step is to identify and label all connected components in the segmentation masks with unique identifiers. Because the connectivity at the endpoints of some vessels in OCTA labels is weak, we adopt the eight-connectivity criterion. Positive points are then randomly selected within each connected component. Because the number of vessels varies across samples, negative points are added from the background adjacent to the connected components to standardize the data format. The prompt point generation process is illustrated in Figure 3, and a code sketch is given below.
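The following is a hedged sketch of this local-mode generation using SciPy connected-component labeling with a 3x3 structuring element (eight-connectivity). The dilation radius, per-component point counts, and function names are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy import ndimage

def local_prompt_points(mask, n_pos=2, n_neg=2, seed=0):
    """Positive points inside each eight-connected vessel component,
    negative points in the adjacent background. `mask`: binary (H, W) label."""
    rng = np.random.default_rng(seed)
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connectivity
    points, flags = [], []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        for i in rng.choice(len(ys), size=min(n_pos, len(ys)), replace=False):
            points.append((xs[i], ys[i])); flags.append(1)      # positive point
    # Negative points: background pixels adjacent to the vessel components.
    ring = ndimage.binary_dilation(mask, iterations=3) & ~mask.astype(bool)
    ys, xs = np.nonzero(ring)
    for i in rng.choice(len(ys), size=min(n_neg, len(ys)), replace=False):
        points.append((xs[i], ys[i])); flags.append(0)          # negative point
    return np.array(points), np.array(flags)                    # (x, y) coords + labels
```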
Figure 1: Schematic diagram illustrating the fine-tuning of SAM using OCTA samples.
Figure 2: OCTA Structural Diagram. (a) Three-dimensional volume rendering with arrows indicating different projection directions. (b) En-face projection. (c) B-scan projection.
### Mask Decoder
The role of the mask decoder is to efficiently map the image embeddings, prompt embeddings, and output tokens to a segmentation mask. A modified version of the transformer decoder block is employed, followed by a dynamic mask prediction head. For an image input and corresponding prompt input, the mask decoder outputs multiple segmentation masks to represent objects at different semantic levels. In this work, the loss function (in the fine-tuning process) is computed based on the segmentation output with the highest confidence.
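A sketch of this selection step follows; SAM's decoder returns several candidate masks together with predicted confidence scores, and the tensor names and shapes below are our assumptions:

```python
import torch

def best_mask_loss(masks, confidences, target, loss_fn):
    # masks: (B, M, H, W) candidate masks; confidences: (B, M) predicted scores.
    idx = confidences.argmax(dim=-1)                      # most confident candidate
    chosen = masks[torch.arange(masks.size(0)), idx]      # (B, H, W)
    return loss_fn(chosen, target)                        # loss on that output only
```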
## 4 Experiments
### Datasets and Preprocessing
The publicly available dataset used in this paper is OCTA-500 [18]. The OCTA-500 dataset contains 500 samples, classified by field of view (FoV): \(3mm*3mm\) (3M) and \(6mm*6mm\) (6M). The corresponding image resolutions are \(304*304\) and \(400*400\), with 200 and 300 samples respectively. The OCTA-500 dataset provides annotations for RV, FAZ, capillary, artery, and vein. The adopted data augmentation tool is Albumentations [19]. The data augmentation strategies include horizontal flipping, brightness and contrast adjustment, and random slight rotation.
### Experimental Settings
SAM is deployed on A100 graphics cards with 80 GB of memory. 10-fold cross-validation is adopted to evaluate the training results. The optimizer used is AdamW, and the learning rate follows a warm-up strategy, starting from \(10^{-5}\) and gradually increasing to \(10^{-3}\).
The loss functions used for fine-tuning vary depending on the segmentation task. For FAZ and capillary, the Dice loss is employed. For RV, artery, and vein, the clDice loss is utilized, which is more suitable for tubular segmentation [20]. These two loss functions can be represented as:

\[L_{clDice}=0.2*L_{Dice}+0.8*L^{\prime}_{clDice},\]

where

\[L_{Dice}=1-\frac{2*|\hat{Y}\cap Y|}{|\hat{Y}|+|Y|},\qquad L^{\prime}_{clDice}=1-2*\frac{Tprec(\hat{Y}_{s},Y)*Tsens(Y_{s},\hat{Y})}{Tprec(\hat{Y}_{s},Y)+Tsens(Y_{s},\hat{Y})},\]

\(Y\) is the ground truth, \(\hat{Y}\) is the predicted value, \(Y_{s}=\textit{soft-skeleton}(Y)\) and \(\hat{Y}_{s}=\textit{soft-skeleton}(\hat{Y})\), and \(Tprec\) and \(Tsens\) denote topology precision and topology sensitivity.
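A hedged PyTorch sketch of these losses, with the soft-skeleton computed by iterated soft erosion/opening as in the clDice paper [20]; the iteration count and \(\epsilon\) are illustrative choices, and inputs are assumed to be (B, 1, H, W) probability maps:

```python
import torch
import torch.nn.functional as F

def soft_erode(x):  return -F.max_pool2d(-x, 3, stride=1, padding=1)
def soft_dilate(x): return F.max_pool2d(x, 3, stride=1, padding=1)
def soft_open(x):   return soft_dilate(soft_erode(x))

def soft_skeleton(x, iters=10):
    # Iteratively peel the foreground, accumulating what erosion removes.
    skel = F.relu(x - soft_open(x))
    for _ in range(iters):
        x = soft_erode(x)
        delta = F.relu(x - soft_open(x))
        skel = skel + F.relu(delta - skel * delta)
    return skel

def cl_dice_loss(y_pred, y_true, iters=10, eps=1e-6):
    dice = 1 - (2 * (y_pred * y_true).sum() + eps) / (y_pred.sum() + y_true.sum() + eps)
    sp, st = soft_skeleton(y_pred, iters), soft_skeleton(y_true, iters)
    tprec = ((sp * y_true).sum() + eps) / (sp.sum() + eps)   # topology precision
    tsens = ((st * y_pred).sum() + eps) / (st.sum() + eps)   # topology sensitivity
    return 0.2 * dice + 0.8 * (1 - 2 * tprec * tsens / (tprec + tsens))
```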
### Results
We conducted extensive experiments with various cases on the OCTA datasets. The segmentation results are evaluated using the Dice and Jaccard metrics, which are calculated as follows:
\[Dice(\hat{Y},Y)=\frac{2|\hat{Y}\cap Y|}{|\hat{Y}|+|Y|},\ \ Jaccard(\hat{Y},Y)=\frac{|\hat{Y}\cap Y|}{|\hat{Y} \cup Y|}.\]
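For reference, a direct NumPy sketch of these two metrics on binary masks:

```python
import numpy as np

def dice_jaccard(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return dice, jaccard
```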
#### 4.3.1 Global Mode
We have experimented with various prompt point generation strategies, including the number of points and the generation area for negative points, and have selected the best metrics as the final results. RV and FAZ are common segmentation tasks in previous studies. Therefore, we will summarize the comparative results in Table 1. The experimental data from previous methods are referenced from [21]. Our method's comprehensive performance reaches the state-of-the-art level.
For segmentation tasks involving global vessels such as RV and capillary, the impact of prompt points is not significant. However, for FAZ, artery, and vein segmentation, prompt points lead to a noticeable improvement in segmentation performance. The segmentation results can be observed in Figures 4, 5, and Table 2. It can be inferred that the effect of prompt point information is more pronounced within a local region. For the widely distributed vessels, importing more prompt points has a limited effect. However, the prompt points can help improve the boundary delineation of the FAZ.
Figure 4: Segmentation results of SAM-OCTA in RV, capillary, and FAZ, with white arrows indicating areas of improvement with added prompt points.
Figure 3: Illustration of Prompt Points Generation. Green and red points represent positive and negative points, respectively.
#### 4.3.2 Local Mode
The local mode primarily focuses on precisely segmenting vessels in local regions. The segmentation targets include the artery and vein. For each sample, two positive points are selected on the target vessels, and two negative points are selected on the adjacent region.
Due to the morphological similarities between retinal arteries and veins, as well as the complexities introduced by factors such as age, gender, and disease conditions, deep learning methods often encounter segmentation disconnections or confusion in the artery-vein segmentation task [22]. Without prompt points, SAM-OCTA is prone to confusion when an artery and a vein are close together or overlapping. Table 2 reveals the substantial metrics improvement with prompts. From Figure 5, it can be seen that the introduced prompt points (especially the negative points on the vein when segmenting an artery) assist the model in effectively distinguishing different types of vessels, thereby improving the artery-vein segmentation results.
## 5 Conclusion
We propose a fine-tuning method of SAM for OCTA image segmentation and design prompt point generation strategies in global and local modes. It excels in both RV and FAZ tasks, and it is the first to explore and achieve good results on the artery-vein segmentation task on the OCTA-500 dataset. This is expected to assist in the analysis and diagnosis of related diseases with varying impacts on arteries and veins.
## Acknowledgement
This work is supported by the Chongqing Technology Innovation & Application Development Key Project (cstc2020jscx-dxwtBX0055; cstb2022tiad-kpx0148).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{2}{c}{Label} & \multicolumn{2}{c}{RV} & \multicolumn{4}{c}{FAZ} \\ \hline \multirow{3}{*}{Method} & \multicolumn{2}{c}{OCTA-500(3M)} & \multicolumn{2}{c}{OCTA-500(6M)} & \multicolumn{2}{c}{OCTA-500(3M)} & \multicolumn{2}{c}{OCTA-500(6M)} \\ \cline{2-9} & Dice \(\uparrow\) & Jaccard \(\uparrow\) & Dice \(\uparrow\) & Jaccard \(\uparrow\) & Dice \(\uparrow\) & Jaccard \(\uparrow\) & Dice \(\uparrow\) & Jaccard \(\uparrow\) \\ \hline U-Net (2015) & 0.9068 & 0.8301 & 0.8876 & 0.7987 & 0.9747 & 0.9585 & 0.8770 & 0.8124 \\ IPN (2020) & 0.9062 & 0.8325 & 0.8864 & 0.7973 & 0.9505 & 0.9091 & 0.8802 & 0.7980 \\ IPN V2+ (2021) & 0.9274 & 0.8667 & 0.8941 & 0.8095 & 0.9755 & 0.9532 & 0.9084 & 0.8423 \\ FARGO (2021) & 0.9112 & 0.8374 & 0.8798 & 0.7864 & 0.9785 & 0.9587 & 0.8930 & 0.8355 \\ Joint-Seg (2022) & 0.9113 & 0.8378 & 0.8972 & 0.8117 & 0.9843 & 0.9693 & 0.9051 & 0.8424 \\ \hline SAM-OCTA (ours) & **0.9199** & **0.8520** & 0.8869 & 0.7975 & **0.9838** & **0.9692** & **0.9073** & **0.8473** \\ \hline \hline \end{tabular}
\end{table}
Table 1: RV and FAZ Segmentation Results on OCTA-500.
\begin{table}
\begin{tabular}{c c|c c|c c} \hline \hline
\multicolumn{2}{c|}{FoV} & \multicolumn{2}{c|}{OCTA-500(3M)} & \multicolumn{2}{c}{OCTA-500(6M)} \\
\multicolumn{2}{c|}{Prompts} & ✗ & ✓ & ✗ & ✓ \\ \hline
\multicolumn{6}{c}{Global Mode} \\ \hline
RV & Dice \(\uparrow\) & 0.9165 & 0.9199 & 0.8865 & 0.8869 \\
 & Jaccard \(\uparrow\) & 0.8431 & 0.8520 & 0.7955 & 0.7975 \\
FAZ & Dice \(\uparrow\) & 0.9545 & 0.9838 & 0.8787 & 0.9073 \\
 & Jaccard \(\uparrow\) & 0.9345 & 0.9692 & 0.7991 & 0.8473 \\
Capillary & Dice \(\uparrow\) & 0.8813 & 0.8785 & 0.8337 & 0.8379 \\
 & Jaccard \(\uparrow\) & 0.7881 & 0.7837 & 0.7152 & 0.7213 \\
Artery & Dice \(\uparrow\) & 0.8342 & 0.8747 & 0.8352 & 0.8602 \\
 & Jaccard \(\uparrow\) & 0.7528 & 0.7785 & 0.7325 & 0.7572 \\
Vein & Dice \(\uparrow\) & 0.8409 & 0.8817 & 0.8263 & 0.8526 \\
 & Jaccard \(\uparrow\) & 0.7463 & 0.7897 & 0.7168 & 0.7474 \\ \hline
\multicolumn{6}{c}{Local Mode} \\ \hline
Artery & Dice \(\uparrow\) & 0.7393 & 0.8707 & 0.6865 & 0.7922 \\
 & Jaccard \(\uparrow\) & 0.6339 & 0.7792 & 0.5699 & 0.6851 \\
Vein & Dice \(\uparrow\) & 0.7742 & 0.8352 & 0.7053 & 0.8167 \\
 & Jaccard \(\uparrow\) & 0.6658 & 0.7267 & 0.5823 & 0.7014 \\ \hline
\end{tabular}
\end{table}
Table 2: The effect of prompt points on segmentation tasks (✗/✓ denote without/with prompt points).
Figure 5: The performance of SAM-OCTA on artery and vein segmentation tasks. (a) Global mode; (b) Local mode. The red and blue vessels represent arteries and veins, respectively. Red and cyan dots represent the corresponding prompt points, and yellow dots represent negative background prompt points. For artery segmentation, the red and cyan dots are positive and negative points, respectively; for vein segmentation, the roles are reversed. White arrows indicate areas of improvement with added prompt points. | In the analysis of optical coherence tomography angiography (OCTA) images, segmenting specific targets is a necessary operation. Existing methods typically train on supervised datasets with limited samples (approximately a few hundred), which can lead to overfitting. To address this, the low-rank adaptation technique is adopted for foundation model fine-tuning, and corresponding prompt point generation strategies are proposed to handle various segmentation tasks on OCTA datasets. This method, named SAM-OCTA, was evaluated on the publicly available OCTA-500 dataset. While achieving state-of-the-art performance metrics, it accomplishes local vessel segmentation as well as effective artery-vein segmentation, a problem not well solved in previous works. The code is available at: https://github.com/ShellRedia/SAM-OCTA. |
2309.09117 | Contrastive Decoding Improves Reasoning in Large Language Models | We demonstrate that Contrastive Decoding -- a simple, computationally light,
and training-free text generation method proposed by Li et al 2022 -- achieves
large out-of-the-box improvements over greedy decoding on a variety of
reasoning tasks. Originally shown to improve the perceived quality of long-form
text generation, Contrastive Decoding searches for strings that maximize a
weighted difference in likelihood between strong and weak models. We show that
Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM
2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA
2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in
addition to improvements on a collection of other tasks. Analysis suggests that
Contrastive Decoding improves over existing methods by preventing some abstract
reasoning errors, as well as by avoiding simpler modes such as copying sections
of the input during chain-of-thought. Overall, Contrastive Decoding outperforms
nucleus sampling for long-form generation and greedy decoding for reasoning
tasks, making it a powerful general purpose method for generating text from
language models. | Sean O'Brien, Mike Lewis | 2023-09-17T00:29:32 | http://arxiv.org/abs/2309.09117v2 | # Contrastive Decoding Improves Reasoning in Large Language Models
###### Abstract
We demonstrate that Contrastive Decoding - a simple, computationally light, and training-free text generation method proposed by Li et al. (2022) - achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generation, Contrastive Decoding searches for strings that maximize a weighted difference in likelihood between strong and weak models. We show that Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in addition to improvements on a collection of other tasks. Analysis suggests that Contrastive Decoding improves over existing methods by preventing some abstract reasoning errors, as well as by avoiding simpler modes such as copying sections of the input during chain-of-thought. Overall, Contrastive Decoding outperforms nucleus sampling for long-form generation and greedy decoding for reasoning tasks, making it a powerful general purpose method for generating text from language models.
## 1 Introduction
Text is generated from large language models (LLMs) in different ways for different tasks. For open-ended text generation tasks, truncated sampling is normally used, as the most likely strings under a model tend to be short and uninteresting (Holtzman et al., 2020). For reasoning problems, greedy decoding is normally preferred, to avoid risking sampling errors. This bifurcation is undesirable; for example it increases the likelihood of reasoning errors during open-ended generation.
We explore the use of Contrastive Decoding (Li et al., 2022) for solving reasoning problems with LLMs. Contrastive Decoding (CD) searches for strings that maximize a weighted difference in likelihood between a stronger _expert_ and a weaker _amateur_ model, and was shown to outperform existing methods for open-ended text generation. It achieves this by avoiding undesirable modes of the expert model's distribution, such as short or generic strings, which tend to be the most likely under any model, including the amateur.
We show that Contrastive Decoding outperforms greedy decoding on reasoning problems. On GSM8K, a widely used benchmark consisting of grade-school word math problems, contrastive decoding improves the performance of various LLaMA models by up to 8 absolute percentage points. This result outperforms LLaMA 2, which has 5 billion more parameters and is trained on 40% more data. On HellaSwag, using the CD objective to rank answers leads LLaMA to outperform all existing models except GPT-4. We find general improvement on arithmetic reasoning and multiple-choice ranking tasks, including on models as large as LLaMA-65B, suggesting that Contrastive Decoding could bring such widespread improvements to much larger models.
We also analyze the cause of the improvement from Contrastive Decoding. Empirically, we find that Contrastive Decoding performs less surface-level copying from the prompt than greedy decoding and misses fewer reasoning steps. This result suggests that, similarly to findings in Li et al. (2022), Contrastive Decoding works by reducing repetitive or other undesirable modes of the model distribution. Our current method yields mixed results for commonsense reasoning tasks and slightly degrades factual retrieval, both trends that encourage further refinement of the method.
Overall, we show that Contrastive Decoding not only substantially improves LLM accuracies on a range of benchmarks, but is also the first generation algorithm to achieve state-of-the-art results in both reasoning and text generation problems. These results allow a more unified method for improving generation from language models across tasks.
## 2 Contrastive Decoding
### Simplified Formulation
The original Contrastive Decoding formulation from Li et al. (2022) explicitly chooses two parameters: \(\alpha\) and the intermediate temperature of the amateur distribution \(\tau_{a}\), with the intermediate temperature of the expert fixed at \(\tau_{e}=1\). We slightly refactor the hyperparameter choice to be more interpretable and simplify the algorithm by working directly in logit space.
Figure 3: CD accentuates what the expert model has learned that the amateur model has not. Results are taken from greedy decoding with a 65B parameter expert, using \(\alpha=0.1\), \(\beta=0.5\) for CD.
Let \(s_{a}^{(i)}\) and \(s_{e}^{(i)}\) be the unnormalized scores (logits) assigned to token \(i\) by the amateur and expert models, respectively. \(\alpha\) is the same hyperparameter as in the original paper: a proportion of the maximum probability assigned by the expert model, with any tokens assigned a lower probability masked out. \(\beta\) is a hyperparameter corresponding to the strength of the amateur penalty. We include a leading \((1+\beta)\) coefficient on the expert logits to decouple the strength of the contrastive penalty from the expected scale of the output logits, cleanly delineating between the contrastive tradeoff and the final sampling temperature. This matches the formulation of DExperts (Liu et al., 2021), with the expert model serving both as the base prior and steering expert.
**1.** Determine \(\alpha\)-mask.
\(V_{valid}=\{j\in V,s_{e}^{(j)}\geq\log\alpha+\max_{k\in V}s_{e}^{(k)}\}\)
**2.** Subtract amateur logits.
\(s_{CD}^{(i)}=\begin{cases}(1+\beta)s_{e}^{(i)}-\beta s_{a}^{(i)}&i\in V_{valid }\\ -\infty&i\not\in V_{valid}\end{cases}\)
A PyTorch implementation for this formulation, as well as the original, can be found in subsection A.1 of the appendix. Our implementation takes three lines of readable code.
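As an illustration of the two steps above, here is a minimal PyTorch sketch of the refactored objective (the paper's actual implementation is in its appendix A.1; the function and variable names here are ours):

```python
import math
import torch

def contrastive_logits(expert_logits: torch.Tensor,
                       amateur_logits: torch.Tensor,
                       alpha: float = 0.1,
                       beta: float = 0.5) -> torch.Tensor:
    # Step 1: alpha-mask. Keep token i only if
    # s_e^(i) >= log(alpha) + max_k s_e^(k).
    cutoff = math.log(alpha) + expert_logits.max(dim=-1, keepdim=True).values
    # Step 2: contrastive objective, (1 + beta) * s_e - beta * s_a.
    scores = (1 + beta) * expert_logits - beta * amateur_logits
    # Masked tokens get -inf so they can never be selected or sampled.
    return scores.masked_fill(expert_logits < cutoff, float("-inf"))

# Greedy CD step: next_token = contrastive_logits(s_e, s_a).argmax(dim=-1)
```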
### Probabilistic Interpretation
Our implementation of \(\alpha\)-masking has the same interpretation as in Li et al. (2022), given that the expert temperature is fixed to \(\tau_{e}=1\). We show the equivalence in A.2.
Further, we can consider the post-softmax probabilities produced by CD as a perturbation of the probabilities predicted by the expert model. Not including \(\alpha\)-masking, the probability assigned to token \(i\) by CD is a normalized adjustment of the probability assigned by the expert model:
\[p_{CD}^{(i)}\propto p_{e}^{(i)}\left(\frac{p_{e}^{(i)}}{p_{a}^{(i)}}\right)^{\beta} \tag{1}\]
It is therefore clear that as \(\beta\to 0\) the contrastive penalty disappears, and as \(\beta\to\infty\) the distribution collapses to the argmax of \(p_{e}^{(i)}/p_{a}^{(i)}\), which is the original formulation from Li et al. (2022).
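To verify Equation 1 (leaving aside \(\alpha\)-masking), one can exponentiate the contrastive logits and absorb the two softmax normalizing constants into the proportionality:

\[p_{CD}^{(i)}\propto e^{(1+\beta)s_{e}^{(i)}-\beta s_{a}^{(i)}}=e^{s_{e}^{(i)}}\left(\frac{e^{s_{e}^{(i)}}}{e^{s_{a}^{(i)}}}\right)^{\beta}\propto p_{e}^{(i)}\left(\frac{p_{e}^{(i)}}{p_{a}^{(i)}}\right)^{\beta}.\]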
## 3 Experiments
### Experimental Setup
**Models.** We use untuned models from the LLaMA 1 family (Touvron et al., 2023) at all scales. Unless otherwise stated, we use an untuned LLaMA-65B as the expert and an untuned, LLaMA-architecture model with 1.5B parameters trained on the same data as the other LLaMA 1 models as an amateur. For one ablation study, we use models from the FLAN-T5 family (Chung et al., 2022).
**Decoding Parameters.** We set \(\beta=0.5\) and \(\alpha=0.1\) for all experiments unless otherwise stated. We use greedy decoding, except for self-consistency experiments for which we sample at \(\tau=0.7\) following Touvron et al. (2023).
**Prompting.** For generation tasks, we use 8-shot chain-of-thought prompting, in line with Touvron et al. (2023). The examples are the same as in LLaMA for tasks contained in that paper, and taken from Wei et al. (2023) for other mathematical tasks.
**Datasets.** Following prior works, we evaluate on a number of datasets. The following tasks measure performance on algebraic word problems: **AQuA** (Ling et al., 2017), **ASDiv**(Miao et al.,
2021), **GSM8K**(Cobbe et al., 2021), and **SVAMP**(Patel et al., 2021). We also evaluate on **MATH**(Hendrycks et al., 2021), a larger and more challenging benchmark.
For commonsense reasoning, we measure open-ended performance on **CommonsenseQA**(Talmor et al., 2019) and **StrategyQA**(Geva et al., 2021). We also evaluate on a battery of multiple-choice reasoning benchmarks: both the easy and challenge splits of the **AI2 Reasoning Challenge** dataset (Clark et al., 2018), **BoolQ**(Clark et al., 2019), **HellaSwag**(Zellers et al., 2019), **MMLU**(Hendrycks et al., 2021), **PIQA**(Bisk et al., 2019), **SIQA**(Sap et al., 2019), and **WinoGrande**(Sakaguchi et al., 2019).
### Hyperparameter Selection
Contrastive decoding has three major hyperparameters: the masking ratio \(\alpha\), the contrastive strength \(\beta\), and the size of the amateur model. We find that results are fairly insensitive to \(\alpha\) as long as \(\beta\) is reasonably small (below 1); unless otherwise stated we use \(\alpha=0.1\) across experiments.
Next we consider the size of the amateur model. In agreement with Li et al. (2022), we find that performance benefits from smaller amateur models (Figure 4); while a 1B-parameter amateur helps reasoning performance, a 7B-parameter amateur harms it. We also examine different types of amateurs; ablation studies show that a partially-trained amateur performs better than a fully-trained one, and that a poorly-prompted expert can be successfully used as an amateur as well (see subsection 4.2).
Finally, we examine the effect of \(\beta\). The optimal value depends on the task, but for both generation tasks like GSM8K and multiple-choice ranking tasks like PIQA we find that \(\beta=0.5\) performs well. Setting \(\beta\) too high can place too much weight in the contrastive penalty and harm performance, especially with a larger gap between amateur and expert models. \(\beta=0\) corresponds to standard greedy decoding with no contrastive penalty. Results of \(\beta\) hyperparameter sweeps can be found in Table 1, Figure 4, Figure 5 and B.
The best result on GSM8K, with LLaMA-65B and \(\beta=0.25\), is 57.7 (Table 1), outperforming PaLM-540B (56.5), LLaMA-2 (56.8) and GPT-3.5 (57.1).* (Anil et al., 2023; OpenAI, 2023)
Footnote *: OpenAI (2023) evaluates GPT-3.5 5-shot; all others are 8-shot.
### Arithmetic Reasoning
We find that contrastive decoding tends to help on arithmetic reasoning tasks with chain-of-thought prompting; see Table 2 for all results. One exception to this is the MATH dataset, which proves to be challenging for both standard and contrastive decoding. We conjecture that because contrastive decoding amplifies skills that the expert has learned better than the amateur, it cannot help on tasks that are well beyond the expert's ability.
We also experiment with normalizing the \(\alpha\)-masked CD scores via softmax, then temperature sampling from the resulting distribution. This permits CD to generate multiple candidate reasoning chains to be used for self-consistency (taking the majority answer) (Wang et al., 2023). We show across both mathematical and commonsense reasoning, CD improves self-consistency performance.
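A hedged sketch of this sampling variant, reusing the `contrastive_logits` helper sketched in Section 2.1 (the temperature follows the settings above; names are ours):

```python
import torch

def cd_sample_token(expert_logits, amateur_logits,
                    alpha=0.1, beta=0.5, tau=0.7):
    # Normalize the alpha-masked CD scores via softmax at temperature tau,
    # then sample one token; repeating per step yields one reasoning chain,
    # and majority-voting over many chains gives self-consistency.
    scores = contrastive_logits(expert_logits, amateur_logits, alpha, beta)
    probs = torch.softmax(scores / tau, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```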
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Expert & \(\beta=0\) & \(\beta=0.25\) & \(\beta=0.5\) & \(\beta=1\) \\ \hline
7B & 10.7 & 11.5 & **13.6** & 11.0 \\ \hline
13B & 17.0 & 21.0 & **22.9** & 20.4 \\ \hline
30B & 35.2 & 40.0 & **43.4** & 42.0 \\ \hline
65B & 51.0 & **57.7** & 56.8 & 44.6 \\ \hline \end{tabular}
\end{table}
Table 1: Results on GSM8K. \(\beta=0.5\) tends to give good results across expert sizes.
Figure 4: Results on GSM8K with LLaMA-65B as the expert. While a 7B amateur harms performance, a 1.5B amateur helps.
### Commonsense Reasoning
Results are more mixed for CommonsenseQA and StrategyQA. For both of these tasks, we 8-shot prompt our model and compute the exact match score against the ground-truth answers. We find that contrastive decoding harms performance for smaller models, but that this harm equalizes somewhat for the 65B model and evens out when using self-consistency. See Table 3 for full results.
\begin{table}
\begin{tabular}{c c|c c c c c|c} \hline Model & CD & AQuA & ASDiv & GSM8K & MATH & SVAMP & Average \\ \hline \hline
7B & ✗ & 21.0\({}^{*}\) & 40.2 & 10.7 & 3.0 & 27.3 & 20.4 \\
13B & ✗ & 18.1\({}^{*}\) & 49.0 & 17.4 & 4.2 & 39.4 & 25.6 \\
30B & ✗ & 23.8 & 60.1 & 35.3 & 6.9 & 55.9 & 36.4 \\
65B & ✗ & 33.3 & 67.2 & 51.0 & 10.6 & 69.1 & 46.2 \\
65B maj@20 & ✗ & 38.2 & 73.6 & 68.0 & \(-\)† & 77.3 & 64.3 \\ \hline
7B & ✓ & 19.0\({}^{*}\) (-2.0) & 39.7 (-0.5) & 14.3 (+3.6) & 2.9 (-0.1) & 31.5 (+4.2) & 21.5 (+1.1) \\
13B & ✓ & 16.0\({}^{*}\) (-2.1) & 52.0 (+3.0) & 22.7 (+5.5) & 3.8 (-0.4) & 43.1 (+3.7) & 27.5 (+1.9) \\
30B & ✓ & 29.8 (+6.0) & 62.5 (+2.4) & 43.1 (+8.1) & 8.1 (+1.2) & 59.3 (+3.4) & 40.6 (+4.2) \\
65B & ✓ & 36.9 (+3.6) & 71.9 (+4.7) & 56.8 (+5.8) & 10.3 (-0.3) & 67.8 (-1.3) & 48.7 (+2.5) \\
65B maj@20 & ✓ & **39.4** (+1.2) & **77.4** (+3.8) & **74.0** (+6.0) & \(-\)† & **79.0** (+1.7) & **67.5** (+3.2) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Results on math generation tasks. Contrastive decoding generally improves performance.
Figure 5: Two examples of sweeping through \(\beta\) values on multiple-choice reasoning tasks across model scales. Dashed horizontal lines mark performance without contrastive decoding.
\begin{table}
\begin{tabular}{c c|c c|c} \hline Model & CD & CSQA & StrategyQA & Average \\ \hline
7B & ✗ & 40.0 & 59.2 & 49.6 \\
13B & ✗ & 60.4 & 64.5 & 62.5 \\
30B & ✗ & 66.4 & 68.7 & 67.6 \\
65B & ✗ & 77.5 & 69.5 & 73.5 \\
65B maj@20 & ✗ & 77.0 & **79.3** & 78.2 \\ \hline
7B & ✓ & 37.3 (-2.7) & 58.3 (-0.9) & 47.8 (-1.8) \\
13B & ✓ & 58.5 (-1.9) & 65.5 (+1.0) & 62.0 (-0.5) \\
30B & ✓ & 62.8 (-3.6) & 67.6 (-1.1) & 65.2 (-2.4) \\
65B & ✓ & 77.1 (-0.4) & 71.5 (+2.0) & 74.3 (+0.8) \\
65B maj@20 & ✓ & **77.9** (+0.9) & **79.3** (+0.0) & **78.6** (+0.4) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: CD harms commonsense reasoning with a smaller expert, but performance evens out with a larger expert-amateur gap.
### Contrastive Ranking
We further evaluate a contrastive objective as a scoring function to rank answers to multiple-choice questions. These tasks are zero-shot, multiple-choice cloze tasks; instead of open-ended generation the model scores each potential completion, length-normalizing following Touvron et al. (2023). We find comparable performance across most tasks, with more substantive gains on HellaSwag and ARC-Challenge. Notably, on HellaSwag CD leads LLaMA-65B to score 88.0, which outperforms LLaMA-2 (85.3), GPT-3.5 (85.5) (OpenAI, 2023) and PaLM 2-Large (86.8) (Anil et al., 2023).
## 4 Additional Studies
### Effects of Contrastive Decoding
**CD is worse at arithmetic but better at logical reasoning.** We conduct a manual error analysis of 100 randomly selected examples from the GSM8K set between continuations from greedy decoding and CD (\(\beta=0.5,\alpha=0.1\)). We follow Wang et al. (2023) and categorize wrong answers as primarily being due to an arithmetic error, a missing step or a semantic misunderstanding. We add one category of "degeneration," chosen when the model lapses into excessive repetition. Our small-scale analysis finds that CD makes more arithmetic errors, but that this is offset by better semantic reasoning and fewer missing steps (see Table 5).
To further explore the claim that the benefit of CD does not stem from arithmetic evaluation, we generate a toy dataset of 10,000 multiplication and subtraction equations with operands of up to four digits and then 8-shot prompt models to complete the expression, measuring exact match accuracy. We find that CD does not improve performance on this task, and in fact may degrade it slightly. Results are shown in Table 8.
**CD reduces copying from the prompt.** We analyze 26,000 sampled generations from CD-sampling on GSM8K against the corresponding set from temperature sampling; both of these sets of generations are used in our self-consistency study. We find that responses are roughly the same length and follow the few-shot template roughly the same proportion of the time. This rules out the
\begin{table}
\begin{tabular}{|c|c c|} \hline & Standard & CD \\ \hline Correct \% & 44.6 & **51.1** \\ Parseable \% & 95.2 & **95.6** \\ Average \# chars & 215.2 & 217.2 \\ \hline \end{tabular}
\end{table}
Table 6: High-level generation statistics from sampled generations on GSM8K. Responses are similar lengths, despite the performance improvement from CD.
\begin{table}
\begin{tabular}{c|c c c c c c c|c|c} \hline \(\beta\) & ARC-E & ARC-C & BoolQ & HSWag & PIQA & SIQA & WGrande & MMLU & Avg \\ \hline
0.0 & **79.1** & 56.1 & 84.2 & 84.2 & 82.6 & 52.3 & 77.3 & **63.5** & 72.4 \\ \hline
0.5 & 79.0 & 59.5 & **84.3** & 87.4 & **83.1** & **53.3** & **77.8** & 63.4 & **74.9** \\
1.0 & 76.9 & **59.7** & 84.1 & **88.0** & 82.9 & **53.3** & 76.5 & 63.2 & 74.5 \\ \hline \end{tabular}
\end{table}
Table 4: Results on multiple-choice reasoning tasks. CD generally provides a modest boost.
\begin{table}
\begin{tabular}{c|c c c|c|c} CD & Arithmetic & Missing Step & Semantic & Degeneration & Total Errors \\ \hline ✗ & **4\%** & 22\% & 24\% & 4\% & 54\% \\ \hline ✓ & 8\% & **20\%** & **21\%** & **3\%** & **52\%** \\ \hline \end{tabular}
\end{table}
Table 5: Proportion of errors in of a set of 100 GSM8K questions. CD makes more arithmetic errors, but omits fewer steps and avoids semantic misunderstandings.
Figure 6: CD reduces copying from the question in the generated Chain of Thought, as measured by n-gram overlap on GSM8K generations.
hypothesis that contrastive decoding simply leads the model to follow the template better, prevents degeneration or induces longer answers with more reasoning steps. Further, we run an automatic evaluation of greedy generations using ROSCOE (Golovneva et al., 2022) but do not find significant differences in any of these metrics. However, we measure the precision and recall of the tokens in the prompt by the sampled generations and find that CD systematically reduces token-level copying from the prompt. This may be related to increased reasoning ability, as surface-level copying from the prompt does not provide new information to the problem.
**CD can harm factual recall.** Our primary claim is that contrastive decoding improves chain-of-thought reasoning. However, we also test CD on two pure factual-recall tests that do not utilize chain-of-thought: OpenBookQA (Mihaylov et al., 2018) and TriviaQA (Joshi et al., 2017). OpenBookQA ("OBQA"), is a multiple-choice completion task, while TriviaQA is a 5-shot generation task. Reusing the same setup from reasoning leads to a slight degradation of performance, as seen in Table 7.
**CD outperforms other reasoning enhancements in FLOP efficiency.** We note that contrastive decoding introduces relatively little overhead in comparison to other reasoning-enhancing methods. We estimate that with a 1.5B amateur and 65.2B expert, contrastive decoding increases the total number of FLOPs by \(3.25\%\) (see section C of the appendix). This compares favorably to self-consistency, which requires several extra full generation loops. We show in Figure 9 that CD is significantly more efficient than self-consistency.
### Ablation Studies
\(\alpha\)**-masking alone is not enough.** When sampling and performing self-consistency, \(\alpha\)-masking prevents the sampling of tokens the expert finds to be unlikely. It is natural to ask what portion of the benefit comes purely from \(\alpha\)-masking and not the contrastive objective itself.
To answer this, we set \(\beta=0\) but \(\alpha=0.1\); that is, we mask out candidates based on the expert but do not apply the contrastive objective. When sampling one path, we expect \(\alpha\)-masking to improve over temperature sampling alone as it eliminates unlikely results and thus provides a closer approximation to greedy sampling. This holds, but as we increase the number of paths we find no benefit from \(\alpha\)-masking alone. This suggests that the contrastive objective, and not \(\alpha\)-masking, is the primary source of improved self-consistency results. See Figure 7 for results of this ablation.
**CD requires chain-of-thought prompting to improve results.** We next study whether contrastive decoding provides an advantage in the absence of chain-of-thought prompting. We remove the chains of thought from the GSM8K fewshot prompt, and find that as expected performance drops for both standard and contrastive decoding (Figure 8); further, without chains of thought contrastive decoding provides no consistent improvement. As with the MATH dataset, solving problems without explicit reasoning steps may be too challenging of a task for the expert model, and thus leave too small a gap between the expert and amateur to contrastively exploit.
**CD can benefit non-LLaMA models.** We conduct a short study to show that CD can benefit models outside of the LLaMA family. For this study, we choose the FLAN-T5 family as it is open-source, has a wide range of model sizes that share a single tokenizer, and obtains good performance on chain-of-thought reasoning tasks. We use FLAN-T5-XXL (11B) as the expert model and FLAN-T5-Small (80M) as amateur. We evaluate on GSM8K using the 8-shot random prompts from Fu
\begin{table}
\begin{tabular}{|c|c c|} \hline CD & OBQA & TriviaQA* \\ \hline ✗ & **60.0** & **72.2** \\ ✓ & 57.8 (-2.4) & 69.9 (-2.1) \\ \hline \end{tabular}
\end{table}
Table 7: CD slightly degrades performance on the factual recall tasks OBQA and TriviaQA.
\begin{table}
\begin{tabular}{|c|c c c c|} \hline CD & 7B & 13B & 30B & 65B \\ \hline ✗ & **31.0** & **36.3** & **52.3** & **58.4** \\ ✓ & 30.9 & 35.6 & 52.2 & 57.6 \\ \hline \end{tabular}
\end{table}
Table 8: CD slightly harms performance on a synthetic task of evaluating arithmetic expressions.
et al. (2023); note that GSM8K is within the set of tasks that FLAN-T5 is finetuned on. CD provides a slight boost in performance, as seen in Table 9. We leave more extensive experiments on other families of models to future work.
**Small-scale amateurs beat "negative prompting."** We experiment to determine if there is a more effective weak amateur model to use for contrastive decoding. We define a set of "negative prompts" by sampling 7B model outputs on the fewshot prompts and collecting the incorrect responses. We use these responses as fewshot prompts to mimic the failure modes of the family of models. These negative prompts should harm the performance of models they are prompted with, and specifically bias results towards the error distribution of the 65B model.
We find that contrasting with a negative prompt does not harm performance, but does not improve it as much as contrasting with a small amateur (see Table 10). In an ablation study, we find that negative prompting does not harm performance that much; prompting a 65B model with incorrect fewshot examples on GSM8K gives a score of 41.3, which underperforms prompting with correct examples (51.2) but significantly beats non-chain-of-thought prompting (13.5). This supports Wang et al. (2023), who find that even incorrect chain-of-thought rationales improve reasoning. A prompting strategy which better incapacitates the expert model might yield better results.
**Mid-training checkpoints make for good amateurs.** We experiment with checkpoints of a mid-training 7B-parameter LLaMA model taken 10% and 23% of the way through the full training run. Even while a fully-trained 7B amateur harms performance on GSM8K, we find that a partially-trained amateur improves performance. We do not perform extensive hyperparameter sweeps here, instead reusing \(\alpha=0.1\), \(\beta=0.5\) as before. We do not pursue partially-trained amateurs for our main results as results may vary based on the order of training data, but this result allows us to interpret contrastive decoding as a first-order optimization step over the output of a model, highlighting the high-level behaviors that it learns later on in the course of training. See Table 11 for full results.
## 5 Related Work
**Steering methods for reasoning.** Other works more explicitly model the error distribution of reasoning steps and use this to steer decoding. For example GRACE (Khalifa et al., 2023) uses a contrastive loss to train an external step-level discriminator, which it then uses to select between candidate steps sampled from a base model. Using the interpretation of contrastive decoding as mutual distinguishability between amateur and expert, we see that our method is close to FUDGE (Yang and Klein, 2021) where the binary predictor is an estimate of the probability that the generated token has come from the expert rather than the amateur.
**Prompting Methods for Reasoning.** There are many recent prompting methods to improve language model reasoning; see Qiao et al. (2023) for a survey. We perform our experiments with chain-of-thought prompting (Wei et al., 2023).
**Sampling methods.** Several decoding methods exist to improve the quality of generations from large language models. For open-ended generation, truncated sampling schemes like top-\(k\) sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020) and typical sampling (Meister et al., 2023) have been shown to reduce repetition in comparison to greedy decoding and beam search while producing more coherent generations than standard temperature sampling. However, sampling can still introduce errors into logical chains, and so greedy decoding is used to more effectively solve reasoning tasks (Wei et al., 2023; Anil et al., 2023).
**Contrastive Generation Methods.** Our formulation's objective can be interpreted as a special case of DExperts (Liu et al., 2021), using the larger model as both an expert and base LM prior. Yona et al. (2023) identify model biases with Contrastive Input Decoding, a contrastive-decoding-style technique similar to negative prompting that operates on perturbed text inputs.
Concurrently to our work, Chuang et al. (2023) propose DoLA, which improves factuality and reasoning through contrastive decoding between the predictions of later layers and earlier layers in a language model. We study a wider array of reasoning tasks and demonstrate that a 7B amateur is too large, finding greater gains in reasoning just by scaling down the amateur to 1.5B parameters.
Our paper differentiates itself from Li et al. (2022), which initially proposed Contrastive Decoding, in several ways: by testing on standard reasoning benchmarks, by our exploration of \(\beta\) as a hyper-parameter, by ablations with various types of amateurs, and by a careful analysis of the combination of Contrastive Decoding with chain-of-thought prompting and self-consistency.
## 6 Limitations
Our investigation is limited mainly to the LLaMA family of models. While the method continues to provide benefit to larger LLaMA models, further work is required to definitively establish the effect of contrastive decoding on larger, tuned models.
## 7 Conclusion
Our study shows that contrastive decoding can improve chain-of-thought reasoning in large language models. While challenges like factual recall remain, this strengthens the case for contrastive decoding as a simple, general-purpose method to elicit more desirable behavior from large language models.
\begin{table}
\begin{tabular}{c|c c c c} Expert & Greedy & NP & CD & CD + NP \\ \hline
7B & 10.7 & 11.4 & **14.3** & 12.7 \\ \hline
13B & 17.4 & 17.5 & **22.7** & 20.7 \\ \hline
30B & 35.3 & 36.9 & **43.1** & 42.9 \\ \hline
65B & 51.0 & 52.0 & **56.8** & 54.7 \\ \end{tabular}
\end{table}
Table 10: On GSM8K, negative prompting outperforms greedy decoding but weakens CD.
\begin{table}
\begin{tabular}{c|c c} Amateur & Amateur Tokens & GSM8K \\ \hline
7B & 130B & **57.0** \\ \hline
7B & 300B & 56.8 \\ \hline
7B & 1.3T & 49.9 \\ \end{tabular}
\end{table}
Table 11: Early-training checkpoints can be good amateurs, even when late-stage checkpoints harm performance.
### Reproducibility Statement
The training process and model architecture for the 1.5B-parameter LLaMA model used as the amateur in several results is publicly available, but the weights are not, which limits public reproducibility of results relying on that model. The results on FLAN-T5, as well as the negative-prompting study and examination of 7B-LLaMA as an amateur, are all built on entirely open-source models and data.
Contrastive Decoding -- a simple, computationally light, and training-free text generation method proposed by Li et al. (2022) -- achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generation, Contrastive Decoding searches for strings that maximize the weighted likelihood difference between strong and weak models. We show that Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in addition to improvements on other tasks. Analysis suggests that Contrastive Decoding improves over existing methods by preventing some abstract reasoning errors, as well as by avoiding simpler modes such as copying sections of the input during chain-of-thought. |
2309.12251 | Planning Optimal Trajectories for Mobile Manipulators under End-effector
Trajectory Continuity Constraint | Mobile manipulators have been employed in many applications that are
traditionally performed by either multiple fixed-base robots or a large robotic
system. This capability is enabled by the mobility of the mobile base. However,
the mobile base also brings redundancy to the system, which makes mobile
manipulator motion planning more challenging. In this paper, we tackle the
mobile manipulator motion planning problem under the end-effector trajectory
continuity constraint in which the end-effector is required to traverse a
continuous task-space trajectory (time-parametrized path), such as in mobile
printing or spraying applications. Our method decouples the problem into: (1)
planning an optimal base trajectory subject to geometric task constraints,
end-effector trajectory continuity constraint, collision avoidance, and base
velocity constraint; which ensures that (2) a manipulator trajectory is
computed subsequently based on the obtained base trajectory. To validate our
method, we propose a discrete optimal base trajectory planning algorithm to
solve several mobile printing tasks in hardware experiments and simulations. | Quang-Nam Nguyen, Quang-Cuong Pham | 2023-09-21T16:52:29 | http://arxiv.org/abs/2309.12251v2 | Planning Optimal Trajectories for Mobile Manipulators under End-effector Trajectory Continuity Constraint
###### Abstract
Mobile manipulators have been employed in many applications which are usually performed by multiple fixed-base robots or a large-size system, thanks to the mobility of the mobile base. However, the mobile base also brings redundancies to the system, which makes trajectory planning more challenging. One class of problems recently arising from mobile 3D printing is trajectory-continuous tasks, in which the end-effector is required to follow a designed continuous trajectory (time-parametrized path) in task space. This paper formulates and solves the optimal trajectory planning problem for mobile manipulators under the end-effector trajectory continuity constraint, which allows consideration of other constraints and trajectory optimization. To demonstrate our method, a discrete optimal trajectory planning algorithm is proposed to solve mobile 3D printing tasks in multiple experiments.
## I Introduction
A mobile manipulator often consists of a manipulator, such as a robotic arm, and a mobile base which helps extend the workspace of the manipulator [1, 2, 3]. This mobility feature enables the mobile manipulator to be employed in many large-scale applications which are usually considered impossible for a fixed-base robot with a similar size. The tasks for mobile manipulators can be divided into two categories: sequencing tasks and continuous tasks.
Examples of sequencing tasks are drilling [3], picking [4], inspection [5], etc. These tasks usually consist of multiple subtasks, or targets. The challenge of this class of tasks is finding the optimal sequence to perform the subtasks.
In the second category, there are path-continuous tasks and _trajectory-continuous tasks_, both of which require the mobile manipulator to perform a single continuous task. In path-continuous tasks, the end-effector must travel along a continuous path. Meanwhile, in trajectory-continuous tasks, the end-effector is required to follow a continuous trajectory (a time-parametrized path). Examples of these tasks are robotic writing [6], spraying [7], 3D printing [1, 2], etc.
In addition to finding feasible trajectories, trajectory optimization is of great importance, especially in mobile manipulation. Due to the centimetre-range uncertainty of the mobile base, it is desirable to minimize an objective such as travel time or distance, effort, number of stops, etc. For sequencing tasks, this problem has been solved, for example, by clustering the task space before task sequencing [3]. However, to the best of our knowledge, mobile base trajectory optimization has not been addressed for trajectory-continuous tasks.
In this paper, we propose a method for planning optimal mobile manipulator trajectory in trajectory-continuous tasks, which consists in:
* Formulating the problem using configuration spacetime which reveals that this is indeed a constrained optimal trajectory planning problem subject to multiple constraints (geometric constraint, end-effector trajectory continuity and reachability, collisions, and velocity constraint) and trajectory optimization.
* Finding the optimal mobile base trajectory numerically in a discrete optimal trajectory planning algorithm.
The remainder of this paper is organized as follows. In section II, we discuss related works. Section III formulates the problem and gives an overview of some solution approaches. In section IV, we propose a discrete optimal trajectory planning method. Section V presents the applications of our method in large-scale mobile 3D printing and section VI provides some discussions.
## II Related Works
In recent years, mobile manipulators have gained attention because of their potential in various types of tasks, bringing with them new challenges for motion planning. A survey can be found in [8], while technical backgrounds are explained in [9, 10].
For sequencing tasks which consist of multiple subtasks or targets between which the motion is not pre-determined, the main problem is to find the optimal sequence to visit all targets that minimizes travel time or distance, etc. This challenge has been addressed recently in [3, 4, 5, 11].
For path-continuous tasks where the robot's end-effector is required to follow a path in task space without time-parametrization, feasible base poses can be found based on reachability map [12] or inverse reachability distribution [13]. Besides, the whole-body robot motion can be controlled
Fig. 1: Mobile 3D printing an NTU shape (\(3\times 0.75\times 0.15m\)) with nozzle speed \(10cm/s\) ([https://youtu.be/yyBv3xGCInk](https://youtu.be/yyBv3xGCInk))
and planned online, such as using Model Predictive Control (MPC) [14, 15, 16] or Constrained Sequential Linear Quadratic Optimal Control (Constrained SLQ) [17].
Recently, from the field of large-scale mobile 3D printing for building and construction, a new class of mobile manipulator tasks has emerged which we call trajectory-continuous tasks. Compared to path-continuous tasks, these trajectory-continuous tasks require the end-effector to follow a designed trajectory (a time-parametrized path) in task space. For example, in 3D printing, the nozzle must move along the printing path at a designed speed.
To print an object larger than the reach of the manipulator, instead of using multiple stationary robots [18], the recent trend is printing-while-moving [1]. However, most works in this field have been done with the mobile base trajectory planned manually. More recently in [2, 19], some efforts have been made in path planning for mobile manipulators based on RRT* and inverse reachability. Limitations of this method are that it does not consider the velocity constraint and only returns a jagged path which requires post-processing such as smoothing and time-parametrization [19].
In this paper, we would like to formulate the mobile manipulator motion planning problem in trajectory-continuous tasks using the configuration spacetime, which reveals that this is indeed a constrained optimal trajectory planning problem. Based on this insight, we use a light-weight, reachability-guaranteed geometric reachable region [3] to guide a discrete optimal trajectory planning algorithm in a discretized admissible base configuration spacetime, which handles multiple constraints and trajectory optimization.
## III Problem Formulation
### _Configuration spacetime_
The _configuration space (C-space)_\(\mathcal{C}\) is an \(n\)-dimensional manifold in which each point has \(n\)_generalized coordinates_ representing the system configuration:
\[\mathbf{q}\equiv(q_{1},...,q_{n})\in\mathcal{C} \tag{1}\]
The term _spacetime_ originated with Jules Henri Poincare in the field of Relativity, but it has also been used in Classical Mechanics [20]. Spacetime is a representation of the evolution of a dynamical system in space and time.
The term _configuration spacetime_ (C-spacetime) was first used by L.P. Eisenhart (1939), A. Lichnerowicz (1955), H.D. Dombrowski and K. Horneffer (1963) [20]. Configuration spacetime \(\mathcal{X}\) is defined as the topological product of a real time axis \(\mathcal{T}\subseteq\mathbb{R}\) and the C-space: \(\mathcal{X}=\mathcal{T}\otimes\mathcal{C}\). Thus, the configuration spacetime is an \((n+1)\)-manifold:
\[\mathcal{X}=\{\mathbf{x}\equiv(t,\mathbf{q})\mid t\in\mathcal{T},\mathbf{q}\in \mathcal{C}\} \tag{2}\]
where each point \(\mathbf{x}=(t,q_{1},...,q_{n})\) is called an _event_ which occurs at time \(t\) (_temporal coordinate_) when the system is having the configuration \((q_{1},...,q_{n})\) (_spatial coordinates_).
The _trajectory_ of a system is a curve \(\mathbf{q}(t)\) in C-space, or equivalently \(\mathbf{x}(s)\) in C-spacetime. In this paper, we consider \(\mathbf{q}(t)\) to be continuous, so \(\mathbf{x}(s)\) is a continuous sequence of events. Here, the path parameter \(s\) goes from \(s=0\) at a start event to \(s=1\) at an end event (see Fig. 2). The tangent vector at any point along the trajectory
\[\mathbf{x}^{\prime}\equiv\frac{d\mathbf{x}}{ds}=\begin{pmatrix}t^{\prime}\\ \mathbf{q}^{\prime}\end{pmatrix} \tag{3}\]
must satisfy \(t^{\prime}\equiv dt/ds>0\) since time is monotonic.
The _spacetime velocity_ at any point along the trajectory is parallel to the tangent vector at that point:
\[\dot{\mathbf{x}}\equiv\frac{d\mathbf{x}}{dt}=\frac{\mathbf{x}^{\prime}}{t^{ \prime}}=\begin{pmatrix}1\\ \dot{\mathbf{q}}\end{pmatrix} \tag{4}\]
where \(\dot{\mathbf{q}}\equiv d\mathbf{q}/dt\) is called the _generalized velocity_.
### _Mobile manipulators in configuration spacetime_
For example, our mobile manipulator consists of a 6-DOF manipulator \(\mathbf{q}_{m}=(\theta_{1},...,\theta_{6})\) and a 3-DOF planar mobile base \(\mathbf{q}_{b}=(x,y,\varphi)\) where \(\varphi\in\mathbb{S}^{1}\) is the orientation (yaw angle) of the base. Therefore, the whole-body configuration and C-space of a mobile manipulator are
\[\mathbf{q}=(\theta_{1},...,\theta_{6},x,y,\varphi)\in\mathcal{C},\quad \mathcal{C}\subseteq\mathbb{R}^{8}\otimes\mathbb{S}^{1} \tag{5}\]
We follow the decoupled approach which treats the manipulator and the mobile base separately \(\mathcal{C}=\mathcal{M}\otimes\mathcal{B}\) where the _manipulator configuration space_ is \(\mathcal{M}\subseteq\mathbb{R}^{6}\) and the _base configuration space (B-space)_ is \(\mathcal{B}\subseteq\mathbb{R}^{2}\otimes\mathbb{S}^{1}\).
We define the _base configuration spacetime (B-spacetime)_
\[\mathbf{x}=(t,x,y,\varphi)\in\mathcal{X},\quad\mathcal{X}\subseteq\mathbb{R} ^{3}\otimes\mathbb{S}^{1} \tag{6}\]
where we denote \(\mathbf{x},\mathcal{X}\) instead of \(\mathbf{x}_{b},\mathcal{X}_{b}\) since we apply the concept of spacetime only on the mobile base, meanwhile the manipulator will be analysed using kinematic reachability.
Fig. 2: Visualization of a trajectory in the base configuration spacetime (B-spacetime). The \(4^{\text{th}}\) dimension (orientation \(\varphi\)) is not shown. Any point \(s\in[0,1]\) along the trajectory must be inside an admissible B-space \(\mathcal{B}_{a}(t)\) and its velocity must be within a cone of admissible velocities \(\mathcal{V}_{a}\). The spacetime region of admissible events (admissible B-spacetime \(\mathcal{X}_{a}\)) is obtained as the admissible B-space changes in time.
### _Constraints_
#### Iv-C1 Geometric constraints
In a 6D task, the end-effector must follow a rigid-body motion, so the task space is generally \(\mathbb{R}^{3}\otimes SO(3)\). For 5D tasks, the geometric constraints are the end-effector's 3D position and 2D direction, so the task space is \(\mathbb{R}^{3}\otimes\mathbb{S}^{2}\). For example, 3D printing is a 5D task in which the nozzle points vertically downwards or tilts slightly from it, but the rotation around the vertical axis is free. In this paper, we consider 5D tasks which bring more redundancies to the problem (\(n-5\) compared to \(n-6\)).
#### Iv-C2 End-effector trajectory-continuity constraint
The end-effector is required to follow a continuous task-space trajectory \(\mathbf{p}(t)\) instead of just a geometric path.
#### Iv-C3 Collisions
The spacetime representation allows us to consider collisions between the mobile base and either static or dynamic environment by setting a safe distance from the obstacles. For a known dynamic environment, the sets of collision-free configurations of the base can be found at different times which give us \(\mathcal{B}_{free}(t)\subseteq\mathcal{B}\). For static environment, we simply have \(\mathcal{B}_{free}(t)=\mathcal{B}_{free}(0)\;\forall t\in\mathcal{T}\).
#### Iv-C4 End-effector reachability constraint
For each end-effector pose in its task-space trajectory \(\mathbf{p}(t)\), the base configuration \(\mathbf{q}_{b}(t)\) must be kept within an admissible set of configurations \(\mathcal{B}_{a}(t)\subseteq\mathcal{B}_{free}(t)\) so that the end-effector can reach the pose without violating joint limits. The set of admissible events is therefore a subset of the B-spacetime which we call _admissible B-spacetime_ (see Fig. 2):
\[\mathcal{X}_{a}=\{\mathbf{x}=(t,\mathbf{q}_{b})\;|\;t\in\mathcal{T},\;\mathbf{ q}_{b}\in\mathcal{B}_{a}(t)\}\subseteq\mathcal{X} \tag{7}\]
This set can be found based on the manipulator kinematic reachability analysis which will be discussed in section IV.B.
#### Iv-C5 Velocity constraint
In this paper, we consider the (translational and rotational) velocity limits of the mobile base, so the _set of admissible spacetime velocities_ is
\[\mathcal{V}_{a}=\left\{\dot{\mathbf{x}}=(1,\dot{\mathbf{q}}_{b})\;|\;\dot{x}^{2}+\dot{y}^{2}\leq v_{max}^{2},\;\dot{\varphi}^{2}\leq\omega_{max}^{2}\right\} \tag{8}\]
This constraint is visualized in Fig. 2: at every point on the trajectory, the spacetime velocity must be kept inside a cone.
### _Problem formulation_
The ultimate goal of mobile manipulator trajectory planning for trajectory-continuous tasks is to find a whole-body joint trajectory that performs the task without violating the constraints and, optionally, optimizes a given objective.
Since finding the whole-body trajectory directly is computationally expensive, we focus on the decoupled approach which plans the trajectories for the mobile base and the manipulator separately. It is worth noting that although the whole-body planning problem is decoupled into manipulator and mobile base planning problems, they are still related: the manipulator's kinematic reachability constrains the admissible configurations of the mobile base so that the manipulator can reach the desired pose.
#### Iv-D1 Planning mobile base trajectory
The _feasible trajectory planning_ problem for mobile base is to find a continuous trajectory inside the admissible B-spacetime, such that the velocity at every trajectory point is an admissible velocity:
\[\mathbf{x}(s)\in\mathcal{X}_{a},\quad\dot{\mathbf{x}}(s)\in\mathcal{V}_{a} \quad\forall s\in[0,1] \tag{9}\]
Constraints: geometric, end-effector's trajectory-continuity and reachability constraints, and collisions, have been considered in \(\mathcal{X}_{a}\); while velocity constraint is realized by \(\mathcal{V}_{a}\).
_Optimal trajectory planning_: among feasible solutions, it is desirable to find an optimal trajectory to minimize a cost:
\[\mathbf{x}^{opt}(s)=\arg\min_{\mathbf{x}(s)}J[\mathbf{x}(s)] \tag{10}\]
where \(J[\mathbf{x}(s)]\) is the cost functional, typically formulated as an integral of a Lagrangian. For example, the cost functional for minimum-effort motion is: (we denote \(a\cdot B\cdot a\equiv a^{T}Ba\))
\[J[\mathbf{q}_{b}(t)]\equiv\int_{0}^{t_{end}}L(\mathbf{q}_{b},\dot{\mathbf{q}}_{b},t)dt=\int_{0}^{t_{end}}\dot{\mathbf{q}}_{b}\cdot\mathbf{I}_{q}\cdot\dot{\mathbf{q}}_{b}\,dt \tag{11}\] \[\Leftrightarrow\quad J[\mathbf{x}(s)]=\int_{0}^{1}\mathbf{x}^{\prime}\cdot\mathbf{I}_{x}\cdot\mathbf{x}^{\prime}\,\frac{ds}{t^{\prime}},\quad\mathbf{I}_{x}=\begin{pmatrix}0&0\\ 0&\mathbf{I}_{q}\end{pmatrix}\]
where the mathematical weight matrix \(\mathbf{I}_{q}\) is approximately proportional to the robot's inertial matrix.
#### Iv-D2 Manipulator trajectory planning
Given the mobile base trajectory, the manipulator trajectory planning problem is to find an appropriate motion of the manipulator so that its end-effector follows the designed task-space trajectory.
The method of using manipulator's kinematic reachability to confine the admissible configurations of the base guarantees that IK solutions exist for finding manipulator joints for every pair of base and end-effector poses \(\mathbf{q}_{b}(t),\mathbf{p}(t)\). The manipulator joint trajectory can be computed using differential IK. The rest of this paper will focus on mobile base trajectory planning.
### _Solution approaches_
A notable work recently in this area is [2]. The authors proposed a sampling-based path planning algorithm (Task-Consistent RRT*) which then requires a post-processing phase to smooth and time-parametrize the path to get the final trajectory [19]. A possible approach is to extend this method to run in spacetime and consider velocity constraint, which will produce a trajectory instead of just a path.
We notice that the problem formulation using configuration spacetime in this section reveals an important insight: the problem is indeed constrained optimal trajectory planning instead of path planning. Since time always marches forward and the mobile base is confined inside the admissible B-spacetime, which guides the base from start to goal, the difficulty does not lie in exploring a region to find a path to the goal but in handling the constraints and the optimization. Therefore, we propose a fast, complete, and optimal mobile base trajectory planning algorithm based on the discrete optimal planning approach [10] in the discretized admissible B-spacetime.
## IV Discrete Optimal Trajectory Planning
### _Discretization_
Let us sample \(N+1\) points uniformly (in time) along the end-effector trajectory, so the corresponding mobile base trajectory consists of \(N+1\) time steps: \(i=0,...,N\). The path parameter is \(s=i\Delta s\), \(\Delta s=1/N\) and a time instance is
\[t=t_{end}s=i\Delta t,\quad\Delta t=t_{end}/N \tag{12}\]
We discretize \(\mathcal{V}_{a}\) by specifying a set of velocity step sizes \(\Delta v_{x},\Delta v_{y},\Delta\omega\) and find admissible combinations of \((k_{x},k_{y},k_{\varphi})\) so that the admissible velocities must satisfy velocity constraint (8): \(\dot{\mathbf{x}}=(1,k_{x}\Delta v_{x},k_{y}\Delta v_{y},k_{\varphi}\Delta \omega)\in\mathcal{V}_{a}\). Next, we define the set of admissible controls:
\[\mathcal{U}_{a}=\{\mathbf{u}=\dot{\mathbf{x}}\Delta t\mid\dot{\mathbf{x}}\in \mathcal{V}_{a}\} \tag{13}\]
We set the step sizes of the spatial coordinates as follows:
\[\Delta x=\Delta v_{x}\Delta t,\quad\Delta y=\Delta v_{y}\Delta t,\quad\Delta \varphi=\Delta\omega\Delta t \tag{14}\]
so that for any admissible controls, the mobile base at any time step \(i\) moves from one grid point to another. We use the explicit _forward Euler_ discretization scheme:
\[\mathbf{x}^{i+1}-\mathbf{x}^{i}=\dot{\mathbf{x}}^{i}\Delta t=\mathbf{u}^{i} \tag{15}\]
where the superscripts represent different time steps.
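As a concrete reading of (8) and (13)-(14), the following Python sketch (ours; the function name and data layout are illustrative) enumerates the discretized admissible control set \(\mathcal{U}_{a}\):

```
def admissible_controls(dt, dvx, dvy, dw, v_max, w_max):
    # Controls are u = xdot * dt with xdot = (1, kx*dvx, ky*dvy, kp*dw),
    # restricted to the velocity cone of eq. (8).
    controls = []
    for kx in range(-int(v_max // dvx), int(v_max // dvx) + 1):
        for ky in range(-int(v_max // dvy), int(v_max // dvy) + 1):
            if (kx * dvx) ** 2 + (ky * dvy) ** 2 > v_max ** 2:
                continue  # translational speed limit
            for kp in range(-int(w_max // dw), int(w_max // dw) + 1):
                # By eq. (14) each control moves the base between grid points.
                controls.append((dt, kx * dvx * dt, ky * dvy * dt, kp * dw * dt))
    return controls
```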
### _Kinematic Reachability Analysis_
We implement the kinematic reachability analysis introduced in [3]. Firstly, we discretize the space relative to the manipulator into 3D voxels of size \(\delta\times\delta\times\delta\). Secondly, we analyse the geometry of the end-effector trajectory to specify a range of orientations. For example, in 3D printing tasks, the nozzle must be pointed downwards, and we allow \(10^{\circ}\) deviation from the vertical direction. Next, for each voxel position, we compute Inverse Kinematics (IK) and mark as valid the voxels which the end-effector can reach within the range of orientations determined previously. At the end of the process, a valid voxel cloud is obtained (see Fig. 3(a)).
From the valid voxel cloud, we determine a _geometric reachable region_ for each sampling point in the end-effector trajectory. This can be imagined as slicing the valid voxel cloud using: 2 horizontal planes bounding the height \(h\) of the trajectory point: \(z=h\pm\delta/2\), 1 vertical plane \(x=X_{min}\) to keep a safe distance from the robot, and 2 spherical surfaces to close a region containing only valid voxels (see Fig. 3(b)).
Using the geometric reachable region with parameters \(X_{min},Z_{min},Z_{max},R_{min},R_{max}\), the discretized admissible B-space \(\mathcal{B}_{a}^{i}\) for each end-effector pose \(\mathbf{p}(i\Delta t)\) can be obtained (see Fig. 4). Then, the discretized admissible B-spacetime is
\[\mathcal{X}_{a}=\bigcup_{i=0}^{N}\mathcal{X}_{a}^{i},\quad\mathcal{X}_{a}^{ i}=\{(i\Delta t,\mathbf{q}_{b})\mid\mathbf{q}_{b}\in\mathcal{B}_{a}^{i}\} \tag{16}\]
In most cases, the admissible values of \(\varphi\) are confined in a certain range based on the required orientation of the end-effector such as in [3]. However, in 3D printing, the nozzle axis coincides with the mobile base axis, making \(\varphi\in\mathbb{S}^{1}\).
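To illustrate how a single admissibility test could look, here is a sketch (ours; the base-frame convention and the signature are assumptions) of membership in the geometric reachable region with parameters \(X_{min},Z_{min},Z_{max},R_{min},R_{max}\):

```
import math

def in_reachable_region(p_rel, X_min, Z_min, Z_max, R_min, R_max):
    # p_rel: end-effector target point expressed in the manipulator base frame.
    x, y, z = p_rel
    r = math.sqrt(x * x + y * y + z * z)
    # Two horizontal planes, one vertical safety plane, two spheres.
    return (Z_min <= z <= Z_max) and (x >= X_min) and (R_min <= r <= R_max)
```

Scanning grid poses \((x,y,\varphi)\) and keeping those for which the current trajectory point \(\mathbf{p}(i\Delta t)\) passes this test yields the discretized \(\mathcal{B}_{a}^{i}\) of (16).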
### _Planning optimal mobile base trajectory_
We introduce the following definitions which are inspired by similar concepts in control and motion planning [21].
**Definition 1** (One-step feasible set): _The one-step feasible set \(\mathcal{Q}(\mathcal{I})\) is the set of admissible events \(\mathbf{x}\in\mathcal{X}_{a}\) that can reach at least one event \(\mathbf{\tilde{x}}\in\mathcal{I}\) using one admissible control \(\mathbf{u}\in\mathcal{U}_{a}\), that is \(\mathbf{\tilde{x}}=f(\mathbf{x},\mathbf{u})=\mathbf{x}+\mathbf{u}\) (event transition equation)_
\[\mathcal{Q}(\mathcal{I})=\{\mathbf{x}\mid f(\mathbf{x},\mathbf{u})\in \mathcal{I},\,\mathbf{x}\in\mathcal{X}_{a},\,\mathbf{u}\in\mathcal{U}_{a}\} \tag{17}\]
**Definition 2** (\(i\)-stage feasible set): _The \(i\)-stage feasible set \(\mathcal{K}^{i}(\mathcal{I}^{N})\) is the set of admissible events that can reach at least one event in a final set \(\mathcal{I}^{N}\) after a sequence of \(N-i\) admissible controls. This set can be computed iteratively by_
\[\mathcal{K}^{N}(\mathcal{I}^{N}) =\mathcal{I}^{N}\cap\mathcal{X}_{a} \tag{18}\] \[\mathcal{K}^{i}(\mathcal{I}^{N}) =\mathcal{Q}(\mathcal{K}^{i+1}(\mathcal{I}^{N}))\]
_In our case, the goal set is \(\mathcal{I}^{N}=\mathcal{X}_{a}^{N}\) so (18) becomes_
\[\mathcal{K}^{N}(\mathcal{X}_{a}^{N}) =\mathcal{X}_{a}^{N}\cap\mathcal{X}_{a}=\mathcal{X}_{a}^{N} \tag{19}\] \[\mathcal{K}^{i}(\mathcal{X}_{a}^{N}) =\mathcal{Q}(\mathcal{K}^{i+1}(\mathcal{X}_{a}^{N}))\subseteq \mathcal{X}_{a}^{i}\]
_which suggests a multistage graph where its stages are the same as the time steps of the trajectory \(i=0,1,...,N\), and every node in the graph lies on a grid point of the discretized admissible B-spacetime (\(\mathcal{K}^{i}\subseteq\mathcal{X}_{a}^{i}\))._
From (11), the numerical cost functional is:
\[J=\sum_{i=0}^{N-1}l(\mathbf{u}^{i})=\sum_{i=0}^{N-1}\mathbf{u}^{i}\cdot \mathbf{I}_{x}\cdot\mathbf{u}^{i}\frac{1}{\Delta t} \tag{20}\]
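Written out for one control \(\mathbf{u}=(\Delta t,\Delta x,\Delta y,\Delta\varphi)\), the summand of (20) weights only the spatial displacement, since \(\mathbf{I}_{x}\) has a zero time block; a direct Python transcription (helper name ours) reads:

```
import numpy as np

def step_cost(u, I_q, dt):
    # l(u) = u . I_x . u / dt from eq. (20); only the spatial part
    # (dx, dy, dphi) of u is weighted by the 3x3 matrix I_q.
    q = np.asarray(u[1:])
    return float(q @ I_q @ q) / dt
```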
Fig. 4: Discretized admissible B-space \(\mathcal{B}_{a}(0)\) for end-effector pose \(\mathbf{p}(0)\); vertical axis shows \(\varphi\in(-\pi,\pi]\) for \(\varphi\in\mathbb{S}^{1}\)
Fig. 3: Visualization of Kinematic Reachability Analysis
The numerical problem is: finding the minimum-cost trajectory in a multi-source (\(\mathbf{x}^{0}\in\mathcal{X}_{a}^{0}\)) multi-goal (\(\mathbf{x}^{N}\in\mathcal{X}_{a}^{N}\)) multistage graph. Instead of constructing the graph before running a graph search (such as using Dijkstra's algorithm), we propose running _backward value iterations_[10] (same as memoization in Dynamic Programming) concurrently with graph construction: while connecting the nodes using (19), concurrently calculate and store the _minimum cost-to-go_\(G(\mathbf{x})\) with the corresponding _optimal next-event_\(\mathbf{\tilde{x}}^{*}(\mathbf{x})\) by:
\[\begin{split} G(\mathbf{x}\in\mathcal{K}^{N})&=0\\ G(\mathbf{x}\in\mathcal{K}^{i})&=\min_{\mathbf{\tilde{x}}\in\mathcal{K}^{i+1}}\big\{l(\mathbf{\tilde{x}}-\mathbf{x})+G(\mathbf{\tilde{x}})\mid\mathbf{\tilde{x}}-\mathbf{x}\in\mathcal{U}_{a}\big\}\\ \mathbf{\tilde{x}}^{*}(\mathbf{x}\in\mathcal{K}^{i})&=\arg\min_{\mathbf{\tilde{x}}\in\mathcal{K}^{i+1}}\big\{l(\mathbf{\tilde{x}}-\mathbf{x})+G(\mathbf{\tilde{x}})\mid\mathbf{\tilde{x}}-\mathbf{x}\in\mathcal{U}_{a}\big\}\end{split} \tag{21}\]
The minimum cost \(J_{min}=\min_{\mathbf{x}}\{G(\mathbf{x}\in\mathcal{K}^{0})\}\) can be found as soon as the graph is fully connected, and the memoization can be used to recover the optimal trajectory. The whole procedure is summarized in Algorithm 1 below.
```
Input:  discretized admissible base configuration spacetime
        \(\mathcal{X}_{a}=\{\mathcal{X}_{a}^{0},...,\mathcal{X}_{a}^{N}\}\), set of admissible controls \(\mathcal{U}_{a}\)
Output: optimal trajectory \(\{\mathbf{x}^{0},...,\mathbf{x}^{N}\}\)

Initialization: \(\mathcal{K}^{N}\leftarrow\mathcal{X}_{a}^{N}\) and \(G(\mathbf{x})\leftarrow 0\ \forall\mathbf{x}\in\mathcal{K}^{N}\)
/* backward iterations */
for \(i\in[N-1,...,0]\) do
    \(\mathcal{K}^{i}\leftarrow\emptyset\)
    for \(\mathbf{x}\in\mathcal{X}_{a}^{i}\) do
        \(ValidNode\leftarrow False\); \(G(\mathbf{x})\leftarrow\infty\)
        for \(\mathbf{\tilde{x}}\in\mathcal{K}^{i+1}\) such that \(\mathbf{\tilde{x}}-\mathbf{x}\in\mathcal{U}_{a}\) do
            \(ValidNode\leftarrow True\)
            if \(l(\mathbf{\tilde{x}}-\mathbf{x})+G(\mathbf{\tilde{x}})<G(\mathbf{x})\) then
                \(G(\mathbf{x})\leftarrow l(\mathbf{\tilde{x}}-\mathbf{x})+G(\mathbf{\tilde{x}})\)
                \(\mathbf{\tilde{x}}^{*}(\mathbf{x})\leftarrow\mathbf{\tilde{x}}\)
        if \(ValidNode\) then
            \(\mathcal{K}^{i}\).Insert(\(\mathbf{x}\))
    if \(\mathcal{K}^{i}=\emptyset\) then
        return Infeasible
/* recover the optimal trajectory */
\(\mathbf{x}^{0}\leftarrow\arg\min_{\mathbf{x}}G(\mathbf{x}\in\mathcal{K}^{0})\)
for \(i\in[0,...,N-1]\) do
    \(\mathbf{x}^{i+1}\leftarrow\mathbf{\tilde{x}}^{*}(\mathbf{x}^{i})\)
```
**Algorithm 1**Mobile base trajectory planning
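For reference, here is a direct Python transcription of Algorithm 1 (a sketch, not the authors' implementation; it assumes grid events are exact tuples, e.g. integer multiples of the step sizes, so they can serve as dictionary keys):

```
import math

def plan_base_trajectory(X_a, U_a, cost):
    # X_a[i]: admissible grid events (tuples (t, x, y, phi)) at stage i.
    # U_a:    admissible controls (tuples), e.g. from admissible_controls.
    # cost:   one-step cost l(u), e.g. step_cost above.
    N = len(X_a) - 1
    G = {x: 0.0 for x in X_a[N]}   # minimum cost-to-go, G = 0 on K^N
    nxt = {}                       # memoized optimal next-event
    K = set(X_a[N])
    for i in range(N - 1, -1, -1):     # backward iterations
        K_i = set()
        for x in X_a[i]:
            best, arg = math.inf, None
            for u in U_a:
                x_next = tuple(a + b for a, b in zip(x, u))
                if x_next in K:        # event transition lands in K^{i+1}
                    c = cost(u) + G[x_next]
                    if c < best:
                        best, arg = c, x_next
            if arg is not None:        # x is a valid node of K^i
                K_i.add(x)
                G[x] = best
                nxt[x] = arg
        if not K_i:
            return None                # Infeasible
        K = K_i
    x = min(K, key=G.get)              # multi-source: cheapest start event
    traj = [x]
    while x in nxt:                    # replay memoized optimal next-events
        x = nxt[x]
        traj.append(x)
    return traj
```

Because each event carries its time coordinate, cost-to-go entries from different stages never collide in the dictionary.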
### _Completeness_
**Theorem 1** (Completeness): _Algorithm 1 only reports Infeasible when there is indeed no feasible trajectory._
Proof:: Since Algorithm 1 only reports Infeasible when it runs into \(\mathcal{K}^{i}=\emptyset\), we can show by contradiction that: if there exists a feasible trajectory \(\{\mathbf{x}^{0},...,\mathbf{x}^{N}\}\), then \(\mathcal{K}^{i}\) contains \(\mathbf{x}^{i}\) for all \(i\in[0,N]\). Using backward induction:
Initialization: \(\mathbf{x}^{N}\in\mathcal{K}^{N}\) by construction.
Induction: Assume that \(\mathbf{x}^{i+1}\in\mathcal{K}^{i+1}\); Since \(\mathbf{x}^{i}\) and \(\mathbf{x}^{i+1}\) are both in a feasible trajectory, they satisfy \(\mathbf{x}^{i+1}-\mathbf{x}^{i}\in\mathcal{U}_{a}\), which means \(\mathbf{x}^{i}\in\mathcal{Q}(\mathcal{K}^{i+1})=\mathcal{K}^{i}\) according to (17) and (19).
### _Optimality_
**Theorem 2** (Optimality): _If Algorithm 1 returns an output trajectory, then it is indeed the optimal trajectory._
Proof:: Firstly, if Algorithm 1 returns a sequence \(\{\mathbf{x}^{0},...,\mathbf{x}^{N}\}\), one can show by forward induction that this sequence is a feasible trajectory.
Secondly, Algorithm 1 tracks the minimum cost-to-go and optimal steps from every node to the goal stage in the same way as dynamic programming. Therefore, by the principle of optimality [10], Algorithm 1 returns the trajectory with minimum cost-to-go from stage 0 to N, which is equivalent to the optimal trajectory with minimum cost functional.
## V Experiments
Our mobile manipulator consists of a DENSO VS-087 6-DOF arm mounted on a Clearpath Ridgeback 3-DOF omni-directional mobile base. The following experiments illustrate our method in 3D printing with increasing difficulty levels. A hardware demo and a simulation example are shown in the accompanying video ([https://youtu.be/yyBv3xGClnk](https://youtu.be/yyBv3xGClnk)).
### _Experiment 1: Mobile 1D printing_
Figure 5 shows the optimal mobile base trajectory solutions for three cases of mobile base DOFs in printing a horizontal line. Since the B-spacetime for the 3-DOF base is 4-dimensional (\(t,x,y,\varphi\)), Fig. 5(c) can only show the solution in task space.
### _Experiment 2: Mobile 3D printing (hardware demo)_
Figure 6 shows a real 3D printing task of a U shape (5 layers) with total printing path length of \(d=19.85m\). Our algorithm found an optimal mobile base trajectory in \(3.8s\) (using \(\Delta t=3s\), \(\Delta v_{x}=\Delta v_{y}=5cm/s\), \(\Delta\omega=\pi/30\,rad/s\)).
### _Experiments 3-4: Comparisons_
We would like to compare our method with manual planning in [1]. Figure 7 shows our solution where the mobile base moves in a region \(55\times 20cm\) instead of \(80\times 10cm\), with an average speed \(2cm/s\) which is lower than \(3.5cm/s\) in [1].
Next, we benchmark Algorithm 1 and compare it with a baseline (Dijkstra's algorithm) in the 3D printing task in Fig. 1: an NTU shape with length \(d=112.9m\) (10 layers). Table I compares the computation time of two methods (both using \(\Delta t=2.5s\), \(\Delta v_{x}=\Delta v_{y}=5cm/s\), \(\Delta\omega=\pi/30\,rad/s\)). Algorithm 1 is faster by running the search during graph construction.
## VI Discussions
### _Complexity analysis_
#### Vi-A1 Complexity with respect to task length
Table I shows that the computation time of Algorithm 1 increases linearly, since only the number of stages increases with task length \(N\propto d\). This is a crucial property for large-scale applications.
#### Vi-A2 Complexity with respect to discretization step sizes
Number of stages is \(N\propto 1/\Delta t\); by (14), the number of nodes per stage is \(|\mathcal{X}_{a}^{i}|\propto(1/\Delta x\Delta y\Delta\varphi)\propto(1/\Delta t^{3}\Delta v_{x}\Delta v_{y}\Delta\omega)\). Based on Algorithm 1, we expect its complexity to be no higher than \(O\left((1/\Delta t)^{7}(1/\Delta v_{x}\Delta v_{y}\Delta\omega)^{2}\right)\). Figure 8 shows in our test case that, as \(\Delta v=\Delta v_{x}=\Delta v_{y}\) and \(\Delta t\) decrease, the computation time increases polynomially with order \(O((1/\Delta v)^{3.9})\) and \(O((1/\Delta t)^{6.0})\) respectively (based on log-log analysis).
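The quoted orders come from a log-log fit; a generic version of that estimate (our helper, not the authors' script) is:

```
import numpy as np

def empirical_order(step_sizes, times):
    # If time ~ C * (1/h)^p, the slope of log(time) against log(1/h) is p.
    h = np.asarray(step_sizes, dtype=float)
    t = np.asarray(times, dtype=float)
    p, _ = np.polyfit(np.log(1.0 / h), np.log(t), 1)
    return p
```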
In our experiments, we use coarse step sizes, then apply linear interpolation to match with the controller's rate, so the trajectory is continuous and piecewise \(C^{1}\)-continuous.
### _Notes on Reachability_
The geometric reachable region can be seen as an alternative to the inverse reachability map (IRM) [2] with some advantages. First, it is light-weight and represents the reachability using geometric parameters, hence it does not require matching between the IRM and the grid. In our tests, the process of finding the discretized admissible B-spacetime took negligible time. Second, IK solutions exist everywhere inside the geometric reachable region, which satisfies the manipulator reachability constraint. Although this does not guarantee a continuous joint trajectory between the IK solutions [22], such a trajectory was obtained in all our test cases.
### _Notes on Kinematics_
The cost functional for minimum effort in (11) effectively prevents unnecessarily large velocity changes, since a lower, more stable velocity profile has a lower effort, such as the shortest straight line in spacetime in Fig. 4(a). This property makes our hardware experiment possible without post-processing. Second-order kinematic constraints such as acceleration bounds will be considered in our future works.
## VII Conclusion
In this paper, we have proposed a method for formulating and solving the optimal trajectory planning for mobile manipulators under end-effector trajectory continuity constraint. We have also presented a fast, complete, and optimal algorithm to implement our method in experiments of large-scale 3D printing tasks with different levels of difficulty. Our future directions include: smoothing the mobile base trajectory, considering second-order kinematic constraints, extending to hardware motion control, and implementing sampling-based methods in our configuration spacetime formulation.
## Acknowledgment
This research was supported by the National Research Foundation, Prime Minister's Office, Singapore under its Medium-Sized Centre funding scheme, CES_SDC Pte Ltd, Sembcorp Architects & Engineers Pte Ltd, and Chip Eng Seng Construction Ltd.
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Planning time for: & 8 layers & 10 layers & 12 layers & 14 layers \\ \hline Algorithm 1 & 89.9s & 108.4s & 129.7s & 155.1s \\ Baseline (Dijkstra) & 120.3s & 155.2s & 184.4s & 515.0s \\ \hline \end{tabular}
\end{table} TABLE I: Planning time comparison (in NTU test, Fig. 1)
Fig. 5: Mobile 1D printing a \(2.1m\) line with different base’s DOFs. (a) and (b) are seen in B-spacetime, (c) in task space.
Fig. 8: Algorithm complexity with respect to discretization
Fig. 6: Mobile 3D printing a U shape (\(0.9\times 0.675\times 0.05m\)) with nozzle speed \(10cm/s\) ([https://youtu.be/yyBv3xGClnk](https://youtu.be/yyBv3xGClnk))
Fig. 7: Optimal base trajectory for comparison with [1] | Mobile manipulators have been employed in many applications that are traditionally performed by multiple fixed-base robots or a large robotic system. This capability is enabled by the mobility of the mobile base. However, the mobile base also introduces redundancy into the system, which can make mobile manipulator motion planning more challenging. In this paper, we tackle the mobile manipulator motion planning problem under the end-effector trajectory continuity constraint, in which the end-effector is required to traverse a continuous task-space trajectory (time-parametrized path), such as in mobile printing or spraying applications. Our method decouples the problem as follows: (1) planning an optimal base trajectory subject to geometric task constraints, the end-effector trajectory continuity constraint, collision avoidance, and the base velocity constraint; which ensures that (2) a manipulator trajectory is obtained |
2303.18041 | Isometries of wall-connected twin buildings | We introduce the notion of a wall-connected twin building and show that the
local-to-global principle holds for these twin buildings. As each twin building
satisfying Condition (co) (introduced in [7]) is wall-connected, we obtain a
strengthening of the main result of [7] that covers also the thick irreducible
affine twin buildings of rank at least 3. | Sebastian Bischof, Bernhard Mühlherr | 2023-03-31T13:20:36 | http://arxiv.org/abs/2303.18041v1 | # Isometries of wall-connected twin buildings
###### Abstract
We introduce the notion of a wall-connected twin building and show that the local-to-global principle holds for these twin buildings. As each twin building satisfying Condition (co) (introduced in [7]) is wall-connected, we obtain a strengthening of the main result of [7] that covers also the thick irreducible affine twin buildings of rank at least \(3\).
**Keywords** Twin buildings, Local-to-global principle, affine RGD-systems
**Mathematics Subject Classification** 20E42, 51E24
## 1 Introduction
In [10] Tits gave a complete classification of all thick irreducible spherical buildings of rank at least \(3\). The decisive tool in this classification is the extension theorem for local isometries between two spherical buildings (Theorem 4.1.2 in loc. cit.). Inspired by the paper [11] on Kac-Moody groups over fields, Ronan and Tits introduced twin buildings (see [13, 88/89 and 89/90] and [12]). Twin buildings are natural generalizations of spherical buildings because there is an opposition relation on the set of its chambers. It was conjectured in [12] that the extension theorem for local isometries can be generalized to \(2\)-spherical twin buildings. This conjecture was confirmed in [7] for twin buildings that satisfy a technical condition (called Condition (co) in [7]) and it was shown that Condition (co) is "almost always" satisfied. More precisely, if the twin building in question has no rank two residues isomorphic to \(B_{2}(2),G_{2}(2),G_{2}(3)\) or \({}^{2}F_{4}(2)\), then the extension theorem holds (see Corollary 1.7 in [7]).
It seemed first that Condition (co) was just a convenient assumption for making the ideas in [7] work, but that the extension theorem should hold without any additional hypothesis. After a while, however, there were serious doubts about this (see Paragraph 2.3 in [13, 97/98]) and the question about the validity of the local-to-global principle for all \(2\)-spherical buildings remained an open problem. It is particularly unsatisfying that it is even not known whether the extension theorem holds for twin buildings of affine type. It was observed by Abramenko and Muhlherr that the arguments in [7] can be modified to treat some cases in which Condition (co) does not hold. But these modifications were not good enough to prove the extension theorem for all affine twin buildings.
In this paper we introduce a condition for twin buildings that we call wall-connectedness. It is inspired by the content of [4] and its definition (given in Definition (4.8)) is somewhat technical. It turns out that each twin building satisfying Condition (co) is wall-connected but that the converse is not true. The main goal of this paper is to provide the following improvement of the main result in [7].
**Main result:** The extension theorem holds for wall-connected twin buildings.
For a precise statement of our main result we refer to Corollary (5.18). It turns out that all \(3\)-spherical twin buildings and all irreducible affine twin buildings of rank at least \(3\) are wall-connected (see Section 6). Thus, our main result yields the following:
**Corollary:** The extension theorem holds for all \(3\)-spherical twin buildings and all irreducible affine twin buildings of rank at least \(3\).
### Content of the paper
In Section 2 we fix notation and state some results about parallel residues in a building. In Section 3 we give the definition of a twin building and prove some elementary facts which we will need later. This section is also to be understood as a preliminary section. The first part of the next section is concerned with compatible paths as defined in [4]. In the second part of this section we define \(P\)-compatible paths, which generalize compatible paths to the situation of twin buildings. Later on we prove some results about them. In particular, our proof of the extension theorem relies heavily on Proposition (4.4). At the end of this section we give the definition of a wall-connected twin building. Section 5 is divided into three parts. In the first part we state the definition of an isometry and some basic results about them. A crucial lemma is Lemma (5.5). We will use this lemma in combination with Proposition (4.4) to prove the extension theorem for wall-connected twin buildings. The main step is Proposition (5.13).
The rest of the paper is concerned essentially with the fact that affine twin buildings of rank at least \(3\) are wall-connected.
### Acknowledgement
We thank Richard Weiss for communicating us the proof of Proposition (6.11).
## 2 Preliminaries
### Coxeter system
A _Coxeter system_ is a pair \((W,S)\) consisting of a group \(W\) and a set \(S\subseteq W\) of generators of \(W\) of order \(2\) such that the set \(S\) and the relations \((st)^{m_{st}}\) for all \(s,t\in S\) constitute a presentation of \(W\), where \(m_{st}\) denotes the order of \(st\) in \(W\) for all \(s,t\in S\).
Let \((W,S)\) be a Coxeter system and let \(\ell:W\to\mathbb{N},w\mapsto\min\{k\in\mathbb{N}_{0}\ |\ \exists s_{1},\ldots,s_{k}\in S:w=s_{1} \cdots s_{k}\}\) denote the corresponding length function. The _Coxeter diagram_ corresponding to \((W,S)\) is the labeled graph \((S,E(S))\), where \(E(S)=\{\{s,t\}\ |\ m_{st}>2\}\) and where each edge \(\{s,t\}\) is labeled by \(m_{st}\) for all \(s,t\in S\). We call the Coxeter diagram _irreducible_, if the underlying graph is connected, and we call it \(2\)_-spherical_, if \(m_{st}<\infty\) for all \(s,t\in S\). The _rank_ of a Coxeter diagram is the cardinality of the set of its vertices. It is well-known that the pair \((\langle J\rangle,J)\) is a Coxeter system (cf. [3, Ch. IV, SS1 Theoreme 2]). We call \(J\subseteq S\)_spherical_ if \(\langle J\rangle\) is finite. Given a spherical subset \(J\) of \(S\), there exists a unique element of maximal length in \(\langle J\rangle\), which we denote by \(r_{J}\) (cf. [1, Corollary 2.19]); moreover, \(r_{J}\) is an involution.
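To fix ideas, here is a standard small example (ours, not from the source): in the Coxeter system of type \(A_{2}\),

\[W=\langle s,t\mid s^{2}=t^{2}=(st)^{3}=1_{W}\rangle,\qquad r_{\{s,t\}}=sts=tst,\qquad\ell(r_{\{s,t\}})=3,\]

so \(J=\{s,t\}\) is spherical with \(|\langle J\rangle|=6\), and the element of maximal length \(r_{J}\) is visibly an involution.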
**(2.1) Convention**.: From now on we let \((W,S)\) be a Coxeter system of finite rank.
**(2.2) Lemma**.: _Let \((W,S)\) be a Coxeter system and let \(w\in W,s,t\in S\) be such that \(\ell(sw)=\ell(w)-1=\ell(wt)\). Then either \(\ell(swt)=\ell(w)-2\) or \(swt=w\)._
Proof.: We put \(w^{\prime}:=sw\). Then \(\ell(sw^{\prime})=\ell(w)=\ell(w^{\prime})+1\). We assume that \(\ell(swt)\neq\ell(w)-2\). Then \(\ell(swt)=\ell(w)\) and hence \(\ell(w^{\prime}t)=\ell(swt)=\ell(w)=\ell(w^{\prime})+1\). Using Condition (**F**) of [1] on page 79 we obtain either \(\ell(sw^{\prime}t)=\ell(w^{\prime})+2\) or \(sw^{\prime}t=w^{\prime}\). Since \(\ell(sw^{\prime}t)=\ell(wt)=\ell(w)-1=\ell(w^{\prime})\) we have \(wt=sw^{\prime}t=w^{\prime}=sw\) and the claim follows.
### Buildings
A _building of type_\((W,S)\) is a pair \(\Delta=(\mathcal{C},\delta)\) where \(\mathcal{C}\) is a non-empty set and where \(\delta:\mathcal{C}\times\mathcal{C}\to W\) is a _distance function_ satisfying the following axioms, where \(x,y\in\mathcal{C}\) and \(w=\delta(x,y)\):
1. \(w=1_{W}\) if and only if \(x=y\);
2. if \(z\in\mathcal{C}\) satisfies \(s:=\delta(y,z)\in S\), then \(\delta(x,z)\in\{w,ws\}\), and if, furthermore, \(\ell(ws)=\ell(w)+1\), then \(\delta(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}\) such that \(\delta(y,z)=s\) and \(\delta(x,z)=ws\).
The _rank_ of \(\Delta\) is the rank of the underlying Coxeter system. The elements of \(\mathcal{C}\) are called _chambers_. Given \(s\in S\) and \(x,y\in\mathcal{C}\), then \(x\) is called _\(s\)-adjacent_ to \(y\), if \(\delta(x,y)=\langle s\rangle\). The chambers \(x,y\) are called _adjacent_, if they are \(s\)-adjacent for some \(s\in S\). A _gallery_ joining \(x\) and \(y\) is a sequence \((x=x_{0},\ldots,x_{k}=y)\) such that \(x_{l-1}\) and \(x_{l}\) are adjacent for any \(1\leq l\leq k\); the number \(k\) is called the _length_ of the gallery. For any two chambers \(x\) and \(y\) we put \(\ell_{\Delta}(x,y):=\ell(\delta(x,y))\).
Given a subset \(J\subseteq S\) and \(x\in\mathcal{C}\), the _\(J\)-residue_ of \(x\) is the set \(R_{J}(x):=\{y\in\mathcal{C}\mid\delta(x,y)\in\langle J\rangle\}\). Each \(J\)-residue is a building of type \((\langle J\rangle,J)\) with the distance function induced by \(\delta\) (cf. [1, Corollary 5.30]). A _residue_ is a subset \(R\) of \(\mathcal{C}\) such that there exist \(J\subseteq S\) and \(x\in\mathcal{C}\) with \(R=R_{J}(x)\). It is a basic fact that the subset \(J\) is uniquely determined by the set \(R\); it is called the _type_ of \(R\) and the _rank_ of \(R\) is defined to be the cardinality of \(J\). A residue is called _spherical_ if its type is a spherical subset of \(S\). Let \(R\) be a spherical residue of type \(J\) and let \(x,y\in R\). Then \(x,y\) are called _opposite in \(R\)_ if \(\delta(x,y)=r_{J}\). If \(R=\mathcal{C}\), we say for short that \(x,y\) are _opposite_. If \((W,S)\) is spherical we call two residues \(R_{1}\) of type \(J_{1}\) and \(R_{2}\) of type \(J_{2}\)_opposite_ if \(R_{1}\) contains a chamber opposite to a chamber of \(R_{2}\) and if \(J_{1}=r_{S}J_{2}r_{S}\).
A _panel_ is a residue of rank \(1\). An _\(s\)-panel_ is a panel of type \(\{s\}\) for some \(s\in S\). The building \(\Delta\) is called _thick_, if each panel of \(\Delta\) contains at least three chambers.
Given \(x\in\mathcal{C}\) and \(k\in\mathbb{N}_{0}\), \(E_{k}(x)\) denotes the union of all residues of rank at most \(k\) containing \(x\). It is a fact that the set \(E_{k}(x)\) determines the chamber \(x\) uniquely if \(k<|S|\).
For \(x\in\mathcal{C}\) and any \(J\)-residue \(R\subseteq\mathcal{C}\) there exists a unique chamber \(z\in R\) such that \(\ell_{\Delta}(x,y)=\ell_{\Delta}(x,z)+\ell_{\Delta}(z,y)\) holds for any \(y\in R\) (cf. [1, Proposition 5.34]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_ and is denoted by \(\operatorname{proj}_{R}x\). Moreover, if \(z=\operatorname{proj}_{R}x\) we have \(\delta(x,y)=\delta(x,z)\delta(z,y)\) for each \(y\in R\).
Let \(R,Q\) be two residues. Then we define the mapping \(\operatorname{proj}_{Q}^{R}:R\to Q,x\mapsto\operatorname{proj}_{Q}x\) and we put \(\operatorname{proj}_{Q}R:=\{\operatorname{proj}_{Q}r\mid r\in R\}\). The residues \(R,Q\) are called _parallel_ if \(\operatorname{proj}_{R}Q=R\) and \(\operatorname{proj}_{Q}R=Q\).
**(2.3) Lemma**.: _Two residues \(R,Q\) are parallel if and only if \(\operatorname{proj}_{Q}^{R}\) and \(\operatorname{proj}_{R}^{Q}\) are bijections inverse to each other._
Proof.: One implication is obvious; the other is [6, Proposition 21.10].
**(2.4) Lemma**.: _Let \(P_{1}\) and \(P_{2}\) be two parallel panels of type \(s_{1}\) and \(s_{2}\), respectively. Then \(s_{2}=w^{-1}s_{1}w\), where \(w:=\delta(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\) in \(P_{1}\)._
_Conversely, if \(x\) and \(y\) are chambers with \(\delta(x,y)=w\), where \(w\) satisfies \(s_{2}=w^{-1}s_{1}w\) and \(\ell(s_{1}w)=\ell(w)+1\), then the \(s_{1}\)-panel on \(x\) is parallel to the \(s_{2}\)-panel on \(y\)._
Proof.: This is [4, Lemma 14].
Let \(P_{1}\) and \(P_{2}\) be two parallel panels. Then, by the previous lemma, \(\delta(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\in P_{1}\). Thus we define \(\delta(P_{1},P_{2}):=\delta(x,\operatorname{proj}_{P_{2}}x)\), where \(x\) is a chamber in \(P_{1}\).
**(2.5) Lemma**.: _Let \(R\) be a spherical \(J\)-residue and let \(R_{1},R_{2}\) be two residues in \(R\), which are opposite in \(R\). Then \(R_{1}\) and \(R_{2}\) are parallel._
Proof.: This is a consequence of [6, Proposition 21.24].
**(2.6) Lemma**.: _Let \(R\) be a rank \(2\) residue and let \(P,Q\) be two parallel panels contained in \(R\). Then either \(P=Q\) or \(R\) is spherical and \(P\) and \(Q\) are opposite in \(R\). In particular, if \(P\neq Q\) and \(J\) is the type of \(R\), we have \(\ell(\delta(P,Q))=\ell(r_{J})-1\)._
Proof.: This is [4, Lemma 18].
A subset \(\Sigma\subseteq\mathcal{C}\) is called _convex_ if \(\operatorname{proj}_{P}c\in\Sigma\) for every \(c\in\Sigma\) and every panel \(P\subseteq\mathcal{C}\) which meets \(\Sigma\). A subset \(\Sigma\subseteq\mathcal{C}\) is called _thin_ if \(P\cap\Sigma\) contains exactly two chambers for every panel \(P\subseteq\mathcal{C}\) which meets \(\Sigma\). An _apartment_ is a non-empty subset \(\Sigma\subseteq\mathcal{C}\), which is convex and thin.
## 3 Twin buildings
Let \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{-})\) be two buildings of the same type \((W,S)\). A _codistance_ (or a _twinning_) between \(\Delta_{+}\) and \(\Delta_{-}\) is a mapping \(\delta_{*}:(\mathcal{C}_{+}\times\mathcal{C}_{-})\cup(\mathcal{C}_{-}\times \mathcal{C}_{+})\to W\) satisfying the following axioms, where \(\varepsilon\in\{+,-\},x\in\mathcal{C}_{\varepsilon},y\in\mathcal{C}_{-\varepsilon}\) and \(w=\delta_{*}(x,y)\):
1. \(\delta_{*}(y,x)=w^{-1}\);
2. if \(z\in\mathcal{C}_{-\varepsilon}\) is such that \(s:=\delta_{-\varepsilon}(y,z)\in S\) and \(\ell(ws)=\ell(w)-1\), then \(\delta_{*}(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}_{-\varepsilon}\) such that \(\delta_{-\varepsilon}(y,z)=s\) and \(\delta_{*}(x,z)=ws\).
A _twin building of type_\((W,S)\) is a triple \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) where \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{ -})\) are buildings of type \((W,S)\) and where \(\delta_{*}\) is a twinning between \(\Delta_{+}\) and \(\Delta_{-}\).
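A minimal (thin, hence not thick) example, standard and stated here only for illustration: for any Coxeter system \((W,S)\) one may take \(\mathcal{C}_{+}=\mathcal{C}_{-}=W\) with \(\delta_{\pm}(x,y)=x^{-1}y\) and \(\delta_{*}(x,y)=x^{-1}y\). Then

\[\delta_{*}(y,x)=y^{-1}x=\left(\delta_{*}(x,y)\right)^{-1}\quad\text{and}\quad\delta_{*}(x,ys)=\delta_{*}(x,y)s,\]

which verifies (Tw1) and yields (Tw2) and (Tw3) with \(z=ys\); in this twinning \(\delta_{*}(x,y)=1_{W}\) if and only if \(x=y\).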
**(3.1) Lemma**.: _Given \(\varepsilon\in\{+,-\},x\in\mathcal{C}_{\varepsilon},y\in\mathcal{C}_{-\varepsilon}\) and let \(w=\delta_{*}(x,y)\). Then for any \(y^{\prime}\in\mathcal{C}_{-\varepsilon}\) with \(\delta_{-\varepsilon}(y,y^{\prime})=s\in S\) we have \(\delta_{*}(x,y^{\prime})\in\{w,ws\}\)._
Proof.: This follows similarly to [1, Lemma 5.139].
We put \(\mathcal{C}:=\mathcal{C}_{+}\cup\mathcal{C}_{-}\) and define the distance function \(\delta:\mathcal{C}\times\mathcal{C}\to W\) by setting \(\delta(x,y):=\delta_{+}(x,y)\) (resp. \(\delta_{-}(x,y),\delta_{*}(x,y)\)) if \(x,y\in\mathcal{C}_{+}\) (resp. \(x,y\in\mathcal{C}_{-},(x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\) for some \(\varepsilon\in\{+,-\}\)).
Given \(x,y\in\mathcal{C}\) then we put \(\ell(x,y):=\ell(\delta(x,y))\). If \(\varepsilon\in\{+,-\}\) and \(x,y\in\mathcal{C}_{\varepsilon}\), then we put \(\ell_{\varepsilon}(x,y):=\ell(\delta_{\varepsilon}(x,y))\) and for \((x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\) we put \(\ell_{*}(x,y):=\ell(\delta_{*}(x,y))\).
Let \(\varepsilon\in\{+,-\}\). For \(x\in\mathcal{C}_{\varepsilon}\) we put \(x^{\operatorname{op}}:=\{y\in\mathcal{C}_{-\varepsilon}\mid\delta_{*}(x,y)=1_ {W}\}\). It is a direct consequence of (Tw1) that \(y\in x^{\operatorname{op}}\) if and only if \(x\in y^{\operatorname{op}}\) for any pair \((x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\). If \(y\in x^{\operatorname{op}}\) then we say that \(y\) is _opposite_ to \(x\) or that \((x,y)\) _is a pair of opposite chambers._
A _residue_ (resp. _panel_) of \(\Delta\) is a residue (resp. panel) of \(\Delta_{+}\) or \(\Delta_{-}\); given a residue \(R\subseteq\mathcal{C}\) then we define its type and rank as before. Two residues \(R,T\subseteq\mathcal{C}\) in different halves are called _opposite_ if they have the same type and if there exists a pair of opposite chambers \((x,y)\) such that \(x\in R,y\in T\). The twin building \(\Delta\) is called _thick_ if \(\Delta_{+}\) and \(\Delta_{-}\) are thick.
Let \(\varepsilon\in\{+,-\}\), let \(J\) be a spherical subset of \(S\) and let \(R\) be a \(J\)-residue of \(\Delta_{\varepsilon}\). Given a chamber \(x\in\mathcal{C}_{-\varepsilon}\) then there exists a unique chamber \(z\in R\) such that \(\ell_{*}(x,y)=\ell_{*}(x,z)-\ell_{\varepsilon}(z,y)\) holds for any chamber \(y\in R\) (cf. [1, Lemma 5.149]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_ and is denoted by \(\operatorname{proj}_{R}x\). Moreover, if \(z=\operatorname{proj}_{R}x\) we have \(\delta_{*}(x,y)=\delta_{*}(x,z)\delta_{\varepsilon}(z,y)\) for each \(y\in R\).
**(3.2) Lemma**.: _Let \(R_{1}\subseteq R_{2}\) be two spherical residues of \(\Delta\) and let \(x\in\mathcal{C}\). Then \(\operatorname{proj}_{R_{1}}x=\operatorname{proj}_{R_{1}}\operatorname{proj}_ {R_{2}}x\)._
Proof.: Let \(r\in R_{1}\). Following [5, Proposition 2] we compute the following, where we take '\(+\)' if \(x,R_{1},R_{2}\) are in the same half, and '\(-\)' if \(x\) and \(R_{1},R_{2}\) are in different halves:
\[\ell(x,r) =\ell(x,\operatorname{proj}_{R_{2}}x)\pm\ell(\operatorname{proj}_ {R_{2}}x,r)\] \[=\ell(x,\operatorname{proj}_{R_{2}}x)\pm\left(\ell(\operatorname{ proj}_{R_{2}}x,\operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x)+\ell( \operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x,r)\right)\] \[=\ell(x,\operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x) \pm\ell(\operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x,r)\]
Since this holds for any \(r\in R_{1}\), the uniqueness of \(\operatorname{proj}_{R_{1}}x\) yields the claim.
**(3.3) Lemma**.: _Let \(\varepsilon\in\{+,-\}\) and let \(R\subseteq\mathcal{C}_{\varepsilon}\) and \(T\subseteq\mathcal{C}_{-\varepsilon}\) be two opposite residues of spherical type \(J\subseteq S\). Then for any pair \((x,y)\in R\times T\) the following are equivalent:_
1. \(\operatorname{proj}_{T}x=y\)_;_
2. \(\delta_{*}(x,y)=r_{J}\)_;_
3. \(\operatorname{proj}_{R}y=x\)_._
Proof.: This is [2, Lemma 3.4].
Let \(\varepsilon\in\{+,-\}\) and let \(R\subseteq\mathcal{C}_{\varepsilon},T\subseteq\mathcal{C}_{-\varepsilon}\) be spherical residues. Then we define the mapping \(\operatorname{proj}_{T}^{R}:R\to T,x\mapsto\operatorname{proj}_{T}x\) and we put \(\operatorname{proj}_{T}R:=\{\operatorname{proj}_{T}r\mid r\in R\}\). The residues \(R\) and \(T\) are called _parallel_ if \(\operatorname{proj}_{R}T=R\) and \(\operatorname{proj}_{T}R=T\).
**(3.4) Lemma**.: _Let \(\varepsilon\in\{+,-\},R\subseteq\mathcal{C}_{\varepsilon},T\subseteq\mathcal{ C}_{-\varepsilon}\) be two spherical residues._
1. \(\operatorname{proj}_{T}R\) _is a spherical residue in_ \(T\)_._
2. \(R\) _and_ \(T\) _are parallel if and only if_ \(\operatorname{proj}_{T}^{R}\) _and_ \(\operatorname{proj}_{R}^{T}\) _are bijections inverse to each other._
3. \(\operatorname{proj}_{R}T\) _and_ \(\operatorname{proj}_{T}R\) _are parallel._
4. _If_ \(R\) _and_ \(T\) _are opposite then they are parallel._
Proof.: Assertion \((a)\) is [1, Exercise 5.168]. Let \(x\in\operatorname{proj}_{R}T\). Then there exists \(y\in T\) such that \(x=\operatorname{proj}_{R}y\). Note that \(\ell_{*}(y,x)=\ell_{*}(\operatorname{proj}_{T}x,x)-\ell_{-\varepsilon}(y, \operatorname{proj}_{T}x)\). Since \(\ell_{*}(c,d)\geq\ell_{*}(c,e)-\ell_{-\varepsilon}(e,d)\) for any \(c\in\mathcal{C}_{\varepsilon}\) and \(d,e\in\mathcal{C}_{-\varepsilon}\) the following hold:
\[\ell_{*}(y,x)-\ell_{\varepsilon}(x,\operatorname{proj}_{R} \operatorname{proj}_{T}x) =\ell_{*}(y,\operatorname{proj}_{R}\operatorname{proj}_{T}x)\] \[\geq\ell_{*}(\operatorname{proj}_{T}x,\operatorname{proj}_{R} \operatorname{proj}_{T}x)-\ell_{-\varepsilon}(y,\operatorname{proj}_{T}x)\] \[\geq\ell_{*}(\operatorname{proj}_{T}x,x)-\ell_{-\varepsilon}(y, \operatorname{proj}_{T}x)\] \[=\ell_{*}(y,x).\]
This implies \(\operatorname{proj}_{R}\operatorname{proj}_{T}x=x\) and the restriction of the projection mappings are bijections inverse to each other.
One implication in Assertion \((b)\) is obvious. For the other we note that \(\operatorname{proj}_{R}T=R\) and \(\operatorname{proj}_{T}R=T\) and we proved that the restriction of the projection mappings are bijections inverse to each other. Assertion \((c)\) is now a consequence of Assertion \((b)\) and Assertion \((d)\) is [9, Proposition (4.3)].
**(3.5) Lemma**.: _Let \(\varepsilon\in\{+,-\},P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{ C}_{-\varepsilon}\) be two panels. Then \(P,Q\) are parallel if and only if \(|\operatorname{proj}_{P}Q|\geq 2\)._
Proof.: We follow the ideas of [4, Lemma 13]. If \(P,Q\) are parallel, the claim follows. Therefore let \(|\operatorname{proj}_{P}Q|\geq 2\). Since \(\operatorname{proj}_{P}Q\) is a residue contained in \(P\) by Lemma (3.4)\((a)\), we have \(\operatorname{proj}_{P}Q=P\). By Lemma (3.4)\((c)\) the residues \(\operatorname{proj}_{Q}P\) and \(\operatorname{proj}_{P}Q=P\) are parallel. Thus we have \(|\operatorname{proj}_{Q}P|=|P|\geq 2\). Using the same argument we obtain \(\operatorname{proj}_{Q}P=Q\) and the panels \(P\) and \(Q\) are parallel.
**(3.6) Lemma**.: _Let \(\varepsilon\in\{+,-\}\), let \(P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{C}_{-\varepsilon}\) be two parallel panels and let \(R\) be a spherical residue containing \(Q\). Then \(\operatorname{proj}_{R}P\) is a panel parallel to both \(P\) and \(Q\)._
Proof.: For a proof see [4, Lemma 17]. We note that the facts which are used in [4] for buildings are proved in this paper for twin buildings.
Let \(\Sigma_{+}\subseteq\mathcal{C}_{+}\) and \(\Sigma_{-}\subseteq\mathcal{C}_{-}\) be apartments of \(\Delta_{+}\) and \(\Delta_{-}\), respectively. Then the set \(\Sigma:=\Sigma_{+}\cup\Sigma_{-}\) is called _twin apartment_ if \(|x^{\operatorname{op}}\cap\Sigma|=1\) for each \(x\in\Sigma\). If \((x,y)\) is a pair of opposite chambers, then there exists a unique twin apartment \(A(x,y)=\{z\in\mathcal{C}\ |\ \delta(x,z)=\delta(y,z)\}=\{z\in\mathcal{C}\ |\ \ell(z,x)=\ell(z,y)\}\) containing \(x\) and \(y\) (cf. [1, Exercise 5.187, Proposition 5.179(1)]). For \(\varepsilon\in\{+,-\}\) we put \(A_{\varepsilon}(x,y):=A(x,y)\cap\mathcal{C}_{\varepsilon}\). Furthermore, for any two chambers there exists a twin apartment containing them (cf. [1, Proposition 5.179(3)]).
**(3.7) Lemma**.: _Let \((x,y)\) be a pair of opposite chambers and let \(P\subseteq\mathcal{C}\) be a panel which meets \(A(x,y)\). Then \(A(x,y)\cap P=\{\operatorname{proj}_{P}x\neq\operatorname{proj}_{P}y\}\)._
Proof.: We have \(\operatorname{proj}_{P}x,\operatorname{proj}_{P}y\in A(x,y)\cap P\) (cf. [1, Lemma 5.173 (6)]). Since \(A_{\varepsilon}(x,y)\) is thin for each \(\varepsilon\in\{+,-\}\), we obtain \(|A(x,y)\cap P|=2\). We assume that \(\operatorname{proj}_{P}x=\operatorname{proj}_{P}y\). Then there exists a chamber \(\operatorname{proj}_{P}x\neq z\in A(x,y)\cap P\). We can assume that \(y\) is in the same half of the twin building as the panel \(P\). This implies
\[\ell(y,z)=\ell(x,z)=\ell(x,\operatorname{proj}_{P}x)-\ell(\operatorname{proj} _{P}x,z)=\ell(y,\operatorname{proj}_{P}y)-1=\ell(y,z)-2.\]
This yields a contradiction and the claim follows.
**(3.8) Lemma**.: _Let \(\varepsilon\in\{+,-\},s\in S\) and let \(\Delta\) be a thick twin building. Then for all \(x,y\in\mathcal{C}_{-\varepsilon}\) the following properties are equivalent:_
(i) \(\delta_{-\varepsilon}(x,y)\in\langle s\rangle\)_;_
(ii) \(\forall z\in x^{\operatorname{op}}\exists z^{\prime}\in y^{\operatorname{op}}:\delta_{\varepsilon}(z,z^{\prime})=s\)_._
Proof.: If \(x=y\) the claim follows because of the thickness of the twin building. Let \(\delta_{-\varepsilon}(x,y)=s\) and \(z\in x^{\operatorname{op}}\). Then \(\delta_{*}(z,y)\in\{1_{W},s\}\) by Lemma (3.1). If \(\delta_{*}(z,y)=1_{W}\), the claim follows because of the thickness again. Let \(\delta_{*}(z,y)=s\). Then \(\delta_{*}(y,z)=s\) and we obtain \(z^{\prime}\in\mathcal{C}_{\varepsilon}\) such that \(\delta_{\varepsilon}(z,z^{\prime})=s\) and \(\delta_{*}(y,z^{\prime})=1_{W}\) by (Tw3). Therefore we have \((ii)\).
Now we assume \((ii)\). There exists a twin apartment \(\Sigma\) containing \(x\) and \(y\). Let \(w=\delta_{-\varepsilon}(y,x)\) and let \(z\in\Sigma\) be the unique chamber which is opposite to \(x\). This implies \(\Sigma=A(x,z)\) and we obtain \(\delta_{*}(y,z)=\delta_{-\varepsilon}(y,x)=w\). By \((ii)\) there exists a chamber \(z^{\prime}\in y^{\operatorname{op}}\) such that \(\delta_{\varepsilon}(z,z^{\prime})=s\). Then \(1_{W}=\delta_{*}(y,z^{\prime})\in\{w,ws\}\) by Lemma (3.1). The claim follows.
## 4 Wall-connected twin buildings
### Compatible paths
Let \(\Delta=(\mathcal{C},\delta)\) be a thick building of type \((W,S)\) and let \(\Gamma\) be the graph whose vertices are the panels of \(\Delta\) and in which two panels form an edge if and only if there exists a rank \(2\) residue in which the two panels are opposite. For two adjacent panels \(P,Q\), there exists a unique rank \(2\) residue containing \(P\) and \(Q\), that will be denoted by \(R(P,Q)\). A path \(\gamma=(P_{0},\ldots,P_{k})\) in \(\Gamma\) is called _compatible_ if \(\operatorname{proj}_{R(P_{i-1},P_{i})}P_{0}=P_{i-1}\) for all \(1\leq i\leq k\). The number \(k\) is the _length_ of that path \(\gamma\). The sequence \((J_{1},\ldots,J_{k})\) where \(J_{i}\) is the type of \(R(P_{i-1},P_{i})\) will be called the _type_ of \(\gamma\).
We obtain \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i-1}}x\) for all \(x\in P_{0}\) and \(1\leq i\leq k\), since \(\operatorname{proj}_{R(P_{i-1},P_{i})}x\in P_{i-1}\). Furthermore, we obtain \(\operatorname{proj}_{P_{i}}x=\operatorname{proj}_{P_{i}}\operatorname{proj}_ {P_{i-1}}x\) for any \(1\leq i\leq k\) by Lemma (3.2).
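To illustrate the definition, we note that any edge of \(\Gamma\) is already a compatible path: if \(P,Q\) are adjacent panels, then \(\operatorname{proj}_{R(P,Q)}x=x\) for each \(x\in P\subseteq R(P,Q)\) and hence \(\operatorname{proj}_{R(P,Q)}P=P\), so \((P,Q)\) is a compatible path of length \(1\). The compatibility condition is therefore a genuine restriction only for paths of length at least \(2\).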
**(4.1) Lemma**.: _Two panels are parallel if and only if there exists a compatible path from one to the other._
Proof.: This is [4, Lemma 19].
**(4.2) Proposition**.: _Let \((P_{0},\ldots,P_{k})\) be a compatible path. Then the following hold:_
(a) \(\operatorname{proj}_{P_{k}}^{P_{0}}=\operatorname{proj}_{P_{k}}^{P_{i}}\circ\operatorname{proj}_{P_{i}}^{P_{0}}\) _for any_ \(0\leq i\leq k\)_;_
(b) \(\delta(P_{0},P_{k})=\delta(P_{0},P_{i})\delta(P_{i},P_{k})\) _for any_ \(0\leq i\leq k\)_;_
(c) \(\ell(\delta(P_{0},P_{k}))=\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k}))\) _for any_ \(0\leq i\leq k\)_;_
(d) \((P_{k},\ldots,P_{0})\) _is a compatible path._
Proof.: At first we prove Assertion \((c)\) by induction on \(k\). For \(k=0\) there is nothing to show. Thus we let \(k>0\) and \(x\in P_{0}\). Then \(\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x)=\ell_{\Delta}(x,\operatorname{proj}_{R(P_{k-1},P_{k})}x)+\ell_{\Delta}(\operatorname{proj}_{R(P_{k-1},P_{k})}x,\operatorname{proj}_{P_{k}}x)\). Moreover, we have \(\operatorname{proj}_{R(P_{k-1},P_{k})}x=\operatorname{proj}_{P_{k-1}}x\) and \(\operatorname{proj}_{P_{k}}x=\operatorname{proj}_{P_{k}}\operatorname{proj}_{P_{k-1}}x\). This implies
\[\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x)=\ell_{\Delta}(x,\operatorname{proj}_{P_{k-1}}x)+\ell(\delta(P_{k-1},P_{k}))=\ell(\delta(P_{0},P_{k-1}))+\ell(\delta(P_{k-1},P_{k})).\]
Using induction, we have \(\ell(\delta(P_{0},P_{k-1}))=\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k-1}))\) for any \(1\leq i\leq k-1\). We deduce that \(\ell(\delta(P_{0},P_{k}))\geq\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k}))\). In particular, we obtain
\[\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x)\geq\ell_{\Delta}(x,\operatorname{proj}_{P_{i}}x)+\ell_{\Delta}(\operatorname{proj}_{P_{i}}x,\operatorname{proj}_{P_{k}}\operatorname{proj}_{P_{i}}x)\geq\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x).\]
This finishes the proof of Assertion \((c)\). Now Assertion \((b)\) is a direct consequence of Assertion \((c)\) and Assertion \((a)\) follows from Assertion \((c)\) and the uniqueness of the projection chamber. Note that \((a)\) also implies that \(P_{i}\) and \(P_{k}\) are parallel. For Assertion \((d)\) we use Lemma (2.3) and the equation of the projection mappings of Assertion \((a)\) and compute the following for each \(0\leq i\leq j\leq k\):
\[\operatorname{proj}_{P_{i}}^{P_{k}}=\operatorname{proj}_{P_{i}}^{P_{0}}\circ \operatorname{proj}_{P_{0}}^{P_{k}}=\left(\operatorname{proj}_{P_{i}}^{P_{j}} \circ\operatorname{proj}_{P_{j}}^{P_{0}}\right)\circ\operatorname{proj}_{P_{0} }^{P_{k}}=\operatorname{proj}_{P_{i}}^{P_{j}}\circ\operatorname{proj}_{P_{j}}^ {P_{k}}\,.\]
We have to show that \(\operatorname{proj}_{R(P_{i-1},P_{i})}P_{k}=P_{i}\). For that we show \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i}}x\) for each \(x\in P_{k}\). Let \(x\in P_{k}\) and let \(r_{i}:=\operatorname{proj}_{R(P_{i-1},P_{i})}x,p_{i}:=\operatorname{proj}_{P_{ i}}x\) and \(p_{i-1}:=\operatorname{proj}_{P_{i-1}}x\).
Using Assertion \((c)\) and the fact that \(\operatorname{proj}_{P_{i-1}}p_{i}=p_{i-1}\) we have
\[\begin{aligned}\ell_{\Delta}(x,p_{i-1})&=\ell(\delta(P_{i-1},P_{k}))\\ &=\ell(\delta(P_{0},P_{k}))-\ell(\delta(P_{0},P_{i-1}))\\ &=\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k}))-\ell(\delta(P_{0},P_{i-1}))\\ &=\ell(\delta(P_{k},P_{i}))+\ell(\delta(P_{i},P_{i-1}))\\ &=\ell_{\Delta}(x,p_{i})+\ell_{\Delta}(p_{i},\operatorname{proj}_{P_{i-1}}p_{i})\\ &=\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(r_{i},p_{i})+\ell_{\Delta}(p_{i},p_{i-1}).\end{aligned}\]
Since \(\ell_{\Delta}(r_{i},p_{i-1})\leq\ell(r_{J_{i}})-1=\ell(\delta(P_{i-1},P_{i}))= \ell(\delta(P_{i},P_{i-1}))=\ell_{\Delta}(p_{i},p_{i-1})\), where \(J_{i}\) is the type of the residue \(R(P_{i-1},P_{i})\), we obtain
\[\begin{aligned}\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(r_{i},p_{i})+\ell_{\Delta}(p_{i},p_{i-1})&=\ell_{\Delta}(x,p_{i-1})\\ &=\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(r_{i},p_{i-1})\\ &\leq\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(p_{i},p_{i-1}).\end{aligned}\]
This yields \(r_{i}=p_{i}\). Since \(P_{i},P_{k}\) are parallel, we obtain \(\operatorname{proj}_{R(P_{i-1},P_{i})}P_{k}=\{\operatorname{proj}_{R(P_{i-1}, P_{i})}x\mid x\in P_{k}\}=P_{i}\) and the claim follows.
**(4.3) Lemma**.: _Let \(s\in S\) and let \(w\in W\) be such that \(w^{-1}sw\in S\) and \(\ell(sw)=\ell(w)+1\). Let \(P,P^{\prime}\) be \(s\)-panels and \(Q,Q^{\prime}\) be \(w^{-1}sw\)-panels such that \(\delta(P,Q)=w=\delta(P^{\prime},Q^{\prime})\). If \((J_{1},\ldots,J_{k})\) is the type of a compatible path from \(P\) to \(Q\), then there exists a compatible path from \(P^{\prime}\) to \(Q^{\prime}\) of the same length and type._
Proof.: This is [4, Lemma 26].
### \(P\)-compatible paths
Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a thick twin building of type \((W,S)\), let \(\varepsilon\in\{+,-\}\) and let \(P\subseteq\mathcal{C}_{-\varepsilon},P_{0},\ldots,P_{k}\subseteq\mathcal{C}_{\varepsilon}\) be panels such that \((P_{0},\ldots,P_{k})\) is a compatible path. Then we call this path \(P\)_-compatible_ if \(P_{0}\) is opposite to \(P\), and if \(\operatorname{proj}_{R(P_{i-1},P_{i})}P=P_{i}\) for all \(1\leq i\leq k\). We obtain \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i}}x\) for all \(x\in P\) and \(1\leq i\leq k\), since \(\operatorname{proj}_{R(P_{i-1},P_{i})}x\in P_{i}\). Furthermore, we obtain \(\operatorname{proj}_{P_{i-1}}x=\operatorname{proj}_{P_{i-1}}\operatorname{proj} _{P_{i}}x\) for any \(1\leq i\leq k\) by Lemma (3.2).
**(4.4) Proposition**.: _Let \(\varepsilon\in\{+,-\}\) and \(P\subseteq\mathcal{C}_{-\varepsilon},P_{0},\ldots,P_{k}\subseteq\mathcal{C}_{\varepsilon}\) be panels such that \((P_{0},\ldots,P_{k})\) is a \(P\)-compatible path. Then the following hold for all \(0\leq i\leq k\):_
(a) \(\operatorname{proj}_{P_{0}}^{P}=\operatorname{proj}_{P_{0}}^{P_{i}}\circ\operatorname{proj}_{P_{i}}^{P}\)_;_
(b) \(\operatorname{proj}_{P}^{P_{0}}=\operatorname{proj}_{P}^{P_{i}}\circ\operatorname{proj}_{P_{i}}^{P_{0}}\)_;_
(c) \(\ell_{*}(x,\operatorname{proj}_{P_{i}}x)=\ell(\delta(P_{0},P_{i}))+1\) _for each_ \(x\in P\)_._
Proof.: It suffices to show the claim only for \(i=k\), since \((P_{0},\ldots,P_{i})\) is a \(P\)-compatible path. For \(k=0\) there is nothing to show. For \(x\in P\) we have \(\operatorname{proj}_{P_{k-1}}\operatorname{proj}_{P_{k}}x=\operatorname{proj}_{ P_{k-1}}x\). Using induction and Proposition (4.2)\((a)\) and \((d)\) we obtain
\[\operatorname{proj}_{P_{0}}x=\operatorname{proj}_{P_{0}}\operatorname{proj}_{P_{ k-1}}x=\operatorname{proj}_{P_{0}}\operatorname{proj}_{P_{k-1}}\operatorname{proj}_{P_{k}}x= \operatorname{proj}_{P_{0}}\operatorname{proj}_{P_{k}}x.\]
This proves Assertion \((a)\). Using Lemma (2.3) and Lemma (3.4)\((b)\) and \((d)\), the panels \(P\) and \(P_{i}\) are parallel. Using again Lemma (2.3) and Lemma (3.4)\((b)\) and \((d)\), Assertion \((b)\) is a consequence of Assertion \((a)\). For Assertion \((c)\) it also suffices to show the claim for \(i=k\).
Let \(x\in P\). Since \(\operatorname{proj}_{P_{k-1}}\operatorname{proj}_{P_{k}}x=\operatorname{proj}_{P_{k-1}}x\), we infer \(\ell_{*}(x,\operatorname{proj}_{P_{k-1}}x)=\ell_{*}(x,\operatorname{proj}_{P_{k}}x)-\ell_{\varepsilon}(\operatorname{proj}_{P_{k}}x,\operatorname{proj}_{P_{k-1}}x)=\ell_{*}(x,\operatorname{proj}_{P_{k}}x)-\ell(\delta(P_{k-1},P_{k}))\). Using Proposition (4.2)\((c)\) and induction we obtain \(\ell_{*}(x,\operatorname{proj}_{P_{k}}x)=\ell_{*}(x,\operatorname{proj}_{P_{k-1}}x)+\ell(\delta(P_{k-1},P_{k}))=\ell(\delta(P_{0},P_{k}))+1\) and the claim follows.
**(4.5) Lemma**.: _Let \(\varepsilon\in\{+,-\},P_{1}\subseteq\mathcal{C}_{\varepsilon},P_{2}\subseteq \mathcal{C}_{-\varepsilon}\) be two parallel panels. Let \(s_{i}\) be the type of \(P_{i}\). Then \(s_{2}=w^{-1}s_{1}w\), where \(w:=\delta_{*}(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\) in \(P_{1}\)._
_Conversely, if \(x\) and \(y\) are chambers with \(\delta_{*}(x,y)=w\), where \(w\) satisfies \(s_{2}=w^{-1}s_{1}w\) and \(\ell(s_{1}w)=\ell(w)-1\), then the \(s_{1}\)-panel of \(x\) is parallel to the \(s_{2}\)-panel of \(y\)._
Proof.: We follow the ideas of [4, Lemma 14]. Let \(x_{1}\neq y_{1}\in P_{1}\), let \(x_{2}:=\operatorname{proj}_{P_{2}}x_{1},y_{2}:=\operatorname{proj}_{P_{2}}y_{1}\) and let \(w=\delta_{*}(x_{1},x_{2}),w^{\prime}=\delta_{*}(y_{1},y_{2})\). Then we have \(\delta_{*}(x_{2},y_{1})=w^{-1}s_{1}\) and \(\delta_{*}(y_{1},x_{2})=w^{\prime}s_{2}\). In particular, \(s_{1}w=w^{\prime}s_{2}\) and hence \(w^{\prime}=s_{1}ws_{2}\). Since \(\ell(w^{\prime}s_{2})=\ell(w^{\prime})-1\), we have \(\ell(s_{1}ws_{2})=\ell(w^{\prime})=\ell(w^{\prime}s_{2})+1=\ell(s_{1}w)+1=\ell(w)\). Since \(\ell(ws_{2})=\ell(w)-1\), the claim follows from Lemma (2.2). For the converse let \(P_{1}\) be the \(s_{1}\)-panel of \(x\) and let \(P_{2}\) be the \(s_{2}\)-panel of \(y\). Then \(\operatorname{proj}_{P_{2}}x=y\), since \(\ell(ws_{2})=\ell(s_{1}w)=\ell(w)-1\). Choose \(\operatorname{proj}_{P_{2}}x\neq p\in P_{2}\). Then \(\delta_{*}(x,p)=ws_{2}\). By (Tw3) there exists \(z\in P_{1}\) such that \(\delta_{*}(z,p)=s_{1}ws_{2}=w\). Since \(\ell(s_{1}ws_{2})=\ell(w)=\ell(ws_{2})+1\) we have \(z=\operatorname{proj}_{P_{1}}p\). It follows from \(\ell(s_{1}w)=\ell(w)-1\) that \(x=\operatorname{proj}_{P_{1}}\operatorname{proj}_{P_{2}}x\). Since \(\delta_{*}(x,p)\neq\delta_{*}(z,p)\), we deduce that \(x\) and \(\operatorname{proj}_{P_{1}}p\) are different. The claim follows now from Lemma (3.5).
Let \(P_{1}\) and \(P_{2}\) be two parallel panels in different halves. By the previous lemma \(\delta_{*}(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\in P_{1}\). Therefore we define \(\delta(P_{1},P_{2}):=\delta_{*}(x,\operatorname{proj}_{P_{2}}x)\), where \(x\) is a chamber in \(P_{1}\).
**(4.6) Theorem**.: _Let \(\varepsilon\in\{+,-\},P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{ C}_{-\varepsilon}\) be two panels. Then \(P,Q\) are parallel if and only if there exists a \(P\)-compatible path \((Q_{0},\dots,Q_{k}=Q)\)._
Proof.: Let \((Q_{0},\dots,Q_{k}=Q)\) be a \(P\)-compatible path. Using Proposition (4.4)(\(a\)) we have \(\operatorname{proj}_{Q_{0}}^{P}=\operatorname{proj}_{Q_{0}}^{Q}\circ \operatorname{proj}_{Q}^{P}\). By Lemma (3.4)(\(d\)) we have \(\operatorname{proj}_{Q_{0}}P=Q_{0}\). Thus \(|\operatorname{proj}_{Q}P|\geq 2\) and Lemma (3.5) finishes the claim.
Now we assume that \(P\) and \(Q\) are parallel. We show the claim via induction on the distance \(\ell:=\ell(\delta(P,Q))\). If \(\ell=1\) then \((Q)\) is a \(P\)-compatible path and we are done. Now we assume that \(\ell>1\). Let \(x\in P\). Then there exists a chamber \(e\in\mathcal{C}_{-\varepsilon}\) which is adjacent to a chamber in \(Q\) and satisfies \(\ell_{*}(x,\operatorname{proj}_{Q}x)-2=\ell_{*}(x,e)\). Let \(R\) be the unique rank \(2\) residue containing \(Q\) and \(e\). By Lemma (3.6) the panels \(Q\) and \(\operatorname{proj}_{R}P\) are parallel. Since \(Q\) and \(\operatorname{proj}_{R}P\) are not opposite in \(R\) we have \(\operatorname{proj}_{R}P=Q\) by Lemma (2.6). Note also that \(\operatorname{proj}_{R}x=\operatorname{proj}_{Q}x\). Let \(w=\delta_{*}(x,\operatorname{proj}_{Q}x)\). Let \(q\in\mathcal{C}_{-\varepsilon}\) such that \(\delta_{-\varepsilon}(\operatorname{proj}_{Q}x,q)=w^{-1}\). Then \(\delta_{*}(x,q)=1_{W}\) by [1, Lemma 5.140(2)]. Let \(s\) be the type of \(P\) and let \(t\) be the type of \(Q\). Then \(\ell(wt)=\ell(w)-1\). In particular, \(\delta_{-\varepsilon}(q,\operatorname{proj}_{Q}q)=wt\). Note that for \(v:=tw^{-1}\) we have \(\ell(tv)=\ell(w^{-1})=\ell(w)=\ell(wt)+1=\ell(v)+1\). Since \(P,Q\) are parallel, we have \(s=wtw^{-1}\) by Lemma (4.5). This implies \(v^{-1}tv=wt\cdot t\cdot tw^{-1}=wtw^{-1}=s\). Using Lemma (2.4) we obtain that the \(s\)-panel of \(q\) and the \(t\)-panel of \(\operatorname{proj}_{Q}q\) (i.e. \(Q\)) are parallel. Let \(Q_{0}\) be the \(s\)-panel of \(q\). By [4, Lemma 17] \(\operatorname{proj}_{R}Q_{0}\) is parallel to both \(Q_{0}\) and \(Q\). Since \(\ell(wr_{J})=\ell(w)-\ell(r_{J})\), where \(J\) is the type of \(R\), we have \(\operatorname{proj}_{R}Q_{0}\neq Q\). Using induction, there exists a \(P\)-compatible path \((Q_{0},\dots,Q_{k}=\operatorname{proj}_{R}Q_{0})\). Clearly, \((Q_{0},\dots,Q_{k},Q)\) is a compatible path. The fact that \(\operatorname{proj}_{R(Q_{k},Q)}P=\operatorname{proj}_{R}P=Q\) finishes the claim.
**(4.7) Theorem**.: _Let \(\varepsilon\in\{+,-\},s\in S\) and let \(P,Q\subseteq\mathcal{C}_{\varepsilon}\) be two parallel panels and let \(c_{-}\in\mathcal{C}_{-\varepsilon}\) such that \(\mathcal{P}_{s}(c_{-})\) and \(P\) are opposite and such that \(\ell(\delta(P,Q))+1=\ell_{*}(c_{-},\operatorname{proj}_{Q}c_{-})\). Then every compatible path \((P_{0}=P,\dots,P_{k}=Q)\) is a \(\mathcal{P}_{s}(c_{-})\)-compatible path._
Proof.: At first we show \(\ell_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})=\ell(\delta(P,P_{i}))+1\). We have \(\ell_{*}(c_{-},\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-})\geq\ell_{*}(c_{-},\operatorname{proj}_{Q}c_{-})-\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-})=1\) by hypothesis. Since the panels \(\mathcal{P}_{s}(c_{-}),P\) are opposite, we obtain \(\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-}=\operatorname{proj}_{P}c_{-}\) by Lemma (3.3). Let \(c_{+}\in P\backslash\{\operatorname{proj}_{P}c_{-}\}\). Then \(c_{+}\in c_{-}^{\operatorname{op}}\). Since \(c_{+}\neq\operatorname{proj}_{P}c_{-}=\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-}\), we have \(\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},c_{+})=\ell(\delta(P,Q))+1\). This yields \(\operatorname{proj}_{Q}c_{-}\in A(c_{+},c_{-})\), since \(\ell_{\varepsilon}(c_{+},\operatorname{proj}_{Q}c_{-})=\ell_{*}(c_{-},\operatorname{proj}_{Q}c_{-})\). Now we will show that \(A(c_{+},c_{-})\cap P_{i}\neq\emptyset\). Let \(z_{i}:=\operatorname{proj}_{P_{i}}\operatorname{proj}_{Q}c_{-}\). Then by Proposition (4.2)\((a),(c),(d)\) the following hold:
\[\begin{aligned}\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},c_{+})&=\ell(\delta(P,Q))+1\\ &=\ell(\delta(P,P_{i}))+\ell(\delta(P_{i},Q))+1\\ &=\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},z_{i})+\ell_{\varepsilon}(z_{i},\operatorname{proj}_{P}z_{i})+\ell_{\varepsilon}(\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-},c_{+})\\ &=\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},z_{i})+\ell_{\varepsilon}(z_{i},c_{+}).\end{aligned}\]
This yields that \(z_{i}\) lies on a minimal gallery between \(\operatorname{proj}_{Q}c_{-}\) and \(c_{+}\). The definition of convexity implies that any element on a minimal gallery between two chambers of a convex set is contained in the convex set. Since \(A_{\varepsilon}(c_{+},c_{-})\) is convex, we infer \(z_{i}\in A(c_{+},c_{-})\cap P_{i}\neq\emptyset\). By Lemma (3.7) we obtain \(A(c_{+},c_{-})\cap P_{i}=\{\operatorname{proj}_{P_{i}}c_{+},\operatorname{proj} _{P_{i}}c_{-}\}\). We put \(c_{i}:=\operatorname{proj}_{P_{i}}c_{-}\). Then the following hold for every \(0\leq i\leq k\):
\[\ell_{*}(c_{-},c_{i})=\ell_{\varepsilon}(c_{+},c_{i})=\ell_{\varepsilon}(c_{+}, \operatorname{proj}_{P_{i}}c_{+})+\ell_{\varepsilon}(\operatorname{proj}_{P_{ i}}c_{+},c_{i})=\ell(\delta(P,P_{i}))+1.\]
Now we want to show that the panels \(P_{i}\) and \(\mathcal{P}_{s}(c_{-})\) are parallel. Let \(P_{i}\) be a \(t\)-panel and let \(w^{\prime}=\delta(P,P_{i})\). Then we have \(t=w^{\prime-1}sw^{\prime}\) by Lemma (2.4). Let \(w=\delta_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})\) and let \(\operatorname{proj}_{P}c_{-}\neq c_{+}\in P\). Note that \(\operatorname{proj}_{P}c_{-}=\operatorname{proj}_{P}\operatorname{proj}_{P_{i}}c_{-}\) for any \(0\leq i\leq k\) as above. Then \(w=\delta_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})=\delta_{\varepsilon}(c_{+},\operatorname{proj}_{P_{i}}c_{-})=w^{\prime}t=sw^{\prime}\) by the definition of \(A(c_{+},c_{-})\). We have \(w^{-1}sw=(sw^{\prime})^{-1}s(sw^{\prime})=t\), \(\ell(sw)=\ell(w^{\prime})=\ell(sw^{\prime})-1=\ell(w)-1\) and hence Lemma (4.5) implies that the panels \(\mathcal{P}_{s}(c_{-})\) and \(P_{i}\) are parallel. In particular, we have \(\ell_{*}(x,\operatorname{proj}_{P_{i}}x)=\ell_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})\) by Lemma (4.5).
We now show the claim. Since \(\mathcal{P}_{s}(c_{-})\) and \(P_{i}\) are parallel, it suffices to show that \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i}}x\) for all \(1\leq i\leq k\) and \(x\in\mathcal{P}_{s}(c_{-})\). Let \(1\leq i\leq k\), let \(\operatorname{proj}_{P_{i-1}}x\neq y\in P_{i-1}\) and let \(J_{i}\subseteq S\) be the type of \(R(P_{i-1},P_{i})\). Let \(r_{i}:=\operatorname{proj}_{R(P_{i-1},P_{i})}x,p_{i-1}:=\operatorname{proj}_{P_{ i-1}}x\) and \(p_{i}:=\operatorname{proj}_{P_{i}}x\). Then we obtain:
\[\ell_{*}(x,r_{i})-\ell(r_{J_{i}})\leq\ell_{*}(x,y)=\ell_{*}(x,p_{i-1})-\ell_{ \varepsilon}(p_{i-1},y)=\ell_{*}(x,r_{i})-\ell_{\varepsilon}(r_{i},p_{i-1})-1\]
Therefore, we have \(\ell_{\varepsilon}(r_{i},p_{i-1})\leq\ell(r_{J_{i}})-1=\ell(\delta(P_{i-1},P_{ i}))\). Using Proposition (4.2)\((c)\), we deduce:
\[\begin{aligned}\ell_{*}(x,r_{i})-\ell_{\varepsilon}(r_{i},p_{i})&=\ell_{*}(x,p_{i})\\ &=\ell(\delta(P,P_{i}))+1\\ &=\ell(\delta(P,P_{i-1}))+\ell(\delta(P_{i-1},P_{i}))+1\\ &=\ell_{*}(x,p_{i-1})+\ell(\delta(P_{i-1},P_{i}))\\ &=\ell_{*}(x,r_{i})-\ell_{\varepsilon}(r_{i},p_{i-1})+\ell(\delta(P_{i-1},P_{i}))\\ &\geq\ell_{*}(x,r_{i}).\end{aligned}\]
This implies \(\ell_{\varepsilon}(r_{i},p_{i})=0\) and the claim follows.
**(4.8) Definition**.: Let \(\varepsilon\in\{+,-\},s\in S,c\in\mathcal{C}_{-\varepsilon}\). Two panels \(Q_{1},Q_{2}\subseteq\mathcal{C}_{\varepsilon}\) are called _wall-adjacent of type \((c,s)\)_ if both panels \(Q_{1},Q_{2}\) are opposite to \(\mathcal{P}_{s}(c)\) and if there exist a panel
\(T\subseteq\mathcal{C}_{\varepsilon}\) and \(\mathcal{P}_{s}(c)\)-compatible paths (\(P_{0}=Q_{1},\ldots,P_{k}=T\)) and (\(P_{0}^{\prime}=Q_{2},\ldots,P_{k^{\prime}}^{\prime}=T\)) of the same length and type.
For any \(\varepsilon\in\{+,-\}\) and any pair \((c,s)\in\mathcal{C}_{\varepsilon}\times S\) we define a graph \(\Gamma_{s}(c)\) with vertex set \(\{Q\subseteq\mathcal{C}_{-\varepsilon}\mid\mathcal{P}_{s}(c),Q\text{ are opposite panels}\}\) and two vertices \(Q_{1},Q_{2}\) are joined by an edge if \(Q_{1},Q_{2}\) are wall-adjacent of type \((c,s)\). The twin building \(\Delta\) is called _wall-connected_ if the graph \(\Gamma_{s}(c)\) is connected for each pair \((c,s)\in\mathcal{C}\times S\). We refer to Lemma (6.7) for the motivation of the terminology _wall-connected_.
## 5 Isometries
### Definition and basic facts
**(5.1) Convention**.: From now on all buildings (and hence all twin buildings) are assumed to be thick.
Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*}),\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be two twin buildings of type \((W,S)\). We define \(\mathcal{C}^{\prime},\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime},\ell^{\prime}\) as in the case of \(\Delta\). Let \(\mathcal{X}\subseteq\mathcal{C},\mathcal{X}^{\prime}\subseteq\mathcal{C}^{\prime}\). A mapping \(\varphi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) is called _isometry_ if the following conditions are satisfied:
1. The mapping \(\varphi\) is bijective.
2. For \(\varepsilon\in\{+,-\}\) we have \(\varphi(\mathcal{X}\cap\mathcal{C}_{\varepsilon})\subseteq\mathcal{C}^{\prime}_ {\varepsilon}\).
3. If \(x,y\in\mathcal{X}\) then \(\delta^{\prime}(\varphi(x),\varphi(y))=\delta(x,y)\).
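Recall that \(\delta\) denotes the common extension of the distance and codistance functions, i.e. \(\delta(x,y)=\delta_{\varepsilon}(x,y)\) if \(x,y\in\mathcal{C}_{\varepsilon}\) and \(\delta(x,y)=\delta_{*}(x,y)\) if \(x\) and \(y\) lie in different halves; condition (3) is to be read with this convention, so an isometry preserves distances and codistances simultaneously.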
It is easy to see that \(\varphi^{-1}\) is also an isometry. Given \(\mathcal{X}\subseteq\mathcal{C},\mathcal{X}^{\prime}\subseteq\mathcal{C}^{\prime}\), an isometry \(\varphi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) and \((y,y^{\prime})\in\mathcal{C}\times\mathcal{C}^{\prime}\), the pair \((y,y^{\prime})\) will be called \(\varphi\)_-admissible_ if the mapping \(y\mapsto y^{\prime}\) extends \(\varphi\) to an isometry from \(\mathcal{X}\cup\{y\}\) onto \(\mathcal{X}^{\prime}\cup\{y^{\prime}\}\). Let \((x,x^{\prime})\) be a \(\varphi\)-admissible pair. Then \((x^{\prime},x)\) is \(\varphi^{-1}\)-admissible. In particular, \((x,\varphi(x))\) is \(\varphi\)-admissible for any \(x\in\mathcal{X}\). Let \(\mathcal{Y}\subseteq\mathcal{C},\mathcal{Y}^{\prime}\subseteq\mathcal{C}^{\prime}\) and \(\psi:\mathcal{Y}\rightarrow\mathcal{Y}^{\prime}\) be another isometry. Then the pair \((\varphi,\psi)\) will be called _admissible_, if there exists an isometry from \(\mathcal{X}\cup\mathcal{Y}\) onto \(\mathcal{X}^{\prime}\cup\mathcal{Y}^{\prime}\) such that \(\varphi\) and \(\psi\) are restrictions of that isometry.
**(5.2) Lemma**.: _Let \(\varepsilon\in\{+,-\}\) and \(\varphi:\mathcal{C}_{\varepsilon}\rightarrow\mathcal{C}^{\prime}_{\varepsilon}\) be a bijection. If \(\delta_{\varepsilon}(x,y)=\delta^{\prime}_{\varepsilon}(\varphi(x),\varphi(y))\) for any \(x,y\in\mathcal{C}_{\varepsilon}\) with \(\delta_{\varepsilon}(x,y)\in S\), then \(\varphi\) is an isometry._
Proof.: This is [1, Lemma 5.61].
**(5.3) Lemma**.: _Let \(\mathcal{X},\mathcal{Y}\subseteq\mathcal{C},\mathcal{X}^{\prime},\mathcal{Y} ^{\prime}\subseteq\mathcal{C}^{\prime}\) be such that \(\mathcal{X}\cap\mathcal{Y}=\emptyset\) and \(\mathcal{X}^{\prime}\cap\mathcal{Y}^{\prime}=\emptyset\). Let \(\varphi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) and \(\psi:\mathcal{Y}\rightarrow\mathcal{Y}^{\prime}\) be two isometries such that \((z,\psi(z))\) is \(\varphi\)-admissible for any \(z\in\mathcal{Y}\). Then \((\varphi,\psi)\) is admissible._
Proof.: This is a consequence of [2, Lemma 4.1].
**(5.4) Lemma**.: _Let \(J\) be a spherical subset of \(S\), let \(R\subseteq\mathcal{C},R^{\prime}\subseteq\mathcal{C}^{\prime}\) be \(J\)-residues, let \(\varphi:R\to R^{\prime}\) be an isometry, and let \((x,x^{\prime})\) be a \(\varphi\)-admissible pair. Then \(\varphi(\operatorname{proj}_{R}x)=\operatorname{proj}_{R^{\prime}}x^{\prime}\)._
Proof.: This is [9, Lemma (4.4)].
**(5.5) Lemma**.: _Let \(R_{+},R_{-}\subseteq\mathcal{C}\) be spherical and parallel residues in \(\Delta\) and \(R^{\prime}_{+},R^{\prime}_{-}\subseteq\mathcal{C}^{\prime}\) be spherical and parallel residues in \(\Delta^{\prime}\), let \(\varphi:R_{+}\cup R_{-}\to R^{\prime}_{+}\cup R^{\prime}_{-}\) be an isometry such that \(\varphi(R_{+})=R^{\prime}_{+}\) and \(\varphi(R_{-})=R^{\prime}_{-}\). Then \(\varphi(x)=\operatorname{proj}_{R^{\prime}_{\varepsilon}}\varphi(\operatorname{ proj}_{R_{-\varepsilon}}x)\) for each \(x\in R_{\varepsilon}\) for each \(\varepsilon\in\{+,-\}\)._
Proof.: This is a consequence of the previous lemma, Lemma (2.3) and Lemma (3.4)\((b)\)
**(5.6) Lemma**.: _Let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry, let \((x,x^{\prime})\in\mathcal{C}_{-}\times\mathcal{C}^{\prime}_{-}\) such that \(\varphi_{+}(x^{\mathrm{op}})\subseteq(x^{\prime})^{\mathrm{op}}\). Then \((x,x^{\prime})\) is a \(\varphi_{+}\)-admissible pair._
Proof.: This is [9, Lemma (7.4)].
**(5.7) Lemma**.: _Let \(x_{-}\in\mathcal{C}_{-}\), \(x^{\prime}_{-},y^{\prime}_{-}\in\mathcal{C}^{\prime}_{-}\) and let \(\varphi:\mathcal{C}_{+}\cup\{x_{-}\}\to\mathcal{C}^{\prime}_{+}\cup\{x^{ \prime}_{-}\},\psi:\mathcal{C}_{+}\cup\{x_{-}\}\to\mathcal{C}^{\prime}_{+}\cup \{y^{\prime}_{-}\}\) be two isometries such that \(\varphi(z)=\psi(z)\) for any \(z\in x_{-}^{\mathrm{op}}\). Then \(x^{\prime}_{-}=y^{\prime}_{-}\) and \(\varphi=\psi\)._
Proof.: This is [7, Lemma 4.10].
**(5.8) Theorem**.: _Let \((c_{+},c_{-})\) be a pair of opposite chambers in \(\Delta\). The only isometry of \(\Delta\) fixing \(E_{1}(c_{+})\cup\{c_{-}\}\) is the identity._
Proof.: This is [9, Theorem (3.2)].
**(5.9) Theorem**.: _Let \(\Delta,\Delta^{\prime}\) be \(2\)-spherical and of rank at least three. Let \((c_{+},c_{-}),(c^{\prime}_{+},c^{\prime}_{-})\) be two pairs of opposite chambers in \(\Delta\) and \(\Delta^{\prime}\), respectively, and let \(\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c^{\prime}_{+})\cup\{c^{\prime}_{-}\}\) be an isometry. Then \(\varphi\) extends to an isometry from \(\mathcal{C}_{+}\cup\{c_{-}\}\) onto \(\mathcal{C}^{\prime}_{+}\cup\{c^{\prime}_{-}\}\)._
Proof.: This is a consequence of [2, Theorem 6.5].
### Isometries and wall-connected twin buildings
Let \(\Delta,\Delta^{\prime}\) be two twin buildings of type \((W,S)\). Let \((c_{+},c_{-})\in\mathcal{C}_{+}\times\mathcal{C}_{-},(c^{\prime}_{+},c^{ \prime}_{-})\in\mathcal{C}^{\prime}_{+}\times\mathcal{C}^{\prime}_{-}\) be pairs of opposite chambers and let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry such that \(\varphi_{+}(c_{+})=c^{\prime}_{+}\). Furthermore let \((c_{-},c^{\prime}_{-})\) be \(\varphi_{+}\)-admissible.
**(5.10) Lemma**.: _Let \((P_{0},\ldots,P_{k})\) be a \(\mathcal{P}_{s}(c_{-})\)-compatible path. Then \((\varphi_{+}(P_{0}),\ldots,\varphi_{+}(P_{k}))\) is a \(\mathcal{P}_{s}(c^{\prime}_{-})\)-compatible path._
Proof.: Clearly, \((\varphi_{+}(P_{0}),\ldots,\varphi_{+}(P_{k}))\) is a compatible path by Lemma (5.4). Now \(\varphi_{+}(P_{0})\) and \(\mathcal{P}_{s}(c^{\prime}_{-})\) are opposite, and since \((c_{-},c^{\prime}_{-})\) is \(\varphi_{+}\)-admissible, Proposition (4.4)\((c)\) and Lemma (5.4) yield \(\ell(c^{\prime}_{-},\operatorname{proj}_{\varphi_{+}(P_{k})}c^{\prime}_{-})=\ell(c_{-},\operatorname{proj}_{P_{k}}c_{-})=\ell(\delta(P_{0},P_{k}))+1=\ell(\delta(\varphi_{+}(P_{0}),\varphi_{+}(P_{k})))+1\). Now the claim follows from Theorem (4.7).
**(5.11) Lemma**.: _Let \(\Delta,\Delta^{\prime}\) be \(2\)-spherical. Then \(\Gamma_{s}(c_{-})\) is connected if and only if \(\Gamma_{s}(c^{\prime}_{-})\) is connected._
Proof.: It suffices to show that if \(Q_{1},Q_{2}\) are wall-adjacent of type \((c_{-},s)\), then \(\varphi_{+}(Q_{1}),\varphi_{+}(Q_{2})\) are wall-adjacent of type \((c^{\prime}_{-},s)\), too. Let \(Q_{1},Q_{2}\) be wall-adjacent of type \((c_{-},s)\). By definition \(Q_{1},Q_{2}\in\mathcal{P}_{s}(c_{-})^{\mathrm{op}}\) and there exist a panel \(T\subseteq\mathcal{C}_{+}\) and \(\mathcal{P}_{s}(c_{-})\)-compatible paths (\(P_{0}=Q_{1},\ldots,P_{k}=T\)) and (\(P_{0}^{\prime}=Q_{2},\ldots,P_{k}^{\prime}=T\)) of the same length and type. Lemma (5.10) implies that \(\varphi_{+}\) maps a \(\mathcal{P}_{s}(c_{-})\)-compatible path to a \(\mathcal{P}_{s}(c^{\prime}_{-})\)-compatible path. Thus \(\varphi_{+}(Q_{1}),\varphi_{+}(Q_{2})\) are wall-adjacent of type \((c^{\prime}_{-},s)\) and the claim follows.
**(5.12) Definition**.: For \(x\in c_{-}^{\mathrm{op}}\) and \(s\in S\) we define the mapping
\[\varphi_{s}^{x}:\mathcal{P}_{s}(c_{-})\to\mathcal{P}_{s}(c^{\prime}_{-}),d\mapsto \left(\mathrm{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\mathcal{P}_{s}(\varphi_ {+}(x))}\circ\varphi_{+}\circ\mathrm{proj}_{\mathcal{P}_{s}(x)}^{\mathcal{P}_ {s}(c_{-})}\right)(d).\]
Since \(\mathcal{P}_{s}(c_{-}),\mathcal{P}_{s}(x)\) are parallel by Lemma (3.4)\((d)\) for each \(x\in c_{-}^{\mathrm{op}}\) and \(\varphi_{+}\) is an isometry it follows again by Lemma (3.4)\((d)\) that \(\varphi_{s}^{x}\) is a bijection and hence an isometry. In particular, \(\varphi_{s}^{x}(c_{-})=c^{\prime}_{-}\).
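In other words, \(\varphi_{s}^{x}\) transports a chamber \(d\in\mathcal{P}_{s}(c_{-})\) to the parallel panel \(\mathcal{P}_{s}(x)\subseteq\mathcal{C}_{+}\) via the projection, applies \(\varphi_{+}\) there, and transports the image back to \(\mathcal{P}_{s}(c^{\prime}_{-})\) via the projection from \(\mathcal{P}_{s}(\varphi_{+}(x))\). A priori this mapping depends on the choice of \(x\in c_{-}^{\operatorname{op}}\); the next proposition shows that wall-adjacency removes this dependence.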
**(5.13) Proposition**.: _Let \(s\in S,x,z\in c_{-}^{\mathrm{op}}\) and let \(P=\mathcal{P}_{s}(x),Q=\mathcal{P}_{s}(z)\) be wall-adjacent of type \((c_{-},s)\). Then we have \(\varphi_{s}^{x}=\varphi_{s}^{z}\)._
Proof.: By definition there exist a panel \(T\subseteq\mathcal{C}_{+}\) and \(\mathcal{P}_{s}(c_{-})\)-compatible paths \((P_{0}=P,\ldots,P_{k}=T)\) and \((Q_{0}=Q,\ldots,Q_{k}=T)\) of the same length and type. By Lemma (5.10) \((\varphi_{+}(P_{0}),\ldots,\varphi_{+}(P_{k}))\) and \((\varphi_{+}(Q_{0}),\ldots,\varphi_{+}(Q_{k}))\) are \(\mathcal{P}_{s}(c^{\prime}_{-})\)-compatible paths. Let \(Z\in\{P,Q\}\). By Proposition (4.4)\((a),(b)\) we obtain
\[\begin{aligned}\operatorname{proj}_{Z}^{\mathcal{P}_{s}(c_{-})}&=\operatorname{proj}_{Z}^{T}\circ\operatorname{proj}_{T}^{\mathcal{P}_{s}(c_{-})},\\ \operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_{+}(Z)}&=\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_{+}(T)}\circ\operatorname{proj}_{\varphi_{+}(T)}^{\varphi_{+}(Z)}.\end{aligned}\]
By Lemma (5.5) we obtain \(\operatorname{proj}_{\varphi_{+}(T)}^{\varphi_{+}(Z)}\circ\varphi_{+}\circ \operatorname{proj}_{Z}^{T}=\varphi_{+}|_{T}\), since the panels \(Z\) and \(T\) are parallel by Lemma (4.1). We have
\[\begin{aligned}\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_{+}(Z)}\circ\varphi_{+}\circ\operatorname{proj}_{Z}^{\mathcal{P}_{s}(c_{-})}&=\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_{+}(T)}\circ\operatorname{proj}_{\varphi_{+}(T)}^{\varphi_{+}(Z)}\circ\varphi_{+}\circ\operatorname{proj}_{Z}^{T}\circ\operatorname{proj}_{T}^{\mathcal{P}_{s}(c_{-})}\\ &=\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_{+}(T)}\circ\varphi_{+}\circ\operatorname{proj}_{T}^{\mathcal{P}_{s}(c_{-})}.\end{aligned}\]
This finishes the claim.
**(5.14) Corollary**.: _Let \(x,z\in c_{-}^{\operatorname{op}}\) and \(s\in S\) such that \(\Gamma_{s}(c_{-})\) is connected. Then \(\varphi_{s}^{x}=\varphi_{s}^{z}\)._
Proof.: This follows by induction on the length of a path in \(\Gamma_{s}(c_{-})\) and Proposition (5.13).
### Extending isometries of wall-connected twin buildings
Let \(\Delta,\Delta^{\prime}\) be two twin buildings of type \((W,S)\). Let \((c_{+},c_{-})\in\mathcal{C}_{+}\times\mathcal{C}_{-},(c^{\prime}_{+},c^{\prime }_{-})\in\mathcal{C}^{\prime}_{+}\times\mathcal{C}^{\prime}_{-}\) be pairs of opposite chambers and let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry such that \(\varphi_{+}(c_{+})=c^{\prime}_{+}\). Furthermore let \((c_{-},c^{\prime}_{-})\) be \(\varphi_{+}\)-admissible. Assume that \(\Delta\) is wall-connected. By Corollary (5.14) we have \(\varphi_{s}^{x}=\varphi_{s}^{z}\) for any \(x,z\in c_{-}^{\operatorname{op}}\) and \(s\in S\). We denote this mapping by \(\varphi_{s}\).
**(5.15) Lemma**.: _For each \(s\in S\) the pair \((\varphi_{+},\varphi_{s})\) is admissible._
Proof.: Let \(d_{-}\in\mathcal{P}_{s}(c_{-})\). We will show that \(\varphi_{+}(d_{-}^{\operatorname{op}})\subseteq\varphi_{s}(d_{-})^{\operatorname{op}}\). Let \(y\in d_{-}^{\operatorname{op}}\). By Lemma (3.8) there exists \(x\in c_{-}^{\operatorname{op}}\) such that \(\delta_{+}(y,x)=s\). Since \(y\in d_{-}^{\operatorname{op}}\) we have \(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-}\neq y\). This implies \(s=\delta_{+}(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-},y)=\delta^{\prime}_{+}(\varphi_{+}(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-}),\varphi_{+}(y))\). Since \(\mathcal{P}_{s}(c^{\prime}_{-})\) and \(\mathcal{P}_{s}(\varphi_{+}(x))\) are opposite, Lemma (3.3) and the definition of \(\varphi_{s}\) yield \(\delta^{\prime}_{*}(\varphi_{s}(d_{-}),\varphi_{+}(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-}))=s\). By (Tw2) we have \(\delta^{\prime}_{*}(\varphi_{s}(d_{-}),\varphi_{+}(y))=1_{W}\). By Lemma (5.6) the pair \((d_{-},\varphi_{s}(d_{-}))\) is \(\varphi_{+}\)-admissible for all \(d_{-}\in\mathcal{P}_{s}(c_{-})\). The claim follows now by Lemma (5.3).
**(5.16) Lemma**.: _The isometry \(\varphi_{+}\) extends uniquely to an isometry \(\varphi:\mathcal{C}_{+}\cup E_{1}(c_{-})\to\mathcal{C}^{\prime}_{+}\cup E_{1}( c^{\prime}_{-})\). In particular, for every chamber \(x\in\mathcal{C}_{-}\) there exists a unique chamber \(x^{\prime}\in\mathcal{C}^{\prime}_{-}\) such that \((x,x^{\prime})\) is \(\varphi_{+}\)-admissible._
Proof.: Let \(s,t\in S\). Then \(\varphi_{s}(c_{-})=c^{\prime}_{-}=\varphi_{t}(c_{-})\). Therefore the mapping \(\varphi_{-}:E_{1}(c_{-})\to E_{1}(c^{\prime}_{-})\) given by \(x\mapsto\varphi_{s}(x)\) for \(x\in\mathcal{P}_{s}(c_{-})\) is well-defined. Moreover, \(\varphi_{-}\) is bijective and for all \(x,y\in E_{1}(c_{-})\) with \(\delta_{-}(x,y)\in S\) we have \(\delta_{-}(x,y)=\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))\). Let \(x,y\in E_{1}(c_{-})\) with \(\delta_{-}(x,y)\notin S\). Then there exists \(s\neq t\in S\) with \(x\in\mathcal{P}_{s}(c_{-})\) and \(y\in\mathcal{P}_{t}(c_{-})\) and we have \(\delta_{-}(x,y)=\delta_{-}(x,c_{-})\delta_{-}(c_{-},y)=st\). We deduce \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(c_{-}))=s\) and \(\delta^{\prime}_{-}(\varphi_{-}(c_{-}),\varphi_{-}(y))=t\) and hence \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))=st\). Thus \(\varphi_{-}\) is an isometry and we obtain that \((x,\varphi_{-}(x))\) is \(\varphi_{+}\)-admissible for all \(x\in E_{1}(c_{-})\) by Lemma (5.15). By Lemma (5.3) the pair \((\varphi_{+},\varphi_{-})\) is admissible and hence the mapping \(\varphi_{-}\) extends \(\varphi_{+}\) to an isometry from \(\mathcal{C}_{+}\cup E_{1}(c_{-})\) to \(\mathcal{C}^{\prime}_{+}\cup E_{1}(c^{\prime}_{-})\).
The uniqueness of \(x^{\prime}\) follows from Lemma (5.7). The existence follows by the first assertion of the lemma and induction on \(\ell_{-}(c_{-},x)\).
**(5.17) Theorem**.: _Let \(x\in\mathcal{C}_{-}\) and let \(x^{\prime}\in\mathcal{C}^{\prime}_{-}\) be the unique chamber such that \((x,x^{\prime})\) is \(\varphi_{+}\)-admissible. Then \(\varphi_{-}:\mathcal{C}_{-}\to\mathcal{C}^{\prime}_{-},x\mapsto x^{\prime}\) is an isometry._
Proof.: Let \((y,x^{\prime})\) be \(\varphi_{+}\)-admissible. Then \((x^{\prime},y)\) and \((x^{\prime},x)\) are \(\varphi_{+}^{-1}\)-admissible. The uniqueness part of the previous lemma implies \(x=y\) and hence \(\varphi_{-}\) is injective. Let \(y^{\prime}\in\mathcal{C}^{\prime}_{-}\) and let \(y\in\mathcal{C}_{-}\) be the unique chamber such that \((y^{\prime},y)\) is \(\varphi_{+}^{-1}\)-admissible. Then \((y,y^{\prime})\) is \(\varphi_{+}\)-admissible and hence \(\varphi_{-}\) is surjective. Thus \(\varphi_{-}\) is a bijection. Now we will show that \(\varphi_{-}\) preserves \(s\)-adjacency. Let \(x,y\in\mathcal{C}_{-},s\in S\) such that \(\delta_{-}(x,y)=s\). Note that \(\varphi_{+}\) is an isometry and that \((x,\varphi_{-}(x)),(y,\varphi_{-}(y))\) are \(\varphi_{+}\)-admissible. Let \(z\in\varphi_{-}(x)^{\mathrm{op}}\). Then \(\varphi_{+}^{-1}(z)\in x^{\mathrm{op}}\) and Lemma (3.8) yields \(z^{\prime}\in y^{\mathrm{op}}\) such that \(\delta^{\prime}_{+}(z,\varphi_{+}(z^{\prime}))=\delta_{+}(\varphi_{+}^{-1}(z),z^{\prime})=s\). Again, Lemma (3.8) implies \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))\in\langle s\rangle\). Since \(\varphi_{-}\) is injective, we have \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))=s\). Now Lemma (5.2) finishes the claim.
**(5.18) Corollary**.: _Let \(\Delta\) be \(2\)-spherical, thick, wall-connected and of rank at least three. Then any isometry \(\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c^{\prime}_{+})\cup\{c^{\prime}_{-}\}\) extends uniquely to an isometry from \(\mathcal{C}_{+}\cup\mathcal{C}_{-}\) onto \(\mathcal{C}^{\prime}_{+}\cup\mathcal{C}^{\prime}_{-}\)._
Proof.: By Theorem (5.9), Theorem (5.17) and Lemma (5.3) we obtain an isometry \(\Phi:\mathcal{C}\to\mathcal{C}^{\prime}\) such that \(\Phi|_{E_{2}(c_{+})\cup\{c_{-}\}}=\varphi\). The uniqueness follows from Theorem (5.8).
## 6 Wall-connected twin buildings
### Chamber systems
Let \(I\) be a set. A _chamber system_ over \(I\) is a pair \(\mathbf{C}=(\mathcal{C},(\sim_{i})_{i\in I})\) where \(\mathcal{C}\) is a non-empty set whose elements are called _chambers_ and where \(\sim_{i}\) is an equivalence relation on the set of chambers for each \(i\in I\). Given \(i\in I\) and \(c,d\in\mathcal{C}\), then \(c\) is called _\(i\)-adjacent_ to \(d\) if \(c\sim_{i}d\). The chambers \(c,d\) are called _adjacent_ if they are \(i\)-adjacent for some \(i\in I\).
A _gallery_ in \(\mathbf{C}\) is a sequence \((c_{0},\ldots,c_{k})\) such that \(c_{\mu}\in\mathcal{C}\) for all \(0\leq\mu\leq k\) and such that \(c_{\mu-1}\) is adjacent to \(c_{\mu}\) for all \(1\leq\mu\leq k\). The chamber system \(\mathbf{C}\) is said to be _connected_, if for any two chambers \(c,d\) there exists a gallery \((c_{0}=c,\ldots,c_{k}=d)\). For a subset \(\mathcal{E}\subseteq\mathcal{C}\) the restriction \((\mathcal{E},(\sim_{i}|_{\mathcal{E}\times\mathcal{E}})_{i\in I})\) is again a chamber system over \(I\).
Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\). Then we define the chamber system \(\mathbf{C}(\Delta)\) as follows: The set of chambers is given by the set of chambers \(\mathcal{C}\) of \(\Delta\) and two chambers \(x,y\) are defined to be \(s\)-adjacent if \(\delta(x,y)\in\langle s\rangle\).
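For instance, for the thin building of type \((W,S)\) (with chamber set \(W\) and \(\delta(x,y):=x^{-1}y\)) two chambers \(x,y\) are \(s\)-adjacent if and only if \(y\in\{x,xs\}\), so the associated chamber system is essentially the Cayley graph of \((W,S)\).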
### \(3\)-spherical twin buildings
Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a twin building of type \((W,S)\) and let \(\varepsilon\in\{+,-\}\). For each pair \((c,k)\in\mathcal{C}_{\varepsilon}\times\mathbb{N}_{0}\) we put \(c^{\mathrm{op}(k)}:=\{d\in\mathcal{C}_{-\varepsilon}\mid\ell_{*}(c,d)\leq k\}\). We remark that \(c^{\mathrm{op}}=c^{\mathrm{op}(0)}\) for any \(c\in\mathcal{C}\). We say that the twin building \(\Delta\) satisfies Condition \(\left(\mathrm{co}\right)_{k}\), if for any \(\varepsilon\in\{+,-\}\) and any chamber \(c\in\mathcal{C}_{\varepsilon}\) the chamber system given by the restriction of \(\mathbf{C}(\Delta_{-\varepsilon})\) to \(c^{\mathrm{op}(k)}\) is connected. If \(\Delta\) satisfies Condition \(\left(\mathrm{co}\right)_{0}\), we say for short that \(\Delta\) satisfies Condition (co).
For buildings \(\Delta=(\mathcal{C},\delta)\) of spherical type \((W,S)\) we have also a notion of \(c^{\mathrm{op}(k)}\), i.e. \(c^{\mathrm{op}(k)}=\{d\in\mathcal{C}\mid\ell(c,d)\geq\ell(r_{S})-k\}\). We say that a spherical building satisfies Condition \(\left(\mathrm{co}\right)_{k}\) if \(c^{\mathrm{op}(k)}\) is connected for any chamber \(c\in\mathcal{C}\).
**(6.1) Proposition**.: _Let \((W,S)\) be a spherical Coxeter system of rank \(3\) such that \(m_{st}\leq 4\) for all \(s,t\in S\) and let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a thick twin building of type \((W,S)\). Then \(\Delta_{+},\Delta_{-}\) and \(\Delta\) satisfy Condition \(\left(\mathrm{co}\right)_{1}\)._
Proof.: If \(m_{st}\leq 3\) for all \(s,t\in S\) the assertion follows from [7, Lemma 6.1 and Theorem 1.5]. Suppose now that \(m_{st}=4\) for some \(s,t\in S\). Again by [7] the assertion holds if the \(\{s,t\}\)-residues are not isomorphic to the building associated with the group \(C_{2}(2)\). In all the cases we mentioned so far, \(\Delta_{+},\Delta_{-}\) and \(\Delta\) even satisfy the stronger condition (co). If the \(\{s,t\}\)-residue is isomorphic to the building associated with the group \(C_{2}(2)\), then the verification of Condition \(\mbox{(co)}_{1}\) boils down to an elementary calculation.
A _twin residue_ is a pair \((R,T)\) of opposite residues in a twin building. It is a basic fact that a twin residue is again a twin building. The _type_ (resp. _rank_) of a twin residue is defined to be the type (resp. rank) of the residues. Note that if \(P\) is a panel contained in \(R\) and if \((P_{0},\ldots,P_{k})\) is a \(P\)-compatible path in the twin residue \((R,T)\), then it is also a \(P\)-compatible path in the twin building \(\Delta\). In particular, if \(c\in R\) and if \(s\in S\) is contained in the type of \((R,T)\) and if \(Q_{1},Q_{2}\subseteq T\) are wall-adjacent of type \((c,s)\) in \((R,T)\), then \(Q_{1},Q_{2}\) are wall-adjacent of type \((c,s)\) in \(\Delta\).
**(6.2) Corollary**.: _Any \(3\)-spherical thick twin building \(\Delta\) satisfies Condition \(\mbox{(co)}_{1}\)._
Proof.: At first we convince ourselves that any rank \(3\) residue (which is spherical by definition) satisfies \(\mbox{(co)}_{1}\). Let \(R\) be a \(J\)-residue of rank \(3\) and \(x\in R\). For \(y\in x^{\rm op}\) the residues \(R_{J}(y),R\) are opposite in \(\Delta\), i.e. \((R_{J}(y),R)\) is a thick spherical twin building. Hence the previous proposition implies that \(R\) satisfies Condition \(\mbox{(co)}_{1}\).
The proof is similar to the proof of [7, Theorem 1.5]. Let \(c\) be a chamber of \(\Delta\) and let \(x\neq y\in c^{\rm op(1)}\). Let \(G=(x=c_{0},\ldots,c_{k}=y)\) be a gallery. We can assume that \(c_{i}\neq c_{i+1}\). Let \(i\) be minimal such that \(\ell(c,c_{i})=\max\{\ell(c,c_{j})\mid 0\leq j\leq k\}\). If \(\ell(c,c_{i})\leq 1\), we are done. Thus we can assume \(\ell(c,c_{i})>1\). Then \(\ell(c,c_{i-1})<\ell(c,c_{i})\geq\ell(c,c_{i+1})\). Let \(\delta(c_{i-1},c_{i})=s\) and \(\delta(c_{i},c_{i+1})=t\) (\(s=t\) is possible). As \(\ell(c,c_{i-1})\geq 1\), there exists \(r\in S\) such that \(\ell(\delta(c,c_{i-1})r)=\ell(c,c_{i-1})-1\). Let \(R\) be a \(J\)-residue containing \(c_{i}\), where \(|J|=3\) and \(\{r,s,t\}\subseteq J\). Using similar arguments as in [7, Lemma 6.1 and Theorem 1.5] we obtain a gallery \((c_{0},\ldots,c_{i-1}=d_{0},\ldots,d_{m}=c_{i+1},\ldots,c_{k})\) with \(\ell(c,d_{j})<\ell(c,c_{i})\) for any \(0\leq j\leq m-1\). Iterating this procedure we get a gallery from \(x\) to \(y\) which is contained in \(c^{\rm op(1)}\).
**(6.3) Theorem**.: _Let \(\Delta\) be a \(2\)-spherical, thick twin building of type \((W,S)\) satisfying Condition \(\mbox{(co)}_{1}\). If any rank \(3\) twin residue is wall-connected, then \(\Delta\) is wall-connected._
Proof.: Let \(\varepsilon\in\{+,-\},c\in{\cal C}_{\varepsilon}\) and \(s\in S\). We have to show that \(\Gamma_{s}(c)\) is connected. Let \(x,y\in c^{\rm op}\). By assumption there exists a gallery \((c_{0}=x,\ldots,c_{k}=y)\) such that \(\ell_{*}(c,c_{i})\leq 1\) for all \(0\leq i\leq k\). Let \(J=\{\delta_{-\varepsilon}(c_{0},c_{1}),\delta_{-\varepsilon}(c_{1},c_{2})\}\). Let \(x^{\prime}\in R_{J}(c_{0})\cap E_{1}(c_{2})\) be opposite to \(c\) and let \(J^{\prime}=J\cup\{s\}\). Then \(|J^{\prime}|\leq 3\). Let \(K\subseteq S\) with \(|K|=3\) and \(J^{\prime}\subseteq K\). By assumption the twin residue \((R_{K}(c),R_{K}(c_{0}))\) is wall-connected. Thus there exist \(P_{0}={\cal P}_{s}(x),\ldots,P_{m}={\cal P}_{s}(x^{\prime})\) such that \(P_{i-1},P_{i}\) are wall-adjacent of type \((c,s)\) for all \(1\leq i\leq m\). Applying induction to the shorter gallery \((x^{\prime},c_{2},\ldots,c_{k})\) the claim follows.
**(6.4) Corollary**.: _Every \(3\)-spherical, thick twin building is wall-connected._
Proof.: Let \(\Delta\) be a \(3\)-spherical thick twin building. By Corollary (6.2) \(\Delta\) satisfies Condition \(\mbox{(co)}_{1}\). By the previous theorem it therefore suffices to show that every twin residue of rank \(3\) is wall-connected. Let \((R,Z)\) be a twin residue of \(\Delta\) of type \(J\) and rank \(3\), let \(c\in R\), \(s\in J\) and let \(Q_{1},Q_{2}\subseteq Z\) be such that \(Q_{1},Q_{2}\in\mathcal{P}_{s}(c)^{\mathrm{op}}\). Then \(T:=\operatorname{proj}_{Z}\mathcal{P}_{s}(c)\) is a panel which is parallel to \(\mathcal{P}_{s}(c),Q_{1},Q_{2}\) by Lemma (3.4) and Lemma (3.6). Let \(w:=\delta(Q_{1},T)=\delta(Q_{2},T)\). Then \(\ell(sw)=\ell(w)+1\), since \(Q_{1},Q_{2}\) are \(s\)-panels. Let \(t\) be the type of \(T\). By Lemma (2.4) we have \(t=w^{-1}sw\). By Lemma (4.1) there exists a compatible path \((P_{0}=Q_{1},\ldots,P_{k}=T)\). Using Lemma (4.3) there exists a compatible path \((P_{0}^{\prime}=Q_{2},\ldots,P_{k}^{\prime}=T)\) of the same length and type. Since
\(\ell(\delta(Q_{1},T))+1=\ell(\delta(Q_{2},T))+1=\ell_{*}(c,\operatorname{proj}_{T}c)\), both compatible paths are \(\mathcal{P}_{s}(c)\)-compatible by Theorem (4.7). Hence \(Q_{1},Q_{2}\) are wall-adjacent of type \((c,s)\) and \((R,Z)\) is wall-connected.
**(6.5) Corollary**.: _Every \(2\)-spherical, thick twin building which satisfies Condition_ (co) _is wall-connected._
Proof.: Using similar arguments as in Theorem (6.3) and Corollary (6.4) the claim follows.
### Wall-connected RGD-systems
A _reflection_ is an element of \(W\) that is conjugate to an element of \(S\). For \(s\in S\) we let \(\alpha_{s}:=\{w\in W\mid\ell(sw)>\ell(w)\}\) be the _simple root_ corresponding to \(s\). A _root_ is a subset \(\alpha\subseteq W\) such that \(\alpha=v\alpha_{s}\) for some \(v\in W\) and \(s\in S\). We denote the set of all roots by \(\Phi\). A root \(\alpha\in\Phi\) is called _positive_ (resp. _negative_), if \(1_{W}\in\alpha\) (resp. \(1_{W}\notin\alpha\)). We let \(\Phi_{+}\) (resp. \(\Phi_{-}\)) be the set of all positive (resp. negative) roots. For each root \(\alpha\in\Phi\) we denote the _opposite root_ by \(-\alpha\) and we denote the unique reflection which interchanges these two roots by \(r_{\alpha}\). Two roots \(\alpha\neq\beta\in\Phi\) are called _prenilpotent_ (or \(\{\alpha,\beta\}\) is called a _prenilpotent pair_) if \(\alpha\cap\beta\neq\emptyset\neq(-\alpha)\cap(-\beta)\). For a prenilpotent pair \(\{\alpha,\beta\}\) we define \([\alpha,\beta]:=\{\gamma\in\Phi\mid\alpha\cap\beta\subseteq\gamma\text{ and }(-\alpha)\cap(-\beta)\subseteq-\gamma\}\) and \((\alpha,\beta):=[\alpha,\beta]\backslash\{\alpha,\beta\}\).
An _RGD-system of type \((W,S)\)_ is a pair \(\mathcal{D}=\big{(}G,\left(U_{\alpha}\right)_{\alpha\in\Phi}\big{)}\) consisting of a group \(G\) together with a family of subgroups \(U_{\alpha}\) (called _root groups_) indexed by the set of roots \(\Phi\), which satisfies the following axioms, where \(H:=\bigcap_{\alpha\in\Phi}N_{G}(U_{\alpha}),U_{\pm}:=\langle U_{\alpha}\mid \alpha\in\Phi_{\pm}\rangle\):
1. For each \(\alpha\in\Phi\), we have \(U_{\alpha}\neq\{1\}\).
2. For each prenilpotent pair \(\{\alpha,\beta\}\subseteq\Phi\), the commutator group \([U_{\alpha},U_{\beta}]\) is contained in the group \(U_{(\alpha,\beta)}:=\langle U_{\gamma}\mid\gamma\in(\alpha,\beta)\rangle\).
3. For every \(s\in S\) and each \(u\in U_{\alpha_{s}}\backslash\{1\}\), there exist \(u^{\prime},u^{\prime\prime}\in U_{-\alpha_{s}}\) such that the product \(m(u):=u^{\prime}uu^{\prime\prime}\) conjugates \(U_{\beta}\) onto \(U_{s\beta}\) for each \(\beta\in\Phi\).
4. For each \(s\in S\), the group \(U_{-\alpha_{s}}\) is not contained in \(U_{+}\).
5. \(G=H\langle U_{\alpha}\mid\alpha\in\Phi\rangle\).
It is well-known that any RGD-system \(\mathcal{D}\) acts on a twin building, which is denoted by \(\Delta(\mathcal{D})\) (cf. [1, Section 8.9]). This twin building is a so-called _Moufang twin building_ (cf. [1, Section 8.3]). There is a distinguished pair of opposite chambers in \(\Delta(\mathcal{D})\), which we will denote by \((c_{+},c_{-})\).
**(6.6) Lemma**.: _For \(\varepsilon\in\{+,-\}\) the group \(U_{\varepsilon}\) acts simply transitively on the set of chambers opposite \(c_{\varepsilon}\)._
Proof.: This is [1, Corollary 8.32].
We say that an RGD-system \(\mathcal{D}=\big{(}G,\left(U_{\alpha}\right)_{\alpha\in\Phi}\big{)}\) is _wall-connected_, if the following condition is satisfied:
\[\forall(\varepsilon,s)\in\{+,-\}\times S:U_{\varepsilon}=\langle U_{\beta} \mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\] (wc)
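We remark that if \((W,S)\) is spherical, then every element of \(W\) has finite order, so \(o(r_{\beta}s)<\infty\) for all \(\beta\in\Phi\) and \(s\in S\); condition (wc) then reduces to \(U_{\varepsilon}=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon}\rangle\), which holds by the definition of \(U_{\varepsilon}\). Hence every RGD-system of spherical type is wall-connected, and condition (wc) is of interest only in the non-spherical case.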
For the notion of _twin roots_ we refer to [1, Section 5.8.5]. Let \(\alpha\) be a twin root. Then we define the _wall_ associated to \(\alpha\) as the set of all panels \(P\) such that \(P\) is stabilized by \(r_{\alpha}\).
**(6.7) Lemma**.: _Let \(\varepsilon\in\{+,-\},P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{ C}_{-\varepsilon}\) be two parallel panels and let \(s\in S\) be the type of \(P\). Then the reflection \(s\) stabilizes \(P\) and \(Q\)._
Proof.: By Theorem (4.6) there exists a \(P\)-compatible path \((Q_{0},\ldots,Q_{k}=Q)\). Since \(P\) and \(Q_{0}\) are opposite, both panels are stabilized by the reflection \(s\). The claim follows by induction and the fact that opposite panels in a rank \(2\) residue are stabilized by the same reflection.
### A criterion for wall-connectedness
**(6.8) Lemma**.: _Let \(s\in S,\varepsilon\in\{+,-\}\), let \(P:=\mathcal{P}_{s}(c_{\varepsilon})\subseteq\mathcal{C}_{\varepsilon}\) and let \(P_{0},\ldots,P_{k}\subseteq\mathcal{C}_{-\varepsilon}\) be panels such that \((P_{0},\ldots,P_{k})\) is a \(P\)-compatible path. Then the group \(\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\) acts transitively on the set of panels opposite \(P_{k}\) in \(R(P_{k-1},P_{k})\)._
Proof.: For \(s\in S\) we abbreviate \(W_{s}:=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\). Since \(P_{k-1},P_{k}\) are opposite in \(R(P_{k-1},P_{k})\), it suffices to show that for any panel \(Q\subseteq R(P_{k-1},P_{k})\) which is opposite to \(P_{k}\) in \(R(P_{k-1},P_{k})\) there exists \(g\in W_{s}\) such that \(g.Q=P_{k-1}\). Let \(Q\) be such a panel. Then there exists \(y\in Q\) such that \(\operatorname{proj}_{P_{k}}c_{\varepsilon},y\) are opposite in \(R(P_{k-1},P_{k})\). Let \(x\in P_{k-1}\) be opposite to \(\operatorname{proj}_{P_{k}}c_{\varepsilon}\) in \(R(P_{k-1},P_{k})\). We will show that there exists \(g\in W_{s}\) such that \(g.y=x\). Let \((c_{0}=\operatorname{proj}_{P_{k}}c_{\varepsilon},\ldots,c_{n}=x)\) and \((d_{0}=\operatorname{proj}_{P_{k}}c_{\varepsilon},\ldots,d_{n}=y)\) be minimal galleries and let \(i=\max\{0\leq j\leq n\mid\forall 0\leq m\leq j:c_{m}=d_{m}\}\). We prove the claim by induction on \(n-i\). If \(n-i=0\) there is nothing to show. Now let \(n-i>0\). Let \(\beta\) be the twin root such that \(c_{i}\in\beta,c_{i+1}\notin\beta\). Then \(c_{\varepsilon}\in\beta\). Since the twin building is a Moufang twin building, there exists \(g\in U_{\beta}\) such that \(g.d_{i+1}=c_{i+1}\) (cf. [1, Example 8.47]). Since \(o(r_{\beta}s)<\infty\) by Lemma (6.7), we have \(g\in W_{s}\). By induction we obtain \(h\in W_{s}\) such that \(hg.y=h.(g.y)=x\). This finishes the claim.
**(6.9) Theorem**.: _Let \(\mathcal{D}=\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\) be an RGD-system of type \((W,S)\) and let \((\varepsilon,s)\in\{+,-\}\times S\). Then the following are equivalent:_
1. \(U_{\varepsilon}=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s )<\infty\rangle\)_;_
2. \(\Gamma_{s}(c_{\varepsilon})\) _is connected._
Proof.: Again, we abbreviate \(W_{s}:=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\). At first we assume that \(\Gamma_{s}(c_{\varepsilon})\) is connected. Let \(x,y\in c_{\varepsilon}^{\text{op}}\) such that \(\mathcal{P}_{s}(x),\mathcal{P}_{s}(y)\) are wall-adjacent of type \((c_{\varepsilon},s)\). It suffices to show that there exists \(g\in W_{s}\) such that \(g.x=y\) (the general case follows by induction and Lemma (6.6) applied to \(y=h.x\) for \(h\in U_{\varepsilon}\)). Let \((P_{0}=\mathcal{P}_{s}(x),\ldots,P_{k}=T)\) and \((Q_{0}=\mathcal{P}_{s}(y),\ldots,Q_{k}=T)\) be \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible paths of the same length and type. We prove the claim by induction on \(k\). If \(k=0\) we have \(\mathcal{P}_{s}(x)=\mathcal{P}_{s}(y)\) and therefore \(\delta_{-\varepsilon}(x,y)\in\langle s\rangle\). Then there exists \(g\in U_{\alpha_{s}}\) (\(\alpha_{s}\) is the twin root containing \(c_{\varepsilon}\) but not any \(s\)-adjacent chamber) with \(g.x=y\). Now let \(k>0\). By Lemma (6.8) there exists \(g\in W_{s}\) such that \(g.P_{k-1}=Q_{k-1}\). We obtain the \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible paths \((g.P_{0},\ldots,g.P_{k-1}=Q_{k-1})\) and \((Q_{0},\ldots,Q_{k-1})\). Using induction we obtain \(h\in W_{s}\) such that \(hg.x=h.(g.x)=y\).
Now we assume that \(U_{\varepsilon}=W_{s}\). Let \(\beta\in\Phi_{\varepsilon}\) such that \(o(r_{\beta}s)<\infty\) and let \(g\in U_{\beta}\). Since \(o(r_{\beta}s)<\infty\), there exists \(1\leq k\in\mathbb{N}\) such that \((r_{\beta}s)^{k}=1\). Then \((r_{\beta}s)^{k-1}r_{\beta}=s\) and hence we have \(v^{-1}tv=s\) for some \(v\in W\) and \(t\in\{r_{\beta},s\}\). Since \(r_{\beta}\) is a reflection, we have \(r_{\beta}=w^{-1}uw\) for some \(w\in W\) and \(u\in S\). In particular, we have \(v^{-1}sv=r\) for some \(v\in W,r\in S\). Note that \((sv)^{-1}s(sv)=r\). Thus let \(v^{\prime}\in\{v,sv\}\) be such that \(\ell(sv^{\prime})=\ell(v^{\prime})+1\). Let \(z\in A_{-\varepsilon}(c_{+},c_{-})\) be such that \(\delta_{-\varepsilon}(c_{-\varepsilon},z)=v^{\prime}\). Then \(\mathcal{P}_{s}(c_{-\varepsilon})\) and \(\mathcal{P}_{r}(z)\) are parallel by Lemma (2.4). Since \(\ell(v^{\prime}r)=\ell(sv^{\prime})=\ell(v^{\prime})+1\), we deduce \(z=\operatorname{proj}_{\mathcal{P}_{r}(z)}c_{-\varepsilon}\). By Lemma (4.1) there exists a compatible path \((P_{0}=\mathcal{P}_{s}(c_{-\varepsilon}),\ldots,P_{n}=\mathcal{P}_{r}(z))\). This compatible path is \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible by Theorem (4.7). Since \(\delta(\mathcal{P}_{s}(c_{-\varepsilon}),\mathcal{P}_{r}(z))=\delta(\mathcal{P}_{s}(g.c_{-\varepsilon}),\mathcal{P}_{r}(z))\) we also have a \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible path \((Q_{0}=\mathcal{P}_{s}(g.c_{-\varepsilon}),\ldots,Q_{n}=\mathcal{P}_{r}(z))\) of the same length and type by Lemma (4.3). Hence \(\mathcal{P}_{s}(c_{-\varepsilon})\) and \(\mathcal{P}_{s}(g.c_{-\varepsilon})\) are wall-adjacent of type \((c_{\varepsilon},s)\). Using induction the claim follows from Lemma (6.6).
**(6.10) Corollary**.: _Let \(\mathcal{D}\) be an RGD-system of type \((W,S)\). Then the following are equivalent:_
1. \(\mathcal{D}\) _is wall-connected._
2. \(\Delta(\mathcal{D})\) _is wall-connected._
Proof.: Using the fact that \(G\) acts transitively on the set of chambers in one half, the claim follows from Lemma (5.11) and the previous theorem.
### A result about Moufang polygons
We need a result about Moufang polygons. The proof, communicated to us by Richard Weiss, relies on the basic theory of Moufang polygons as developed in the first chapters of [14]. In this subsection we use the notation of [14].
For \(i<k<j,a_{i}\in U_{i},a_{j}\in U_{j}\) there exist \(a_{l}\in U_{l}\), \(i<l<j\), such that \([a_{i},a_{j}]=a_{i+1}\cdots a_{j-1}\). We define \([a_{i},a_{j}]_{k}:=a_{k}\) as well as \([U_{i},U_{j}]_{k}:=\{[a_{i},a_{j}]_{k}\mid a_{i}\in U_{i},a_{j}\in U_{j}\}\).
**(6.11) Proposition**.: _For each \(i+1\leq k\leq i+n-2\) we have \([U_{i},U_{i+n-1}]_{k}=U_{k}\)._
Proof.: By definition it suffices to show that \(U_{k}\subseteq[U_{i},U_{i+n-1}]_{k}\). We notice that [14, (6.4)] is also correct if we shift the indices. We prove the claim by induction on \(k-i\). Let \(k-i=1\), let \(1\neq a_{i}\in U_{i}\) and let \(a_{i+1}\in U_{i+1}\). By [14, (6.1)] we have \(U_{i+n-1}^{\mu(a_{i})}=U_{i+1}\), where \(\mu\) is the mapping defined in [14, (6.1)]. Thus there exists \(a_{i+n-1}\in U_{i+n-1}\) such that \(a_{i+1}=a_{i+n-1}^{\mu(a_{i})}\). For each \(i<j<i+n-1\) let \(b_{j}\in U_{j}\) such that \([a_{i},a_{i+n-1}^{-1}]=b_{i+1}\cdots b_{i+n-2}\). By [14, (6.4)(\(i\))] we have \(b_{i+1}=a_{i+n-1}^{\mu(a_{i})}=a_{i+1}\) and therefore \([a_{i},a_{i+n-1}]_{i+1}=a_{i+1}\). Now let \(k-i>1\). Using [14, (6.4)(\(iii\))] we obtain \([U_{i},U_{i+n-1}]_{k}=[U_{i+1},U_{i+n}]_{k}\) for each \(i+2\leq k\leq i+n-2\). Using induction the claim follows.
**(6.12) Corollary**.: _Let \(i+1\leq k\leq i+n-2\). Then \(U_{k}\leq\langle U_{i},U_{i+1},\ldots,U_{k-1},U_{k+1},\ldots,U_{i+n-1}\rangle\)._
Proof.: This is a direct consequence of the previous proposition.
### Affine twin buildings of rank \(3\)
**(6.13) Proposition**.: _Let \(\mathcal{D}=(G,(U_{\alpha})_{\alpha\in\Phi})\) be an RGD-system of irreducible affine type and of rank \(3\). Then \(\mathcal{D}\) is wall-connected._
Proof.: We argue in the geometric realization of the Coxeter complex associated with \((W,S)\) in the Euclidean plane (as visualized in [15, Figures \(2.1-2.3\)]). Thus we think of the Coxeter complex \(\Sigma\) as a tessellation of the Euclidean plane by chambers (i.e. the triangles). The walls of \(\Sigma\) correspond to the lines, each wall determines two halfplanes, and these correspond to the roots of \(\Sigma\). We choose a chamber \(c\) and identify the set of fundamental reflections with the reflections of the Euclidean plane whose walls are walls of \(c\). Moreover, the set of positive roots \(\Phi_{+}\) is identified with the set of halfplanes that contain \(c\). Let \(s\in S\). By definition it suffices to show that \(U_{\gamma}\subseteq U^{\prime}\) for each \(\gamma\in\Phi_{+}\), where \(U^{\prime}:=\langle U_{\beta}\mid\beta\in\Phi_{+},o(sr_{\beta})<\infty\rangle\). Let \(\gamma\) be a root in \(\Phi_{+}\). If \(o(sr_{\gamma})<\infty\), then \(U_{\gamma}\subseteq U^{\prime}\) by the definition of \(U^{\prime}\). Thus, it remains to consider the case where \(o(sr_{\gamma})=\infty\). We consider a gallery \((c=c_{0},...,c_{k-1},c_{k})\) in \(\Sigma\) such that \(r_{\gamma}\) switches \(c_{k-1}\) and \(c_{k}\) and such that \(k\) is minimal for this property. As \(o(sr_{\gamma})=\infty\), we have \(k\geq 2\) and therefore a unique rank \(2\) residue \(R\) containing the chambers \(c_{k-2},c_{k-1}\) and \(c_{k}\). We put \(d:=\operatorname{proj}_{R}c\) and remark that the wall of \(\gamma\) is not a wall of \(d\) by the minimality of \(k\). In particular, the gonality \(m\) of the residue \(R\) is at least \(3\). The residue \(R\) corresponds to a vertex \(v\) in the geometric realization of \(\Sigma\) and we let \(\Phi_{+}^{v}\) denote the set of all positive roots having \(v\) on their boundary. Let \(\alpha\) and \(\beta\) be the two roots in \(\Phi_{+}^{v}\) such that \(\{d\}=\alpha\cap\beta\cap R\). Then \(\alpha\neq\gamma\neq\beta\) and we have a natural numbering \((\alpha=\alpha_{1},\alpha_{2},\ldots,\alpha_{m}=\beta)\) of \(\Phi_{+}^{v}\) and \(1<\ell<m\) such that \(\gamma=\alpha_{\ell}\). Furthermore, we have \(o(sr_{\alpha_{i}})<\infty\) for all \(1\leq i\leq m\) with \(i\neq\ell\). Therefore we have \(U_{\alpha_{i}}\subseteq U^{\prime}\) for all \(1\leq i\leq m\) with \(i\neq\ell\) by the previous case. Thus, it follows
from the previous corollary that \(U_{\gamma}\subseteq U^{\prime}\). That \(U_{-}=\langle U_{\beta}\mid\beta\in\Phi_{-},o(sr_{\beta})<\infty\rangle\) follows in a similar fashion.
**(6.14) Lemma**.: _Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*}),\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be two thick, \(2\)-spherical twin buildings of rank \(\geq 3\), let \(c\in\mathcal{C}_{+},c^{\prime}\in\mathcal{C}^{\prime}_{+}\) and let \(\varphi:E_{2}(c)\to E_{2}(c^{\prime})\) be an isometry. Then \(\Delta\) is wall-connected if and only if \(\Delta^{\prime}\) is wall-connected._
Proof.: By [16, Proposition 7.1.6] there exist chambers \(d\in c^{\mathrm{op}}\) and \(d^{\prime}\in c^{\prime\mathrm{op}}\) such that the mapping \(d\to d^{\prime}\) extends the isometry \(\varphi\) to an isometry \(\psi:E_{2}(c)\cup\{d\}\to E_{2}(c^{\prime})\cup\{d^{\prime}\}\). If \(\Delta\) is wall-connected, then this isometry extends to an isometry of the whole twin buildings by Corollary (5.18). Now the claim follows from Lemma (5.11). If \(\Delta^{\prime}\) is wall-connected, then the isometry \(\psi^{-1}\) extends to an isometry of the whole twin buildings. Again, Lemma (5.11) implies that \(\Delta\) is wall-connected, too.
**(6.15) Convention**.: We label the diagrams \(\tilde{C}_{2}\) and \(\tilde{G}_{2}\) in a linear order by \(1,2,3\) such that \(o(s_{1}s_{2})=3\) in the case of \(\tilde{G}_{2}\).
**(6.16) Lemma**.: _Let \(\Delta,\Delta^{\prime}\) be two twin buildings of the same type \(\tilde{C}_{2}\) (resp. \(\tilde{G}_{2}\)). Suppose that the \(\{s_{1},s_{2}\}\)-residues of \(\Delta\) and \(\Delta^{\prime}\) are isomorphic to the building associated with \(C_{2}(2)\) (resp. \(A_{2}(2)\) or \(A_{2}(3)\)). Let \(c\in\Delta,c^{\prime}\in\Delta^{\prime}\) be chambers and let \(R\) and \(R^{\prime}\) denote the \(\{s_{2},s_{3}\}\)-residues containing \(c\) and \(c^{\prime}\) respectively. Then each isometry from \(R\) to \(R^{\prime}\) extends to an isometry from \(E_{2}(c)\) to \(E_{2}(c^{\prime})\)._
Proof.: We shall need the following elementary observation:
**Observation:** Let \(\Gamma\) be one of the buildings associated with \(A_{2}(2),A_{2}(3)\) or \(C_{2}(2)\) and let \(P\) be a panel of \(\Gamma\). Then the stabilizer of \(P\) in the full automorphism group of \(\Gamma\) induces all permutations on the set of chambers in \(P\).
The isometry \(\varphi:R\to R^{\prime}\) induces an isometry \(\mathcal{P}_{s_{2}}(c)\to\mathcal{P}_{s_{2}}(c^{\prime})\). By the observation there exists an isometry \(\psi:R_{\{s_{1},s_{2}\}}(c)\to R_{\{s_{1},s_{2}\}}(c^{\prime})\) as both residues are isomorphic to the building associated to one of the groups \(A_{2}(2),A_{2}(3),C_{2}(2)\). The claim follows.
**(6.17) Lemma**.: _Let \(\Delta\) be a twin building of type \(\tilde{C}_{2}\) such that the \(\{s_{1},s_{2}\}\)-residues are isomorphic to the buildings associated with \(C_{2}(2)\). Then \(\Delta\) is wall-connected._
Proof.: The \(\{s_{2},s_{3}\}\)-residues are Moufang quadrangles by [9, (8.3) Theorem 4] and since the \(s_{2}\)-panels have to contain precisely \(3\) chambers, the \(\{s_{2},s_{3}\}\)-residues are all isomorphic to \(C_{2}(2)\) or they are all isomorphic to the unique Moufang quadrangle of order \((2,4)\). Let \(c\) be a chamber of \(\Delta\). By (the proof of) [8, Proposition 4] and Lemma (6.16), there exists in both cases an RGD-system \(\mathcal{D}\) of type \(\tilde{C}_{2}\), a chamber \(c^{\prime}\) of \(\Delta(\mathcal{D})\) and an isometry \(\varphi:E_{2}(c)\to E_{2}(c^{\prime})\). Now the claim follows from Proposition (6.13), Corollary (6.10) and Lemma (6.14).
**(6.18) Lemma**.: _Let \(\Delta\) be a twin building of type \(\tilde{G}_{2}\) such that the \(\{s_{2},s_{3}\}\)-residues are isomorphic to the building associated with \(G_{2}(2)\) or \(G_{2}(3)\). Then \(\Delta\) is wall-connected._
Proof.: The \(\{s_{1},s_{2}\}\)-residues are Moufang planes by [9, (8.3) Theorem 4] and since the panels contain precisely \(3\) (resp. \(4\)) chambers, the \(\{s_{1},s_{2}\}\)-residues are all isomorphic to the building associated with \(A_{2}(2)\) (resp. \(A_{2}(3)\)). Let \(c\) be a chamber in \(\Delta\). By (the proof of) [8, Proposition 4] and Lemma (6.16) there exists an RGD-system \(\mathcal{D}\) of type \(\tilde{G}_{2}\), a chamber \(c^{\prime}\) in \(\Delta(\mathcal{D})\) and an isometry \(\varphi:E_{2}(c)\to E_{2}(c^{\prime})\). Now the claim follows from Proposition (6.13), Corollary (6.10) and Lemma (6.14).
**(6.19) Theorem**.: _Let \(\Delta\) be a thick irreducible twin building of affine type \((W,S)\) and of rank \(3\). Then \(\Delta\) is wall-connected._
Proof.: If there is no rank 2 residue of \(\Delta\) which is isomorphic to \(C_{2}(2),G_{2}(2)\) or \(G_{2}(3)\), then \(\Delta\) satisfies Condition (co) by [7, Section 1] and is therefore wall-connected by Corollary (6.5). If there is a residue isomorphic to \(C_{2}(2)\), then \(\Delta\) is wall-connected by [1, Corollary 5.157] and Lemma (6.17) and if there is a residue isomorphic to \(G_{2}(2)\) or \(G_{2}(3)\), then \(\Delta\) is wall-connected by [1, Corollary 5.157] and Lemma (6.18).
| 壁連結(wall-connected)なツイン・ビルディングの概念を導入し、これらのビルディングに対する局所‐大域原理を証明する。条件(co)([7]で導入)を満たす各ツイン・ビルディングは壁連結であることを示し、これにより[7]の主結果が強化され、階数が少なくとも3の厚い既約アフィン型ツイン・ビルディングも扱えるようになる。 |
2301.00077 | A Study on a User-Controlled Radial Tour for Variable Importance in
High-Dimensional Data | Principal component analysis is a long-standing go-to method for exploring
multivariate data. The principal components are linear combinations of the
original variables, ordered by descending variance. The first few components
typically provide a good visual summary of the data. Tours also make linear
projections of the original variables but offer many different views, like
examining the data from different directions. The grand tour shows a smooth
sequence of projections as an animation following interpolations between random
target bases. The manual radial tour rotates the selected variable's
contribution into and out of a projection. This allows the importance of the
variable to structure in the projection to be assessed. This work describes a
mixed-design user study evaluating the radial tour's efficacy compared with
principal component analysis and the grand tour. A supervised classification
task is assigned to participants who evaluate variable attribution of the
separation between two classes. Their accuracy in assigning the variable
importance is measured across various factors. Data were collected from 108
crowdsourced participants, who performed two trials with each visual for 648
trials in total. Mixed model regression finds strong evidence that the radial
tour results in a large increase in accuracy over the alternatives.
Participants also reported a preference for the radial tour in comparison to
the other two methods. | Nicholas Spyrison, Dianne Cook, Kim Marriott | 2022-12-31T00:07:40 | http://arxiv.org/abs/2301.00077v1 | # A Study on a User-Controlled Radial Tour for Variable Importance in High-Dimensional Data
###### Abstract
Principal component analysis is a long-standing go-to method for exploring multivariate data. The principal components are linear combinations of the original variables, ordered by descending variance. The first few components typically provide a good visual summary of the data. _Tours_ also make linear projections of the original variables but offer many different views, like examining the data from different directions. The grand tour shows a smooth sequence of projections as an animation following interpolations between random target bases. The manual radial tour rotates the selected variable's contribution into and out of a projection. This allows the importance of the variable to structure in the projection to be assessed. This work describes a mixed-design user study evaluating the radial tour's efficacy compared with principal component analysis and the grand tour. A supervised classification task is assigned to participants who evaluate variable attribution of the separation between two classes. Their accuracy in assigning the variable importance is measured across various factors. Data were collected from 108 crowdsourced participants, who performed two trials with each visual for 648 trials in total. Mixed model regression finds strong evidence that the radial tour results in a large increase in accuracy over the alternatives. Participants also reported a preference for the radial tour in comparison to the other two methods.
Multivariate data visualization, variable importance, radial tour, linear dimension reduction,
## 1 Introduction
Despite decades of research, multivariate data continues to provide fascinating challenges for visualization. Data visualization is important because it is a key element of exploratory data analysis [43] for assessing model assumptions and as a cross-check on numerical summarization [50, 2, 26]. One challenge is measuring if a new technique yields a more informed perception of information than current practices.
Dimension reduction is commonly used with visualization to provide informative low-dimensional summaries of quantitative multivariate data. Principal component analysis (PCA) [34] is one of the first methods ever developed, and it remains very popular. Visualization of PCA is typically in the form of static scatterplots of a few leading components. When the scatterplot is accompanied by a visual representation of the basis, it is called a biplot [17]. A basis is a \(p\times d\) matrix of the linear combination of the \(p\) variables mapped to a smaller \(d\)-dimensional space. That is, it is an orthogonal rotation matrix giving the magnitude and the angle with which the variables contribute.
Dynamic visualizations called _tours_[4] animate through a sequence of linear projections (orthonormal bases). Instead of a static view, tours provide a smoothly changing view by interpolating between bases. There are various types of tours distinguished by how the paths are generated. Asimov originally animated between randomly selected bases in the _grand_ tour. The _manual_ tour [11] allows for user control over the basis changes. A selected variable (or component) can be rotated into or out of view or to a particular value. The _radial tour_[42] is a variant of the manual tour that fixes the contribution angle and changes the magnitude along the radius. The permanence of the data points from basis to basis holds information between intermediate interpolated projections, and the user control of the basis could plausibly lead to more information being perceived than in a static display. This is a hypothesis that a user study can assess.
Empirical studies have rarely assessed tours. An exception is [31], who compares scatterplots of grand tours on 2D monitors with 3D (stereoscopic, not head-mounted) over \(n=15\) participants. Participants perform cluster detection, dimensionality estimation, and radial sparseness tasks on six-dimensional data. They find that stereoscopic 3D leads to more accuracy in cluster identification, though the time to interact with the display was much higher in the 3D environment. In this work, we extend the evaluation of tours by comparing the radial tour benchmarked against the grand tour and discrete pairs of principal components.
The contribution of this paper is an empirical user study comparing the radial tour against PCA and the grand tour for assessing variable attribution on clustered data. This is the first empirical evaluation of the radial or manual tour. We discuss how this fits with other multivariate data visualization techniques and coordinated views of linear projections.
We are particularly interested in assessing the effectiveness of the new radial tour relative to common practice with PCA and grand tour. The user influence over a basis, uniquely available in the radial tour, is crucial to testing variable sensitivity to the structure visible in projection. If the contribution of a variable is reduced and the feature disappears, then we say that the variable was sensitive to that structure. For example, Fig. 1 shows two projections of simulated data. Panel (a) has identified the separation between the two clusters. The contributions in panel (b) show no such cluster separation. The former has a large contribution of V2 in the direction of separation, while it is negligible in the right frame. Because of this, we say that V2 is sensitive to the separation of the clusters.
Variable sensitivity is important for the interpretation of machine learning models: it captures the magnitude and direction of a variable's contribution to the model. It is important that developers maintain the interpretability of models. Explainable Artificial Intelligence (XAI) [1, 3] is an emerging field that extends the interpretability of such black-box models. Multivariate data visualization is essential for exploring feature spaces and communicating interpretations of models [6, 47, 5].
The paper is structured as follows. Sect. 2 provides background on standard visualization methods and linear dimension reduction techniques. Sect. 3 describes the experimental factors, task, and accuracy measure used. The results of the study are discussed in Sect. 4. Conclusions and potential future directions are discussed in Sect. 6. More results, participant demographics, and analysis of the response time are available in the Supplemental Materials.
## 2 Related work
Consider the data to be a matrix of \(n\) observations (rows) and \(p\) variables (columns), denoted as \(X_{n\times p}\).
### Orthogonal multivariate visualization
Grinstein [19] illustrates many multivariate visualization methods; in particular, this work shows examples of actual visuals. Liu [25] gives a good classification and taxonomy of such methods. The content below focuses on the most common visuals that use the full data space before discussing linear combinations of those variables in projections.
#### 2.1.1 Scatterplot matrix
One could consider looking at \(p\) histograms or univariate densities. Doing so will miss features in two or more dimensions. Fig. 2 shows a scatterplot matrix [9] of the four principal components of simulated data. Such displays do not scale well with dimensions because each plot would get less and less space. Scatterplot matrices can only display information in two orthogonal dimensions, so features in three dimensions may not be fully resolved.
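A scatterplot matrix like Fig. 2 takes a few lines of base **R**. This is a minimal sketch: `X` and `cluster` are placeholders for a simulated data matrix and its class labels, not objects from the study's code.

```r
# Sketch: SPLOM of the first four principal components (cf. Fig. 2).
# `X` is a numeric data matrix; `cluster` is a factor of class labels.
pcs <- prcomp(X, scale. = TRUE)$x[, 1:4]
pairs(pcs, col = cluster, pch = as.integer(cluster))
```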
#### 2.1.2 Parallel coordinates plot
Another common way to display multivariate data is with a parallel coordinates plot [32]. Parallel coordinates plots scale well with dimensions but poorly with observations, as the lines overcrowd the display. Parallel coordinate plots are asymmetric across variable ordering, in that shuffling the order of the variables can lead to different conclusions. Another shortcoming is the graphical channel used to convey information. [29] suggests that position is the visual channel most perceptible to humans. In parallel coordinates plots, the horizontal axis spans variables rather than the values of one variable, so a display dimension that could be used by our most perceptible visual channel is lost.
### Multivariate projections
At some point, visualization will be forced to turn to dimension reduction to scale better with the dimensionality of the data. Below we introduce linear projections and the common principal component analysis. Then we touch on nonlinear projections and exclude them from consideration.
#### 2.2.1 Linear
Let data, \(X\), contain \(n\) observations of \(p\) variables. A linear projection maps a higher \(p\)-dimensional space onto a smaller \(d\)-space with an affine mapping (where parallel lines stay parallel). A projection, \(Y\), is the resulting space of the data multiplied by a _basis_, \(A\), such that \(Y_{n\times d}=X_{n\times p}\times A_{p\times d}\). This is essentially a reorientation of the original variables. This intuition is conveyed by thinking of a shadow as a 2D projection of a 3D object. Rotating the object changes the shadow it casts and, correspondingly, the basis that maps the reorientation of the object.
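The mapping \(Y=XA\) is a single matrix product once an orthonormal basis is in hand. A minimal sketch in **R**, with `X` a placeholder data matrix and the basis drawn at random:

```r
# Sketch: project p-dimensional data through a random orthonormal
# 2D basis. qr.Q() is one convenient way to orthonormalize.
p <- ncol(X)
A <- qr.Q(qr(matrix(rnorm(p * 2), p, 2)))  # p x 2, orthonormal columns
Y <- as.matrix(X) %*% A                    # n x 2 projection
plot(Y, asp = 1)                           # asp = 1 keeps distances honest
```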
#### 2.2.2 Principal component analysis
PCA is a good baseline of comparison for linear projections because of its frequent and broad use across disciplines. PCA [34] defines new components, linear combinations of the original variables, ordered by decreasing variation with the help of eigenvalue matrix decomposition. While the resulting dimensionality is the same size, the benefit comes from the ordered nature of the components. The data can be said to be approximated by the first several components. The exact number is subjectively selected given the variance contained in each component, typically guided by a scree plot [8]. Features with sizable signal regularly appear in the leading components that commonly approximate data. However, this is not always the case, and component spaces should be fully explored to look for signal in components with less variation. This is especially true for cluster structure [14].
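In **R**, `prcomp()` performs PCA and a scree plot summarizes the component variances; this is a generic sketch rather than the study's code, and `X` is again a placeholder:

```r
# Sketch: PCA and a scree plot to guide how many components to keep.
pca <- prcomp(X, scale. = TRUE)
screeplot(pca, type = "lines")  # variance by component
summary(pca)                    # cumulative proportion of variance
```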
#### 2.2.3 Nonlinear
Nonlinear transformations bend and distort spaces in ways that are not entirely accurate or faithful to the original variable space. Popular modern methods include t-SNE and UMAP [28, 44]. Various quality metrics, such as Trustworthiness, Continuity, Normalized stress, and Average local error, have been introduced to describe the distortion of the space [16, 18]. Unfortunately, these distortions are hard to visualize and comprehend, effectively breaking the variable interpretability of the resulting space. The intuition of this can be demonstrated with map projections. Snyder [41] lists over 200 different projections that
Figure 1: Illustration of cluster separation affected by variable importance. Panel (a) is a projection mostly of V2 and V3, and the separation between clusters is in the direction of V2, not V3. This suggests V2 is important for clustering, but V3 is not. Panel (b) shows a projection mostly of V3 and V4, with no contribution from V2. That there is no separation between the clusters indicates that V3 and V4 are not important.
distort the surface of the earth to display as a 2D map, each with unique properties and use cases.
Because of the difficulty of interpreting the distortions of nonlinear spaces and the added subjectivity of hyperparameter selection, we exclude nonlinear techniques and instead decide to compare three linear techniques.
### _Tours, animated linear projections_
A _tour_ animates through many linear projections. One of the insightful features of the tour is the permanence of the data points; one can track the relative changes of observations as the basis changes, as opposed to discretely jumping to an orthogonal view angle with no intermediate information. Types of tours are distinguished by the generation of their basis paths [13, 22]. In contrast with the discrete orientations of PCA, we compare continuous linear projection changes with grand and radial tours.
#### 2.3.1 Grand tours
Target bases are selected randomly in a grand tour [4]. These target bases are then geodesically interpolated for a smooth, continuous path. The grand tour is the first and most widely known tour. The random selection of target bases makes it a general unguided exploratory tool. The grand tour will make a good comparison that has a continuity of data points similar to the radial tour but lacks the user control enjoyed by PCA and radial tours.
#### 2.3.2 Manual and radial tours
Whether an analyst uses PCA or the grand tour, they cannot influence the basis. They cannot explore the structure identified or change the contribution of the variables. User-controlled steering is a key aspect of _manual tours_ that helps to test variable attribution.
The manual tour [11] defines its basis path by manipulating the basis contribution of a selected variable. A manipulation dimension is appended onto the projection plane, giving a full contribution to the selected variable. The target bases are then chosen to rotate this newly created manipulation space. This manipulation space is similarly orthogonally restrained. The data is projected through its interpolated basis and rendered into an animation. When the contribution of one variable changes, the contributions of the other variables must also change, to maintain the orthonormality of the basis. A key feature of the manual tour is that it allows users to control the variable contributions to the basis. Such manipulations can be queued in advance or selected in real time for human-in-the-loop analysis [21]. Manual navigation is relatively time-consuming due to the vast volume of resulting view space and the abstract method of steering the projection basis. First, it is advisable to identify a basis of particular interest and then use the manual tour as a more directed, local exploration tool to explore the sensitivity of a variable's contribution to the feature of interest.
To simplify the task and keep its duration realistic, we consider a variant of the manual tour called a _radial_ tour. In a radial tour, the magnitude of the selected variable's contribution changes along the radius while its angle of contribution stays fixed, as seen in Fig. 3. The radial tour benefits both from the continuity of the data, as in grand tours, and from user steering via choosing the variable to rotate.
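The rotation behind a radial step can be sketched directly from the construction above: append a manipulation dimension for the selected variable, then rotate its out-of-plane angle while holding its in-plane angle fixed. The function below follows that construction [11], but it is an illustrative sketch in **R** rather than **spinifex**'s implementation, and it assumes the selected variable does not already lie entirely in the projection plane.

```r
# Sketch of one radial-tour target basis: change variable k's magnitude
# of contribution (cos(phi_new)) while keeping its angle theta fixed.
radial_basis <- function(A, k, phi_new) {
  e <- numeric(nrow(A)); e[k] <- 1
  c3 <- e - A %*% crossprod(A, e)      # part of e_k orthogonal to the plane
  c3 <- c3 / sqrt(sum(c3^2))           # assumes e_k is not fully in-plane
  M <- cbind(A, c3)                    # p x 3 orthonormal manipulation space
  theta <- atan2(M[k, 2], M[k, 1])     # fixed in-plane angle
  phi   <- atan2(M[k, 3], sqrt(sum(M[k, 1:2]^2)))  # out-of-plane angle
  d <- phi - phi_new
  Rz  <- function(a) matrix(c(cos(a), sin(a), 0, -sin(a), cos(a), 0,
                              0, 0, 1), 3, 3)
  Rxz <- matrix(c(cos(d), 0, sin(d), 0, 1, 0, -sin(d), 0, cos(d)), 3, 3)
  (M %*% (Rz(theta) %*% Rxz %*% Rz(-theta)))[, 1:2]  # new orthonormal basis
}
# Stepping phi_new from its current value to 0 (full contribution), then
# to pi/2 (zero contribution), and back gives the animation described above.
```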
Manual tours have recently been made available in the **R** package **spinifex**[42], which facilitates the manual tour (and its radial variant). It also provides an interface for layered composition of tours and for exporting to gif and mp4 with **gganimate**[35] or to an html widget with **plotly**[40]. It is also compatible with tours made by **tourr**[48]. Now that we have a readily available means to produce various tours, we want to see how they fare against traditional discrete displays commonly used with PCA.
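For concreteness, a grand tour can be run in one line with **tourr**; the call below uses the package's bundled `flea` data as a stand-in for the study's simulated data.

```r
# Usage sketch: an animated grand tour with tourr. spinifex layers its
# manual/radial tours on the same projection machinery.
library(tourr)
animate_xy(flea[, 1:6], grand_tour(), col = flea$species)
```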
### _Other animated linear projections_
The work of [15] allows users to interactively change the face of a local display by navigating to adjacent faces on a global overview scatterplot matrix. This offers analysts a way to geometrically explore the transition between adjacent faces of a scatterplot matrix as though rotating between faces at right angles. The interpolated bases between the orthogonal faces display linear combinations of three variables at varying degrees. This is what [27] called a _little tour_, with the addition of user control. It is a particular type of manual tour where only horizontal or vertical rotation is allowed.
Star Coordinates [20] also arrive at the biplot scatterplot displays starting from the perspective of radial parallel coordinates. [23] extend this idea, mapping it back to orthogonal projections. They provide a means to interpolate through PCA components, the orthogonal contributions of the scatterplot matrix, and the grand tour. This work also defines user-controlled interaction, similar to small steps in a manual or radial tour.
TripAdvisor [30] is an interactive application that plans sequential interpolation between distant target bases. It also provides an additional global context of a subset of possible frames with glyph representation and an overview of variable attribution by summarizing the top ten principal components. It allows for user-steering by using a "touchpad polygon". This touchpad allows for contribution magnitudes to be changed. This is similar to an incremental change with the manual tour.
The number of orthogonal axes in static plots as well as the number of bases to view in a tour increase quadratically with the dimensions, \(p\). This is why it is particularly important to properly select variables or otherwise reduce the dimensions before viewing. PCA, Linear discriminant analysis and entropy are common approaches to variable selection [37, 38, 46]. Such methods often yield a sort of screeplot [8] where the analyst selects a subjective, but informed, number of components to approximate the data while discarding the least information. The variable sensitivity we test for, in contrast, is the act of visual analysis of one variable's contribution to the structure. In practice, this is a tool for the analyst to fine-tune their variable selection or otherwise evaluate the resulting approximated space.
In order to further mitigate the view time, objective functions can be used to inform static or animated biplots. A dissimilarity statistic can be used to solve a basis path for showing a particularly interesting tour [24]. More generally projection pursuit can be used to conduct a guided tour of any objective function applied to an embedding space [12, 13]. However, the function optimized is likely to show some feature of interest if it is ultimately selected by the analyst. The ability to stop
Fig. 2: Scatterplot matrix of the first four principal components of 6D simulated data containing four classes. The separation between classes is primarily in PC1 and PC4. This is not uncommon because PCA is summarizing variance, not cluster structure.
and control the exploration at any point only stands to improve one's understanding of the data.
### Empirical evaluation
Some studies compare visualizations across complete contributions of variables. Chang [10] conducted an \(n=51\) participant study comparing parallel coordinate plots and scatterplot matrices either in isolation, sequentially, or as a coordinated view. Accuracy, completion time, and eye focus were measured for six tasks. Three tasks were more accurate with the scatterplot matrix and three with parallel coordinates, while the coordinated view was usually marginally more accurate than the better of the separate visuals. Cao [7] compare nonstandardized line-glyphs and star-glyphs with standardized variants (with and without fill under the curve). Each of the \(n=18\) participants performed 72 trials across the six visuals, two levels of dimensions, and two levels of observations. Visuals with variable standardization outperformed the nonstandardized variants, and the radial star-glyph reportedly outperformed the line variant.
Other studies have investigated the relative benefits of projecting to 2- or 3D scatterplots in PCA-reduced spaces. Gracia [18] conducted an \(n=40\) user study comparing 2- and 3D scatterplots on traditional 2D monitors. Participants perform point classification, distance perception, and outlier identification tasks. The results are mixed and primarily have small differences. There is some evidence to suggest a lower error in distance perception from a 3D scatterplot. Wagner Filho [45] performed an \(n=30\) mixed-design study on PCA reduced space using scatterplot displays between 2D on monitors, 3D on monitors, and 3D display with a head-mounted display. None of the tasks on any dataset lead to a significant difference in accuracy. However, the immersive display reduced effort and navigation, resulting in higher perceived accuracy and engagement. Sedlmair [39] instead used two expert coders to evaluate 75 datasets and four dimension reduction techniques across the displays of 2D scatterplots, interactive 3D scatterplots, and 2D scatterplot matrices. They suggested a tiered guidance approach finding that 2D scatterplots are often sufficient to resolve a feature. If not, try 2D scatterplots on a different dimension reduction technique before going to scatterplot matrix display or concluding a true negative. They find that interactive 3D scatterplots help in very few cases.
### Conclusion
Orthogonal axes visualizations either scale poorly with dimensionality or introduce an asymmetry of the variable ordering. Projections visualize the full \(p\)-dimensional data as fewer dimensions, traditionally 1-3 at a time. In linear, orthogonal projections, the resulting space is composed of a linear combination of the original variables that maintains variable interpretability, while nonlinear techniques distort and bend space in ways that are hard to visualize and communicate.
Tours are linear projections that are animated over changes in the basis. Several more recent orthographic star-coordinate methods independently arrive at animated linear projections similar to tours. Some quality metrics and empirical studies compare techniques, but scarcely with animated methods. Below we conduct a user study to compare the radial tour with PCA and the grand tour on a variable attribution task on clustered data.
## 3 User study
The experiment was designed to assess the performance of the radial tour relative to the grand tour and PCA for interpreting the variable attribution to the separation between two clusters. Data were simulated across three experimental factors: location of the cluster separation, cluster shape, and data dimensionality. Participant responses were collected using a web application and crowdsourced through prolific.co [33], an alternative to MTurk.
### Objective
PCA will be used as a baseline for comparison as it is the most commonly used linear embedding. It will use static, discrete jumps between orthogonal components. The grand tour will act as a secondary control that will help evaluate the benefit of observation trackability between nearby animation bases but without user-control of its path. Lastly, the radial tour will be compared, which benefits from the continuity of animation and user control of the basis.
Then for some subset of tasks, we expect to find that the radial tour performs most accurately. Conversely, there is less to be certain about the accuracy of such limited grand tours as there is no objective function in selecting the bases; it is possible that the random selection of the target bases altogether avoids the bases showing cluster separation. However, given that the data dimensionality is modest, it is probable that the grand tour coincidentally regularly crossed bases with the correct information for the task.
Experimental factors and the definition of an accuracy measure are given below. The null hypothesis can be stated as:
\[H_{0}:\text{accuracy does not change across the visual methods}\] \[H_{\alpha}:\text{accuracy does change across the visual methods}\]
### Visual factors
The visual methods are tested in a mixed design, with each visual being evaluated twice by each participant. Scatterplot matrices or parallel coordinates could alternatively be used to visualize these spaces. However, we opt to focus on single biplot displays to concentrate on the differences between the radial tour and its most comparable visuals, rather than a comprehensive comparison of visual methods. The rest of this section
Figure 3: A radial tour changing the contribution of V2. The contribution is in the direction of cluster separation. When its contribution is removed, the clusters overlap (right). Because of this, we say that V2 is sensitive to the separation of these two species.
discusses the design standardization and unique input associated with each visual.
The visualization methods were standardized wherever possible. Data were displayed as 2D scatterplots with biplots. All aesthetic values (color-blind safe colors, shapes, sizes, absence of legend, and axis titles) were constant. The variable contribution biplot was always shown left of the scatterplot embeddings with their aesthetic values consistent. What did vary between visuals were their inputs.
PCA allowed users to select between the top four principal components for each axis regardless of the data dimensionality (four or six). Upon changing an axis, the visual would change to the new view of orthogonal components without displaying intermediate bases. There was no user input for the grand tour; users were instead shown a 15-second animation of the same randomly selected path (variables containing cluster separation were shuffled after simulation). Participants could view the same clip up to four times within the time limit. Radial tours allowed participants to select the manipulation variable. The starting basis was initialized to a half-clock design, where the variables were evenly distributed in half of the circle. This design was created to be variable agnostic while maximizing the independence of the variables. Selecting a new variable resets the animation where the new variable is manipulated to a complete contribution, zeroed contribution, and then back to its initial contribution. Animation and interpolation parameters were held constant across grand and radial tour (five bases per second with a step size of 0.1 radians between interpolated bases). Fig. 4 displays screen captures of the visuals in the application.
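The "half-clock" starting basis can be sketched directly from its description: spread the \(p\) variables evenly over half the circle, then orthonormalize. The helper below is one reading of that description (the QR step perturbs the angles slightly) and is not necessarily **spinifex**'s exact construction.

```r
# Sketch: p variable contributions spread evenly over half the circle.
half_clock_basis <- function(p) {
  theta <- pi * (seq_len(p) - 0.5) / p       # angles in (0, pi)
  qr.Q(qr(cbind(cos(theta), sin(theta))))    # enforce orthonormal columns
}
round(half_clock_basis(4), 2)
```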
### Experimental factors
In addition to the visual method, data are simulated across three experimental factors. First, the _location_ of the separation between clusters is controlled by mixing a signal and a noise variable at different ratios. Secondly, the _shape_ of the clusters reflects varying distributions of the data. And third, the _dimension_-ality of the data is also tested. The levels within each factor are described below, and Fig. 5 gives a visual representation.
The _location_ of the separation between the clusters is at the heart of the measure, and several levels are tested. To test the sensitivity, a noise and a signal variable are mixed at different ratios. The separation between clusters is mixed at the following percentages: 0/100% (not mixed), 33/66%, 50/50% (evenly mixed).
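The paper does not spell out the exact mixing formula, so the sketch below is one plausible reading: rotate the cluster-separating signal into a noise variable so that the stated fraction of the separation sits in the noise direction.

```r
# Sketch (assumed form): mix a signal and a noise variable at ratio r.
mix_location <- function(signal, noise, r) {
  sqrt(1 - r) * signal + sqrt(r) * noise
}
# r = 0   -> 0/100% (not mixed)
# r = 1/3 -> 33/66%
# r = 1/2 -> 50/50% (evenly mixed)
```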
In selecting the _shape_ of the clusters, the convention given by Scrucca et al. (2016) is followed. They describe 14 variants of model families containing three clusters. The model family name abbreviates the clusters' respective volume, shape, and orientation, each letter being either _E_qual or _V_arying. The models EEE, EEV, and EVV are used. For instance, in the EEV model, the volume and shape of the clusters are constant, while the orientation varies. The EVV model is modified by moving four-fifths of the data out in a "\(>\)" or banana-like shape.
_Dimension_-ality is tested at two modest levels: four dimensions containing three clusters and six with four clusters. Such modest dimensionality is required to limit the difficulty and search space to make the task realistic for crowdsourcing.
Fig. 4: Examples of the application displays for PCA, grand tour, and radial tour.
Fig. 5: Levels of the visuals and three experimental factors: location of cluster separation, the shape of clusters, and dimensionality of the sampled data.
### Task and evaluation
With our hypothesis formulated, let us turn our attention to the task and how it is evaluated. Participants were asked to "check any/all variables that contribute more than average to the cluster separation of the green circles and the orange triangles". This was further explained in the explanatory video as "mark any and all variable that carries more than their fair share of the weight, or one quarter in the case of four variables". The participant instruction video can be viewed at [https://vimeo.com/712674984](https://vimeo.com/712674984).
The instructions, iterated several times in the video, were: 1) use the input controls to find a basis that contains separation between the clusters of green circles and orange triangles, 2) look at the orientation of the variable contributions in the grey circle (biplot axes orientation), and 3) select all variables that contribute more than uniformly distributed cluster separation in the scatterplot. Independent of the experimental level, participants were limited to 60 seconds for each evaluation of this task. This restriction did not impact many participants, as the 25th, 50th, and 75th quantiles of the response time were about 7, 21, and 30 seconds, respectively.
The accuracy measure of this task was designed with two features in mind: 1) it is symmetric about the expected value, without preference for under- or over-guessing; 2) it weights more heavily than linearly as the difference from the expected value increases. The following measure is defined for evaluating the task.
Let the data \(\mathbf{X}_{ijk},i=1,...,n;j=1,...,p;k=1,...,K\) be simulated observations containing clusters of observations of different distributions, where \(n\) is the number of observations, \(p\) is the number of variables, and \(K\) indicates the number of clusters. Cluster membership is exclusive; an observation cannot belong to more than one cluster.
The weights, \(w\), form a vector of the variable-wise difference between the means of the two clusters, less \(1/p\), the expected cluster separation if it were uniformly distributed. Accuracy, \(A\), is the sum of the signed squares of these weights over the variables selected by the participant. Participant responses are a logical value for each variable -- whether or not the participant thinks each variable separates the two clusters more than uniformly distributed separation. Weights comparing clusters 1 and 2 are calculated as follows:
\[A=\sum_{j=1}^{p}I(j)\cdot\operatorname{sign}(w_{j})\cdot w_{j}^{2},\text{ where }w_{j}=\frac{|\overline{X}_{j1}-\overline{X}_{j2}|}{\sum_{j=1}^{p}|\overline{X}_{j1}-\overline{X}_{j2}|}-\frac{1}{p},\]

where \(I(j)\) is the indicator function, the participant's binary response for variable \(j\), and \(\overline{X}_{jk}\) is the mean of the \(j\)-th variable in the \(k\)-th cluster. Fig. 6 shows one projection of a simulation with its observed variable separation (wide bars), expected uniform separation (dashed line), and accuracy if selected (thin vertical lines).
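The measure translates directly into a short function. This sketch assumes `X1` and `X2` hold the observations of the two target clusters (rows are observations) and `selected` is the participant's logical response vector of length \(p\).

```r
# Sketch of the accuracy measure: signed squared weights of the
# variables the participant selected.
accuracy <- function(X1, X2, selected) {
  d <- abs(colMeans(X1) - colMeans(X2))  # variable-wise mean differences
  w <- d / sum(d) - 1 / length(d)        # observed share minus uniform 1/p
  sum(selected * sign(w) * w^2)
}
```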
### Randomized factor assignment
Now, with the simulations and their artifacts in hand, this section covers how the experimental factors are assigned and demonstrates how this is experienced from the participant's perspective.
The study is sectioned into three periods. Each period is linked to a randomized level of visual and location. The order of dimension and shape are of secondary interest and are held constant in increasing order of difficulty; four then six dimensions and EEE, EEV, then EVV-banana, respectively.
Each period starts with an untimed training task at the simplest remaining experimental levels; location = 0/100%, shape = EEE, and four dimensions with three clusters. This serves to introduce and familiarize participants with input and visual differences. After the training, the participant performs two trials with the same visual and location level across the increasing difficulty of dimension and shape. The plot was removed after 60 seconds, though participants rarely reached this limit.
We assigned these factors in the following order: visual method, location, shape, and dimensionality. We first assigned the three visual methods to the three periods. The order of the periods and the order of location follow a nested Latin square. The order of dimension and shape is assigned in increasing order of difficulty.
Through pilot studies sampled by convenience (information technology and statistics Ph.D. students attending Monash University), it was estimated that three complete evaluations are needed to power the study properly, a total of \(N=3\times 3!^{2}=108\) participants.
### Participants
\(N=108\) participants were recruited via prolific.co (Palan and Schitter, 2018). Participants are restricted based on their claimed education, requiring that they have completed at least an undergraduate degree (some 58,700 of the 150,400 users at the time). This restriction is used on the premise that linear projections and biplot displays will not be regularly used for consumption by general audiences. There is also the implicit filter that Prolific participants must be at least 18 years of age, and implicit biases of timezone, internet availability, language compatibility, and socioeconomic status. Participants were compensated for their time at £7.50 per hour, and the mean duration of the survey was about 16 minutes. Previous knowledge or familiarity was minimal, as validated in the follow-up survey. The Supplemental Materials include a heatmap distribution of age and education, paneled across preferred pronouns, of the participants that completed the survey, who are relatively young, well-educated, and slightly more likely to identify as male.
## 4 Results
To recap, the primary response variable is accuracy, as defined in Sect. 3.4. Two primary data sets were collected: the user study evaluations and the post-study survey. The former contains the evaluations of the 108 participants across the experimental factors: visual, location of the cluster separation signal, the shape of the variance-covariance matrix, and the dimensionality of the data. Experimental factors and randomization were discussed in Sect. 3.3. A follow-up survey was completed by 84 of these 108 people. It collected demographic information (preferred pronoun, age, and education) and subjective measures for each visual (preference, familiarity, ease of use, and confidence).
Below, a battery of mixed regression models is built to examine the degree of evidence and the size of the effects of the experimental factors. Then Likert plots and rank-sum tests are used to compare the subjective measures between the visuals.
### Accuracy
To quantify the contribution of the experimental factors to the accuracy, mixed-effects models were fit. All models have random effect terms on the participant and the simulation. These terms explain the amount of error attributed to the individual participant's effect and the variation due to the random sampling of the data.
In building a set of models to test, a base model containing only the visual term is compared with the full linear model and with models adding progressively more interactions between the experimental factors. The models with three and four interacting variables are rank deficient; there is not enough varying information in the data to explain all interacting terms.
\[\begin{array}{ll}\textbf{Fixed effects}&\textbf{Full model}\\ \alpha&\widehat{Y}=\mu+\alpha+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha+\beta+\gamma+\delta&\widehat{Y}=\mu+\alpha+\beta+\gamma+\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha\times\beta+\gamma+\delta&\widehat{Y}=\mu+\alpha\times\beta+\gamma+\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha\times\beta\times\gamma+\delta&\widehat{Y}=\mu+\alpha\times\beta\times\gamma+\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha\times\beta\times\gamma\times\delta&\widehat{Y}=\mu+\alpha\times\beta\times\gamma\times\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\end{array}\]

where \(\mu\) is the intercept of the model; \(\alpha_{i}\) is the visual, \(i\in\) (pca, grand, radial); \(\beta_{j}\) is the location, \(j\in\) (0/100, 33/66, 50/50% mix); \(\gamma_{k}\) is the shape, \(k\in\) (EEE, EEV, EVV banana); \(\delta_{l}\) is the dimension, \(l\in\) (4 variables & 3 clusters, 6 variables & 4 clusters); \(\mathbf{Z}\sim\mathcal{N}(0,\sigma_{Z})\) is the random effect of the participant; \(\mathbf{W}\sim\mathcal{N}(0,\sigma_{W})\) is the random effect of the simulation; and \(\varepsilon\sim\mathcal{N}(0,\sigma_{\varepsilon})\) is the remaining error of the model.
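In **lme4** syntax the selected model reads as below; the column names are placeholders for the study's variables, not its actual code.

```r
# Sketch: the alpha x beta + gamma + delta model with random
# intercepts for participant and simulation.
library(lme4)
fit <- lmer(
  accuracy ~ visual * location + shape + dim +
    (1 | participant) + (1 | simulation),
  data = evaluations
)
summary(fit)
```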
Table 1 compares the model summaries across increasing complexity. The \(\alpha\times\beta+\gamma+\delta\) model is selected for more detailed examination as it has a relatively high conditional \(R^{2}\) without overly complex interacting terms. Table 2 looks at the coefficients for this model. There is strong evidence suggesting a relatively large increase in accuracy from the radial tour, though there is evidence that almost all of the increase is lost under 33/66% mixing.
We also want to examine the model terms visually. Fig. 7 illustrates the accuracy for each model term, shown as a mean and 95% confidence interval.
### Subjective measures
Modeling has shown that the use of the radial tour leads to a sizable improvement in the accuracy measure for this task. This is not the whole story. It is desirable to know what the users think of using the visuals. We follow the direction set by [45]. They observe four
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & AIC & BIC & R2 cond. & R2 marg. & RMSE \\ \hline a & **-71** & **-71** & -44.219 & 0.303 & 0.289 \\ a+b+c+d & -45 & -45 & 4.063 & 0.334 & 0.294 \\ a*b+c+d & -26 & -25 & 41.445 & 0.338 & 0.293 \\ a*b*c+d & 28 & 32 & 167.092 & **0.383** & 0.309 \\ a*b*c*d & 105 & 116 & **360.052** & 0.37 & **0.19** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model performance of random effect models regressing accuracy. Complex models perform better in terms of \(R^{2}\) and RMSE, yet AIC and BIC penalize their large number of fixed effects in favor of the much simpler model containing only the visuals. Conditional \(R^{2}\) includes the error explained by the random effects, while marginal does not.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Est & SE & df & t val & Prob \\ \hline (Intercept) & 0.10 & 0.06 & 16.1 & 1.54 & 0.143 \\ **Visual** & & & & & \\ VisGrand & 0.06 & 0.04 & 622.1 & 1.63 & 0.104 \\ VisRadial & 0.14 & 0.04 & 617.0 & 3.77 & 0.000 *** \\ **Fixed effects** & & & & & \\ Loc33/66\% & -0.02 & 0.07 & 19.9 & -0.29 & 0.777 \\ Loc50/50\% & -0.04 & 0.07 & 20.0 & -0.66 & 0.514 \\ ShapeEEV & -0.05 & 0.06 & 11.8 & -0.82 & 0.427 \\ ShapeBanana & -0.09 & 0.06 & 11.8 & -1.54 & 0.150 \\ Dim6 & -0.01 & 0.05 & 11.8 & -0.23 & 0.824 \\ **Interactions** & & & & & \\ VisGrand:Loc33/66 & -0.02 & 0.06 & 588.9 & -0.29 & 0.774 \\ VisRadial:Loc33/66 & -0.12 & 0.06 & 586.5 & -2.13 & 0.033 * \\ VisGrand:Loc50/50 & -0.03 & 0.06 & 591.6 & -0.47 & 0.641 \\ VisRadial:Loc50/50 & -0.06 & 0.06 & 576.3 & -1.16 & 0.248 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The task accuracy model coefficients for \(\widehat{Y}=\alpha\times\beta+\gamma+\delta\), with visual = pca, location = 0/100%, shape = EEE, and dim = 4 held as baselines. The visual being radial is the fixed term with the strongest evidence supporting the hypothesis. Interacting with the location term, there is evidence suggesting radial performs with minimal improvement under 33/66% location mixing.
Figure 6: Illustration of how accuracy is measured. (L) Scatterplot and biplot of PC1 by PC4 of a simulated data set, illustrating cluster separation between the green circles and orange triangles. (R) Bars indicate observed cluster separation, and (red/green) lines show the accuracy of each variable if selected. The horizontal dashed line has height \(1/p\), the expected value of cluster separation. The accuracy weights equal the signed square of the difference between each variable's value and the dashed line.
subjective measures. The following were used in this study: confidence, ease of use, prior familiarity, and preference. Each of these questions was asked for each visual as a five-point Likert item.
The 84 evaluations of the post-study survey are shown in Fig. 8. The figure uses Likert plots (stacked percentage bar plots) with the associated means and 95% confidence intervals.
There was strong evidence that participants preferred the radial tour to either alternative. There was weaker evidence that the radial tour led to more confidence and was found easier to use than the grand tour. Confirming expectations, crowdsourced participants had low familiarity with all visuals, with no difference in means supported.
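A rank-sum comparison of two visuals' Likert scores takes one line per pair in **R**; the survey column names here are assumptions about the data layout, not the study's actual variables.

```r
# Sketch: rank-sum (Mann-Whitney) tests of the preference scores.
wilcox.test(survey$pref_radial, survey$pref_pca)
wilcox.test(survey$pref_radial, survey$pref_grand)
```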
## 5 Discussion
Data visualization is an integral part of understanding relationships in data and how models are fitted. When it comes to multivariate data, giving a comprehensive view quickly becomes difficult as the number of dimensions grows. Analysts have the task of choosing which visualization technique to use. Because the viewing volume and time of multivariate spaces typically increase quadratically with the dimensions, dimension reduction must be properly conducted. While there are optimization methods for static and animated visuals, the particular function used is a guided choice of the analyst.
Sect. 2 discussed various types of visualization, which may be preferred for differing tasks and ends. The visualization and perception of multivariate spaces is a broad and heterogeneous task. This work focuses on a subset of linear projections and especially sheds light on the potential benefit of providing user control in conjunction with an animated projection over many bases, as in a radial tour.
The radial tour is a method for the analyst to choose a variable to alter its contribution to the basis. The animation over small changes to the basis allows the sensitivity of the structure to be assessed from the variable contribution. The hypothesis is that user control over the basis and the permanence of observations between intermediate frames may lead to a better perception of the variable attribution causing the separation of clusters.
A mixed modeling analysis of the study provides strong support for this conclusion. That is, there is significant evidence to suggest the use of the radial tour leads to a sizable increase in accuracy. One unexpected caveat is that mixing the location of the signal at 33/66% almost completely negates this gain. Perhaps this is because the "half-clock" basis used did not give enough weight to the variable containing the small fraction. It was also interesting to note that no level of the experimental factors alone had a significant effect on this setup. Lastly, the follow-up survey asked participants to evaluate measures of the visuals. Most notably, participants preferred the radial tour to the other visuals. Knowing that the radial tour outperforms alternatives and is the preferred choice can help inform the selection of visual methods for developers and analysts.
There are several implicit limitations to this study: the task, the type of data, and the levels of the factors, to name a few. The expansion of any of these areas is conceptually simple but exponentially increases the number of participants needed to properly power the study. Additionally, the sample of crowdsourced, educated, but unfamiliar users may not extrapolate well to more experienced users. There are several ways that future work could be extended. Aside from expanding the support of the experimental factors, more exciting directions include introducing a new task, including more visualizations, and changing the experience level of the target population. It is difficult to achieve good coverage given the number of possible factors to vary.
## 6 Conclusion
This paper discussed a crowdsourced mixed design user study (\(n=108\)) comparing the efficacy of three linear projection techniques: PCA, grand tour, and radial tour. The participants performed a supervised cluster task, explicitly identifying which variables contribute to the separation of two target clusters. This was evaluated evenly over four experimental factors. In summary, mixed model regression finds strong evidence that using the radial tour sizably increases accuracy, especially when cluster separation location is not mixed at 33/66%. The effect sizes on accuracy are large relative to the change from the other
Figure 8: The subjective measures of the 84 responses of the post-study survey with five-point Likert items levels of agreement. (L) Likert plots (stacked percent bar plots) with (R) mean and 95% CI of the same measures. Participants are more confident using the radial tour and find it easier to use than the grand tour. The radial tour is the most preferred visual.
Figure 7: Accuracy of terms of the model \(\bar{Y}=\alpha\times\beta+\gamma+\delta\). Viewing the marginal accuracy of the terms corroborates the primary findings that the use of the radial tour leads to a significant increase in accuracy, at least over PCA, and this effect is particularly well supported when no location mixing is applied.
experimental factors and the random effect of data simulation, though smaller than the random effect of the participant. The radial tour was the most preferred of the three visuals.
There is no panacea for the comprehensive visualization of multivariate spaces. We have demonstrated that there is a definite value of user-control in linear projections. The agency of the analyst remains an important tool for the exploratory analysis of multivariate data.
## Acknowledgments
This research was supported by an Australian Government Research Training Program (RTP) scholarship. This article was created in **R**[36] and **markdown**[49]. Visuals were prepared with **spinifex**[42]. We thank Jieyang Chong for his help in proofreading this article. The code, response files, their analyses, and the study application are publicly available at [https://github.com/nspyrison/spinifex.study](https://github.com/nspyrison/spinifex.study). The participant instruction video can be viewed at [https://vimeo.com/712674984](https://vimeo.com/712674984).
| 主成分分析は多変量データ探索の定番手法である。主成分は元の変数の線形結合であり、分散の降順に並ぶ。最初の数成分は通常、データの良い視覚的要約を与える。ツアーも元の変数の線形射影を作るが、異なる方向からデータを眺めるように多くの異なるビューを提供する。グランドツアーは、ランダムな目標基底間の補間に従う滑らかな射影の列をアニメーションとして表示する。手動のラジアルツアーは、選択した変数の寄与を射影に出し入れするように回転させ、射影中の構造に対するその変数の重要性を評価できるようにする。本研究では、主成分分析およびグランドツアーと比較してラジアルツアーの有効性を評価する混合計画のユーザー調査を記述する。参加者には教師あり分類課題が与えられ、2つのクラス間の分離に対する変数の寄与を評価する。変数の重要性を割り当てる精度が様々な要因にわたって測定される。クラウドソーシングで募集した108名の参加者からデータを収集し、各参加者は各視覚化につき2試行、合計648試行を行った。混合モデル回帰は、ラジアルツアーが代替手法に比べて精度を大きく向上させるという強い証拠を示す。参加者はまた、他の2手法と比べてラジアルツアーを好むと報告した。 |
2309.14620 | Static black hole in minimal Horndeski gravity with Maxwell and Yang-Mills fields and some aspects of its thermodynamics | In this work we obtain a static spherically symmetric charged black hole solution in the framework of minimal Horndeski gravity with additional Maxwell and Yang-Mills fields. The obtained solution is examined; in particular, its asymptotics are studied. Thermodynamics of the black hole is investigated: we use an effective surface gravity to derive the black hole temperature, and the Wald method is applied to obtain the first law of black hole thermodynamics. We also use the extended thermodynamics approach, which allows us to derive the Smarr relation, the Gibbs free energy and the thermal equation of state. The study of thermal quantities in the extended space shows rich phase behaviour, in particular a domain where a first-order phase transition takes place and a critical point with a second-order phase transition. We also study thermal behaviour near the critical point, obtain critical exponents and analyse the Ehrenfest equations at the critical point. Finally, we calculate the Prigogine-Defay ratio, confirming the conclusion about the second-order phase transition at the critical point. | M. M. Stetsko | 2023-09-26T02:19:51 | http://arxiv.org/abs/2309.14620v1 | # Static black hole in minimal Horndeski gravity with Maxwell and Yang-Mills fields and some aspects of its thermodynamics
###### Abstract
In this work we obtain a static spherically symmetric charged black hole solution in the framework of minimal Horndeski gravity with additional Maxwell and Yang-Mills fields. The obtained solution is examined; in particular, its asymptotics are studied. Thermodynamics of the black hole is investigated: we use an effective surface gravity to derive the black hole temperature, and the Wald method is applied to obtain the first law of black hole thermodynamics. We also use the extended thermodynamics approach, which allows us to derive the Smarr relation, the Gibbs free energy and the thermal equation of state. The study of thermal quantities in the extended space shows rich phase behaviour, in particular a domain where a first-order phase transition takes place and a critical point with a second-order phase transition. We also study thermal behaviour near the critical point, obtain critical exponents and analyse the Ehrenfest equations at the critical point. Finally, we calculate the Prigogine-Defay ratio, confirming the conclusion about the second-order phase transition at the critical point.
## 1 Introduction
The recent decade has been a period of outstanding progress in observational astrophysics, first of all due to the long-awaited detection of gravitational waves, which required an experimental setup of remarkably high accuracy [1]. In general, experimental observations show astonishing agreement with the theoretical predictions made in the framework of General Relativity, which even nowadays is an exceptionally successful theory of gravity [2]. Nonetheless, despite its attractive features there are some open issues which motivate the search for alternative or more general approaches than the Einsteinian theory of gravity, approaches that would give answers to the current puzzles. Among the most perplexing questions are the existence of singularities, which provably and inevitably appear within the general relativistic framework; the Dark Energy/Dark Matter issues; and a consistent description of the early-stage evolution of the Universe.
To overcome the mentioned difficulties, various approaches have been proposed and examined, giving rise to different ways of modifying the general relativistic setting of the problem. The key features, advantages and possible difficulties of these diverse approaches are given in the thorough reviews [3, 4, 5, 6, 7]. Here we focus on Scalar-Tensor theories of gravity, namely on the so-called Horndeski gravity [8, 9], as one of the most promising approaches. We also point out that Scalar-Tensor theories of gravity may be considered a conservative approach, since their formulation follows the path usually taken in General Relativity. Scalar-Tensor theories also have a rather long history, starting from Brans-Dicke gravity established in the early 1960s [10]. The latter has gained a second renaissance since the beginning of the new century, particularly because of its tight bonds with \(F(R)\) gravity [6]. Strictly speaking, the Brans-Dicke theory is just a particular case of the general Horndeski gravity [9], but because of the specific coupling between the gravity and scalar sectors, Brans-Dicke-type theories and Horndeski gravity are often considered separately.
In his seminal paper [8] Horndeski proposed the most general four-dimensional Scalar-Tensor theory with the so-called derivative coupling between the gravity and scalar fields which gives rise to second-order field equations. Horndeski gravity got its second revival when relations with the generalized Galileon model were established [11]. The Galileons first appeared in studies of the DGP model [12]; they got their name from a specific shift symmetry, namely \(\varphi\rightarrow\varphi^{\prime}=\varphi+b_{\mu}x^{\mu}+c\) (\(b_{\nu}\), \(c\) are constants). One of the most appealing features of Horndeski gravity, related to the second order of the equations of motion, is the absence of ghosts. Moreover, the Cauchy problem is well-posed in Horndeski gravity, making it an attractive model for various applications. Even though there is a direct relation between the generalized Galileon theory and Horndeski gravity in four dimensions, a higher-dimensional generalization of Horndeski gravity has not been obtained yet [9]. Given its relation to the DGP model and the fact that Horndeski theory terms in four-dimensional space-time can be derived via dimensional reduction [13, 14], it can be claimed that Horndeski gravity, apart from its phenomenological origin, has some ties with String Theory, at least in the low-energy limit of the latter. We also point out that Horndeski gravity can be generalized to a multiscalar theory [15]; another interesting generalization is the so-called DHOST theories [16], namely theories with higher-order equations of motion but with degeneracy conditions removing the Ostrogradsky instability. Horndeski gravity has found numerous applications in cosmology, the most remarkable of which are pointed out in the review [9].
Black holes and other compact objects like neutron stars have attracted much attention since the second revival of Horndeski gravity [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]. Black hole solutions are important and useful toy models for studying various effects, especially those related to astrophysical black holes [35]. Gravity theories, including General Relativity, usually have a complicated structure; therefore, obtaining general results valid at least within a particular gravity theory might be a problem of immense difficulty, especially for theories beyond General Relativity. Black hole solutions are thus objects which allow one to derive or test the implications of a theory, and their study is a very important problem.
Black holes in Horndeski theory are known to have a nontrivial scalar field profile; in particular, the scalar field may be time dependent [9, 21] and/or have singular behaviour at the event horizon. The nontrivial profile of the scalar field significantly affects various properties of the black holes and usually requires careful study. Even though there are a lot of black hole solutions in Horndeski gravity, not much attention has been paid to the case where additional fields are taken into consideration [20, 27, 29, 31]. This can be explained by two reasons. The first is directly related to the cumbersome structure of the Horndeski theory, which gives rise to equations that are hardly tractable even under quite simple assumptions. The second, to our mind, is related to the rather general point of view that the main impact of Horndeski gravity should take place on cosmological scales, whereas compact objects, due to various screening mechanisms, should mimic general relativistic black holes at least for a distant observer. But studies of black holes with additional matter or gauge fields in Horndeski gravity not only reveal specific features caused by the particular choice of the gravity model; they may in principle give us a more general and broader view of some basic notions of black hole physics and show the range of their applicability to various gravity models.
In this paper a static black hole solution in a particular case of Horndeski theory with additional Maxwell and Yang-Mills fields is considered. As far as we know, the interplay of Horndeski gravity and a Yang-Mills field, even though both are taken in probably their simplest forms, is studied here for the first time. The Maxwell field in its standard form, as well as some of its nonlinear generalizations, has been considered in Horndeski theory [20, 27, 29, 31], whereas nonabelian fields were examined mainly within General Relativity [36, 37, 38, 39, 40, 41, 42, 43, 44, 45], or more generally in Einstein-dilaton theory [46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134]. We also take the Maxwell field into account to examine the interplay between the gauge fields in the framework of Horndeski gravity, and as we will show there is an effective "coupling" between them which appears neither in General Relativity nor in the more general Einstein-dilaton theory [47, 48]. We also pay considerable attention to various aspects of the thermodynamics of the obtained solution.
The work is organized as follows. In the following section we obtain and study a static black hole solution in Horndeski gravity with additional abelian and nonabelian gauge fields. In the third section we obtain and examine the black hole temperature. In the fourth section we use the Wald approach to derive the first law of black hole thermodynamics, obtain other thermodynamic quantities such as the entropy and the heat capacity, and examine the latter. In the fifth section we use the extended thermodynamics approach to derive the extended first law and the Smarr relation. In the sixth section we obtain the Gibbs free energy and study its behaviour. Critical behaviour in the extended approach is studied in the seventh section. Finally, the last section contains some conclusions and future prospects.
## 2 Equations of motion for the theory with nonminimal derivative coupling and a static black hole solution
General Horndeski gravity gives rise to complicated equations which, even for geometries with high symmetry, are difficult to handle; therefore we consider one of its simplest particular cases, which nevertheless inherits the distinctive feature of general Horndeski gravity, namely the specific derivative coupling between gravity and the additional scalar field. Similarly to general Horndeski gravity, the equations of motion are of the second order, making the theory free from the Ostrogradsky instability. We also take into account some gauge fields, namely both abelian (electromagnetic) and nonabelian ones, which are minimally coupled to gravity. The action for our system can be written in the form:
\[S=\frac{1}{16\pi}\int d^{n+1}x\sqrt{-g}\left(R-2\Lambda-\frac{1}{2}\left( \alpha g^{\mu\nu}-\eta G^{\mu\nu}\right)\partial_{\mu}\varphi\partial_{\nu} \varphi-\mbox{Tr}(F^{(a)}_{\mu\nu}F^{(a)\mu\nu})-{\cal F}_{\mu\nu}{\cal F}^{ \mu\nu}\right)+S_{GHY}, \tag{1}\]
where \(g_{\mu\nu}\) and \(g\) are the metric tensor and its determinant respectively, \(R\) and \(G_{\mu\nu}\) are the Ricci scalar and the Einstein tensor correspondingly, \(\varphi\) is the scalar field with coupling constants \(\alpha\) and \(\eta\), and finally \(F^{(a)}_{\mu\nu}\) and \({\cal F}_{\mu\nu}\) are the field strengths of the nonabelian and abelian fields respectively. We note that since there is no potential for the scalar field in the action (1), the functions we obtain and analyse depend on the ratio of the coupling parameters \(\alpha/\eta\); therefore only one of them can be treated as a variable parameter, but here we keep both in order to consider some limit cases. We also point out that the \(S_{GHY}\) term in the action (1) denotes the so-called boundary Gibbons-Hawking-York term, which makes the variational problem well-defined. For this theory with nonminimal derivative coupling the Gibbons-Hawking-York term can be written in the form:
\[S_{GHY}=\frac{1}{8\pi}\int d^{n}x\sqrt{|h|}\left(K+\frac{\eta}{4}\left[\nabla ^{\mu}\varphi\nabla^{\nu}\varphi K_{\mu\nu}+(n^{\mu}n^{\nu}\nabla_{\mu}\varphi \nabla_{\nu}\varphi+(\nabla\varphi)^{2})K\right]\right), \tag{2}\]
where \(h\) is the determinant of the boundary metric \(h_{\mu\nu}\), \(K_{\mu\nu}\) and \(K\) denote the extrinsic curvature tensor and its trace correspondingly and finally \(n^{\mu}\) is the vector normal to the boundary hypersurface.
We point out here that the field tensors for the gauge fields are defined in the standard way, namely for the Yang-Mills field we write:
\[F^{(a)}_{\mu\nu}=\partial_{\mu}A^{(a)}_{\nu}-\partial_{\nu}A^{(a)}_{\mu}+\frac {1}{\bar{\sigma}}C^{(a)}_{(b)(c)}A^{(b)}_{\mu}A^{(c)}_{\nu}, \tag{3}\]
where \(A^{(a)}_{\mu}\) is the Yang-Mills potential, \(\bar{\sigma}\) is the coupling constant for nonabelian field and \(C^{(a)}_{(b)(c)}\) are the structure constants for corresponding gauge group. In this work the gauge group is chosen to be the special orthogonal one \(SO(n)\).
The Maxwell field tensor is defined in the standard fashion:
\[{\cal F}_{\mu\nu}=\partial_{\mu}{\cal A}_{\nu}-\partial_{\nu}{\cal A}_{\mu}, \tag{4}\]
and here \({\cal A}_{\mu}\) is the Maxwell field potential.
To obtain equations of motion for the system given by the action (1) the least action principle is used. For gravitational part we can write:
\[{\cal E}_{\mu\nu}:=G_{\mu\nu}+\Lambda g_{\mu\nu}-\left(\frac{1}{2}(\alpha T^{(1)} _{\mu\nu}+\eta T^{(2)}_{\mu\nu})+T^{(3)}_{\mu\nu}+T^{(4)}_{\mu\nu}\right)=0, \tag{5}\]
where we have:
\[T^{(1)}_{\mu\nu}=\nabla_{\mu}\varphi\nabla_{\nu}\varphi-\frac{1}{2}g_{\mu\nu} \nabla^{\lambda}\varphi\nabla_{\lambda}\varphi, \tag{6}\]
\[T^{(2)}_{\mu\nu}=\frac{1}{2}\nabla_{\mu}\varphi\nabla_{\nu}\varphi R-2\nabla^{ \lambda}\varphi\nabla_{\nu}\varphi R_{\lambda\mu}+\frac{1}{2}\nabla^{\lambda} \varphi\nabla_{\lambda}\varphi G_{\mu\nu}-g_{\mu\nu}\left(-\frac{1}{2}\nabla_ {\lambda}\nabla_{\kappa}\varphi\nabla^{\lambda}\nabla^{\kappa}\varphi\right.\]
\[\left.+\frac{1}{2}(\nabla^{2}\varphi)^{2}-R_{\lambda\kappa}\nabla^{\lambda} \varphi\nabla^{\kappa}\varphi\right)-\nabla_{\mu}\nabla^{\lambda}\varphi \nabla_{\nu}\nabla_{\lambda}\varphi+\nabla_{\mu}\nabla_{\nu}\varphi\nabla^{ 2}\varphi-R_{\lambda\mu\kappa\nu}\nabla^{\lambda}\varphi\nabla^{\kappa}\varphi, \tag{7}\]
\[T^{(3)}_{\mu\nu}=2{\rm Tr}\left(F^{(a)}_{\mu\lambda}F^{(a)\lambda}_{\nu}\right) -\frac{g_{\mu\nu}}{2}{\rm Tr}\left(F^{(a)}_{\lambda\kappa}F^{(a)\lambda\kappa} \right), \tag{8}\]
\[T^{(4)}_{\mu\nu}=2{\cal F}_{\mu\lambda}{\cal F}_{\nu}{}^{\lambda}-\frac{g_{\mu \nu}}{2}{\cal F}_{\lambda\kappa}{\cal F}^{\lambda\kappa}. \tag{9}\]
It is clear that on the right-hand side of the equation (5) there are the stress-energy tensors for the scalar and gauge fields, given by the relations (6)-(9) above.
The least action principle also allows us to obtain equations of motion for the scalar and the gauge fields. For the scalar field \(\varphi\) we arrive at the following equation:
\[{\cal E}_{\varphi}:=(\alpha g_{\mu\nu}-\eta G_{\mu\nu})\nabla^{\mu}\nabla^{ \nu}\varphi=0. \tag{10}\]
For the Yang-Mills field we obtain:
\[{\cal E}_{A}{}^{(a)\nu}:=\nabla_{\mu}(F^{(a)\mu\nu})+\frac{1}{\bar{\sigma}}C^ {(a)}_{(b)(c)}A^{(b)}_{\mu}F^{(c)\mu\nu}=0. \tag{11}\]
Finally, for the abelian gauge field the standard Maxwell equations can be derived:
\[{\cal E}_{A}{}^{\nu}:=\nabla_{\mu}{\cal F}^{\mu\nu}=0. \tag{12}\]
Here we are going to obtain a static black hole solution; therefore the metric can be written in the following general form:
\[ds^{2}=-U(r)dt^{2}+W(r)dr^{2}+r^{2}d\Omega^{2}_{(n-1)}, \tag{13}\]
where \(\Omega^{2}_{(n-1)}\) represents the element of length of a unit \((n-1)\)-dimensional hypersphere and the metric functions \(U(r)\) and \(W(r)\) will be obtained from the equations of motion. We also point out that in the present work we assume \(n\geqslant 3\).
For a static electrically charged solution the gauge potential for the Maxwell (abelian) field can be chosen in the form \({\cal A}={\cal A}_{0}(r)dt\). From the Maxwell equations (12) we derive immediately that the electromagnetic field takes the form:
\[{\cal F}_{rt}=\frac{q}{r^{n-1}}\sqrt{UW} \tag{14}\]
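As a quick cross-check of (14), note that for the metric (13) one has \(\sqrt{-g}\propto\sqrt{UW}\,r^{n-1}\) up to angular factors, so \(\sqrt{-g}\,{\cal F}^{rt}\) must be constant. A minimal sympy sketch of this check (ours, not from the paper; the metric functions are kept generic):

```python
# Sketch (ours): verify that F_rt = q r^{1-n} sqrt(UW), eq. (14), solves the
# source-free Maxwell equation d_r(sqrt(-g) F^{rt}) = 0 for the metric (13).
import sympy as sp

r, q, n = sp.symbols('r q n', positive=True)
U = sp.Function('U')(r)          # metric functions kept generic
W = sp.Function('W')(r)

F_rt = q * r**(1 - n) * sp.sqrt(U * W)      # eq. (14)
F_up = -F_rt / (U * W)                      # F^{rt} = g^{rr} g^{tt} F_rt
sqrt_g = sp.sqrt(U * W) * r**(n - 1)        # radial part of sqrt(-g)

print(sp.simplify(sp.diff(sqrt_g * F_up, r)))   # prints 0
```

Indeed, the product collapses to the constant \(-q\), so the derivative vanishes identically.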
It is known that the so-called Wu-Yang ansatz [36, 38, 41, 42, 46], being one of the simplest possible choices satisfying the Yang-Mills equations (11), has allowed various solutions to be derived in pure Yang-Mills theory and, when gravity is taken into account, leads to nontrivial black hole solutions. Therefore, the nonabelian gauge potential is taken in the following form:
\[{\bf A}^{(a)}=\frac{\bar{q}}{r^{2}}C^{(a)}_{(i)(j)}x^{i}dx^{j},\quad r^{2}= \sum_{j=1}^{n}x^{2}_{j}, \tag{15}\]
Here we point out that, in order to satisfy the equations of motion (11), we impose \(\bar{q}=\bar{\sigma}\); for simplicity, auxiliary Cartesian coordinates \(x^{i}\) are used, and the indices \(a,i,j\) take the following values: \(1\leqslant a\leqslant n(n-1)/2\), \(2\leqslant j+1<i\leqslant n\). The relations between the coordinates \(x^{i}\) and the angular variables of a spherical coordinate system are standard:
\[x_{1}=r\cos\chi_{n-1}\sin\chi_{n-2}\ldots\sin\chi_{1}, x_{2}=r\sin\chi_{n-1}\sin\chi_{n-2}\ldots\sin\chi_{1},\] \[x_{3}=r\cos\chi_{n-2}\sin\chi_{n-3}\ldots\sin\chi_{1}, x_{4}=r\sin\chi_{n-2}\sin\chi_{n-3}\ldots\sin\chi_{1},\] \[\ldots\] \[x_{n}=r\cos\chi_{1}, \tag{16}\]
and the angular variables \(\chi_{i}\) have typical ranges of variation, namely for \(1\leqslant i\leqslant n-2\) we have \(0\leqslant\chi_{i}\leqslant\pi\) and \(0\leqslant\chi_{n-1}<2\pi\). Using the angular variables we can also represent the length element for the unit sphere:
\[d\Omega_{n-1}^{2}=d\chi_{1}^{2}+\sum_{j=2}^{n-1}\prod_{i=1}^{j-1}\sin^{2}\chi _{i}d\chi_{j}^{2}. \tag{17}\]
The gauge potential (15) might be rewritten in terms of angular variables, but its explicit form would not be as simple as for the Cartesian ones. Using the relation (3) one can calculate the gauge field \(F_{\mu\nu}^{(a)}\) and check that the equations of motion (11) are satisfied. The invariant for the Yang-Mills potential can be calculated, namely we arrive at:
\[\mbox{Tr}(F_{\rho\sigma}^{(a)}F^{(a)\rho\sigma})=(n-1)(n-2)\frac{\bar{q}^{2}} {r^{4}}. \tag{18}\]
Using the metric ansatz (13) and taking into account the gauge field tensors and their invariants we can write the equations (5) in the following form:
\[\frac{(n-1)}{2rW}\left(\frac{W^{\prime}}{W}+\frac{(n-2)}{r}(W-1) \right)\left(1+\frac{3}{4}\eta\frac{(\varphi^{\prime})^{2}}{W}\right)-\Lambda =\frac{\alpha}{4W}(\varphi^{\prime})^{2}+\] \[\frac{\eta}{2}\left(\frac{(n-1)(n-2)}{r^{2}W^{2}}\left(W-\frac{1 }{2}\right)(\varphi^{\prime})^{2}+\frac{(n-1)}{rW^{2}}\varphi^{\prime\prime} \varphi^{\prime}\right)+\frac{q^{2}}{r^{2(n-1)}}+\frac{(n-1)(n-2)\bar{q}^{2} }{2r^{4}}, \tag{19}\]
\[\frac{(n-1)}{2rW}\left(\frac{U^{\prime}}{U}-\frac{(n-2)}{r}(W-1) \right)\left(1+\frac{3}{4}\eta\frac{(\varphi^{\prime})^{2}}{W}\right)+\Lambda=\] \[\frac{\alpha}{4W}(\varphi^{\prime})^{2}-\frac{\eta(n-1)(n-2)}{4r^ {2}W}(\varphi^{\prime})^{2}-\frac{q^{2}}{r^{2(n-1)}}-\frac{(n-1)(n-2)\bar{q}^ {2}}{2r^{4}}, \tag{20}\]
where prime denotes the derivative with respect to \(r\).
The equation for the scalar field (10) can also be integrated at least once; as a result we obtain:
\[\sqrt{\frac{U}{W}}r^{n-1}\left[\alpha-\eta\frac{(n-1)}{2rW}\left(\frac{U^{ \prime}}{U}-\frac{(n-2)}{r}(W-1)\right)\right]\varphi^{\prime}=C, \tag{21}\]
where \(C\) is an integration constant. The latter relation might be used to express the derivative \(\varphi^{\prime}\) in terms of the metric functions, their derivatives and the radius \(r\), but in this general case the corresponding relation for \(\varphi^{\prime}\) turns out to have a complicated form which would be difficult to operate with. Therefore, for simplicity of the following calculations we set \(C=0\); even though this condition yields just a particular solution, it is quite nontrivial and worth studying. The condition \(C=0\) is equivalent to the following constraint:
\[\alpha g_{rr}-\eta G_{rr}=0. \tag{22}\]
Here we point out that the same condition (22) was used in our earlier works [29, 31], as well as in papers of other authors where black holes with nonminimal derivative coupling were studied [17, 18, 19].
Now the equations (19)-(20) can be solved together with the upper relation (22). As a result we obtain:
\[\left(\varphi^{\prime}\right)^{2}=-\frac{4r^{2}W}{2\alpha r^{2}+\eta(n-1)(n-2) }\left(\Lambda+\frac{\alpha}{\eta}+q^{2}r^{2(1-n)}+\frac{(n-1)(n-2)}{2}\bar{q }^{2}r^{-4}\right); \tag{23}\]
\[UW=\frac{\left((\alpha-\Lambda\eta)r^{2}+\eta(n-1)(n-2)-\eta q^{2}r^{2(2-n)}- \eta(n-1)(n-2)\bar{q}^{2}r^{-2}/2\right)^{2}}{(2\alpha r^{2}+\eta(n-1)(n-2))^{2 }}. \tag{24}\]
The square of the derivative \(\varphi^{\prime}\) has to be positive outside of the black hole, which can be achieved if some conditions on the parameters \(\alpha\), \(\eta\), \(\Lambda\), \(q\) and \(\bar{q}\) are imposed. For instance, when both parameters \(\alpha\) and \(\eta\) are positive, the cosmological constant \(\Lambda\) should be negative to provide positivity of \((\varphi^{\prime})^{2}\) in the outer domain. A similar conclusion is inferred if we impose \(\alpha>0\) and \(\eta<0\).
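This positivity condition is easy to probe numerically. The short sketch below (ours, with purely illustrative parameter values \(\alpha,\eta>0\) and \(\Lambda<0\)) evaluates \((\varphi^{\prime})^{2}/W\) from (23) on a grid of radii outside a nominal horizon:

```python
# Numeric sanity check (ours) of the positivity of (phi')^2 / W from eq. (23)
# for alpha, eta > 0 and Lambda < 0; parameter values are illustrative only.
import numpy as np

n, alpha, eta, Lam, q, qbar = 3, 0.2, 0.4, -2.0, 0.2, 0.2

def phi_prime_sq_over_W(r):
    pref = -4.0 * r**2 / (2 * alpha * r**2 + eta * (n - 1) * (n - 2))
    bracket = (Lam + alpha / eta + q**2 * r**(2 * (1 - n))
               + 0.5 * (n - 1) * (n - 2) * qbar**2 / r**4)
    return pref * bracket

rs = np.linspace(0.5, 10.0, 100)
print(bool(np.all(phi_prime_sq_over_W(rs) > 0)))   # True in this range
```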
Finally, the metric function \(U(r)\) can be written in the following form:
\[U(r)=1-\frac{\mu}{r^{n-2}}-\frac{2\Lambda}{n(n-1)}r^{2}-\frac{(n- 2)}{(n-4)}\frac{\bar{q}^{2}}{r^{2}}+\frac{2q^{2}}{(n-1)(n-2)}r^{2(2-n)}+\frac{ 1}{2\alpha\eta(n-1)r^{n-2}}\times\] \[\left((\alpha+\Lambda\eta)^{2}\int\frac{r^{n+1}}{r^{2}+d^{2}}dr+ \eta^{2}q^{4}\int\frac{r^{5-3n}}{r^{2}+d^{2}}dr+2\eta(\alpha+\Lambda\eta)q^{2 }\int\frac{r^{3-n}}{r^{2}+d^{2}}dr+(n-1)(n-2)\times\right.\] \[\left.\eta\bar{q}^{2}\left((\alpha+\Lambda\eta)\int\frac{r^{n-3}} {r^{2}+d^{2}}dr+\eta q^{2}\int\frac{r^{-(n+1)}}{r^{2}+d^{2}}dr+\frac{\eta}{4}( n-1)(n-2)\bar{q}^{2}\int\frac{r^{n-7}}{r^{2}+d^{2}}dr\right)\right), \tag{25}\]
where \(d^{2}=\eta(n-1)(n-2)/2\alpha\). Indefinite integrals are used in the above relation because of some peculiarities for even and odd dimensions of space \(n\). We also point out that the fourth term in the function \(U(r)\) (25) acquires an additional logarithmic factor (\(\sim\ln r\)) if \(n=4\).
To obtain the explicit form of the metric function (25) all the integrals in (25) should be rewritten in terms of corresponding functions. Due to some subtleties for even and odd dimensions of space we write:
\[\int\frac{r^{m}}{r^{2}+d^{2}}dr=\sum_{j=0}^{(m-2)/2}(-1)^{j}d^{2j}\frac{r^{m-2 j-1}}{m-2j-1}+(-1)^{\frac{m}{2}}d^{m-1}\arctan\left(\frac{r}{d}\right), \tag{26}\]
and here \(m\) is a positive even number. If \(m\) is a positive odd number the latter integral might be written in the form:
\[\int\frac{r^{m}}{r^{2}+d^{2}}dr=\sum_{j=0}^{(m-3)/2}(-1)^{j}d^{2j}\frac{r^{m- 2j-1}}{m-2j-1}+(-1)^{\frac{m-1}{2}}\frac{d^{m-1}}{2}\ln\left(1+\frac{r^{2}}{d ^{2}}\right), \tag{27}\]
and if in the latter relation \(m=1\) there is just a logarithmic contribution. There are also integrals of the form:
\[\int\frac{r^{-m}}{r^{2}+d^{2}}dr=\sum_{j=0}^{(m-2)/2}(-1)^{j}\frac{r^{1+2j-m} }{(1+2j-m)d^{2(j+1)}}+\frac{(-1)^{\frac{m}{2}}}{d^{m+1}}\arctan\left(\frac{r}{ d}\right), \tag{28}\]
where \(m\) is a positive even integer, while in the case of an odd integer the above integral takes the following form:
\[\int\frac{r^{-m}}{r^{2}+d^{2}}dr=\sum_{j=0}^{(m-3)/2}(-1)^{j}\frac{r^{1+2j-m}} {(1+2j-m)d^{2(j+1)}}+\frac{(-1)^{\frac{m+1}{2}}}{2d^{m+1}}\ln\left(1+\frac{d^{ 2}}{r^{2}}\right). \tag{29}\]
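The closed forms (26)-(29) are straightforward to verify symbolically; for instance, the following sympy sketch (ours) differentiates the right-hand side of (26) for a sample even \(m\) and recovers the integrand:

```python
# Sketch (ours): check the closed form (26) for a sample even m by
# differentiating it and comparing with the integrand r^m / (r^2 + d^2).
import sympy as sp

r, d = sp.symbols('r d', positive=True)
m = 4
closed = sum((-1)**j * d**(2 * j) * r**(m - 2 * j - 1) / (m - 2 * j - 1)
             for j in range((m - 2) // 2 + 1)) \
    + (-1)**(m // 2) * d**(m - 1) * sp.atan(r / d)

print(sp.simplify(sp.diff(closed, r) - r**m / (r**2 + d**2)))  # prints 0
```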
Having used the integrals written above, we can write the explicit form of the metric function \(U(r)\) (25). In particular, for odd \(n\) we obtain:
\[U(r)=1-\frac{\mu}{r^{n-2}}-\frac{2\Lambda}{n(n-1)}r^{2}-\frac{(n-2 )}{(n-4)}\frac{\bar{q}^{2}}{r^{2}}+\frac{2q^{2}}{(n-1)(n-2)}r^{2(2-n)}+\frac{1} {2\alpha\eta(n-1)}\times\] \[\left[(\alpha+\Lambda\eta)^{2}\left(\sum_{j=0}^{\frac{n-1}{2}}(-1 )^{j}d^{2j}\frac{r^{2(1-j)}}{n-2j}+(-1)^{\frac{n+1}{2}}\frac{d^{n}}{r^{n-2}} \arctan\left(\frac{r}{d}\right)\right)+2\eta(\alpha+\Lambda\eta)q^{2}\left( \sum_{j=0}^{\frac{n-5}{2}}\frac{(-1)^{j}r^{6-2n+2j}}{(4-n+2j)d^{2(j+1)}}+\right.\right.\] \[\left.\left.\frac{(-1)^{\frac{n-3}{2}}}{d^{n-2}r^{n-2}}\arctan \left(\frac{r}{d}\right)\right)+\eta^{2}q^{4}\left(\sum_{j=0}^{\frac{3n-7}{2} }\frac{(-1)^{j}r^{2(4+j-2n)}}{(6+2j-3n)d^{2(j+1)}}+\frac{(-1)^{\frac{3n-5}{2}} }{d^{3n-4}r^{n-2}}\arctan\left(\frac{r}{d}\right)\right)+\eta(n-1)(n-2)\bar{q }^{2}\times\] \[\left.\left((\alpha+\Lambda\eta)\left(\sum_{j=0}^{\frac{n-5}{2}} \frac{(-1)^{j}d^{2j}r^{-2(1+j)}}{n-4-2j}+(-1)^{\frac{n-3}{2}}\frac{d^{n-4}}{r ^{n-2}}\arctan\left(\frac{r}{d}\right)\right)+\eta q^{2}\left(\sum_{j=0}^{ \frac{n-1}{2}}\frac{(-1)^{j}r^{2(1+j-n)}}{(2j-n)d^{2(j+1)}}+\right.\right.\] \[\left.\left.+\frac{(-1)^{\frac{n+1}{2}}}{d^{n+2}r^{n-2}}\arctan \left(\frac{r}{d}\right)\right)+\eta(n-1)(n-2)\frac{\bar{q}^{2}}{4}\left( \sum_{j=0}^{\frac{5-n}{2}}\frac{(-1)^{j}r^{2(j-2)}}{(2j+n-6)d^{2(j+1)}}+\frac{ (-1)^{\frac{7-n}{2}}}{d^{8-n}r^{n-2}}\arctan\left(\frac{r}{d}\right)\right) \right)\right]. \tag{30}\]
It should be pointed out that the above relation, namely its last sum, is valid when \(n<7\). If \(n>7\), then in the last integral in the relation (25) we have \(n-7>0\), and therefore instead of the relation (28) one should use the relation (26).
For even \(n\) we write:
\[U(r)=1-\frac{\mu}{r^{n-2}}-\frac{2\Lambda}{n(n-1)}r^{2}-\frac{(n- 2)}{(n-4)}\frac{\bar{q}^{2}}{r^{2}}+\frac{2q^{2}}{(n-1)(n-2)}r^{2(2-n)}+\frac{ 1}{2\alpha\eta(n-1)}\times\] \[\left[(\alpha+\Lambda\eta)^{2}\left(\sum_{j=0}^{\frac{n-2}{2}}(-1 )^{j}d^{2j}\frac{r^{2(1-j)}}{n-2j}+(-1)^{\frac{n}{2}}\frac{d^{n}}{2r^{n-2}} \ln\left(1+\frac{r^{2}}{d^{2}}\right)\right)+2\eta(\alpha+\Lambda\eta)q^{2} \left(\sum_{j=0}^{\frac{n-6}{2}}\frac{(-1)^{j}r^{6-2n+2j}}{(4-n+2j)d^{2(j+1)}}+\right.\right.\] \[\left.\left.\frac{(-1)^{\frac{n-2}{2}}}{2(dr)^{n-2}}\ln\left(1+ \frac{d^{2}}{r^{2}}\right)\right)+\eta^{2}q^{4}\left(\sum_{j=0}^{\frac{3n-8}{2 }}\frac{(-1)^{j}r^{2(4+j-2n)}}{(6+2j-3n)d^{2(j+1)}}+\frac{(-1)^{\frac{3n-4}{2} }}{2d^{3n-4}r^{n-2}}\ln\left(1+\frac{d^{2}}{r^{2}}\right)\right)+\eta(n-1)(n-2 )\bar{q}^{2}\times\] \[\left.\left((\alpha+\Lambda\eta)\left(\sum_{j=0}^{\frac{n-6}{2}} \frac{(-1)^{j}d^{2j}r^{-2(1+j)}}{n-4-2j}+(-1)^{\frac{n-2}{2}}\frac{d^{n-4}}{2r ^{n-2}}\ln\left(1+\frac{r^{2}}{d^{2}}\right)\right)+\eta q^{2}\left(\sum_{j=0}^ {\frac{n-2}{2}}\frac{(-1)^{j}r^{2(1+j-n)}}{(2j-n)d^{2(j+1)}}+\right.\right.\] \[\left.\left.+\frac{(-1)^{\frac{n+2}{2}}}{2d^{n+2}r^{n-2}}\ln \left(1+\frac{d^{2}}{r^{2}}\right)\right)+\eta(n-1)(n-2)\frac{\bar{q}^{2}}{4} \left(\sum_{j=0}^{\frac{n-10}{2}}\frac{(-1)^{j}d^{2j}r^{-2(j+3)}}{n-8-2j}+(-1) ^{\frac{n-8}{2}}\frac{d^{n-8}}{2r^{n-2}}\ln\left(1+\frac{r^{2}}{d^{2}}\right) \right)\right)\right]. \tag{31}\]
It should be emphasized that, in contrast with the relation (30), the above relation is written for the case \(n>7\). If \(n<7\) for even \(n\), then to calculate the last integral in the relation (25) one should use the relation (29) instead of (27), which is taken in (31). Due to special interest in lower dimensions, we also write explicit forms of the metric functions for \(n=3\) and \(n=4\). Namely, if \(n=3\) the metric function \(U(r)\) takes the following form:
\[U(r)=1-\frac{\mu}{r}-\frac{\Lambda}{3}r^{2}+\frac{q^{2}+\bar{q}^{2}}{r^{2}}+\frac{1}{4\alpha\eta}\left[(\alpha+\Lambda\eta)^{2}\left(\frac{r^{2}}{3}-d^{2}\right)+\frac{\eta^{2}(q^{2}+\bar{q}^{2})^{2}}{d^{2}r^{2}}\times\right.\] \[\left.\left(\frac{1}{d^{2}}-\frac{1}{3r^{2}}\right)+\left((\alpha+\Lambda\eta)d+\frac{\eta(q^{2}+\bar{q}^{2})}{d^{3}}\right)^{2}\frac{d}{r}\arctan\left(\frac{r}{d}\right)\right]. \tag{32}\]
Here we would like to point out that in the three-dimensional case (\(n=3\)) the contributions of the abelian electric and nonabelian magnetic fields are completely identical to each other. For the case \(n=4\) the metric function \(U(r)\) is as follows:
\[U(r)=1-\frac{\mu}{r^{2}}-\frac{\Lambda}{6}r^{2}+\frac{q^{2}}{3r^{ 4}}-2\frac{\bar{q}^{2}}{r^{2}}\ln\left(\frac{r}{d}\right)+\frac{1}{6\alpha\eta }\left[\frac{(\alpha+\Lambda\eta)^{2}}{2}\left(\frac{r^{2}}{2}-d^{2}\right)+ \left((\alpha+\Lambda\eta)\frac{d^{4}}{2}+3\eta\bar{q}^{2}\right)\times\right.\] \[\left.\frac{(\alpha+\Lambda\eta)}{r^{2}}\ln\left(1+\frac{r^{2}}{d ^{2}}\right)+\frac{\eta^{2}q^{4}}{2d^{2}r^{4}}\left(-\frac{1}{3r^{4}}+\frac{1} {2(dr)^{2}}-\frac{1}{d^{4}}\right)+3\frac{\eta^{2}q^{2}\bar{q}^{2}}{d^{2}r^{4} }\left(\frac{1}{d^{2}}-\frac{1}{2r^{2}}\right)-\frac{9\eta^{2}\bar{q}^{4}}{2d ^{2}r^{4}}+\right.\] \[\left.\left(\frac{\eta^{2}}{2d^{2}}\left(\frac{q^{2}}{d^{2}}-3 \bar{q}^{2}\right)^{2}-\eta(\alpha+\Lambda\eta)q^{2}\right)\frac{1}{d^{2}r^{2} }\ln\left(1+\frac{d^{2}}{r^{2}}\right)\right]. \tag{33}\]
Even though the explicit expressions for the metric functions (30) and (31), as well as their particular cases (32) and (33) respectively, are rather cumbersome, some important conclusions about their behaviour can be derived relatively easily. First, for both parities of the dimension, the behaviour of the metric function \(U(r)\) at large distances is asymptotically of AdS type if both coupling parameters \(\alpha\) and \(\eta\) are positive (of the same sign), namely we can write:
\[U\simeq\frac{(\alpha-\Lambda\eta)^{2}}{2n(n-1)\alpha\eta}r^{2}=\frac{\eta\left( \alpha/\eta-\Lambda\right)^{2}}{2n(n-1)\alpha}r^{2}. \tag{34}\]
Since the gauge fields give decaying terms in the far zone, it is natural that there is an anti-de Sitter term which shows the leading behaviour as \(r\rightarrow\infty\); similar results were obtained when a nonlinear electromagnetic field was taken into account [29, 31].

If the radius \(r\) becomes very small (\(r\to 0\)) the metric function \(U(r)\) shows singular behaviour which in the general case is determined by the electromagnetic field part. Namely, when \(r\to 0\) we can write:
\[U(r)\simeq-\frac{q^{4}}{3(n-1)^{2}(n-2)^{2}}r^{4(2-n)}. \tag{35}\]
Figure 1: Metric functions \(U(r)\) for various dimensions \(n\) (the left graph) and different values of the electric charge \(q\) (the right graph). For both graphs we have \(\Lambda=-2\), \(\alpha=0.2\), \(\eta=0.4\), \(\bar{q}=0.2\). For the left graph we have \(q=0.2\) and solid and dashed lines correspond to \(n=3\) and \(n=4\) respectively. For the right graph we have taken \(n=3\) and solid, dashed and dotted curves correspond to \(q=0.2\), \(q=0.4\) and \(q=0.8\) respectively.

Therefore, the leading term for \(r\to 0\) is related just to the Maxwell field, which is rather expected; but we point out that this asymptotic appears because of the interplay of the gauge field term with a specific Horndeski theory influence, even though the asymptotic (35) does not show explicit dependence on the coupling parameters \(\alpha\) and \(\eta\). We point out that the Yang-Mills terms alone, or the terms where the effective coupling between the Maxwell and Yang-Mills fields is taken into account, show less singular behaviour in comparison with the term (35) in the limit \(r\to 0\) if \(n>3\). For \(n=3\) both gauge fields give rise to contributions of the same order, which is clearly seen from the explicit form of the metric function for this case (32). Namely, in this case we have:
\[U(r)\simeq-\frac{(q^{2}+\bar{q}^{2})^{2}}{12r^{4}}. \tag{36}\]
One of the most important conclusions follows from the negative signs of the asymptotic expressions (35) and (36): the singular behaviour of the metric function \(U(r)\) as \(r\to 0\) is more similar to that of the Schwarzschild black hole than to the Reissner-Nordstrom one, as might have been expected. This character of the behaviour of the metric function is also clearly reflected in the graphs of \(U(r)\) given in Figure 1. The second graph of Figure 1 also implies that, apart from the event horizon, namely the point where the function \(U(r)\) crosses the horizontal axis, additional inner horizons may appear if one increases the electric charge \(q\); the same may occur if the parameter \(\bar{q}\) goes up, but a detailed analysis of this issue will be given elsewhere. The other important conclusion, directly related to the features mentioned above, is that for any charge \(q\) or \(\bar{q}\) a naked singularity never occurs, in contrast to General Relativity, where it usually appears if the charge of a black hole increases while its other parameters are held fixed.
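These statements can be illustrated numerically. The sketch below (ours, with illustrative parameter values) evaluates the \(n=3\) metric function (32), checks that it approaches the asymptote (36) as \(r\to 0\), and locates a zero of \(U(r)\) with a standard root finder:

```python
# Numeric sketch (ours): the n = 3 metric function (32) approaches the
# small-r asymptote (36), and U -> -infinity as r -> 0, so a horizon
# exists here (no naked singularity). Parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

mu, Lam, alpha, eta, q, qbar = 1.0, -2.0, 0.2, 0.4, 0.2, 0.2
d = np.sqrt(eta / alpha)        # d^2 = eta (n-1)(n-2)/(2 alpha) at n = 3
Q2 = q**2 + qbar**2
A = alpha + Lam * eta

def U(r):                       # eq. (32)
    return (1 - mu / r - Lam * r**2 / 3 + Q2 / r**2
            + (A**2 * (r**2 / 3 - d**2)
               + eta**2 * Q2**2 / (d**2 * r**2) * (1 / d**2 - 1 / (3 * r**2))
               + (A * d + eta * Q2 / d**3)**2 * (d / r) * np.arctan(r / d))
            / (4 * alpha * eta))

for r in (0.05, 0.02, 0.01):
    print(r, U(r) / (-Q2**2 / (12 * r**4)))   # ratio tends to 1 as r -> 0
print("U vanishes at r ~", brentq(U, 0.05, 5.0))
```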
We also briefly examine the particular case \(\alpha=0\), namely when only the derivative coupling between gravity and the scalar field is considered. The particular solution for \(\alpha=0\) is substantially simpler than the general one examined above. Namely, for the squared derivative \((\varphi^{\prime})^{2}\) and the product of the metric functions we obtain:
\[(\varphi^{\prime})^{2}=\frac{4r^{2}W}{\eta(n-1)(n-2)}\left( \Lambda+q^{2}r^{2(1-n)}+\frac{1}{2}(n-1)(n-2)\bar{q}^{2}r^{-4}\right), \tag{37}\] \[UW=\left(1-\frac{q^{2}}{(n-1)(n-2)}r^{2(2-n)}-\frac{\bar{q}^{2}} {2r^{2}}\right)^{2}. \tag{38}\]
The metric function \(U(r)\) can be written in the form:
\[U(r)=1-\frac{\mu}{r^{n-2}}-\frac{2\Lambda}{(n-1)(n-2)}r^{2}- \frac{(n-2)}{(n-4)}\frac{\bar{q}^{2}}{r^{2}}+\frac{2q^{2}}{(n-1)(n-2)}r^{2(2-n )}+\\ \frac{\Lambda^{2}}{(n-1)^{2}(n^{2}-4)}r^{4}-\frac{q^{4}}{3(n-1)^{ 2}(n-2)^{2}}r^{4(2-n)}-\frac{2\Lambda q^{2}}{(n-1)^{2}(n-2)(n-4)}r^{2(3-n)}\\ +\frac{\Lambda\bar{q}^{2}}{(n-1)(n-2)}-\frac{q^{2}\bar{q}^{2}}{n( n-1)}r^{2(1-n)}+\frac{(n-2)}{4(n-6)}\frac{\bar{q}^{4}}{r^{4}}. \tag{39}\]
It should be emphasized that in (39) we impose \(n\neq 4\) and \(n\neq 6\); if for instance \(n=4\), the fourth term in the upper row and the third term in the middle one acquire an additional logarithmic factor (\(\sim\ln r\)), and if \(n=6\) this factor appears in the last term of the bottom row, but in both cases it does not drastically change the qualitative behaviour of the metric function \(U(r)\). We would like to note that for the particular case \(\alpha=0\) neither the product \(UW\) nor the function \(U(r)\) depends on the parameter \(\eta\). We point out that if \(r\rightarrow\infty\) the leading term of the metric function is of the order \(\sim\Lambda^{2}r^{4}\), which is suppressed if \(\alpha\neq 0\); since this term is always positive, it gives rise to the conclusion that there is no cosmological horizon for any sign of the cosmological constant. If \(r\to 0\) the leading term of the metric (39) is the same as in the general case, namely (35), and to some extent this is expected, since at small distances the metric is mainly defined by the leading electromagnetic field term. We also note that the product \(UW\to 1\) if \(r\rightarrow\infty\) and it becomes singular if \(r\to 0\); this singular behaviour, which also takes place if \(\alpha\neq 0\), allows one to moderate the singularities of the invariants of the Riemann tensor in comparison with standard general relativity solutions [29].
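As a numeric illustration of this simpler branch (ours, with illustrative parameter values), one can evaluate (39) for \(n=5\) and locate the event horizon as a zero of \(U(r)\):

```python
# Numeric sketch (ours): evaluate the alpha = 0 metric function (39) for
# n = 5 and locate a horizon radius via scipy's brentq root finder.
import numpy as np
from scipy.optimize import brentq

n, mu, Lam, q, qbar = 5, 1.0, -2.0, 0.2, 0.2

def U(r):
    # eq. (39), valid for n != 4 and n != 6
    return (1 - mu / r**(n - 2)
            - 2 * Lam * r**2 / ((n - 1) * (n - 2))
            - (n - 2) / (n - 4) * qbar**2 / r**2
            + 2 * q**2 * r**(2 * (2 - n)) / ((n - 1) * (n - 2))
            + Lam**2 * r**4 / ((n - 1)**2 * (n**2 - 4))
            - q**4 * r**(4 * (2 - n)) / (3 * (n - 1)**2 * (n - 2)**2)
            - 2 * Lam * q**2 * r**(2 * (3 - n)) / ((n - 1)**2 * (n - 2) * (n - 4))
            + Lam * qbar**2 / ((n - 1) * (n - 2))
            - q**2 * qbar**2 * r**(2 * (1 - n)) / (n * (n - 1))
            + (n - 2) / (4 * (n - 6)) * qbar**4 / r**4)

print("U vanishes at r_+ ~", brentq(U, 0.1, 5.0))  # bracket chosen by a grid scan
```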
## 3 Black hole temperature
One of the basic notions of black hole thermodynamics is temperature. The definition of the temperature is based on the geometrical notion of surface gravity, which can be applied not only to black holes within General Relativity, but also to more general gravitational frameworks [49, 50, 51], including Horndeski gravity [52]. The surface gravity \(\kappa\) is defined as follows:
\[\kappa^{2}=-\frac{1}{2}\nabla_{a}\chi_{b}\nabla^{a}\chi^{b}, \tag{40}\]
where \(\chi^{\mu}\) is a Killing vector, which is null on the event horizon. Since in our work the static configuration (13) is considered, the time translation vector \(\chi^{\mu}=\partial/\partial t\) satisfies the mentioned condition. In the framework of General Relativity and in various other approaches to gravity the temperature is defined to be proportional to the surface gravity, namely:
\[T_{BH}=\frac{\kappa}{2\pi}=\frac{1}{4\pi}\frac{U^{\prime}(r_{+})}{\sqrt{U(r_{+ })W(r_{+})}}, \tag{41}\]
where \(r_{+}\) denotes the event horizon of the black hole. Having calculated the derivative \(U^{\prime}(r_{+})\), after simple algebra we write the temperature in the form:
\[T_{BH}=\frac{1}{4\pi(n-1)r_{+}}\left(\left(\frac{\alpha}{\eta}-\Lambda\right) r_{+}^{2}+(n-1)(n-2)-\frac{q^{2}}{r_{+}^{2(n-2)}}-\frac{(n-1)(n-2)}{2r_{+}^{2}} \bar{q}^{2}\right). \tag{42}\]
Surface gravity has a clear geometric meaning and, as mentioned above, it is widely applicable, including in Horndeski theory [52], but even for the latter theory there are some subtleties. The authors of [52] consider a particular case of general Horndeski gravity similar to that considered here, but they also make several assumptions which single out a particular class of solutions that can be easily reduced to general relativistic ones if the Horndeski coupling parameter \(\eta\) is turned off. It is also supposed that the scalar field shares the Killing symmetry, and since no peculiarities of the scalar field are pointed out, we assume that it is supposed to be regular, in particular at the event horizon. But in our case, due to the constraint (22), the first of these assumptions may be violated; in addition, the derivative of the scalar field has singular behaviour at the horizon, and therefore the conclusions made in [52] cannot be applied directly to our solution. It was argued [54] that in Horndeski theory, instead of the standard surface gravity, its "effective" counterpart can be introduced; this can be explained by the fact that in general the speed of gravitons may differ from the speed of light [35, 53], namely these speeds differ if the Lagrangian for the gravitational perturbation contains the Weyl tensor (the so-called Weyl criterion), which usually takes place in the Horndeski case [53]. Consequently, the "effective" or modified surface gravity gives rise to a modified relation for the black hole temperature [54], which can be written in the form:
\[T=\frac{\kappa}{2\pi}\left(1+\frac{\eta}{4}\frac{(\varphi^{\prime})^{2}}{W} \right)\Big{|}_{r_{+}}=T_{BH}\left(1+\frac{\eta}{4}\frac{(\varphi^{\prime})^{ 2}}{W}\right)\Big{|}_{r_{+}}=\sqrt{U(r_{+})W(r_{+})}T_{BH}. \tag{43}\]
For the particular case of the solution given by the metric (13) with corresponding functions \(U(r)\) and \(W(r)\) we obtain:
\[T=\frac{\eta}{8\pi(n-1)\alpha r_{+}(r_{+}^{2}+d^{2})}\left(\left(\frac{\alpha }{\eta}-\Lambda\right)r_{+}^{2}+(n-1)(n-2)-\frac{q^{2}}{r_{+}^{2(n-2)}}-\frac{ (n-1)(n-2)}{2r_{+}^{2}}\bar{q}^{2}\right)^{2}. \tag{44}\]
We point out that in the limit \(\eta\to 0\) both the temperature (44) and its cousin (42) become singular, confirming the fact that our solution, from which both these expressions are derived, does not meet the criteria imposed in [52]. Even though the temperature (44) is given by a relatively simple expression, not all of its peculiarities can be seen easily; nevertheless, its key features can be described. First of all, due to the square over the main parentheses, an effective coupling between the terms of different origin appears: there is a coupling between both gauge fields, given by the term proportional to \(q^{2}\bar{q}^{2}\), but we can also speak of a "coupling" between the gauge and scalar fields, reflected by the terms where the coupling parameters are multiplied by \(q^{2}\) or \(\bar{q}^{2}\). To sum up, the coupling we mention here is just a consequence of the coupling caused by Horndeski gravity, which appears in the metric functions \(U(r)\) and \(W(r)\).
Using the relation (44) we can easily analyse the asymptotic behaviour of the temperature. For instance, for large \(r_{+}\) (\(r_{+}\rightarrow\infty\)) the temperature \(T\) (44) shows de Sitterian or anti-de Sitterian character depending on the signs of the parameters \(\alpha\) and \(\eta\), namely \(T\sim(\alpha-\Lambda\eta)^{2}r_{+}/(2(n-1)\alpha\eta)\); we pay more attention to the latter, while the de Sitterian case will be examined elsewhere. For very small \(r_{+}\) (\(r_{+}\to 0\)) the temperature is mainly defined by the gauge field terms, and if \(n>3\) the leading term is related to the Maxwell field and has the form \(T\sim q^{4}/((n-1)^{2}(n-2))r_{+}^{7-4n}\); curiously, although this leading term is caused by the nonminimal coupling, it depends neither on the parameter \(\eta\) nor on the parameter \(\alpha\). If \(n=3\) both gauge field terms give equal contributions, due to their symmetry already in the metric (32), and consequently this is reflected in the temperature.
The analysis of the temperature as a function of the horizon radius for intermediate values is not trivial, since the contributions of various terms may be comparable, which affects the behaviour of the temperature. Since the terms in the relation (44) have opposite signs, the temperature might be a nonmonotonic function of the horizon radius \(r_{+}\). In order to understand the dependence \(T=T(r_{+})\) better, we give some plots of this function for various values of the parameters. Figure 2 shows this dependence when the cosmological constant \(\Lambda\) (the left graph) and the parameter of nonminimal coupling \(\eta\) (the right graph) are varied. The general features of both graphs are very similar: the function \(T=T(r_{+})\) has a specific "narrow" and "deep" minimum; this minimum is not affected considerably by variation of either \(\Lambda\) or \(\eta\), and we conclude that it is mainly defined by the gauge field terms (as shown below). If the cosmological constant rises in absolute value, the temperature \(T\) also rises for large \(r_{+}\) and tends to be more monotonic in the range of intermediate values of \(r_{+}\). The mentioned feature is also known for the Reissner-Nordstrom-AdS black hole and causes nontrivial critical behaviour within the extended thermodynamics approach, which will be considered in the following sections. Comparing both graphs of Fig. 2 we also conclude that variation of the cosmological constant \(\Lambda\) leads to a more substantial change of the temperature for intermediate and relatively large values of the horizon radius \(r_{+}\) than variation of the coupling constant \(\eta\); this result is expected because of the way those parameters enter the expression (44).
Figure 2: Black hole's temperature \(T\) as a function of horizon radius \(r_{+}\) for some values of the cosmological constant \(\Lambda\) (the left graph) and the coupling parameter \(\eta\) (the right one). For both cases the way "from bottom to top" corresponds to the increase in absolute value of the parameter we vary, whereas all other parameters are held fixed, namely for both graphs we have taken \(n=4,\alpha=0.1,q=\bar{q}=0.2\). For the left graph we take \(\eta=0.2\) and \(\Lambda_{1}=-2\), \(\Lambda_{2}=-3\) and \(\Lambda_{3}=-4\), whereas for the right graph we take \(\Lambda=-2\) and \(\eta_{1}=0.2\), \(\eta_{2}=0.4\) and \(\eta_{3}=0.8\).

Figure 3 shows the influence of variation of the electric charge \(q\) on the temperature \(T\). As pointed out above, the terms caused by the gauge fields become the principal ones for small horizon radii \(r_{+}\), which gives rise to a shift of the global minimum to the right as the charge \(q\) goes up. We also point out that the "narrow" domain close to the global minimum changes considerably, namely it widens if the charge \(q\) increases. The other important consequence of this variation is that the domain to the right of the global minimum also changes substantially: its nonmonotonicity becomes less notable, and we can conclude that a further increase of the charge leads to its disappearance, which also affects the critical behaviour of the black hole. Due to the same sign and inverse proportionality to the horizon radius \(r_{+}\), a variation of the nonabelian charge \(\bar{q}\) leads qualitatively to the same changes in the behaviour of the temperature \(T\), but due to the different \(r_{+}\) dependences in the general case, those changes might be substantial for intermediate values of \(r_{+}\). Only in the particular case \(n=3\) do both gauge fields give equal contributions.

Figure 3: Black hole's temperature \(T\) as a function of horizon radius \(r_{+}\) for some values of the electric charge \(q\), with all other parameters held fixed. The solid, dashed, dotted and dash-dotted curves correspond to \(q_{1}=0.2\), \(q_{2}=0.4\), \(q_{3}=0.6\) and \(q_{4}=1\) respectively. The fixed parameters are as follows: \(n=4\), \(\alpha=0.1\), \(\eta=0.2\), \(\Lambda=-2\) and \(\bar{q}=0.2\).
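This qualitative picture is easy to reproduce numerically. The sketch below (ours) evaluates the effective temperature (44) for the parameter set of Figure 2 and locates the extrema of \(T(r_{+})\) from the sign changes of \(dT/dr_{+}\):

```python
# Numeric sketch (ours): evaluate the effective temperature (44) for the
# parameter set of Fig. 2 (n = 4, alpha = 0.1, eta = 0.2, Lambda = -2,
# q = qbar = 0.2) and locate the extrema of T(r_+) on a grid.
import numpy as np

n, alpha, eta, Lam, q, qbar = 4, 0.1, 0.2, -2.0, 0.2, 0.2
d2 = eta * (n - 1) * (n - 2) / (2 * alpha)

def T(rp):
    b = ((alpha / eta - Lam) * rp**2 + (n - 1) * (n - 2)
         - q**2 / rp**(2 * (n - 2))
         - (n - 1) * (n - 2) * qbar**2 / (2 * rp**2))
    return eta * b**2 / (8 * np.pi * (n - 1) * alpha * rp * (rp**2 + d2))

rp = np.linspace(0.15, 3.0, 4000)
dT = np.gradient(T(rp), rp)
print("extrema of T near r_+ =", rp[np.where(np.diff(np.sign(dT)))[0]])
```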
## 4 Wald procedure, conserved quantities and the first law of black hole thermodynamics
The Wald approach is a consistent method to derive the first law of black hole mechanics (thermodynamics). Being a generalization of the standard Noether procedure for obtaining conserved quantities, it allows one to obtain them for general diffeomorphism-invariant theories, and it has been successfully applied to various gravity theories; moreover, the approach was generalized to theories with internal gauge degrees of freedom [57]. To briefly describe the procedure, we write the variation of the Lagrangian for the system (1):
\[\delta{\cal L}=\sqrt{-g}\left({\cal E}_{\mu\nu}\delta g^{\mu\nu}+{\cal E}_{ \varphi}\delta\varphi+{\rm Tr}\left({\cal E}_{A}{}^{(a)\mu}\delta A^{(a)}_{ \mu}\right)+{\cal E}_{{\cal A}}{}^{\mu}\delta{\cal A}_{\mu}\right)+\sqrt{-g} \nabla_{\mu}{\cal J}^{\mu}, \tag{45}\]
and here \({\cal E}_{\mu\nu}\), \({\cal E}_{\varphi}\), \({\cal E}_{A}{}^{(a)\mu}\) and \({\cal E}_{\cal A}{}^{\mu}\) are the left-hand sides of the equations of motion (5), (10), (11) and (12) respectively for the dynamical fields we consider. The last term in the above variation is the so-called boundary term, which is transformed into an integral over a hypersurface enclosing the chosen volume, and \({\cal J}^{\mu}\) is the surface current, which can be given as a sum of the currents of the corresponding dynamical fields, namely:
\[{\cal J}^{\mu}={\cal J}^{\mu}_{g}+{\cal J}^{\mu}_{\varphi}+{\cal J}^{\mu}_{A}+ {\cal J}^{\mu}_{\cal A}, \tag{46}\]
where the respective components are defined as follows:
\[{\cal J}^{\mu}_{g}=2\frac{\partial{\cal L}}{\partial R_{\kappa \lambda\mu\nu}}\nabla_{\lambda}\left(\delta g_{\kappa\nu}\right)-2\nabla_{ \lambda}\left(\frac{\partial{\cal L}}{\partial R_{\kappa\mu\lambda\nu}} \right)\delta g_{\kappa\nu}, \tag{47}\] \[{\cal J}^{\mu}_{\varphi}=\frac{\partial{\cal L}}{\partial(\varphi _{\mu})}\delta\varphi,\quad{\cal J}^{\mu}_{A}=-4{\rm Tr}\left(F^{(a)\mu \lambda}\delta A^{(a)}_{\lambda}\right),\quad{\cal J}^{\mu}_{\cal A}=-4{\cal F }^{\mu\lambda}\delta{\cal A}_{\lambda}. \tag{48}\]
If the equations of motion are satisfied, the only contribution to the variation of the Lagrangian (45), and respectively to the action, is given by the hypersurface term. Having the current \({\cal J}^{\mu}\) (46) we can construct the corresponding current form \(J_{(1)}={\cal J}_{\mu}dx^{\mu}=g_{\lambda\mu}{\cal J}^{\lambda}dx^{\mu}\) and then define its Hodge dual, which is essential in the Wald approach:
\[\Theta(\psi,\delta\psi)=*J_{(1)}(\psi,\delta\psi), \tag{49}\]
where \(\psi\) is used to denote all the dynamical fields and \(\delta\psi\) their variations. The diffeomorphism is generated by a vector field \(\xi^{\mu}\); therefore the variation of the dynamical fields can be written in the form:
\[\delta_{\xi}\psi={\cal L}_{\xi}\psi, \tag{50}\]
where \({\cal L}_{\xi}\) is the corresponding Lie derivative, generated by the vector \(\xi^{\mu}\). The variation of the Lagrangian of the system can also be written as the corresponding Lie derivative, namely:
\[\delta_{\xi}*{\cal L}={\cal L}_{\xi}*{\cal L}=d(i_{\xi}*{\cal L}), \tag{51}\]
here we point out that since the Lagrangian in our case is defined as a scalar function, _i.e._ a 0-form, the Hodge dual of the Lagrangian is used in the latter relation; we also note that to derive the second equality in the above relation the so-called Cartan magic formula is used. Rewriting the formula for the variation of the Lagrangian (45) in terms of forms and taking into account the relations (49) and (51) as well as the notation (50), we obtain:
\[d(i_{\xi}*{\cal L})={\cal E}_{\psi}{\cal L}_{\xi}\psi+d\Theta(\psi,{\cal L}_{ \xi}\psi)\quad\Rightarrow\quad d\left(\Theta(\psi,{\cal L}_{\xi}\psi)-i_{\xi}* {\cal L}\right)=-{\cal E}_{\psi}{\cal L}_{\xi}\psi, \tag{52}\]
where \({\cal E}_{\psi}\) corresponds to the equations of motion for the dynamical fields. If the equations of motion are satisfied, the right-hand side of the latter relation is equal to zero. Now we introduce a Noether current \(n\)-form:
\[J_{\xi}=\Theta(\psi,\delta\psi)-i_{\xi}*{\cal L}, \tag{53}\]
which is obviously closed on-shell; moreover, this form is exact on-shell, namely:
\[J_{\xi}=dQ_{\xi}. \tag{54}\]
The integral of this form over a closed \((n-1)\)-dimensional hypersurface \(\Sigma_{n-1}\) is the so-called Noether charge related to the vector field \(\xi^{\mu}\) which generates the diffeomorphism. Then, following the Wald approach, the space of solutions of the equations of motion is defined to be the phase space of the theory, and the variation of the dynamical fields \(\delta_{\xi}\psi\) taken on-shell is a phase space vector flow generated by the vector \(\xi^{\mu}\). This flow can be generated by a Hamiltonian \({\cal H}_{\xi}\) which is related to a symplectic form defined on a Cauchy hypersurface \(\Sigma\); namely, for its on-shell variation we write:
\[\delta{\cal H}_{\xi}=\int_{\Sigma}\Omega(\psi,\delta\psi,{\cal L}_{\xi}\psi)= \int_{\Sigma}\left(\delta\Theta(\psi,{\cal L}_{\xi}\psi)-{\cal L}_{\xi}\Theta (\psi,\delta\psi)\right). \tag{55}\]
Using the definition of the Noether current (53) and Cartan magic formula for the Lie derivative we can rewrite the latter relation as follows:
\[\delta{\cal H}_{\xi}=\int_{\Sigma}\left(\delta J_{\xi}+\delta(i_{\xi}*{\cal L })-i_{\xi}d\Theta-d(i_{\xi}\Theta)\right)=\int_{\Sigma}\left(\delta(dQ_{\xi} )-d(i_{\xi}\Theta)\right)=\int_{\partial\Sigma}\left(\delta Q_{\xi}-i_{\xi} \Theta\right). \tag{56}\]
We note that in the second equality we have used the on-shell condition, which allows us to remove the second and the third terms in the first integral. In the second integral we have used the definition of the Noether charge, and the fact that the exterior derivative and the variation commute for the Noether charge \(Q_{\xi}\) allowed us to derive the last equality and the integral over the boundary \(\partial\Sigma\). If \(\xi^{\mu}\) is supposed to be a generator of a symmetry, then \({\cal L}_{\xi}\psi=0\) and consequently \(\delta{\cal H}_{\xi}=0\). The hypersurface \(\Sigma\) has two boundaries, which is actually the case for black holes, namely at infinity and at the event horizon; therefore from the above relation we obtain:
\[\delta{\cal H}_{r_{+}}\equiv\int_{\partial\Sigma_{+}}\left(\delta Q_{\xi}-i_ {\xi}\Theta\right)=\int_{\infty}\left(\delta Q_{\xi}-i_{\xi}\Theta\right) \equiv\delta{\cal H}_{\infty} \tag{57}\]
and here \(\partial\Sigma_{+}\) is the event horizon hypersurface. The written relation allows one to derive the first law of black hole thermodynamics.
Before deriving the first law of black hole thermodynamics, we give an explicit relation for the components of the Noether charge, namely we write:
\[Q_{\lambda_{1}\ldots\lambda_{n-1}}=\varepsilon_{\lambda_{1}\ldots\lambda_{n-1}\mu \nu}\left(\frac{\partial\mathcal{L}}{\partial R_{\kappa\lambda\mu\nu}}\nabla_{ \lambda}\xi_{\kappa}-2\xi_{\kappa}\nabla_{\lambda}\left(\frac{\partial \mathcal{L}}{\partial R_{\kappa\lambda\mu\nu}}\right)-2\mathrm{Tr}\left(F^{(a) \mu\nu}A^{(a)}_{\lambda}\right)\xi^{\lambda}-2\mathcal{F}^{\mu\nu}\mathcal{A}_ {\lambda}\xi^{\lambda}\right). \tag{58}\]
Using the above relation as well as the relation for the Hodge dual of the surface current (49), we can calculate the differences of variations which appear under the integrals in the relation (57). Similarly to the previous section, the time translation vector \(\xi^{\mu}\) can be chosen for the corresponding calculations; it is a Killing vector and it is null on the event horizon. For more clarity we split the calculation of the difference of the variations into two parts, namely the gravity part together with the nonminimally coupled scalar field, and the gauge fields. The gravity part together with the scalar field contribution gives rise to the following relation:
\[\left(\delta Q_{\xi}-i_{\xi}\Theta\right)_{gs}=-(n-1)r^{n-2}\delta U\hat{ \Omega}_{n-1}, \tag{59}\]
where \(\delta U\) is the variation of the metric function \(U\) and \(\hat{\Omega}_{n-1}\) is the surface \(n-1\)-form. The total variation for the nonminimally coupled theory, excluding the gauge field contribution, depends on the variation of the metric function \(\delta U\) only; we point out that a similar result is derived in pure Einsteinian theory, for instance for the Schwarzschild solution. The gauge fields give an independent contribution, which takes the form:
\[\left(\delta Q_{\xi}-i_{\xi}\Theta\right)_{gf}=\frac{2r^{n-1}}{\sqrt{UW}} \mathcal{A}_{0}\left(\left(\frac{\delta U}{U}+\frac{\delta W}{W}\right) \mathcal{A}_{0}^{\prime}-2\delta\mathcal{A}_{0}^{\prime}\right)\hat{\Omega}_{n -1}, \tag{60}\]
where \(\mathcal{A}_{0}\) is the time component of the electromagnetic field potential and \(\mathcal{A}_{0}^{\prime}=\mathcal{F}_{rt}\) is its radial derivative (the electric field). We would like to stress that the Yang-Mills field does not give any contribution to the difference of variations, due to the fact that the constant \(\bar{q}\) associated with the Yang-Mills coupling is held fixed. The total variation is the sum of both variations written above:
\[\left(\delta Q_{\xi}-i_{\xi}\Theta\right)_{tot}=r^{n-2}\left(-(n-1)\delta U+ \frac{2r}{\sqrt{UW}}\mathcal{A}_{0}\left(\left(\frac{\delta U}{U}+\frac{ \delta W}{W}\right)\mathcal{A}_{0}^{\prime}-2\delta\mathcal{A}_{0}^{\prime} \right)\right)\hat{\Omega}_{n-1}. \tag{61}\]
For convenience we assume that the electric potential is equal to zero at the event horizon, \(\mathcal{A}_{0}|_{r_{+}}=0\). Taking this condition into account and performing the integration over an \((n-1)\)-dimensional hypersphere of radius \(r_{+}\), we obtain the explicit relation for the variation of the Hamiltonian \(\mathcal{H}_{r_{+}}\) at the horizon:
\[\delta\mathcal{H}_{r_{+}}=(n-1)\omega_{n-1}r_{+}^{n-2}U^{\prime}(r_{+})\delta r _{+}, \tag{62}\]
where \(\omega_{n-1}=2\pi^{(n-1)/2}/\Gamma((n-1)/2)\) is the surface area of a unit \((n-1)\)-dimensional hypersphere. The variation of the Hamiltonian \(\delta\mathcal{H}_{\infty}\) takes the following form:
\[\delta\mathcal{H}_{\infty}=(n-1)\omega_{n-1}\delta\mu-4\omega_{n-1}\mathcal{A} _{0}\delta q. \tag{63}\]
Since, as pointed out above, the variations of the Hamiltonian at the horizon and at infinity are equal, we obtain:
\[(n-1)\omega_{n-1}r_{+}^{n-2}U^{\prime}(r_{+})\delta r_{+}=(n-1)\omega_{n-1} \delta\mu-4\omega_{n-1}\mathcal{A}_{0}\delta q. \tag{64}\]
Finally, to derive the first law of black hole thermodynamics, it is necessary to find the relations between the variations of observable quantities, such as the mass or charge of the black hole, and the corresponding variations in the relation given above.
The electric charge is defined in the standard way, namely we use the Gauss law and obtain:
\[Q_{e}=\frac{1}{4\pi}\int_{\Sigma_{\infty}}*F=\frac{\omega_{n-1}}{4\pi}q. \tag{65}\]
The electric potential measured at the infinity with respect to the horizon is defined as follows:
\[\Phi_{e}=\mathcal{A}_{\mu}\xi^{\mu}|_{\infty}-\mathcal{A}_{\mu}\xi^{\mu}|_{r_{+}} =\mathcal{A}_{0}. \tag{66}\]
We point out that the time translation vector \(\xi^{\mu}=\partial/\partial t\) is used here to calculate the electric potential. The black hole's mass can be defined as:
\[M=\frac{(n-1)\omega_{n-1}}{16\pi}\mu. \tag{67}\]
The variation of the mass (67), together with the relations (65) and (66), allows us to rewrite the right-hand side of the equation (64) in the form of a typical thermodynamic relation. On the left-hand side of that relation we can use the relation for the temperature (44), in order to avoid introducing an additional scalar charge and its corresponding conjugate value while the physical meaning of both these quantities is not clarified. Then the entropy of the black hole can be defined in the typical manner, namely:
\[S=\frac{\omega_{n-1}}{4}r_{+}^{n-1}. \tag{68}\]
Therefore, the entropy is equal to a quarter of the black hole horizon area, just as in General Relativity. Finally, the first law of black hole thermodynamics can be written in the form:
\[\delta M=T\delta S+\Phi_{e}\delta Q_{e}. \tag{69}\]
The obtained relation is of exactly the same form as for the Reissner-Nordstrom black hole in the framework of GR, even though the explicit relation for the temperature (44) differs from its general relativistic counterpart. The fact that thermodynamic relations like the first law are the same in different theories may be an additional confirmation of the universality of black hole thermodynamics, which at least in some cases is insensitive to the underlying theory from which the corresponding thermodynamic relations are obtained.
We would also like to stress that even from a naive thermodynamic point of view the temperature \(T\) (44) satisfies a simple consistency relation that follows directly from the first law (69), namely \(\frac{\partial T}{\partial Q_{e}}=\frac{\partial\Phi_{e}}{\partial S}\), whereas the temperature \(T\) (42) does not. To obtain a consistency relation for the temperature (42), an additional scalar charge was introduced [27] and used in earlier papers [29, 31], but its physical meaning is not clear. Moreover, in the framework of standard thermodynamics there are only two variable macroscopic parameters of the black hole, namely its mass (or, directly related to it, the radius of the event horizon \(r_{+}\)) and the electric charge \(q\) (or \(Q_{e}\)); any additional independent thermodynamic variable should be related to an independent macroscopic parameter (integration constant), but there are no more independent macroscopic values in the standard framework. Thus, the "scalar charge" considered in the earlier papers was introduced just to have consistent thermodynamic relations, but its physical meaning remains obscure.
Heat capacity, or specific heat, is an important notion for analyzing thermal stability, and it is widely used in black hole thermodynamics. Thermally stable systems are characterized by positive specific heat, and if the specific heat turns out to be negative the system tends to decay. To obtain the heat capacity we use the standard definition for the latter and write:
\[C_{Q}=T\left(\frac{\partial S}{\partial T}\right)_{Q}=T\frac{ \partial S}{\partial r_{+}}\left(\frac{\partial r_{+}}{\partial T}\right)_{Q} =\frac{(n-1)\omega_{n-1}}{4}r_{+}^{n-2}\left(\left(\frac{\alpha}{\eta}-\Lambda \right)r_{+}^{2}+(n-1)(n-2)-\right.\\ \left.\frac{q^{2}}{r_{+}^{2(n-2)}}-\frac{(n-1)(n-2)\bar{q}^{2}}{ 2r_{+}^{2}}\right)\left[-\frac{3r_{+}^{2}+d^{2}}{r_{+}(r_{+}^{2}+d^{2})} \left(\left(\frac{\alpha}{\eta}-\Lambda\right)r_{+}^{2}+(n-1)(n-2)-\right.\right.\] \[\left.\left.\frac{q^{2}}{r_{+}^{2(n-2)}}-\frac{(n-1)(n-2)\bar{q}^ {2}}{2r_{+}^{2}}\right)+4\left(\left(\frac{\alpha}{\eta}-\Lambda\right)r_{+} +\frac{(n-2)q^{2}}{r_{+}^{2n-3}}+\frac{(n-1)(n-2)\bar{q}^{2}}{r_{+}^{3}} \right)\right]^{-1}. \tag{70}\]
The obtained relation (70) has a rather more cumbersome structure than the expression for the temperature (44), but since the derivative of the temperature \(T\) with respect to the horizon radius \(r_{+}\) contributes to the heat capacity, some important conclusions about the behaviour of the latter can be drawn immediately from the peculiar features of the temperature. Namely, since the temperature may in general have three extremum points, the heat capacity as a function of \(r_{+}\) may have three discontinuity points, separating stable and unstable domains. We point out here that since for relatively large \(r_{+}\) the temperature rises for any variation of the black hole parameters, at least in the examined domain, we can conclude that the specific heat \(C_{Q}\) is positive and the black hole is thermally stable. For smaller radii of the horizon, the sign of \(C_{Q}\), and consequently the conclusion about thermal stability or instability, substantially depends on the chosen values of the black hole parameters and the parameters of the Lagrangian. To make the behaviour of the function \(C_{Q}=C_{Q}(r_{+})\) more transparent, we give corresponding graphs showing its behaviour near the discontinuity points and how it is affected by variations of certain parameters, namely the electric charge \(q\) and the cosmological constant \(\Lambda\).
Figure 4 shows the rightmost discontinuity point for two values of the electric charge. As noted above, the heat capacity \(C_{Q}\) to the right of the discontinuity point is positive and goes up as the horizon radius \(r_{+}\) increases; this feature is typical for most types of black holes with AdS asymptotics. To the left of the asymptote the heat capacity becomes negative, therefore this range of \(r_{+}\) is a domain of instability. We also point out that for smaller radius \(r_{+}\) there is a second discontinuity point, reflected by the very fast decrease of the heat capacity \(C_{Q}\) as the radius of the horizon goes down. We also conclude that the discontinuity points become closer as the charge \(q\) goes up, and a further increase of the charge gives rise to the merging of the singular points and the consequent shrinkage of the unstable domain, at least for the considered range of the parameters. A similar conclusion can be made if the absolute value of \(\Lambda\) goes up. Then the peculiarity of the heat capacity diminishes, as shown in Figure 5: the height of the peak drops and finally vanishes as the absolute value of the cosmological constant \(\Lambda\) rises.
We also point out that the heat capacity \(C_{Q}\) (70) within the extended thermodynamics approach can be treated as the heat capacity at constant pressure \(C_{P}\), where the pressure is introduced below. This is valid since all the parameters are held fixed in the relation (70).
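To make the discontinuity structure of the relation (70) concrete, the following minimal numerical sketch (our illustration, not part of the original derivation) evaluates the numerator and denominator of \(C_{Q}\) on a grid of horizon radii using the parameters of Figure 4; the value of \(d\) is an assumption here, since in the paper it is fixed by the coupling constants entering the metric function.

```python
import numpy as np
from scipy.special import gamma

# Parameters of Figure 4; d = 0.5 is an assumed value (in the paper d is fixed
# by the coupling constants entering the metric function).
n, alpha, eta, Lam, q, qbar, d = 3, 0.2, 0.3, -3.0, 0.1, 0.01, 0.5
omega = 2 * np.pi**((n - 1) / 2) / gamma((n - 1) / 2)  # omega_{n-1} as below eq. (62)

def A(r):
    # common bracket entering both factors of eq. (70)
    return ((alpha/eta - Lam)*r**2 + (n-1)*(n-2)
            - q**2/r**(2*(n-2)) - (n-1)*(n-2)*qbar**2/(2*r**2))

def heat_capacity(r):
    num = (n - 1)*omega/4 * r**(n - 2) * A(r)
    den = (-(3*r**2 + d**2)/(r*(r**2 + d**2)) * A(r)
           + 4*((alpha/eta - Lam)*r + (n-2)*q**2/r**(2*n - 3)
                + (n-1)*(n-2)*qbar**2/r**3))
    return num, den

r = np.linspace(0.02, 3.0, 4000)
num, den = heat_capacity(r)
# discontinuities of C_Q are zeros of the denominator, not of the numerator
flips = r[:-1][np.sign(den[:-1]) != np.sign(den[1:])]
print("C_Q discontinuities near r_+ =", flips)
```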
## 5 Extended thermodynamics
The so-called extended thermodynamics has attracted considerable attention for more than a decade [58, 59, 60]. Even though some basic assumptions of the extended thermodynamics are still disputed, this approach gives rise to a wider thermodynamic phase space, allowing one to describe richer thermodynamics and to establish at least formal, but deeper, ties with the thermodynamics of various systems usually considered in condensed matter physics. In particular, it establishes profound relations between phase transition phenomena in condensed matter systems and phase transitions (transformations) in black hole physics. The key assumption of the extended approach is that the cosmological constant is considered to be a thermodynamic
Figure 4: Heat capacity \(C_{Q}\) as a function of horizon radius \(r_{+}\) for two values of the electric charge \(q\) whenever the other parameters are held fixed. The solid and dashed curves correspond to \(q=0.1\), and \(q=0.125\) respectively. The fixed parameters are as follows: \(n=3\), \(\alpha=0.2\), \(\eta=0.3\), \(\Lambda=-3\) and \(\bar{q}=0.01\).
value. Namely, the cosmological constant \(\Lambda\) was identified with thermodynamic pressure:
\[P=-\frac{\Lambda}{8\pi} \tag{71}\]
It should be pointed out that there is some analogy with an ideal fluid, where a corresponding term related to the thermodynamic pressure goes along with the metric tensor in the energy-momentum tensor of the fluid, but this will not be discussed in the current work. The introduced thermodynamic pressure (71) has the consequence that the black hole mass should now be identified with the enthalpy, \(M=H\) [58], and not with the internal energy as in standard thermodynamics. Given the pressure \(P\) (71), the corresponding conjugate thermodynamic volume \(V\) can be defined as follows:
\[V=\left(\frac{\partial M}{\partial P}\right)_{S,Q_{e}} \tag{72}\]
The explicit relation for the thermodynamic volume depends on the parity of the dimension \(n\), just as for the metric function \(U(r)\) (25). Namely, for odd \(n\) the explicit expression for the thermodynamic volume \(V\) is as follows:
\[V=\omega_{n-1}\left(\frac{r_{+}^{n}}{n}-\frac{\eta}{4\alpha} \left(2\left(\frac{\alpha}{\eta}+\Lambda\right)\left[\sum_{j=0}^{\frac{n-1}{2} }(-1)^{j}d^{2j}\frac{r_{+}^{n-2j}}{n-2j}+(-1)^{\frac{n+1}{2}}d^{n}\arctan \left(\frac{r_{+}}{d}\right)\right]+2q^{2}\times\right.\\ \left.\left[\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}d^{2j}\frac{r_{+}^{ 4+2j-n}}{(4+2j-n)d^{2(j+1)}}+\frac{(-1)^{\frac{n-3}{2}}}{d^{n-2}}\arctan \left(\frac{r_{+}}{d}\right)\right]+(n-1)(n-2)\bar{q}^{2}\times\right.\\ \left.\left[\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}d^{2j}\frac{r_{+}^{ n-2j-4}}{n-2j-4}+(-1)^{\frac{n-3}{2}}d^{n-4}\arctan\left(\frac{r_{+}}{d} \right)\right]\right)\right). \tag{73}\]
An explicit expression for the thermodynamic volume can be written similarly for even \(n\). The obtained relation (73) is in agreement with the respective relation obtained in [29] in the corresponding limits. Since \(n=3\) is of special interest, we also write the thermodynamic volume for this case:
\[V=4\pi\left(\frac{r_{+}^{3}}{3}-\frac{\eta}{2\alpha}\left[\left(\frac{\alpha} {\eta}+\Lambda\right)\left(\frac{1}{3}r_{+}^{3}-d^{2}r_{+}\right)+\left(d^{3} \left(\frac{\alpha}{\eta}+\Lambda\right)+\frac{1}{d}(q^{2}+\bar{q}^{2}) \right)\arctan\left(\frac{r_{+}}{d}\right)\right]\right). \tag{74}\]
Figure 5: Dissolution of the peak for the heat capacity \(C_{Q}\) for large absolute values of \(\Lambda\). The peaks, from higher to lower, correspond to increasing absolute values of the cosmological constant.
To derive the Smarr relation for the black hole we introduce an additional intensive thermodynamic variable, which in some sense is similar to the pressure (71) introduced above. The new variable and its conjugate are defined as follows:
\[\Pi=\frac{\alpha}{8\pi\eta},\quad\Psi=\left(\frac{\partial M}{\partial\Pi} \right)_{S,Q_{e},P} \tag{75}\]
Taking the corresponding derivatives, we write the explicit relation for the extensive conjugate value \(\Psi\). Namely, for odd \(n\) (\(n<7\)) we obtain:
\[\Psi=\omega_{n-1}\left(\frac{\eta}{2\alpha}\left[\left(\frac{ \alpha}{\eta}+\Lambda\right)\left(\sum_{j=0}^{\frac{n-1}{2}}(-1)^{j}d^{2j}\frac{r_{+ }^{n-2j}}{n-2j}+(-1)^{\frac{n-1}{2}}d^{n}\arctan\left(\frac{r_{+}}{d}\right) \right)+(n-1)(n-2)\frac{\bar{q}^{2}}{2}\times\right.\] \[\left.\left(\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}d^{2j}\frac{r_{+}^{ n-2j-4}}{n-2j-4}+(-1)^{\frac{n-3}{2}}d^{n-4}\arctan\left(\frac{r_{+}}{d} \right)\right)+q^{2}\left(\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}\frac{r_{+}^{4+2j -n}}{(4+2j-n)d^{2(j+1)}}+\right.\right.\] \[\left.\left.\frac{(-1)^{\frac{n-3}{2}}}{d^{n-2}}\arctan\left( \frac{r_{+}}{d}\right)\right)\right]+\frac{\eta^{2}r_{+}^{n-2}}{4\alpha^{2}(r _{+}^{2}+d^{2})}\left(\left(\frac{\alpha}{\eta}+\Lambda\right)r_{+}^{2}+\frac {q^{2}}{r_{+}^{2(n-2)}}+\frac{(n-1)(n-2)}{2}\frac{\bar{q}^{2}}{r_{+}^{2}} \right)^{2}-\frac{\eta^{2}}{8\alpha^{2}}\times\] \[\left.\left[(n+1)\left(\frac{\alpha}{\eta}+\Lambda\right)^{2} \left(\sum_{j=0}^{\frac{n-1}{2}}(-1)^{j}d^{2j}\frac{r_{+}^{n-2j}}{n-2j}+(-1)^{ \frac{n-1}{2}}d^{n}\arctan\left(\frac{r_{+}}{d}\right)\right)+2(4-n)\left( \frac{\alpha}{\eta}+\Lambda\right)q^{2}\times\right.\] \[\left.\left(\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}\frac{r_{+}^{4+2j-n} }{(4+2j-n)d^{2(j+1)}}+\frac{(-1)^{\frac{n-3}{2}}}{d^{n-2}}\arctan\left(\frac{r _{+}}{d}\right)\right)+3(2-n)q^{4}\left(\sum_{j=0}^{\frac{3n-7}{2}}(-1)^{j} \frac{r_{+}^{6+2j-3n}}{(6+2j-3n)d^{2(j+1)}}+\right.\right.\] \[\left.\left.\frac{(-1)^{\frac{3n-5}{2}}}{d^{3n-4}}\arctan\left( \frac{r_{+}}{d}\right)\right)+(n-1)(n-2)\left(\frac{\alpha}{\eta}+\Lambda \right)\bar{q}^{2}\left(\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}d^{2j}\frac{r_{+}^{ n-2j-4}}{n-2j-4}+(-1)^{\frac{n-3}{2}}d^{n-4}\right.\right.\] \[\left.\left.\times\arctan\left(\frac{r_{+}}{d}\right)\right)-n(n -1)(n-2)q^{2}\bar{q}^{2}\left(\sum_{j=0}^{\frac{n-1}{2}}(-1)^{j}\frac{r_{+}^{2 j-n}}{(2j-n)d^{2(j+1)}}+\frac{(-1)^{\frac{n+1}{2}}}{d^{n+2}}\arctan\left(\frac{r_{+}}{d} \right)\right)\right.\] \[\left.\left.+\frac{1}{4}(n-1)^{2}(n-2)^{2}(n-6)\bar{q}^{4}\left( \sum_{j=0}^{\frac{5-n}{2}}(-1)^{j}\frac{r_{+}^{n+2j-6}}{(n+2j-6)d^{2(j+1)}}+ \frac{(-1)^{\frac{7-n}{2}}}{d^{8-n}}\arctan\left(\frac{r_{+}}{d}\right) \right)\right]\right). \tag{76}\]
For dimensions \(n\geqslant 7\) there is a different contribution in the bottom line of the above relation, which follows from the corresponding term in the metric function \(U(r)\). The explicit expression for \(\Psi\) for even \(n\) can be derived similarly. We also write the thermodynamic function \(\Psi\) for the \(n=3\) case:
\[\Psi=4\pi\left(\frac{\eta}{4\alpha d}\left(1-\frac{\eta\Lambda}{ \alpha}\right)(q^{2}+\bar{q}^{2})\arctan\left(\frac{r_{+}}{d}\right)-\frac{ \eta\Lambda}{\alpha}\left(1-\frac{\eta\Lambda}{\alpha}\right)\left(\frac{r_{+} ^{3}}{3}-d^{2}r_{+}+d^{3}\arctan\left(\frac{r_{+}}{d}\right)\right)-\right.\] \[\left.\frac{3\eta^{2}}{8\alpha^{2}d^{2}}(q^{2}+\bar{q}^{2})^{2} \left(\frac{1}{3r_{+}^{3}}-\frac{1}{d^{2}r_{+}}+\frac{1}{d^{3}}\arctan\left( \frac{r_{+}}{d}\right)\right)+\frac{\eta^{2}r_{+}}{8\alpha^{2}(r_{+}^{2}+d^{2} )}\left(\left(\frac{\alpha}{\eta}+\Lambda\right)r_{+}^{2}+\frac{q^{2}+\bar{q} ^{2}}{r_{+}^{2}}\right)^{2}\right). \tag{77}\]
Since the nonabelian field is also included in the action, and moreover contributes to the metric function \(U(r)\) and all the derived quantities, we assume that the nonabelian parameter \(\bar{q}\) can be varied as well. We introduce the nonabelian charge similarly to how it was defined, for instance, in [46]:
\[Q_{n}=\frac{1}{4\pi\sqrt{(n-1)(n-2)}}\int_{\Sigma_{n-1}}d^{n-1}\chi J(\chi) \sqrt{Tr(F_{\mu\nu}^{(a)}F^{(a)\mu\nu})}=\frac{\omega_{n-1}}{4\pi}\bar{q}. \tag{78}\]
The integral in the relation above is taken over a sphere enclosing the black hole, and \(J(\chi)\) denotes the Jacobian for the chosen spherical coordinates. The Yang-Mills charge \(Q_{n}\) (magnetic) can now be considered as a thermodynamic value, similarly to the electric charge of the Maxwell field. Therefore the thermodynamic value conjugate to the charge \(Q_{n}\) can be introduced:
\[U=\left(\frac{\partial M}{\partial Q_{n}}\right)_{S,Q_{e},P,\Pi}. \tag{79}\]
We do not give an explicit expression for the potential \(U\), but it can be obtained easily. Having introduced the additional thermodynamic variables \(P\), \(\Pi\), \(Q_{n}\) and their thermodynamic conjugates, we are able to write the so-called extended first law, which takes the form:
\[\delta M=T_{BH}\delta S+\Phi_{e}\delta Q_{e}+V\delta P+\Psi\delta\Pi+U\delta Q _{n}. \tag{80}\]
Taking into account the pairs of conjugate variables we also write the Smarr relation:
\[(n-2)M=(n-1)T_{BH}S-2VP-2\Psi\Pi+(2-n)\Phi_{e}Q_{e}+UQ_{n}. \tag{81}\]
If the nonabelian field is set to zero (\(\bar{q}=0\)), the obtained relation reduces to the corresponding equation derived for the electrically charged black hole in Horndeski gravity [29]. Compared with the general relativistic case, the Smarr relation (81) and the generalized first law (80) gain only one additional term, caused by the thermodynamic variable \(\Pi\) and its conjugate value \(\Psi\). The latter two relations may be considered an additional argument in favour of the universality of black hole thermodynamics, which allows us to write fundamental thermodynamic relations that take the same, or at least a very similar, form for various underlying theories of gravity.
## 6 Gibbs free energy
If a thermodynamic system undergoes phase transitions, the Gibbs free energy is much more convenient than the enthalpy identified with the black hole's mass \(M\). The Gibbs free energy is defined as follows:
\[G=M-T_{BH}S. \tag{82}\]
The explicit relation for the Gibbs free energy for odd \(n\) (\(n<7\)) takes the form:
\[G=\frac{\omega_{n-1}}{16\pi}\left(r_{+}^{n-2}+\frac{2\Lambda}{n( n-1)}r_{+}^{n}+\frac{2(2n-3)}{(n-1)(n-2)}q^{2}r_{+}^{2-n}-\frac{3(n-2)}{n-4}r_{+} ^{n-4}-\frac{\eta r_{+}^{n-2}}{2\alpha(n-1)(r^{2}+d^{2})}\left(\left(\frac{ \alpha}{\eta}+\Lambda\right)\times\right.\right.\] \[\left.\left.r_{+}^{2}+\frac{q^{2}}{r_{+}^{2(n-2)}}+\frac{(n-1)(n -2)\bar{q}^{2}}{2r_{+}^{2}}\right)^{2}+\frac{\eta}{2\alpha}\left[\left(\frac{ \alpha}{\eta}+\Lambda\right)^{2}\left(\sum_{j=0}^{\frac{n-1}{2}}(-1)^{j}d^{2j }\frac{r_{+}^{n-2j}}{n-2j}+(-1)^{\frac{n-1}{2}}d^{n}\arctan\left(\frac{r_{+}} {d}\right)\right)+\right.\right.\] \[\left.2\left(\frac{\alpha}{\eta}+\Lambda\right)q^{2}\left(\sum_ {j=0}^{\frac{n-5}{2}}(-1)^{j}\frac{r_{+}^{4+2j-n}}{(4+2j-n)d^{2(j+1)}}+\frac{ (-1)^{\frac{n-3}{2}}}{d^{n-2}}\arctan\left(\frac{r_{+}}{d}\right)\right)+q^{ 4}\left(\sum_{j=0}^{\frac{3n-7}{2}}(-1)^{j}\frac{r_{+}^{6+2j-3n}}{(6+2j-3n)d^ {2(j+1)}}\right.\right.\] \[\left.\left.+\frac{(-1)^{\frac{3n-5}{2}}}{d^{3n-4}}\arctan \left(\frac{r_{+}}{d}\right)\right)+(n-1)(n-2)\bar{q}^{2}\left(\left(\frac{ \alpha}{\eta}+\Lambda\right)\left(\sum_{j=0}^{\frac{n-5}{2}}(-1)^{j}d^{2j} \frac{r_{+}^{n-2j-4}}{n-2j-4}+\right.\right.\right.\] \[\left.\left.\left.(-1)^{\frac{n-3}{2}}d^{n-4}\arctan\left(\frac{ r_{+}}{d}\right)\right)+q^{2}\left(\sum_{j=0}^{\frac{n-1}{2}}(-1)^{j}\frac{r_{+}^{2j-n}}{(2j -n)d^{2(j+1)}}+\frac{(-1)^{\frac{n+1}{2}}}{d^{n+2}}\arctan\left(\frac{r_{+}} {d}\right)\right)\right.\] \[\left.\left.+\frac{(n-1)(n-2)}{2}\bar{q}^{2}\left(\sum_{j=0}^{ \frac{5-n}{2}}(-1)^{j}\frac{r_{+}^{n+2j-6}}{(n+2j-6)d^{2(j+1)}}+\frac{(-1)^{ \frac{7-n}{2}}}{d^{8-n}}\arctan\left(\frac{r_{+}}{d}\right)\right)\right]\right)\right) \tag{83}\]
As above, we give the explicit relation for \(n=3\) because of the special interest in this case:
\[G=\frac{1}{4}\left(r_{+}+\frac{\Lambda}{3}r_{+}^{3}+3\frac{q^{2}+ \bar{q}^{2}}{r_{+}}+\frac{\eta}{2\alpha}\left[\left(\frac{\alpha}{\eta}+\Lambda \right)^{2}\left(\frac{r_{+}^{3}}{3}-r_{+}d^{2}\right)+\frac{(q^{2}+\bar{q}^{2} )^{2}}{r_{+}d^{2}}\left(\frac{1}{d^{2}}-\frac{1}{3r_{+}^{2}}\right)\right.\\ \left.+\frac{1}{d}\left(\left(\frac{\alpha}{\eta}+\Lambda\right)d^ {2}+\frac{q^{2}+\bar{q}^{2}}{d^{2}}\right)^{2}\arctan\left(\frac{r_{+}}{d} \right)-\frac{r_{+}}{2(r_{+}^{2}+d^{2})}\left(\left(\frac{\alpha}{\eta}+ \Lambda\right)r_{+}^{2}+\frac{q^{2}+\bar{q}^{2}}{r_{+}^{2}}\right)^{2}\right] \right). \tag{84}\]
Since the Gibbs free energy \(G\) (83) and its particular case (84) have a rather intricate form and their temperature dependences are given implicitly, it is difficult to analyse their behaviour directly. To understand it better we give a corresponding graph showing the dependence \(G=G(T)\) while the pressure and all the other parameters are fixed. Namely, Figure 6 shows that for smaller pressure \(P\) the Gibbs free energy has swallow-tail behaviour, which leads to the conclusion that there is a phase transition of the first order; from the qualitative point of view the behaviour of the Gibbs free energy is the same as for the Reissner-Nordstrom-AdS black hole in General Relativity [58]. The Gibbs free energy in Horndeski gravity for a nonlinearly charged black hole was also examined in our earlier paper [29], and again from the qualitative point of view there is complete agreement between the current and earlier results. If the pressure goes up, the swallow-tail gradually diminishes, and after a critical value is reached it completely vanishes: the Gibbs free energy becomes a smooth function of the temperature \(T\), which also means the disappearance of the phase transition. The critical point where the behaviour of the Gibbs free energy becomes smooth is supposed to be a point of a second order phase transition, as usually takes place for a van der Waals system or the Reissner-Nordstrom-AdS black hole [58]. Due to the interest in the critical point and near-critical behaviour, some aspects of this issue will be examined in the following section. For a better illustration of the swallow-tail behaviour and its gradual diminishing with increasing pressure, we add a \(3D\) figure for the Gibbs free energy (Figure 7).
## 7 Critical behavior in the extended phase space
Since additional thermodynamic variables are defined, we are able to extend the corresponding thermodynamic phase space of the system and consequently to derive and examine richer thermal behaviour of the black hole. One of the key relations in the thermodynamics of any conventional system is its thermal equation of state, which establishes a relation between macroscopic values such as the temperature \(T\), pressure \(P\) and volume \(V\). Having defined the pressure \(P\) (71) and using the relation for the temperature (44), we can rewrite the
Figure 6: Gibbs free energy \(G\) as a function of temperature \(T\) for various values of pressure \(P\) or cosmological constant \(\Lambda\), while other parameters are held fixed. The dotted, dashed, dash-dotted and solid lines correspond to \(\Lambda=-1.5\), \(\Lambda=-3.5\), \(\Lambda=-5.5\) and \(\Lambda=-7.5\) respectively. The fixed parameters are as follows: \(n=3\), \(q=0.1\), \(\bar{q}=0.1\), \(\alpha=0.2\) and \(\eta=0.3\).
latter relation in the form of a thermal equation of state, namely we write:
\[P=\frac{1}{8\pi}\left(\frac{q^{2}}{r_{+}^{2(n-1)}}+\frac{(n-1)(n-2)\bar{q}^{2}}{2 r_{+}^{4}}-\frac{(n-1)(n-2)}{r_{+}^{2}}-\xi\right)\pm\frac{1}{4\pi r_{+}^{2}} \sqrt{2(n-1)\pi\xi r_{+}(r_{+}^{2}+d^{2})T}, \tag{85}\]
where for convenience we denote \(\xi=\alpha/\eta\), which is directly related to the thermodynamic value \(\Pi\) (75) introduced above. We also point out that to obtain the expression (85) we extract the cosmological constant \(\Lambda\) from the relation (44) by solving the corresponding quadratic equation for the parameter \(\Lambda\); this is why the sign \(\pm\) appears in the relation above. To keep the pressure \(P\) positive over the whole range of variation, we pick the sign \(+\) only and use it in the following relations. We also point out that instead of the thermodynamic volume (72) we still keep the horizon radius \(r_{+}\), partially because of the complexity of the relation (72), which does not allow us to express \(r_{+}\) as an explicit function of \(V\). On the other hand, this does not change or modify the conclusions about the critical behaviour that we are about to derive. In addition, we remark that the equation (85), being completely "geometrical" in nature, can be rewritten in terms of "physical" variables in a similar fashion as was done in [58, 59], but such a redefinition of thermodynamic values does not affect any physical conclusions at all. We also point out that some hints about the possible critical behaviour of a nonlinearly charged black hole were obtained in our earlier paper [29]; a more detailed consideration of criticality issues was made in [61].
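A minimal numerical sketch of the '+' branch of the equation of state (85) is given below; it is our illustration, with \(q\), \(\bar{q}\) and \(d\) assumed, and \(\xi=\alpha/\eta=0.2/0.3\) matching the couplings used in the figures. Isotherms below the critical temperature develop the non-monotonic, van der Waals-like oscillation discussed next.

```python
import numpy as np

# Assumed illustrative parameters; xi = alpha/eta = 0.2/0.3 as in the figures.
n, q, qbar, d, xi = 3, 0.1, 0.1, 0.5, 0.2/0.3

def pressure(r, T):
    """Thermal equation of state (85), '+' branch."""
    return ((1/(8*np.pi))*(q**2/r**(2*(n-1)) + (n-1)*(n-2)*qbar**2/(2*r**4)
             - (n-1)*(n-2)/r**2 - xi)
            + np.sqrt(2*(n-1)*np.pi*xi*r*(r**2 + d**2)*T)/(4*np.pi*r**2))

r = np.linspace(0.3, 6.0, 1000)
for T in (0.02, 0.08, 0.30):
    dP = np.diff(pressure(r, T))
    kind = "monotonic" if (dP <= 0).all() or (dP >= 0).all() else "oscillating"
    print(f"T = {T}: {kind} isotherm")
```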
Following the key assumption that the equation of state for black holes (85) is analogous to the van der Waals equation of state, far-reaching consequences can be derived. In particular, critical behaviour can be studied, and one of the most important issues here is the phase transition between the so-called large and small black holes. The central notion here is the so-called inflection point, defined as follows:
\[\left(\frac{\partial P}{\partial r_{+}}\right)_{T}=0,\quad\left(\frac{ \partial^{2}P}{\partial r_{+}^{2}}\right)_{T}=0. \tag{86}\]
It is worth noting that if we use the volume \(V\) (72), then to find the inflection point the derivatives with respect to the volume \(V\) should be equated to zero; but using the relation \(\frac{\partial P}{\partial V}=\frac{\partial P}{\partial r_{+}}\frac{\partial r_{+}}{\partial V}\), and assuming that the derivative \(\frac{\partial V}{\partial r_{+}}\neq 0\) since the volume is supposed to be a monotonic function of \(r_{+}\), we again arrive at the relations (86). We also point out that the other thermodynamic parameters we used in the extended description are held fixed. The relation for the critical radius can be derived straightforwardly using the relations (86),
Figure 7: Gibbs free energy \(G\) as a function of temperature \(T\) and pressure \(P\) (or the cosmological constant \(\Lambda\)).
namely after simple calculations we write:
\[-3(n-2)+\frac{(2n-1)q^{2}}{r_{c}^{2(n-2)}}+\frac{5(n-2)\bar{q}^{2}}{r_{c}^{2}}+ \frac{3r_{c}^{4}+22d^{2}r_{c}^{2}+15d^{4}}{2(r_{c}^{2}+d^{2})(r_{c}^{2}+3d^{2})} \left(n-2-\frac{q^{2}}{r_{c}^{2(n-2)}}-\frac{(n-2)\bar{q}^{2}}{r_{c}^{2}}\right) =0, \tag{87}\]
where \(r_{c}\) denotes the critical horizon radius. The critical temperature \(T_{c}\) and pressure \(P_{c}\) can be written as functions of the critical radius \(r_{c}\):
\[T_{c}=\frac{2(n-1)(r_{c}^{2}+d^{2})}{\pi\xi r_{c}(r_{c}^{2}+3d^{2})^{2}}\left( n-2-\frac{q^{2}}{r_{c}^{2(n-2)}}-\frac{(n-2)\bar{q}^{2}}{r_{c}^{2}}\right)^{2}; \tag{88}\]
\[P_{c}=\frac{1}{8\pi}\left(\frac{q^{2}}{r_{c}^{2(n-1)}}+\frac{(n-1)(n-2)\bar{q} ^{2}}{2r_{c}^{4}}-\frac{(n-1)(n-2)}{r_{c}^{2}}-\xi+\frac{4(n-1)(r_{c}^{2}+d^{2 })}{r_{c}^{2}(r_{c}^{2}+3d^{2})}\left(n-2-\frac{q^{2}}{r_{c}^{2(n-2)}}-\frac{( n-2)\bar{q}^{2}}{r_{c}^{2}}\right)\right). \tag{89}\]
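Since, as noted below, the equation (87) generally has no closed-form solution, the critical point can be located numerically. The following sketch (our illustration, with assumed values of \(q\), \(\bar{q}\) and \(d\), and \(\xi=\alpha/\eta\)) solves (87) by bracketing and then evaluates (88) and (89):

```python
import numpy as np
from scipy.optimize import brentq

n, q, qbar, d, xi = 3, 0.1, 0.1, 0.5, 0.2/0.3   # assumed illustrative values

def chi(rc):
    # common bracket (n-2) - q^2/r^{2(n-2)} - (n-2)*qbar^2/r^2
    return (n - 2) - q**2/rc**(2*(n-2)) - (n - 2)*qbar**2/rc**2

def crit_eq(rc):
    # left-hand side of eq. (87)
    return (-3*(n-2) + (2*n - 1)*q**2/rc**(2*(n-2)) + 5*(n-2)*qbar**2/rc**2
            + (3*rc**4 + 22*d**2*rc**2 + 15*d**4)
              / (2*(rc**2 + d**2)*(rc**2 + 3*d**2)) * chi(rc))

rc = brentq(crit_eq, 0.05, 10.0)   # eq. (87) changes sign between small and large r
Tc = 2*(n-1)*(rc**2 + d**2) / (np.pi*xi*rc*(rc**2 + 3*d**2)**2) * chi(rc)**2  # eq. (88)
Pc = (1/(8*np.pi))*(q**2/rc**(2*(n-1)) + (n-1)*(n-2)*qbar**2/(2*rc**4)
     - (n-1)*(n-2)/rc**2 - xi
     + 4*(n-1)*(rc**2 + d**2)/(rc**2*(rc**2 + 3*d**2))*chi(rc))               # eq. (89)
print(f"r_c = {rc:.4f}, T_c = {Tc:.4f}, P_c = {Pc:.4f}")
```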
The equation for the critical horizon radius \(r_{c}\) (87) does not have an exact analytical solution for general dimension \(n\) and arbitrarily chosen parameters \(q\), \(\bar{q}\) and \(d\); therefore the critical values such as \(T_{c}\) and \(P_{c}\) cannot be given as explicit functions of the mentioned parameters of the black hole in the general case, as is possible for the van der Waals gas or for simpler black hole solutions such as, for instance, the Reissner-Nordstrom-AdS one [58]. In general the critical values can be calculated numerically for arbitrary values of \(n\), \(\xi\) and the black hole charges \(q\) and \(\bar{q}\). It should be pointed out that for some particular cases analytical solutions can in principle be obtained. Because of some interest in analytical solutions, and taking into account the fact that they are often easier to analyse, we note several particular cases where it is at least possible to derive an analytical solution for the critical radius \(r_{c}\) and consequently for the other two critical values \(T_{c}\) and \(P_{c}\). First of all, if \(n=3\) the equation (87) takes the form:
\[-3+\frac{5(q^{2}+\bar{q}^{2})}{r_{c}^{2}}+\frac{3r_{c}^{4}+22d^{2}r_{c}^{2}+15 d^{4}}{2(r_{c}^{2}+d^{2})(r_{c}^{2}+3d^{2})}\left(1-\frac{q^{2}+\bar{q}^{2}}{r_{c} ^{2}}\right)=0. \tag{90}\]
The latter equation can be rewritten in the form of a cubic equation for the square of the critical radius \(r_{c}^{2}\). A similar equation can be written if the electric charge \(q=0\), but in that case for any \(n\); the only difference from the equation (90) is hidden in the parameter \(d\), which is dimension dependent.
Another interesting particular case is \(\alpha=0\), and it is easy to verify that the equation for the critical radius \(r_{c}\) (87) reduces to the form:
\[\frac{(4n-7)q^{2}}{r_{c}^{2(n-2)}}+5\frac{(n-2)\bar{q}^{2}}{r_{c}^{2}}+2-n=0. \tag{91}\]
The corresponding relations for the critical temperature \(T_{c}\) and the pressure \(P_{c}\) can be rewritten as follows:
\[T_{c}=\frac{4(n-2)}{9\pi r_{c}}\left(1-\frac{\bar{q}^{2}}{r_{c} ^{2}}-\frac{q^{2}}{(n-2)r_{c}^{2(n-2)}}\right)^{2}, \tag{92}\] \[P_{c}=\frac{(n-1)(n-2)}{24\pi r_{c}^{2}}\left(1-\frac{5\bar{q}^{ 2}}{2r_{c}^{2}}-\frac{(4n-7)}{(n-1)(n-2)}\frac{q^{2}}{r_{c}^{2(n-2)}}\right). \tag{93}\]
If \(n=3\) the equation (91) turns out to be quadratic and the critical radius can easily be written:
\[r_{c}^{2}=5(q^{2}+\bar{q}^{2}). \tag{94}\]
Substituting the critical radius into the relations (92) and (93) above, we obtain the corresponding critical values \(T_{c}\) and \(P_{c}\), and after the computation we write the explicit expression for the so-called critical ratio:
\[\rho_{c}\equiv\frac{P_{c}r_{c}}{T_{c}}=\frac{75}{512}. \tag{95}\]
Thus the critical ratio \(\rho_{c}\), as expected, is a dimensionless number which does not depend on the parameters of the solution such as its charges \(q\), \(\bar{q}\); this conclusion is in perfect agreement with the definition of the critical ratio for conventional systems as well as within the extended phase space thermodynamics for black holes. On the other hand, it is known that for the standard van der Waals system and the Reissner-Nordstrom-AdS black hole the critical ratio is \(\rho_{c}=3/8\), and as we see, in our case it is considerably smaller. We also note that exact analytical solutions of the equation (91) can also be derived for \(n=4\) and \(n=5\), where the equation (91) for \(r_{c}^{2}\) turns out to be quadratic and cubic respectively, but here we do not give explicit relations for the corresponding values.
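The ratio (95) can be verified symbolically; the following short check (our illustration) substitutes (94) into (92) and (93):

```python
import sympy as sp

q, qb = sp.symbols('q qbar', positive=True)
rc2 = 5*(q**2 + qb**2)                            # eq. (94)
rc = sp.sqrt(rc2)
Tc = 4/(9*sp.pi*rc) * (1 - qb**2/rc2 - q**2/rc2)**2                         # eq. (92), n = 3
Pc = 2/(24*sp.pi*rc2) * (1 - 5*qb**2/(2*rc2) - sp.Rational(5, 2)*q**2/rc2)  # eq. (93), n = 3
print(sp.simplify(Pc*rc/Tc))                      # -> 75/512, reproducing eq. (95)
```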
Other important particular cases of the equation (91) are related to the situation when one of the charges is set to zero. Namely, if \(q=0\), then the square of the critical radius for any dimension is:
\[r_{c}^{2}=5\bar{q}^{2}. \tag{96}\]
Using this result we write the critical ratio \(\rho_{c}\) for this particular case:
\[\rho_{c}=\frac{75(n-1)}{1024}. \tag{97}\]
The obtained relation is in perfect agreement with the relation (95) if \(n=3\). Finally, assuming \(\bar{q}=0\), the equation (91) immediately gives us:
\[r_{c}^{2(n-2)}=\frac{(4n-7)}{(n-2)}q^{2}. \tag{98}\]
The latter expression gives rise to the following critical ratio:
\[\rho_{c}=\frac{3(4n-7)^{2}}{512(n-2)}. \tag{99}\]
Similarly to the case above, there is perfect agreement with the ratio \(\rho_{c}\) (95) if \(n=3\), but in contrast to that case the dimension dependence is different. The latter relation also shows that for higher dimensional cases, at least when \(n\) is not too high, the critical ratio (99) is also smaller than the corresponding ratio for the higher dimensional generalization of the Reissner-Nordstrom-AdS black hole, which equals \(\rho_{c}=(2n-3)/(4(n-1))\).
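A quick numerical comparison of (99) with the quoted Reissner-Nordstrom-AdS ratio illustrates the statement above; in this sketch the inequality holds up to \(n=6\) and reverses at \(n=7\), consistent with the caveat that \(n\) should not be too high:

```python
from fractions import Fraction

for n in range(3, 8):
    horndeski = Fraction(3*(4*n - 7)**2, 512*(n - 2))   # eq. (99), qbar = 0 case
    rn_ads = Fraction(2*n - 3, 4*(n - 1))               # RN-AdS ratio quoted above
    print(n, float(horndeski), float(rn_ads), horndeski < rn_ads)
```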
If a thermodynamic system undergoes a second order phase transition, there are universal parameters, namely the critical exponents, which characterize the behaviour of certain thermodynamic values near the critical point and do not depend on the parameters of the system [62]. To obtain the critical exponents it is useful to introduce the so-called reduced variables, which show how close the system is to the critical point:
\[t=\frac{T}{T_{c}}-1,\quad\omega=\frac{r_{+}}{r_{c}}-1. \tag{100}\]
Now the critical exponents \(\bar{\alpha}\), \(\beta\), \(\gamma\) and \(\delta\) are defined as follows:
\[C_{V}\sim|t|^{-\bar{\alpha}},\quad\Delta V_{ls}\sim|t|^{\beta},\quad\kappa_{T} \sim t^{\gamma},\quad P-P_{c}\sim|\omega|^{\delta}. \tag{101}\]
Here we point out that \(C_{V}\) is the heat capacity at constant volume, \(\Delta V_{ls}\) is the volume difference between the large and small phases, and \(\kappa_{T}\) is the isothermal compressibility. We also note that instead of the commonly used notation \(\alpha\) for the first critical exponent we use \(\bar{\alpha}\), because the symbol \(\alpha\) is used to denote one of the coupling constants.
It follows from the definition of the entropy \(S\) (68) that the heat capacity at fixed volume exactly equals zero: \(C_{V}=T\left(\partial S/\partial T\right)_{V}=0\); therefore we immediately conclude that the critical exponent \(\bar{\alpha}=0\). To derive the other critical exponents we rewrite the equation of state (85) near the critical point in the following form:
\[P=P_{c}+At+Bt\omega+C\omega^{3}+Dt^{2}+\cdots, \tag{102}\]
where:
\[A=T_{c}\left(\frac{\partial P}{\partial T}\right)_{r_{+}}\Big{|}_{r_{c}},\quad B= r_{c}T_{c}\left(\frac{\partial^{2}P}{\partial T\partial r_{+}}\right)\Big{|}_{r_{c}}, \quad C=\frac{r_{c}^{3}}{6}\left(\frac{\partial^{3}P}{\partial r_{+}^{3}} \right)_{T}\Big{|}_{r_{c}},\quad D=\frac{T_{c}^{2}}{2}\left(\frac{\partial^{2} P}{\partial T^{2}}\right)_{r_{+}}\Big{|}_{r_{c}}. \tag{103}\]
The derivatives noted above can either be calculated numerically for the general case or, in some particular cases, derived as analytical expressions; in any case the following procedure is identical. Differentiating the equation (102) and taking into account Maxwell's equal-area law, we can write:
\[\int_{\omega_{l}}^{\omega_{s}}\omega dP=\int_{\omega_{l}}^{\omega_{s}}(Bt+C \omega^{3})d\omega=0. \tag{104}\]
After integration we arrive at the relation:
\[Bt(\omega_{s}^{2}-\omega_{l}^{2})+\frac{C}{2}\left(\omega_{s}^{4}-\omega_{l}^ {4}\right)=0. \tag{105}\]
The obtained equation gives rise to the nontrivial solution \(\omega_{s}=-\omega_{l}\). Since both phases have the same pressure, using the equation of state (102) we obtain:
\[Bt(\omega_{s}-\omega_{l})+C\left(\omega_{s}^{3}-\omega_{l}^{3}\right)=0. \tag{106}\]
Solving the latter equation for \(\omega_{s}\), and taking into account the relation between \(\omega_{s}\) and \(\omega_{l}\), we finally arrive at the following expression:
\[\omega_{l}\simeq\sqrt{-\frac{B}{C}t}=\sqrt{\frac{B}{C}\frac{(T_{c}-T)}{T_{c}}}. \tag{107}\]
Now we are able to write the expression for the volume difference \(\Delta V_{ls}\) and extract the critical exponent from it:
\[\Delta V_{ls}\simeq V_{c}\left(\omega_{l}-\omega_{s}\right)=2V_{c}\omega_{l} \sim|-t|^{1/2}\quad\Rightarrow\quad\beta=\frac{1}{2}. \tag{108}\]
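The step from (106) to (107)-(108) can be checked symbolically: imposing the nontrivial branch \(\omega_{s}=-\omega_{l}\) from (105) in the equal-pressure condition (106) gives the roots \(0\) and \(\pm\sqrt{-Bt/C}\), so \(\Delta V_{ls}\sim|t|^{1/2}\) and \(\beta=1/2\). A minimal sketch:

```python
import sympy as sp

B, C, t, wl = sp.symbols('B C t omega_l')
ws = -wl                                              # nontrivial branch from eq. (105)
equal_pressure = B*t*(ws - wl) + C*(ws**3 - wl**3)    # eq. (106)
print(sp.solve(sp.Eq(equal_pressure, 0), wl))         # roots: 0 and +/- sqrt(-B*t/C)
```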
Using the definition of the isothermal compressibility \(\kappa_{T}\) and the equation of state (102), we can derive the critical exponent \(\gamma\):
\[\kappa_{T}=-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T}\sim \frac{1}{Bt},\quad\Rightarrow\quad\gamma=1. \tag{109}\]
Finally, considering the critical isotherm we obtain the critical exponent \(\delta\). Namely, from the equation of state (102) it follows:
\[P-P_{c}\sim C\omega^{3},\quad\Rightarrow\quad\delta=3. \tag{110}\]
All the critical exponents we have derived take the same values as their counterparts for the RN-AdS black hole [58]; in the case of Horndeski gravity they were derived in the work [61], but for a different black hole solution. The same critical exponents were derived for various solutions in different frameworks, as mentioned in the review paper [60]; therefore we can conclude that the critical behaviour shows some universal features, at least for a vast number of black hole solutions in various independent frameworks.
We also note that in [61] the authors used a different equation of state, identifying the thermodynamic pressure \(P\) not with the cosmological constant \(\Lambda\) but with the ratio of the coupling constants \(\alpha/\eta\). In their case that definition of the pressure was reasonable, since the asymptotic behaviour of the metric function \(U(r)\) at infinitely large distances is, in their case, defined by the ratio \(\alpha/\eta\); in fact, that solution has an additional constraint giving rise to the noted behaviour. In our case we do not impose any specific constraints, thus the asymptotic behaviour at infinity is equally defined by the cosmological constant \(\Lambda\) and the ratio \(\alpha/\eta\); actually we have an effective cosmological constant \(\Lambda_{eff}\sim\frac{\eta}{\alpha}\left(\frac{\alpha}{\eta}-\Lambda\right)^{2}\), whereas in [61] the effective cosmological constant is of the form \(\Lambda_{eff}\sim\frac{\alpha}{\eta}\). We also suppose that in our case the thermodynamic pressure could be defined to be proportional to the ratio \(\frac{\alpha}{\eta}\); this would give rise to a somewhat more cumbersome equation of state instead of equation (85), but taking into account the results of the work [61] we do not think it would drastically change the critical behaviour or give rise to other critical exponents.
Since here we focus on the analysis of the thermal behaviour of the system at the critical point, or in close vicinity of it, we also consider Ehrenfest's equations, which were developed for the study of second order phase transitions such as the one supposed to take place at the critical point. Ehrenfest's equations characterize the discontinuity of such thermodynamic parameters as the heat capacity at constant pressure \(C_{P}\), the isothermal compressibility \(\kappa_{T}\) and the volume expansion coefficient \(\tilde{\alpha}\); namely, we write:
\[\left(\frac{\partial P}{\partial T}\right)_{S}=\frac{C_{P_{2}}-C_ {P_{1}}}{VT(\tilde{\alpha}_{2}-\tilde{\alpha}_{1})}=\frac{\Delta C_{P}}{VT \Delta\tilde{\alpha}}, \tag{111}\] \[\left(\frac{\partial P}{\partial T}\right)_{V}=\frac{\tilde{ \alpha}_{2}-\tilde{\alpha}_{1}}{\kappa_{T_{2}}-\kappa_{T_{1}}}=\frac{\Delta \tilde{\alpha}}{\Delta\kappa_{T}} \tag{112}\]
We point out here that the heat capacity \(C_{P}\) in the relation above is given by the relation (70), because the latter was derived under the assumption that \(\Lambda\) was held fixed. The volume expansion coefficient \(\tilde{\alpha}\) is defined as follows: \(\tilde{\alpha}=1/V\left(\partial V/\partial T\right)_{P}\). We show that the mentioned thermodynamic quantities \(C_{P}\), \(\tilde{\alpha}\) and \(\kappa_{T}\) have an infinite discontinuity at the critical point. Let us consider the isothermal compressibility:
\[\kappa_{T}=-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T}=-\frac{ 1}{V}\frac{\partial V}{\partial r_{+}}\left(\frac{\partial r_{+}}{\partial P }\right)_{T}. \tag{113}\]
Taking into account the first of the conditions (86), we conclude that at the critical point the derivative \((\partial r_{+}/\partial P)_{T}\rightarrow\infty\); therefore there is an infinite gap for the isothermal compressibility \(\kappa_{T}\) at the critical point. The other two thermodynamic quantities also have an infinite gap at the critical point, and it is enough to consider one of them, because for the other one it can be shown in exactly the same way. Let us consider again the heat capacity (70): it is clear that to show its discontinuity at the critical point we should show that the derivative \((\partial r_{+}/\partial T)_{P}\) has an infinite gap at the critical point, because both the temperature \(T\) and the derivative \(\partial S/\partial r_{+}\) are continuous and take finite values at that point. To make the analysis more transparent, we write the derivative \((\partial T/\partial r_{+})_{P}\) taken at the critical point \(r_{c}\):
\[\left(\frac{\partial T}{\partial r_{+}}\right)_{P}\Big{|}_{c}=\frac{r_{c}^{2} \chi(r_{c})}{8(n-1)\pi\xi(r_{c}^{2}+d^{2})}\left(\frac{(r_{c}^{2}+3d^{2})}{(r_ {c}^{2}+d^{2})}\chi(r_{c})+2r_{c}\chi^{\prime}(r_{c})\right), \tag{114}\]
where we denote \(\chi(r)=\xi-\Lambda+(n-1)(n-2)/r^{2}-q^{2}/r^{2(n-1)}-(n-1)(n-2)\bar{q}^{2}/2r^{4}\) and \(\chi^{\prime}(r)\) is its derivative with respect to \(r\). Now, writing the derivative \((\partial P/\partial r_{+})_{T}\) at the critical point \(r_{c}\) and using the expression for the critical temperature \(T_{c}\) (88), we obtain:
\[\left(\frac{\partial P}{\partial r_{+}}\right)_{T}\Big{|}_{c}=-\frac{1}{16\pi r _{c}}\left(\frac{(r_{c}^{2}+3d^{2})}{(r_{c}^{2}+d^{2})}\chi(r_{c})+2r_{c}\chi ^{\prime}(r_{c})\right)=0. \tag{115}\]
where the last equality is nothing else but the condition (86), from which it follows that the expression in the parentheses in the relation above equals zero. Since there is an identical contribution in the relation (114), we conclude that the derivative \((\partial T/\partial r_{+})_{P}\) equals zero at the critical point \(r_{c}\), and as a result the heat capacity \(C_{P}\) is discontinuous with an infinite gap at this point.
It is also known that there is a subtlety in the definition of the so-called phase transitions of the second order according to Ehrenfest's classification. Namely, more precisely, the character of a phase transition with discontinuous second derivatives, as we have here, is defined by the Prigogine-Defay ratio, which is introduced as follows:
\[\tilde{\Pi}=\frac{(\partial P/\partial T)_{S}}{(\partial P/\partial T)_{V}}= \frac{\Delta C_{P}\Delta\kappa_{T}}{VT(\Delta\tilde{\alpha})^{2}}, \tag{116}\]
obviously the Prigogine-Defay ratio is calculated at the critical point. Taking into account the corresponding relations for the thermodynamic values \(C_{P}\), \(\tilde{\alpha}\) and \(\kappa_{T}\), substituting them into the relation above and performing simple transformations, we obtain:
\[\tilde{\Pi}=-\frac{(\partial S/\partial r_{+})(\partial r_{+}/\partial P)_{T}}{( \partial V/\partial r_{+})(\partial r_{+}/\partial T)_{P}}\big{|}_{c}. \tag{117}\]
Calculating the derivatives \(\partial S/\partial r_{+}\) and \(\partial V/\partial r_{+}\), and taking into account the relations (114) and (115), we obtain:
\[\tilde{\Pi}=1. \tag{118}\]
Therefore, since the Prigogine-Defay ratio equals unity, the phase transition at the critical point is exactly of the second order. We point out that, in contrast to the considered case, for dilatonic black holes the Prigogine-Defay ratio is \(\tilde{\Pi}<1\) [47], giving rise to the conclusion about a glass-type phase transition in the latter case.
## 8 Discussion
In this work a static charged black hole solution is obtained in Horndeski gravity with linear Maxwell and Yang-Mills fields. Due to the chosen form of the potentials for the gauge fields, namely the Maxwell field being purely electric and the nonabelian field being of magnetic character, the explicit relation for the metric function is derived in closed form. We point out that due to the nature of Horndeski gravity the explicit relations for the metric function \(U(r)\) have some differences for even (31) and odd (30) dimensions of space \(n\); this is a specific feature of Horndeski gravity, and similar differences occurred even for pure Horndeski gravity [28], but here it also affects the terms related to the gauge fields [29]. The other distinctive feature of the obtained solution is a specific "effective" coupling between the gauge fields, reflected by the terms proportional to the product of charges \(q\) and \(\bar{q}\) (\(\sim q^{2}\bar{q}^{2}\)) in the metric function \(U(r)\) (25) and in the following explicit relations. We point out that an "effective" coupling of similar character never appears in the framework of General Relativity or Einstein-dilaton theory [47, 48]; it may appear in "higher order" gravity theories, for instance when Gauss-Bonnet or higher Lovelock terms are taken into account, but as far as we know it has not been studied yet. It would be interesting to consider this issue in those theories and compare both results. It should be noted that for \(n=3\) both the abelian and nonabelian fields give identical contributions to the metric function (32).
The intricate expression of the metric function \(U(r)\) in its integral (25) or explicit (30) (and/or (31)) form turns a thorough analysis of the metric function into a difficult task, but the asymptotic cases can be analysed relatively easily. First of all, it follows from (34) that the asymptotic behaviour at infinity is of AdS or dS type, depending on the signs of the coupling constants. We also point out that in this case, instead of the bare cosmological constant \(\Lambda\), there is an effective cosmological constant defined by both the bare constant \(\Lambda\) and the ratio of the coupling constants \(\alpha/\eta\), namely \(\Lambda_{eff}\sim\eta/\alpha\left(\alpha/\eta-\Lambda\right)^{2}\). It should be noted that by imposing additional constraints on the metric functions \(U(r)\) and \(W(r)\), another effective cosmological constant \(\Lambda_{eff}\) can be obtained; namely, in [61] the effective cosmological constant was obtained to be proportional to the ratio of the coupling parameters, \(\Lambda_{eff}\sim\alpha/\eta\). Therefore it is an interesting issue to examine the various ways the effective cosmological constant may appear and what form it takes. The latter is also important from the point of view of the extended thermodynamics, because it is directly related to the definition of the thermodynamic pressure \(P\). In this work we consider mainly the solution with \(AdS\) asymptotics; as we have mentioned, our solution may have de Sitter asymptotics depending on the signs of the parameters, but that solution has its own peculiarities and needs an additional careful study.
For very small distances \(r\to 0\) the leading contribution to the metric function \(U(r)\) is mainly defined by the gauge fields; namely, in our case for \(n>3\) the dominant contribution is given by the Maxwell field, whereas for \(n=3\) both gauge fields contribute equally. Due to a specific interplay between the Horndeski gravity and gauge field terms, the dominant term for \(r\to 0\) is always of negative sign, making the behaviour of the metric function more similar to the Schwarzschild solution than to the Reissner-Nordstrom one. In addition, the leading term is always proportional to \(\sim q^{4}\), whereas in General Relativity the linear Maxwell field contribution is of the order \(\sim q^{2}\). The negative sign of the mentioned contribution for \(r\to 0\) leads to the conclusion that for this particular solution in Horndeski gravity a naked singularity never exists, as might happen for a charged solution in General Relativity, for instance for the Reissner-Nordstrom solution. Figure 1 confirms this conclusion. Figure 1 also shows that an increase of the charge (or even of both charges) can give rise to the appearance of additional horizons, but this needs a more careful examination and will be considered elsewhere.
We also study the thermal properties of the black hole. First of all we calculate the black hole temperature; to obtain it we have used the concept of modified surface gravity introduced in [54], where the authors argued that due to the difference between the speeds of gravitons and photons the concept of surface gravity needs a revision. The modified surface gravity, and correspondingly the black hole temperature, allowed us to avoid introducing additional ill-defined scalar charges, as was done earlier [27, 29] to maintain the first law of black hole thermodynamics. An additional benefit we obtain using the modified surface gravity concept is the fact that the entropy we introduce takes the same form as in General Relativity. To obtain the first law we use the Wald approach [56]. We also point out that the concept of the effective surface gravity should be analysed as carefully as it is in General Relativity. The temperature \(T\) and entropy \(S\) allowed us to calculate the heat capacity \(C_{Q}\) and examine it. Its examination shows that it might have singularity points and instability domains which disappear under certain conditions. These singularities give a hint about the possible critical behaviour of the black hole, which is also studied in the extended thermodynamics framework.
Finally, introducing the thermodynamic pressure \(P\) (71), we obtain the thermal equation of state (85). In addition to the pressure we have also introduced the thermodynamic quantity \(\Pi\), which has a nature similar to the pressure, as was pointed out in [61], but this issue should be studied carefully. The extended thermodynamic phase space allowed us to derive the Smarr relation (81). We also obtained the Gibbs free energy. The study of the Gibbs free energy for relatively small pressure shows swallow-tail behaviour (see Figures 6-7), and an increase of the pressure gives rise to a gradual diminishing of the swallow-tail behaviour with its subsequent dissolution. The swallow-tail character of the \(G=G(T)\) function means that the system undergoes a phase transition of the first order for the corresponding values of the pressure \(P\), and the transition disappears when the swallow-tail vanishes with increasing pressure. The study of the equation of state (85) gives rise to the critical radius \(r_{c}\) (or critical volume \(V_{c}\)), which is obtained for some particular cases; consequently, for those cases we derived explicit relations for the critical ratios \(\rho_{c}\). General relations for the critical values can be studied only numerically. Studying the thermal behaviour near the critical point we obtained the critical exponents \(\bar{\alpha}\), \(\beta\), \(\gamma\) and \(\delta\); their numerical values are the same as for the Reissner-Nordstrom-AdS black hole [58] and for another black hole solution in Horndeski gravity with a different equation of state [61], which confirms the universal character of the thermodynamic relations. We have also analysed Ehrenfest's equations to study the behaviour at the critical point and calculated the Prigogine-Defay ratio \(\tilde{\Pi}\), which is shown to be equal to one; we therefore conclude that at the critical point there is a second order phase transition. It would also be interesting to study carefully the critical behaviour if, instead of the cosmological constant \(\Lambda\), the ratio \(\alpha/\eta\) is used to define the thermodynamic pressure. Another interesting and important issue is to study in more detail the domain where the first-order phase transition occurs, namely to obtain and examine the Clausius-Clapeyron equation.
## Acknowledgments
This work was partially supported by the Fulbright Program grant for visiting scholars.
In this work, a static spherically symmetric charged black hole solution with additional Maxwell and Yang-Mills fields is obtained in the framework of minimal Horndeski gravity. The asymptotics of the obtained solution are studied in particular. The thermodynamics of the black hole is investigated; in particular, the effective surface gravity is used to derive the black hole temperature. To obtain the first law of black hole thermodynamics, the Wald approach is used. The extended thermodynamics approach is also applied, and the Smarr relation, the Gibbs free energy and the thermal equation of state are derived. The study of thermal values in the extended space reveals rich phase behaviour, including the domain where a first-order phase transition occurs and the critical point of a second-order phase transition. The thermal behaviour near the critical point is investigated, the critical exponents are obtained, and Ehrenfest's equations are analysed at the critical point. Finally, the Prigogine-Defay ratio is calculated. |
2309.04433 | Variations and Relaxations of Normalizing Flows | Normalizing Flows (NFs) describe a class of models that express a complex
target distribution as the composition of a series of bijective transformations
over a simpler base distribution. By limiting the space of candidate
transformations to diffeomorphisms, NFs enjoy efficient, exact sampling and
density evaluation, enabling NFs to flexibly behave as both discriminative and
generative models. Their restriction to diffeomorphisms, however, enforces that
input, output and all intermediary spaces share the same dimension, limiting
their ability to effectively represent target distributions with complex
topologies. Additionally, in cases where the prior and target distributions are
not homeomorphic, Normalizing Flows can leak mass outside of the support of the
target. This survey covers a selection of recent works that combine aspects of
other generative model classes, such as VAEs and score-based diffusion, and in
doing so loosen the strict bijectivity constraints of NFs to achieve a balance
of expressivity, training speed, sample efficiency and likelihood tractability. | Keegan Kelly, Lorena Piedras, Sukrit Rao, David Roth | 2023-09-08T16:55:23 | http://arxiv.org/abs/2309.04433v1 | # Variations and Relaxations of Normalizing Flows
###### Abstract
Normalizing Flows (NFs) describe a class of models that express a complex target distribution as the composition of a series of bijective transformations over a simpler base distribution. By limiting the space of candidate transformations to diffeomorphisms, NFs enjoy efficient, exact sampling and density evaluation, enabling NFs to flexibly behave as both discriminative and generative models. Their restriction to diffeomorphisms, however, enforces that input, output and all intermediary spaces share the same dimension, limiting their ability to effectively represent target distributions with complex topologies (Zhang and Chen 2021). Additionally, in cases where the prior and target distributions are not homeomorphic, Normalizing Flows can leak mass outside of the support of the target (Cornish et al. 2019; Wu et al. 2020). This survey covers a selection of recent works that combine aspects of other generative model classes, such as VAEs and diffusion, and in doing so loosen the strict bijectivity constraints of NFs to achieve a balance of expressivity, training speed, sample efficiency and likelihood tractability.
**Keywords:** Generative Modeling, Normalizing Flows, Diffusion
## 1 Introduction
Research in generative modeling with deep learning has in large part focused on four classes of models: flows, VAEs, diffusion models and GANs. Until recently, GANs had proven to be the model family most capable of producing high-fidelity generated samples, but a recent string of high-profile successes using diffusion models for natural image (Ho et al. 2020), audio (Kong et al. 2020) and video synthesis (Ho et al. 2022), trajectory planning (Janner et al. 2022), and protein and material design (Luo et al.; Anand and Achim 2022) has called their dominance in generative tasks into question. VAEs, on the other hand, are a slightly older class of models that are easier to train but have been less successful at producing realistic data distributions. Some work has gone into improving the expressivity of VAEs (Aneja et al. 2021) but has encountered a tension between VAE expressivity and a tendency
towards posterior collapse, where the generative model ignores the latent codes \(z\) entirely in favor of learning a capable generator.
This paper presents the fundamentals for each of these basic model classes and a selection of recent works that combine aspects from each to achieve a balance of model expressivity, training speed, sample efficiency and likelihood tractability. In particular, we focus on a selection of papers that loosen the strict bijectivity constraints of Normalizing Flows (NF) and attempt to improve the expressivity and sample efficiency of NFs while retaining as much as possible the likelihood evaluation properties the strict construction affords.
## 2 Normalizing Flows
Normalizing Flows are notable among the broader family of generative models in that they are not only capable of expressing rich, complex distributions: they are able to do so while also retaining the ability to perform exact density evaluation. They achieve this capacity by expressing a complex target distribution of interest as a bijective, differentiable transformation of a simpler, known base distribution. This formulation provides a learning mechanism using maximum likelihood over i.i.d. samples from the target distribution, a sampling mechanism via transformations over points drawn from the base distribution, and exact density evaluation using the inverse of the learned transformation and a change of variables with the learned transform's Jacobian.
Normalizing Flows were popularized in the context of Variational Inference by Rezende and Mohamed (2015) as a choice of tractable posterior for continuous variables that is more capable of representing complex distributions than traditional choices for approximate posteriors, such as Mean Field Approximations. However, the use of flows for density estimation was first formulated by Tabak and Vanden-Eijnden (2010) and was used in subsequent works for clustering and classification tasks in addition to density estimation (Agnelli et al., 2010; Laurence et al., 2014).
The formal structure of a Normalizing Flow is as follows: Let \(Z\in\mathbb{R}^{D}\) be a random variable with known probability density function \(p_{Z}\): \(\mathbb{R}^{D}\mapsto\mathbb{R}\), referred to as the base distribution, and let \(X\in\mathbb{R}^{D}\) be a random variable of interest over which we would like to define a density \(p_{X}\): \(\mathbb{R}^{D}\mapsto\mathbb{R}\), referred to as the target distribution. We then seek a parameterized transformation \(F_{\theta}\): \(\mathbb{R}^{D}\mapsto\mathbb{R}^{D}\) under which \(F_{\theta}(Z)=X\). We restrict our choices for \(F_{\theta}\) to bijective, differentiable mappings, known as _diffeomorphisms_. Under these constraints, the density of a point \(x\sim X\) can be calculated under a change of variables using the determinant of the transformation's Jacobian, \(J_{F}\), as follows, with \(z=F_{\theta}^{-1}(x)\):
\[p_{X}(x)=p_{Z}(z)|detJ_{F}(z)|^{-1}\]
or, framed in terms of the reverse direction,
\[p_{X}(x)=p_{Z}(F_{\theta}^{-1}(x))|detJ_{F}^{-1}(x)|.\]
This product represents the probability density of the inverse-transformed point in the base distribution multiplied by the change in volume incurred by the transformation in an infinitesimal neighborhood around \(z\). In practice, \(F_{\theta}\) is often constructed as the composition of a sequence of \(M\) diffeomorphisms \(f_{1,\theta_{1}},\ldots,f_{M,\theta_{M}}\) such that
\[F_{\theta}=f_{1,\theta_{1}}\circ\cdots\circ f_{M,\theta_{M}}\]
Since each of these sub-transformations is itself invertible, their composition is also invertible and bijective. The determinant of \(J_{F}\) can be computed exactly as:
\[detJ_{F}(z)=\prod_{i=1}^{M}detJ_{f_{i,\theta_{i}}}\]
and the function's inverse as
\[F_{\theta}^{-1}=f_{M,\theta_{M}}^{-1}\circ\cdots\circ f_{1,\theta_{1}}^{-1}.\]
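As a concrete illustration of the change of variables and the layer-wise composition above, the following minimal 1-D sketch (ours, not from any of the surveyed papers) composes two affine bijections and evaluates exact log-densities under a standard normal base distribution:

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D flow: F = f_2 . f_1 with affine layers f_i(z) = a_i * z + b_i
a, b = np.array([2.0, 0.5]), np.array([1.0, -3.0])

def forward(z):
    # apply f_1, then f_2
    for ai, bi in zip(a, b):
        z = ai * z + bi
    return z

def log_density(x):
    # log p_X(x) = log p_Z(F^{-1}(x)) + log|det J_{F^{-1}}(x)|
    for ai, bi in zip(a[::-1], b[::-1]):   # invert layers in reverse order
        x = (x - bi) / ai
    return norm.logpdf(x) - np.log(np.abs(a)).sum()

samples = forward(np.random.randn(5))
print(log_density(samples))
```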
### Training of Normalizing Flows
Normalizing Flows can be trained in one of two ways, depending on the nature of access to the target distribution during training. In the setting where samples from \(p_{x}\) are available, but not their densities, the model parameters \(\theta\) can be estimated using the forward KL-Divergence:
\[\mathcal{L}_{\theta} =D_{KL}\left[p_{x}^{*}(x)\;||\;p_{X}(x;\theta)\right]\] \[=-\mathbb{E}_{p_{x}^{*}(x)}[\log p_{x}(x;\theta)]+const.\] \[=-\mathbb{E}_{p_{x}^{*}(x)}[\log p_{Z}(F_{\theta}^{-1}(x))+\log |detJ_{F}^{-1}(x)|]+const.\]
With a set of N samples \(\{x_{i}\}_{i=1}^{N}\), we can estimate the above loss as
\[\mathcal{L}_{\theta}\approx-\frac{1}{N}\sum_{i=1}^{N}\left[\log p_{Z}(F_{\theta}^{-1}(x_{i}))+\log|\det J_{F^{-1}}(x_{i})|\right]+const.\]
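As an executable illustration of this estimator, the sketch below fits a single 1-D affine flow to samples by gradient descent on the Monte Carlo forward-KL loss; the gradients are hand-derived for this affine case, and the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

# Maximum likelihood fit of a toy 1-D affine flow x = a*z + b (a stand-in
# for F_theta) with a standard normal base; loss per sample is
# 0.5 * z^2 + log|a| + const, where z = (x - b) / a.

rng = np.random.default_rng(0)
x = 3.0 + 2.0 * rng.standard_normal(5000)   # samples from the "target"
a, b, lr = 1.0, 0.0, 0.1

for _ in range(500):
    z = (x - b) / a
    grad_a = -np.mean(z**2) / a + 1.0 / a    # d loss / d a
    grad_b = -np.mean(z) / a                 # d loss / d b
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # should approach the data's std (2) and mean (3)
```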
In the setting where it is possible to evaluate the target density \(p_{x}^{*}\) cheaply, but it is not straightforward to draw samples from said distribution, model parameters \(\theta\) can be estimated using the reverse KL-Divergence:
\[\mathcal{L}_{\theta} =D_{KL}\left[p_{X}(x;\theta)\;||\;p_{x}^{*}(x)\right]\] \[=\mathbb{E}_{p_{X}(x;\theta)}\left[\log p_{X}(x;\theta)-\log p_{x}^{*}(x)\right]\]
### Limitations of Normalizing Flows
Though Normalizing Flows are in principle capable of representing arbitrarily complex target distributions (Papamakarios et al., 2021), for choices of simple base distributions and reasonably smooth transformations they suffer from topological limitations (Stimper et al., 2021). Strict bijectivity enforces that the input, output and all intermediary spaces share identical dimensionality and topology. Cornish et al. (2019) demonstrate that for base and target distributions with distinct support topologies (e.g. differing in the number of connected components or the number of holes), and for any candidate transformation where \(F_{\theta}\) and \(F_{\theta}^{-1}\) are continuous, it is impossible to represent the target distribution exactly as a transformation of the base distribution; an arbitrarily accurate approximation requires the bi-Lipschitz constant of \(F_{\theta}\), a measure of a function's "invertibility" (Behrmann et al., 2020), to approach \(\infty\).
Evidence of this limitation can be seen in a "smearing" effect when attempting to represent a bi-modal or multi-modal target distribution using a standard unimodal Gaussian as a base distribution, where sharp boundaries cannot be expressed and density is leaked outside the support of the true target distribution (Figure 1). Further, under the _manifold hypothesis_ (Bengio et al., 2012), if real-world distributions reside on a low-dimensional (\(r\ll d\)) manifold of the spaces they inhabit, it is a relative certainty that the base and target distributions will have mismatched support topologies and that probability density will leak outside of the target support.
_Figure taken from Stimper et al. (2021)_
## 3 Variational Autoencoders
Variational Autoencoders (VAEs) are a likelihood-based class of models that provide a principled framework for optimizing latent variable models (Kingma and Welling, 2013). A VAE consists of two models, a recognition model (encoder) and a generative model (decoder), that are coupled together. The recognition model approximates the posterior over the latent random variables, which is passed as input to the generative model to generate samples. The generative model, in turn, provides a scaffolding or structure for the recognition model to learn meaningful representations of the data. The recognition model is the approximate inverse of the generative model according to Bayes' rule (Kingma and Welling, 2019).
In the typical setting for a latent variable model, we have some observed variables and some unobserved variables. To estimate the unconditional density of the observed variables, also called the model evidence, we marginalize over the joint distribution of the observed and unobserved variables, parameterized by \(\theta\). This is given by
\[p_{\theta}(x)=\int_{Z}p_{\theta}(x,z)dz\]
Figure 1: An example of "smearing" in (\(c\)), where the target distribution (\(a\)) and the base distribution (\(b\)) differ in their number of connected components.
Framing the problem through an implicit distribution over \(x\) provides a great deal of flexibility. When we marginalize over the latents, we end up with a compound probability distribution or mixture model. For example, if \(z\) is discrete and \(p_{\theta}(x|z)\) is a Gaussian distribution, then \(p_{\theta}(x)\) will be a mixture of Gaussians. For continuous \(z\), \(p_{\theta}(x)\) can be seen as an infinite mixture. Thus, depending on the choice of the latent distribution, we can control the expressivity of the unconditional density \(p_{\theta}(x)\) as desired.
This compound distribution, however, is obtained by integrating over the support of the latent distribution. Most of the time this integral is intractable, and thus we cannot differentiate with respect to its parameters and optimize it using gradient descent. While the joint density \(p_{\theta}(x,z)\) is efficient to compute, the intractability of \(p_{\theta}(x)\) is related to the intractability of the posterior over the latent variable, \(p_{\theta}(z|x)\) (Kingma and Welling, 2019). From the chain rule, we have the following relationship between the densities
\[p_{\theta}(z|x)=\frac{p_{\theta}(x,z)}{p_{\theta}(x)}\]
The intractability of \(p_{\theta}(z|x)\) leads to the intractability of \(p_{\theta}(x)\). To overcome this hurdle, we employ approximate inference techniques. The framework of VAEs provides a computationally efficient way of optimizing latent variable models jointly with a corresponding inference model using gradient descent (Kingma and Welling, 2019). This is achieved by introducing the encoder or recognition model: a parametric inference model \(q_{\phi}(z|x)\), where \(\phi\) is the set of variational parameters.
Consequently, the optimization objective of VAEs is the variational lower bound or evidence lower bound (ELBO), where we optimize the variational parameters \(\phi\) such that
\[q_{\phi}(z|x)\approx p_{\theta}(z|x)\]
Figure 2: Computational flow in a VAE
_Figure taken from Kingma and Welling (2019)_
This follows from the derivation shown below:
\[\log p_{\theta}(x) =\mathbb{E}_{q_{\phi}(z|x)}\log p_{\theta}(x)\] \[=\mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{p_{\theta}(x,z)}{p_{ \theta}(z|x)}\right]\] (chain rule) \[=\mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{p_{\theta}(x,z)q_{ \phi}(z|x)}{q_{\phi}(z|x)p_{\theta}(z|x)}\right]\] \[=\underbrace{\mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{p_{\theta} (x,z)}{q_{\phi}(z|x)}\right]}_{=\mathcal{L}_{\phi,\theta}(x)}+\underbrace{ \mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{q_{\phi}(z|x)}{p_{\theta}(z|x)} \right]}_{=D_{KL}(q_{\phi}(z|x)||p_{\theta}(z|x))}\]
The second term is the Kullback-Leibler (KL) divergence between \(q_{\phi}(z|x)\) and \(p_{\theta}(z|x)\), while the first term is the variational lower bound or evidence lower bound (ELBO).
Since the KL divergence is non-negative, the ELBO is the lower bound on the log-likelihood of the data
\[\mathcal{L}_{\phi,\theta}(x) =\log p_{\theta}(x)-D_{KL}(q_{\phi}(z|x)||p_{\theta}(z|x))\] \[\mathcal{L}_{\phi,\theta}(x) \leq\log p_{\theta}(x)\]
Thus, we can observe that maximizing the ELBO \(\mathcal{L}_{\phi,\theta}(x)\) with respect to \(\theta\) and \(\phi\), will have the following consequences
* It will approximately maximize the marginal likelihood \(p_{\theta}(x)\), implying that our generative model will get better
* It will minimize the KL divergence between \(q_{\phi}(z|x)\) and \(p_{\theta}(z|x)\), implying our approximation of the posterior, \(q_{\phi}(z|x)\), will get better
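To ground the objective, the following minimal NumPy sketch evaluates a single-sample ELBO for a toy Gaussian VAE with linear encoder and decoder; the weights, dimensions and fixed encoder log-variance are illustrative assumptions, not choices from the works cited above.

```python
import numpy as np

# Single-sample ELBO for a toy Gaussian VAE with linear maps standing in
# for the recognition and generative networks.

rng = np.random.default_rng(0)
D, L = 4, 2                              # data and latent dimensionality
We, Wd = rng.normal(size=(L, D)), rng.normal(size=(D, L))
x = rng.normal(size=D)                   # one observation

# Encoder q_phi(z|x): diagonal Gaussian with linear mean, fixed log-variance.
mu, logvar = We @ x, np.full(L, -1.0)

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, I).
eps = rng.standard_normal(L)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder p_theta(x|z): unit-variance Gaussian around a linear mean.
log_px_given_z = -0.5 * np.sum((x - Wd @ z) ** 2 + np.log(2 * np.pi))

# Analytic KL( q_phi(z|x) || N(0, I) ) for diagonal Gaussians.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

elbo = log_px_given_z - kl               # single-sample ELBO estimate
print(elbo)
```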
## 4 Denoising Diffusion
Diffusion-based generative models are parameterized Markov chains mainly used to create high-quality images and videos, and also utilized in data compression and representation learning on unlabeled data. Diffusion models are both tractable and flexible, making them easier to train, evaluate and sample from. The transitions of the Markov chain gradually add noise to the data, and the model then learns to reverse the diffusion process, producing the desired samples after a finite time. Unlike VAEs, diffusion models are latent variable models where the dimensionality of the latent space is the same as that of the original data. The idea of using diffusion for a generative process was initially introduced by Sohl-Dickstein et al. (2015). Song and Ermon (2019) and Ho et al. (2020) improved the initial approach several years later. The latter showed that diffusion models were capable of generating high-quality images and unveiled an equivalence with denoising score matching in training and Langevin dynamics at the sampling stage.
The forward diffusion process gradually adds a small amount of noise in \(T\) steps to the original data until it is indistinguishable from noise. A variance schedule \(\beta_{1},\ldots,\beta_{T}\), where \(\beta_{t}\in(0,1)\), is used to regulate the size of each step. If the noise added at each step is small enough, the transitions of the reverse diffusion process will be conditional Gaussians as well. Given a point sampled from the original data distribution \(x_{0}\sim q(x)\), we have the following transition probabilities (Weng, 2021):
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\]
The forward process in a diffusion probabilistic model is fixed, whereas other diffusion models, such as Diffusion Normalizing Flows, have a trainable forward process (Zhang and Chen, 2021). A desirable property of the forward process, shown by Sohl-Dickstein et al. (2015), is that we can sample \(x_{t}\) given \(x_{0}\) at any time step without having to apply \(q\) repeatedly:
\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_ {t})I)\]
with \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}\).
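A minimal sketch of this closed-form forward sampling, assuming an illustrative linear variance schedule:

```python
import numpy as np

# Closed-form forward diffusion q(x_t | x_0); the linear beta schedule and
# the number of steps are illustrative choices.

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def q_sample(x0, t):
    # x_t ~ N( sqrt(abar_t) * x_0, (1 - abar_t) * I ), drawn in one shot.
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

x0 = rng.standard_normal(8)
print(q_sample(x0, 10))    # barely corrupted
print(q_sample(x0, 999))   # nearly pure Gaussian noise
```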
We could use \(q(x_{t-1}|x_{t})\) to revert the forward diffusion process and generate a sample from the real distribution using random noise as input. Unfortunately, \(q(x_{t-1}|x_{t})\) is intractable, so we instead learn a model \(p_{\theta}(x_{t-1}|x_{t})\) to approximate it. Notably, the reverse conditional probability is tractable if we condition on \(x_{0}\) (Weng, 2021). Similar to VAEs, we can use a variational lower bound to optimize \(-\log p_{\theta}(x_{0})\). After rewriting the lower bound into several KL-divergence terms and one entropy term, and ignoring all terms without learnable parameters, we are left with two components: \(L_{0}=-\log p_{\theta}(x_{0}|x_{1})\) and \(L_{t}=D_{KL}(q(x_{t}|x_{t+1},x_{0})\,||\,p_{\theta}(x_{t}|x_{t+1}))\). \(L_{t}\) is the KL divergence of two Gaussian distributions, where \(q(x_{t}|x_{t+1},x_{0})\) is the tractable reverse conditional distribution mentioned earlier. In Ho et al. (2020), \(L_{0}\) is modeled using a separate discrete decoder and a fixed variance term.
## 5 Score-Based Methods
Transport models typically employ maximum likelihood estimation to learn probability distributions. This reliance can pose a major challenge when the partition function is complex or intractable. Some model classes add constraints to ensure MLE remains feasible; the bijectivity of Normalizing Flows and the variational approximation used by VAEs are two such workarounds. Another framework for this scenario is the score-based method. In this setup, we model the score function rather than the density function directly:
\[s_{\theta}(x)=\nabla_{x}\log p_{\theta}(x)\]
The partition function \(Z_{\theta}\) does not depend on \(x\), so \(\nabla_{x}\log Z_{\theta}=0\) and the score is independent of it. We are therefore able to sidestep any challenging computation posed by the partition function while training. This setup introduces flexibility: we can now work with many families of models that may otherwise have been intractable.
Score-based diffusion is an extension upon this method. As in the previous section on diffusion, this model class involves both a forward and backward stochastic differential equation. Again, the forward pass returns a noisy distribution:
\[x_{t}=e^{-t}x+\sqrt{1-e^{-2t}}\,z\]
where \(x\sim\pi_{d}\) and \(z\sim N(0,I)\).
In score-based diffusion, the reverse pass can now be written as a flow composed of a diffusion drift plus an exact score:
\[dY_{t}=[Y_{t}-\nabla\log\pi_{t}(Y_{t})]dt+\sqrt{2}\,dB_{t}\]
where \(Y_{t}=X_{T-t}\) (Bruna (2022)).
The challenge now falls on estimating the scores from the data. This is particularly impactful in low density regions, where there is less data available to compute scores. In such cases, the model may produce poor quality sampling. To work around this obstacle, noise can be added to the data in increasingly larger magnitudes so that the model can at first train on less corrupted data, then learn low data density regions as the magnitude of noise grows. In this way, adding noise adds stability to score-based methods and aids in producing higher quality samples (Song and Ermon, 2019).
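The sketch below illustrates this noise-annealing idea as a multi-scale denoising score matching loss on toy Gaussian data; the regression target \((x-\tilde{x})/\sigma^{2}\) is the score of the Gaussian perturbation kernel, and the placeholder `score_model` is the exact score of the perturbed toy data, standing in for a trained network.

```python
import numpy as np

# Multi-scale denoising score matching on toy 2-D Gaussian data. Real
# implementations replace `score_model` with a neural network s_theta(x, sigma).

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 2))            # toy data ~ N(0, I)
sigmas = np.geomspace(0.01, 1.0, 10)         # increasing noise scales

def score_model(x_noisy, sigma):
    # Exact score of N(0, (1 + sigma^2) I), which the perturbed toy data follow.
    return -x_noisy / (1.0 + sigma**2)

loss = 0.0
for sigma in sigmas:
    x_noisy = x + sigma * rng.standard_normal(x.shape)
    target = (x - x_noisy) / sigma**2        # score of the perturbation kernel
    resid = score_model(x_noisy, sigma) - target
    # sigma^2 weighting keeps the per-scale losses on a comparable footing.
    loss += sigma**2 * np.mean(np.sum(resid**2, axis=1))
print(loss / len(sigmas))
```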
Score-based models can also be used to compute exact likelihoods. This requires converting the reverse stochastic differential equation into an ordinary differential equation, as seen below:
\[dx=[f(x,t)-\frac{1}{2}g^{2}(t)\nabla_{x}\log p_{t}(x)]dt\]
The above equation, known as the probability flow ODE, becomes a representation of a neural ODE when the score function \(\nabla_{x}\log p_{t}(x)\) is approximated by the score-based model \(s_{\theta}(x,t)\). Because of this relationship, the probability flow ODE takes on characteristics of a neural ODE, such as invertibility, and can compute exact likelihoods using the instantaneous change-of-variables formula (Song et al., 2021).
## 6 Relaxing Constraints
In this section, we explore several works that formulate new model classes by relaxing the strict bijectivity constraints of Normalizing Flows. These works expand the family of admissible functions to include surjective/stochastic transformations and take inspiration from score-based models and diffusion by introducing noise into the training process.
### SurVAE Flows
In an attempt to place VAEs and Normalizing Flows in a common context, Nielsen et al. (2020) introduce SurVAE Flows, a class of models composed of _surjective transformations_, allowing for models that mix bijective, surjective and stochastic components in a single end-to-end trainable framework. They identify three mechanisms needed for probabilistic generative models in this family:
1. A forward transformation: \(p(x|z)\)
2. An inverse transformation: \(p(z|x)\)
3. A likelihood contribution: \(p(x)\)
In a normalizing flow, the forward and reverse transformations are deterministic and can be represented as \(p(x|z)=\delta(x-F(z))\) and \(p(z|x)=\delta(z-F^{-1}(x))\). In a VAE, both directions are stochastic, and a variational approximation \(q(z|x)\) is used in place of the intractable posterior.
They use this decomposition to draw formal connections between stochastic transformations (VAEs) and bijections (normalizing flows) using Dirac delta functions. In particular, they show that the marginal density \(p(x)\) can be expressed under both paradigms as:
\[\log p(x)\simeq\log p(z)+\mathcal{V}(x,z)+\mathcal{E}(x,z)\]
where \(\mathcal{V}(x,z)\) represents the likelihood contribution and \(\mathcal{E}(x,z)\) represents the 'looseness' of the provided bound. Under VAEs and other stochastic transformations, the likelihood contribution term is calculated as \(\log\frac{p(x|z)}{q(z|x)}\) and the 'bound looseness' term is calculated as \(\log\frac{q(z|x)}{p(z|x)}\), while under normalizing flows and other bijections, the likelihood contribution term is \(\log|\det J|\) and the 'bound looseness' term is \(0\).
Through the use of surjective, non-injective layers, the authors present constructions that allow for _inference surjections_- models with exact inverses and stochastic forward transformations- and _generative surjections_- models with exact forward transformations and stochastic right inverses. In doing so, they formulate models that bypass the dimensionality constraints enforced by bijectivity without sacrificing the ability to perform exact likelihood evaluation.
The surjective layers they introduce include absolute value, max value and stochastic permutation. They demonstrate the effectiveness of these surjective layers on a handful of synthetic modeling tasks, particularly those with inherent symmetries. Importantly for this survey, these experiments also demonstrate an ability to model sharper boundaries than a fully bijective flow is capable of producing.
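As a concrete instance, the sketch below implements a 1-D absolute-value inference surjection with a half-normal base distribution and a fair random sign as the stochastic right inverse; this particular construction is our illustrative choice and need not match the exact parameterization of Nielsen et al. (2020).

```python
import numpy as np

# Absolute-value inference surjection: the inference direction z = |x| is
# exact, while the generative direction x = s * z draws a random sign s.

rng = np.random.default_rng(0)

def log_halfnormal(z):
    # log density of |N(0,1)| on z >= 0.
    return np.log(2.0) - 0.5 * (z**2 + np.log(2 * np.pi))

def sample(n):
    z = np.abs(rng.standard_normal(n))       # base draw
    s = rng.choice([-1.0, 1.0], size=n)      # stochastic right inverse
    return s * z

def log_prob(x):
    # Exact likelihood: log p(x) = log p_Z(|x|) + log q(s|z), with q = 1/2.
    return log_halfnormal(np.abs(x)) + np.log(0.5)

print(log_prob(sample(5)))  # equals the standard normal log density exactly
```

Note that the likelihood here is exact despite the forward map being non-injective, which is precisely the appeal of inference surjections.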
The authors argue that a number of recently proposed generative model types can be understood as SurVAE flows, including diffusion models (Ho et al. (2020)), continuously indexed normalizing flows (Cornish et al. (2019)), stochastic normalizing flows (Wu et al. (2020)) and normalizing flows acting on augmented data spaces (Huang et al. (2020)).
### Stochastic Normalizing Flows
Stochastic Normalizing Flows (SNF) are a generalization of the Normalizing Flow framework introduced by Wu et al. (2020). They offer certain benefits over classical stochastic sampling methods and Normalizing Flows for sampling from a known energy model specified up to a normalization constant. Sampling methods such as Markov Chain Monte Carlo (MCMC) or Langevin Dynamics (LD) may have trouble converging because of slow mixing times and local energy minima, and adding a deterministic transformation can help alleviate this problem. On the other hand, introducing noise to relax a Normalizing Flow's bijectivity constraints can help solve the topological constraints mentioned in section 2.2. Figure 3 shows the double-well example: by adding stochasticity, we are able to successfully separate the modes of the distribution, avoiding the "smearing" effect.
Similar to NFs, SNFs are a sequence of transformations; their contribution comes from interleaving the deterministic layers with stochastic blocks, such as Langevin, Metropolis-Hastings, VAE,
and diffusion normalizing flow layers. Both the deterministic and the stochastic transformations help modify a prior into a complicated target distribution. We can use the KL divergence to train both NFs and SNFs. In the former, we can calculate the probability density \(p_{x}(x)\) of a generated sample using the change of variables; however, we can no longer do so here: with the introduction of stochasticity, SNFs are no longer invertible. As described in section 2, we can train a Normalizing Flow by energy-based training, used when we have a model for the target energy, or by maximum likelihood training, used when we have samples. We need to generalize both notions in order to train an SNF. We start by defining \(\mu_{z}(z)\propto e^{-u_{z}(z)}\) as our latent-space distribution and \(\mu_{x}(x)\propto e^{-u_{x}(x)}\) as our target distribution, and we maximize the importance weights
\[\log w(z\to x)=\log\frac{\mu_{x}(x)\mathbb{P}_{b}(x\to z)}{\mu_{z}(z)\mathbb{P}_{f}(z\to x)}=-u_{x}(x)+u_{z}(z)+\sum_{t}\Delta S_{t}\]
where \(y_{t+1}|y_{t}\sim q_{t}(y_{t}\to y_{t+1})\) and \(y_{t}|y_{t+1}\sim\tilde{q}_{t}(y_{t+1}\to y_{t})\) are the forward and backward transition distributions at step \(t\) (the flow layers are no longer deterministic), and \(\Delta S_{t}=\log\frac{\tilde{q}_{t}(y_{t+1}\to y_{t})}{q_{t}(y_{t}\to y_{t+1})}\) represents the forward-backward probability ratio of step \(t\). By maximizing the importance weights we get the following expression for energy-based training
\[\min\mathbb{E}_{\mu_{z}(z)\mathbb{P}_{f}(z\to x)}[-\log w(z\to x)]=\text{KL}(\mu_{z}(z)\mathbb{P}_{f}(z\to x)\,||\,\mu_{x}(x)\mathbb{P}_{b}(x\to z))\]
and for maximum likelihood training
\[\min\mathbb{E}_{\mu_{x}(x)\mathbb{P}_{b}(x\to z)}[-\log w(z\to x)]=\text{KL}(\mu_{x}(x)\mathbb{P}_{b}(x\to z)\,||\,\mu_{z}(z)\mathbb{P}_{f}(z\to x)).\]
where \(\mu_{z}(z)\mathbb{P}_{f}(z\to x)\) and \(\mu_{x}(x)\mathbb{P}_{b}(x\to z)\) are our forward and backward path probabilities. Notably, denoting by \(p_{x}\) the marginal distribution of samples generated by the forward pass, the KL divergence of the paths is an upper bound on the KL divergence of the marginal distributions:
\[\text{KL}(p_{x}\,||\,\mu_{x})\leq\text{KL}(\mu_{z}(z)\mathbb{P}_{f}(z\to x)\,||\,\mu_{x}(x)\mathbb{P}_{b}(x\to z))\]
Finally, we can draw asymptotically unbiased samples from our target distribution \(x\sim\mu_{x}(x)\) by employing the Metropolis-Hastings algorithm and using the importance weights shown above.
Figure 3: Double well problem: a) Normalizing flows, b) NF with stochasticity, c) Sample from true distribution
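The sketch below traces this importance-weight bookkeeping for a toy SNF consisting of a single stochastic (Langevin-type) layer and a 1-D Gaussian target; the kernels, step size and target energy are illustrative assumptions.

```python
import numpy as np

# One-layer toy SNF: a single unadjusted Langevin step from the latent
# mu_z = N(0, 1) toward the target mu_x ~ exp(-u_x), here N(2, 1), with the
# backward kernel chosen as a Langevin step in reverse.

rng = np.random.default_rng(0)
tau = 0.1
u_z = lambda z: 0.5 * z**2
u_x = lambda x: 0.5 * (x - 2.0) ** 2
grad_u_x = lambda x: x - 2.0

def log_gauss(y, mean, var):
    return -0.5 * ((y - mean) ** 2 / var + np.log(2 * np.pi * var))

z = rng.standard_normal(10000)                  # y_0 ~ mu_z
mean_f = z - tau * grad_u_x(z)                  # forward Langevin drift
x = mean_f + np.sqrt(2 * tau) * rng.standard_normal(z.shape)

# Delta S_t = log q_b(y_{t+1} -> y_t) - log q_f(y_t -> y_{t+1})
delta_S = log_gauss(z, x - tau * grad_u_x(x), 2 * tau) - log_gauss(x, mean_f, 2 * tau)

log_w = -u_x(x) + u_z(z) + delta_S              # log importance weights
w = np.exp(log_w - log_w.max())
print(np.sum(w * x) / np.sum(w))                # reweighted mean, roughly 2
```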
### Diffusion Normalizing Flow
Diffusion Normalizing Flow (Zhang and Chen (2021)), or DiffFlow, was introduced as a cross between Normalizing Flows and Diffusion models. DiffFlow is composed of two neural stochastic differential equations (SDEs): a forward pass \(F\) that transforms the data \(X\) into a simple distribution such as a Gaussian, and a backward pass \(B\) that removes noise from the simple distribution to generate samples that match the target data distribution. Like diffusion models, the SDEs are jointly trained to minimize the KL divergence between the forward-pass and backward-pass distributions over trajectories \(\tau\). The objective is as follows:
\[KL(p_{F}(\tau)\,||\,p_{B}(\tau))=\mathbb{E}_{\tau\sim p_{F}}[\log p_{F}(x_{0})]+\mathbb{E}_{\tau\sim p_{F}}[-\log p_{B}(x_{N})]+\sum_{i=1}^{N}\mathbb{E}_{\tau\sim p_{F}}\left[\log\frac{p_{F}(x_{i}|x_{i-1})}{p_{B}(x_{i-1}|x_{i})}\right]\]
Similar to Normalizing Flows, DiffFlow is able to learn while mapping to the latent space. However, DiffFlow relaxes the bijectivity constraint of NFs on this mapping. In doing so, DiffFlow has more expressivity and can learn distributions with sharper boundaries. Further, bijectivity may prevent models from having density support over the whole space. Thus, in lifting the constraint, DiffFlow has been found to perform better on tasks like image generation of complex patterns. The authors also claim that the boosted expressivity of DiffFlow results in better likelihood performance than other NF implementations (Zhang and Chen (2021)).
Diffusion Normalizing Flow bypasses the bijectivity constraint by adding noise to the forward stochastic differential equation. Most diffusion models add noise indiscriminately, which can require many iterations to reach Gaussian noise and can lead to generated distributions with corrupted or missing details. On the other hand, due to the trainability of the forward SDE, DiffFlow adds noise only to targeted areas. Thus, DiffFlow can diffuse noise more efficiently and retain topological details that might have been blurred out in other diffusion processes.
Similar to diffusion, DiffFlow SDEs are composed of a drift term f, a vector valued function, and a diffusion term g, a scalar valued function. The equations are as follows:
Forward SDE: \[dx =f(x,t,\theta)dt+g(t)dw\] Backward SDE: \[dx =[f(x,t,\theta)-g^{2}(t)s(x,t,\theta)]dt+g(t)dw,\]
where \(x\) is the data at time \(t\) and \(w\) represents standard Brownian motion. The main distinguishing factor from Diffusion models is that DiffFlow includes the \(\theta\) parameter in the drift term, which makes the SDEs learnable. From these equations, it is clear that when the diffusion term \(g\) tends to 0, DiffFlow reduces to a Normalizing Flow.
Given the SDEs above, the discretized equations can be written as:
\[x_{i+1} =x_{i}+f_{i}(x_{i})\Delta t_{i}+g_{i}\delta_{i}^{F}\sqrt{\Delta t_{i}}\] \[x_{i} =x_{i+1}-[f_{i+1}(x_{i+1})-g_{i+1}^{2}s_{i+1}(x_{i+1})]\Delta t_{i}+g_{i+1}\delta_{i}^{B}\sqrt{\Delta t_{i}}\]
Returning to KL divergence, given that the first term is a constant and utilizing the discretized SDEs, the objective can be reduced to the form:
\[L=\mathbb{E}_{\delta^{F};x_{0}\sim p_{0}}\left[-\log p_{B}(x_{N})+\sum_{i}\frac{1}{2}(\delta^{B}_{i}(\tau))^{2}\right]\]
where noise is represented as:
\[\delta^{B}_{i}(\tau)=\frac{1}{g_{i+1}\sqrt{\Delta t}}\left[x_{i}-x_{i+1}+[f_{i+1}(x_{i+1})-g^{2}_{i+1}s_{i+1}(x_{i+1})]\Delta t\right]\]
The loss can now be minimized with Monte Carlo gradient descent (Zhang and Chen (2021)).
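A minimal sketch of these discretized updates, with hand-fixed toy functions standing in for the learned drift \(f\) and score \(s\); the schedule and shapes are illustrative.

```python
import numpy as np

# Discretized DiffFlow sketch: run the forward SDE, then accumulate the
# backward-pass noise residuals delta_i^B that enter the training loss.

rng = np.random.default_rng(0)
N, dt = 100, 0.01
g = lambda i: 1.0                       # constant diffusion term
f = lambda x, i: -x                     # toy stand-in for the learned drift
s = lambda x, i: -x                     # toy stand-in for the learned score

x = 1.0 + 0.1 * rng.standard_normal(512)
traj = [x]
for i in range(N):                      # forward pass: data -> noise
    x = x + f(x, i) * dt + g(i) * np.sqrt(dt) * rng.standard_normal(x.shape)
    traj.append(x)

loss_term = 0.0
for i in range(N - 1):
    xi, xi1 = traj[i], traj[i + 1]
    delta_B = (xi - xi1 + (f(xi1, i + 1) - g(i + 1) ** 2 * s(xi1, i + 1)) * dt) / (
        g(i + 1) * np.sqrt(dt)
    )
    loss_term += 0.5 * np.mean(delta_B**2)
print(loss_term)
```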
### Stochastic Normalizing Flows and Diffusion Normalizing Flows
Zhang and Chen (2021) introduced Diffusion Normalizing Flows (DNF) as a new type of model. Nevertheless, per Hagemann et al. (2021), if we view SNFs as a pair of Markov chains \(((X_{0},\ldots,X_{t}),(Y_{t},\ldots,Y_{0}))\) where \((Y_{t},\ldots,Y_{0})\) is the reverse Markov chain of \((X_{0},\ldots,X_{t})\), we can view DNFs as a type of SNF with specific forward and backward layers
\[\mathcal{K}_{t}(x,\cdot) =P_{X_{t}|X_{t-1}=x}=\mathcal{N}(x+\epsilon g_{t-1}(x),\epsilon h ^{2}_{t-1})\] \[\mathcal{R}_{t}(x,\cdot) =P_{Y_{t-1}|Y_{t}=x}=\mathcal{N}(x+\epsilon(g_{t}(x)-h^{2}_{t}s_{ t}(x)),\epsilon h^{2}_{t})\]
The equations above come from the Euler discretization with step size \(\epsilon\) of the stochastic differential equation with drift \(g_{t}\), diffusion coefficient \(h_{t}\) and Brownian motion \(B_{t}\)
\[dX_{t}=g_{t}(X_{t})dt+h_{t}dB_{t}\]
## 7 Discussion
In this section, we discuss the role of stochasticity in normalizing flows and compare the various techniques introduced above on the basis of the following criteria:
* _Expressivity:_ while expressivity is usually used in a broad sense in the literature, we focus on each technique's ability to capture the various modes of the distribution being modeled, as well as regions with relatively low density.
* _Training speed:_ we characterize training speed as the time taken by each technique to reach convergence.
* _Ease of likelihood computation:_ for this criterion, we look at the tractability of the likelihood computation for density estimation.
* _Sampling efficiency:_ we differentiate sampling efficiency from data efficiency, with the former referring to the computational cost required to generate samples and the latter referring to the number of samples required for optimization.
We also direct the reader to the comprehensive comparison of the bulk of the techniques covered in this paper performed by Bond-Taylor et al. (2022).
### Expressivity
As described in section 3, VAEs employ the use of latent variables. The choice of the latent distribution provides them with a great deal of flexibility, resulting in highly expressive models. On the other hand, the bijectivity constraints imposed by the normalizing flows framework result in representational insufficiency. Their representational capacity depends on the type of flow used in the model. For example, linear flows are limited in their expressivity. Coupling and autoregressive flows, two of the most widely used flow architectures, enable normalizing flows to represent very complex distributions; however, they are still limited in their expressivity by the invertibility constraint imposed by the framework (Kobyzev et al., 2021).
Stochastic normalizing flows overcome some of these limitations by incorporating stochastic sampling blocks into the normalizing flow framework, thus improving representational capacity over deterministic flow architectures by overcoming topological constraints (Wu et al., 2020). DiffFlow enjoys better expressivity than vanilla normalizing flows by adding noise to overcome their bijectivity constraints. The aforementioned constraints prevent normalizing flows from expanding density support to the whole space when transforming complex distributions to a base distribution. As a result, DiffFlow can learn distributions with sharper boundaries (Zhang and Chen, 2021).
Score-based methods are notably flexible due to the fact that they are independent of the normalizing constant \(Z_{\theta}\). This allows score-based methods to represent a more diverse set of models. Similar to other types of models, score-based methods are limited by the constraint that the dimensions of their input and output must match. Otherwise, score-based models may take the form of any vector-valued function and are thus quite expressive (Song and Ermon, 2019).
Surjective flows empirically demonstrate an ability to represent sharper boundaries than vanilla NFs; however, their methods are non-general and require prior knowledge of relevant symmetries in the target distribution (Nielsen et al., 2020).
### Training speed
Normalizing Flows are known to be inefficient and difficult to train due to the invertibility constraints on their transformations and, as a consequence, input and output spaces of the same dimension (Bond-Taylor et al., 2022). By adding noise and bypassing strong bi-Lipschitz limitations, stochastic normalizing flows are easier to optimize. Moreover, adding stochastic layers is not computationally costly, since they have linear computational complexity (Wu et al., 2020).
DiffFlow tends to train slower in comparison to other models. While certain aspects, such as a trainable forward function, help improve efficiency, DiffFlow ultimately relies on backpropagation through the whole chain, making it slow to train. On the other hand, VAEs reach convergence quite quickly: thanks to the reparameterization trick proposed by Kingma and Welling (2013), VAEs can use SGD during optimization.
Score-based models may struggle with training for low density regions, especially if the target distribution has multiple modes with a degree of separation. The model may then fail to converge in a reasonable time. As mentioned in section 5, adding progressively more noise to the data in training can improve model convergence in such cases.
### Ease of Likelihood Computation
Normalizing flows benefit from having bijective, differentiable (diffeomorphic) transformations applied to base distributions, resulting in the ability to compute exact likelihoods (Kobyzev et al., 2021). Adding noise to stochastic and diffusion normalizing flows increases expressivity over normalizing flows, but at the cost of no longer being able to compute exact likelihoods. The parameters of a stochastic normalizing flow can be optimized by minimizing the KL divergence between the forward and backward path probabilities. This minimization makes use of a variational approximation, which precludes them from computing exact likelihoods (Wu et al., 2020). Diffusion normalizing flows add noise in the forward stochastic differential equation; consequently, they use the reparameterization trick proposed by Kingma and Welling (2013), and thus exact likelihoods cannot be computed. To estimate likelihoods, they use the marginally equivalent SDE (Zhang and Chen, 2021).
VAEs optimize the variational lower bound, an approximation of the log-likelihood we are trying to optimize, and as a result we cannot compute exact likelihoods. Importance sampling or Monte Carlo sampling techniques are used to compute the likelihood of data after training is completed (Kingma and Welling, 2019). Finally, score-based methods provide an avenue to compute exact likelihoods. This requires some manipulation of the equations and the introduction of invertibility into the model. According to Song et al. (2021), score-based methods are then able to achieve 'state-of-the-art likelihoods' on some image generation tasks.
Among the transformations proposed in Nielsen et al. (2020), only inference surjections, i.e. surjective layers that have full support in the base distribution and partial support in the target distribution, are able to produce exact likelihoods. Generative surjections, on the other hand, can only provide stochastic lower bound estimates.
### Sampling Efficiency
Sampling efficiency is mainly affected by the complexity of the model and the number of iterations required to generate a sample. For example, VAEs consist of an encoder and a decoder that are typically complex neural networks. On the other hand, VAEs can generate samples in a single network pass and are thus more efficient than models that rely on MCMC, such as stochastic normalizing flows (Bond-Taylor et al., 2022).
The sampling efficiency of normalizing flows is related to the cost of the generative direction. However, since the transformations applied to the base distribution are deterministic, samples can be generated in a single network pass, and thus normalizing flows enjoy high sampling efficiency. In comparison, diffusion normalizing flows have poor sampling efficiency, since they require MCMC during sampling. Nevertheless, they have better sampling efficiency than diffusion probabilistic models, since they require fewer discretization steps (Zhang and Chen, 2021). Similar to diffusion normalizing flows, stochastic normalizing flows have lower sampling efficiency than vanilla normalizing flows because they use an MCMC method, the Metropolis-Hastings algorithm, to generate samples.
Score-based methods tend to be slow in generating samples, due to the iterative nature of their sampling process. However, score-based methods are often able to produce high quality samples, comparable to GANs in image generation (Song and Ermon (2019)).
### On the Role of Stochasticity
Both Stochastic Normalizing Flows and Diffusion Normalizing Flows introduce stochasticity into their model formulations, though they provide different explanations for the role that stochasticity plays in improving expressivity. Wu et al. (2020) frame the addition of stochastic layers in SNFs as incorporating the strengths of deterministic methods at performing large-scale probability transport with the fine-grained expressivity of Metropolis-Hastings MC sampling, effectively removing samples in areas of lower probability density without incurring the sampling-time costs of running a fully MC-reliant model (Wu et al., 2020). Zhang and Chen (2021), on the other hand, attribute the expressivity improvements of DNFs to an expansion of the training support to larger areas of the ambient space, improving gradient coverage during training (Zhang and Chen, 2021).
Both agree that adding stochasticity is central to bypassing topological constraints and representing sharp density boundaries in the target space, but the exact mechanism by which it improves expressivity is not fully elucidated by either work. Though beyond the scope of this paper, Bansal et al. (2022) demonstrate experimental evidence of successful diffusion-like models trained using deterministic, non-Gaussian forward processes, such as blurring and masking, calling into question the need for stochastic noise at all. None of the surjective layers proposed by Nielsen et al. (2020) utilize added noise, yet they are nonetheless able to represent sharp boundaries in the target distribution. The role and necessity of added noise in improving model expressivity are not clear from these works and require further investigation.
## 8 Conclusion
This paper delved into a variety of generative models and compared their relative performance on expressivity, training speed, sampling efficiency and likelihood tractability. Starting from a basis of likelihood-based models, we explored the ability of Normalizing Flows and Variational Autoencoders to directly learn a distribution's probability density in addition to their capacity to generate samples. VAEs have an encoder-decoder framework trained by optimizing the evidence lower bound, while NFs are structured as a series of bijective, differentiable transformations mapping a data distribution to a simple base distribution.
The strict constraints of the NF architecture narrow the types of distributions that the model can represent. Thus, we explored several models that relax the strict bijectivity constraint of NFs. The variations we studied borrow aspects from different frameworks, including diffusion and score-based models, and introduce stochasticity into the training process. The introduction of noise adds both flexibility and stability to these models. These variations of NFs have performed well in practice, particularly on sampling tasks like image generation. While they cannot be used to compute exact likelihoods, they add much to the field in terms of expressivity and sampling efficiency. | Normalizing Flows (NF) describe a class of models that express a more complex target distribution as a composition of bijective transformations of a simpler base distribution. By restricting the space of candidate transformations to diffeomorphisms, NFs allow efficient, exact sampling and density evaluation, and retain flexibility as generative models. However, the restriction to diffeomorphisms forces the input, output and all intermediate spaces to share the same dimensionality, limiting the ability to effectively represent target distributions with complex topologies. Furthermore, when the base and target distributions are not homeomorphic, Normalizing Flows leak mass outside the support of the target. This survey covers recent work that combines elements of other classes of generative models, such as VAEs and score-based diffusion, to relax the strict bijectivity constraints of NFs. |
2309.13629 | Periodic Variable Star Classification with Deep Learning: Handling Data
Imbalance in an Ensemble Augmentation Way | Time-domain astronomy is progressing rapidly with the ongoing and upcoming
large-scale photometric sky surveys led by the Vera C. Rubin Observatory
project (LSST). Billions of variable sources call for better automatic
classification algorithms for light curves. Among them, periodic variable stars
are frequently studied. Different categories of periodic variable stars have a
high degree of class imbalance and pose a challenge to algorithms including
deep learning methods. We design two kinds of architectures of neural networks
for the classification of periodic variable stars in the Catalina Survey's Data
Release 2: a multi-input recurrent neural network (RNN) and a compound network
combining the RNN and the convolutional neural network (CNN). To deal with class
imbalance, we apply Gaussian Process to generate synthetic light curves with
artificial uncertainties for data augmentation. For better performance, we
organize the augmentation and training process in a "bagging-like" ensemble
learning scheme. The experimental results show that the better approach is the
compound network combining RNN and CNN, which reaches the best result of 86.2% on
the overall balanced accuracy and 0.75 on the macro F1 score. We develop the
ensemble augmentation method to solve the data imbalance when classifying
variable stars and prove the effectiveness of combining different
representations of light curves in a single model. The proposed methods would
help build better classification algorithms of periodic time series data for
future sky surveys (e.g., LSST). | Zihan Kang, Yanxia Zhang, Jingyi Zhang, Changhua Li, Minzhi Kong, Yongheng Zhao, Xue-Bing Wu | 2023-09-24T13:08:32 | http://arxiv.org/abs/2309.13629v1 | # Periodic variable star classification with deep learning: handling data imbalance in an ensemble augmentation way
###### Abstract
Time-domain astronomy is progressing rapidly with the ongoing and upcoming large-scale photometric sky surveys led by the Vera C. Rubin Observatory project (LSST). Billions of variable sources call for better automatic classification algorithms for light curves. Among them, periodic variable stars are frequently studied. Different categories of periodic variable stars have a high degree of class imbalance and pose a challenge to algorithms including deep learning methods. We design two kinds of architectures of neural networks for the classification of periodic variable stars in the Catalina Survey's Data Release 2: a multi-input recurrent neural network (RNN) and a compound network combining the RNN and the convolutional neural network (CNN). To deal with class imbalance, we apply Gaussian Process to generate synthetic light curves with artificial uncertainties for data augmentation. For better performance, we organize the augmentation and training process in a "bagging-like" ensemble learning scheme. The experimental results show that the better approach is the compound network combining the RNN and the CNN, which reaches the best result of 86.2 per cent on the overall balanced accuracy and 0.75 on the macro F1 score. We develop the ensemble augmentation method to solve the data imbalance when classifying variable stars and prove the effectiveness of combining different representations of light curves in a single model. The proposed methods would help build better classification algorithms of periodic time series data for future sky surveys (e.g. LSST).
Periodic variable stars(1213) -- Light curve classification(1954) -- Neural networks(1933) -- Time domain astronomy(2109) -- Algorithms(1883)
## 1 Introduction
With the upcoming large-scale sky surveys represented by the Vera C. Rubin Observatory project (LSST; Ivezic et al., 2019), time-domain astronomy is now entering a golden age with overwhelming data. Besides, the ongoing surveys are still accumulating data to be analyzed, such as the Zwicky Transient Facility (ZTF; Bellm et al., 2019), the Optical Gravitational Lensing Experiment (OGLE; Udalski et al., 2015), and the Catalina Real-Time Transient Survey (CRTS; Drake et al., 2009). Billions of observed variable sources demand automatic classification for further research. Machine learning plays a prominent role in this task and is broadly divided into two types of algorithms: traditional approaches with artificial input features and deep learning methods based on various neural networks.
Among numerous kinds of variable sources, periodic variable stars are often studied as a particular category because of their importance and distinct observable difference from other sources. For example, Cepheid stars and RR Lyrae stars can be used as standard candles for distance measurement (Alloin & Gieren, 2003), and hence are crucial for studying the structure of the Galaxy. However, the samples of different periodic variable star classes are highly imbalanced, meaning some classes dominate the known samples while others have few cases. This makes it challenging
to train a satisfying machine learning model for the classification task, due to the scarcity of some samples and the bias towards the majority classes.
Techniques for dealing with the class imbalance in machine learning can be grouped into three categories: data-level, algorithm-level, and hybrid approaches (Henning et al., 2023). Data-level methods focus on adjusting the training data by resampling and augmentation. Algorithm-level techniques generally modify algorithms with a weight or cost schema, under the assumption that the data are sufficient. Hybrid approaches combine both of them and are often implemented with ensemble learning methods. For traditional machine learning algorithms with artificial input features, a great deal of research is aimed at imbalanced learning, such as the Synthetic Minority Over-sampling Technique (SMOTE; Chawla et al., 2002) and the Self-paced Ensemble (SPE; Liu et al., 2020). Nevertheless, the subject of deep learning with class-imbalanced data is understudied (Henning et al., 2023). Although some techniques exist for deep learning to avoid bias towards majority classes on imbalanced data, the issue remains due to the lack of data. Deep learning relies much more on data sufficiency than traditional machine learning algorithms because of its high model complexity with many parameters. A practical solution would be to find an ideal data augmentation method, especially for astronomical light curves. Physical models and data-driven methods are often applied to generate synthetic light curves for data augmentation. As for variable stars, there are no proper physical models for now, so only data-driven approaches can be considered, including adding noise (Naul et al., 2018), the Gaussian Process (Faraway et al., 2016; Castro et al., 2017) and deep generative models (Martinez-Palomera et al., 2022).
The recurrent neural network (RNN) is a widely used deep learning algorithm for light curves, accepting both the light curve and its uncertainties as input (Naul et al., 2018). An RNN can also be combined with scalar contextual information, such as period and colour, to form a multi-input model (Burhanudin et al., 2021). A synthetic light curve for an RNN model should have well-modelled uncertainties, which were ignored by previous studies. Another deep learning method to classify variable stars uses the convolutional neural network (CNN), as presented by Szklenar et al. (2022). The CNN takes the light curve image as input, which also demands Gaussian Process augmentation.
Our work aims to find a suitable neural network architecture for periodic variable star classification on the CRTS data, while applying proper augmentation to handle data imbalance. We design a multi-input RNN-based neural network and apply the Gaussian Process to generate artificial light curves with uncertainties. We also develop a compound neural network architecture by combining the RNN and CNN structures. To mitigate model overfitting on minority classes while improving classification performance, we organize the augmentation and training process in a "bagging-like" ensemble learning scheme.
This paper is organized as follows. Section 2 briefly introduces the CRTS variable star data and details the Gaussian process augmentation. Section 3 describes the ensemble learning scheme we adopt. In Section 4, we characterize the two neural network architectures we design. Section 5 gives the classification result by a comprehensive evaluation on an imbalance test data set. Section 6 presents some limitations and future work, and finally, the summary is provided in Section 7. We publish our source code on [https://github.com/52Hzihan/mixnn4vs](https://github.com/52Hzihan/mixnn4vs).
## 2 Data and Augmentation
The Catalina Real-Time Transient Survey (CRTS) surveyed 33,000 deg\({}^{2}\) of the sky and produced more than 500 million light curves for various sources. Among them, CRTS DR2 provided a catalogue for variable stars (Drake et al., 2017). Similar to Hosenie et al. (2020), we take 11 classes into account for our analysis, as presented in Table 1. In addition, CRTS uses an unfiltered telescope, so the lack of colour data highlights the importance of extracting information from light curves when implementing classification.
To clean the dataset, we fit the phase-folded light curves with Friedman's SuperSmoother (Friedman, 1984) and exclude data points deviating more than three standard deviations from the smoothed light curves. We also delete points whose errors are greater than twice the average error.
| Classes of variable stars | No. |
| --- | --- |
| RRab | 4325 |
| Blazhko | 171 |
| RRc | 3752 |
| RRd | 502 |
| Rot (Rotational) | 3636 |
| Ecl (Contact and Semi-Detached Binary) | 18803 |
| EA (Detached Binary) | 4509 |
| LPV (Long Period Variable) | 1286 |
| \(\delta\)-Scuti | 147 |
| ACEP (Anomalous Cepheids) | 153 |
| Cep-II (Type-II Cepheids) | 153 |

Table 1: The number of different classes of CRTS variable stars.
### Gaussian Process
In order to deal with the imbalance of data, we need to generate simulated light curves for augmentation. Since no proper physical model is available for all types of variable stars, the only way is to create synthetic data from natural light curves. Because the deep learning approach we adopt takes uncertainties as part of inputs, we ought to build models for both natural light curves and their uncertainties. Therefore we turn to Gaussian Process (GP; Rasmussen and Williams, 2006), a stochastic process for modelling time series, which is applied for light curve data augmentation in previous studies (Boone, 2019).
A GP is a distribution over functions. In our simple form with scalar input, it can be viewed as an infinite-dimensional joint Gaussian distribution over time, which is fully described by its mean function and kernel function (i.e. covariance function).
\[f(t)\sim GP(\mu(t),k(t,t^{\prime})) \tag{1}\]
where \(\mu(t)\) is the mean function and \(k(t,t^{\prime})\) computes the covariance between two points \(t\) and \(t^{\prime}\).
To fit a GP model for a light curve, we need to choose a prior kernel function and a prior mean function, then calculate the Bayesian posterior functions under the data. Here we adopt the Matern 5/2 kernel as the prior kernel, which is given by
\[k_{Matern52}(\tau)=\alpha^{2}\left(1+\frac{\sqrt{5}\tau}{\rho}+\frac{5\tau^{2} }{3\rho^{2}}\right)\exp\left(-\frac{\sqrt{5}\tau}{\rho}\right) \tag{2}\]
where \(\tau=|t-t^{\prime}|\), and \(\alpha\) and \(\rho\) are the hyperparameters to be optimized. For the prior mean function, we can simply set \(\mu(t)=0\).
Given a real light curve with uncertainties \((\mathbf{t},\mathbf{m},\boldsymbol{\sigma})\), For any other random time points \(\mathbf{t}_{*}\), the joint distribution of \(\mathbf{m}\) and the predicted magnitude \(\mathbf{m}_{*}\) will be a multivariate Gaussian distribution as follow:
\[\begin{bmatrix}\mathbf{m}\\ \mathbf{m}_{*}\end{bmatrix}\sim\mathcal{N}\left(\begin{bmatrix}\mu(\mathbf{t} )\\ \mu(\mathbf{t}_{*})\end{bmatrix},\begin{bmatrix}k(\mathbf{t},\mathbf{t})+diag( \boldsymbol{\sigma})^{2}&k(\mathbf{t},\mathbf{t}_{*})\\ k(\mathbf{t}_{*},\mathbf{t})&k(\mathbf{t}_{*},\mathbf{t}_{*})\end{bmatrix}\right) \tag{3}\]
The posterior distribution of \(\mathbf{m}_{*}\) comes as \(\mathbf{m}_{*}\sim\mathcal{N}(\overline{\mu}(\mathbf{t}_{*}),\overline{k}( \mathbf{t}_{*},\mathbf{t}_{*}))\), where
\[\overline{\mu}(\mathbf{t}_{*})=k(\mathbf{t}_{*},\mathbf{t})[k(\mathbf{t}, \mathbf{t})+diag(\boldsymbol{\sigma})^{2}]^{-1}(\mathbf{m}-\mu(\mathbf{t}))+ \mu(\mathbf{t}_{*}) \tag{4}\]
\[\overline{k}(\mathbf{t}_{*},\mathbf{t}_{*})=k(\mathbf{t}_{*},\mathbf{t}_{*})- k(\mathbf{t}_{*},\mathbf{t})[k(\mathbf{t},\mathbf{t})+diag(\boldsymbol{\sigma})^{2}]^{ -1}k(\mathbf{t},\mathbf{t}_{*}) \tag{5}\]
The hyperparameters of the prior kernel function are optimized by maximizing the log-likelihood function
\[\ln\mathcal{L}(\alpha,\rho)=-\frac{1}{2}\mathbf{r}^{T}\mathbf{K}^{-1}\mathbf{ r}-\frac{1}{2}\ln\det(\mathbf{K})-\frac{N}{2}\ln(2\pi) \tag{6}\]
where \(\mathbf{K}=k(\mathbf{t},\mathbf{t})+diag(\boldsymbol{\sigma})^{2}\), \(\mathbf{r}\) is the residual after subtracting the model prediction means from the observations, and \(N\) is the number of data points.
We employ the GP regression using **George**(Ambikasaran et al., 2015).
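A minimal sketch of this fitting procedure with **George** and SciPy on a toy folded light curve, following George's standard GP interface; the arrays stand in for a real \((\mathbf{t},\mathbf{m},\boldsymbol{\sigma})\) triple, and the initial hyperparameters are illustrative.

```python
import numpy as np
import george
from george import kernels
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 150))                   # folded phases
mag = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
err = np.full(t.size, 0.05)

gp = george.GP(0.5**2 * kernels.Matern52Kernel(metric=0.1))
gp.compute(t, err)          # folds diag(sigma)^2 into the covariance matrix

def neg_log_like(p):
    gp.set_parameter_vector(p)
    return -gp.log_likelihood(mag, quiet=True)

def grad_neg_log_like(p):
    gp.set_parameter_vector(p)
    return -gp.grad_log_likelihood(mag, quiet=True)

res = minimize(neg_log_like, gp.get_parameter_vector(), jac=grad_neg_log_like)
gp.set_parameter_vector(res.x)                        # optimized alpha, rho

t_star = np.linspace(0, 1, 300)
mu, var = gp.predict(mag, t_star, return_var=True)    # posterior mean/variance
```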
### Generate Synthetic Light Curves with Uncertainty
The GP allows us to generate synthetic light curves \((\mathbf{t}_{*},\mathbf{m}_{*},\boldsymbol{\sigma}_{*})\) on randomly sampled time points \(\mathbf{t}_{*}\), where \(\boldsymbol{\sigma}_{*}^{2}=\mathrm{diag}(\overline{k}(\mathbf{t}_{*},\mathbf{t}_{*}))\). To make a synthetic light curve more "real", in the specific generation process we scale up \(\boldsymbol{\sigma}_{*}\) to ensure its mean error is the same as that of its prototype. The time points \(\mathbf{t}_{*}\) are sampled to have the same size as \(\mathbf{t}\), and the magnitudes \(\mathbf{m}_{*}\) come from the corresponding \(\overline{\mu}(\mathbf{t}_{*})\) with added Gaussian noise using the scaled \(\boldsymbol{\sigma}_{*}\) as standard deviations. In addition, we apply a random phase shift to each synthetic light curve to enhance diversity.
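Continuing from the fitted `gp`, `mag` and `err` of the previous sketch, the helper below follows this generation recipe; the helper name and the `period` argument are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize(gp, mag, err, n_points, period=1.0):
    # Sample new time points (same size as the prototype in our setup).
    t_star = np.sort(rng.uniform(0, period, n_points))
    mu, var = gp.predict(mag, t_star, return_var=True)
    sigma_star = np.sqrt(var)
    sigma_star *= err.mean() / sigma_star.mean()   # match the prototype's mean error
    mag_star = mu + sigma_star * rng.standard_normal(n_points)
    t_star = (t_star + rng.uniform(0, period)) % period   # random phase shift
    return t_star, mag_star, sigma_star
```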
Figure 1 shows examples of GP regression and synthetic light curves on several classes. The light curves are folded with twice the period for better exhibition. As a data-driven model, the GP regression result gives significant uncertainty at sections with few observations.
## 3 Ensemble Learning Method
A general approach for data augmentation is to produce enough synthetic light curves so that every class has an equal size in the training data. However, in the case of the deep learning method, especially the recurrent neural network we adopt, this equal augmentation method runs into problems. Although an artificial light curve has different numerical values from its prototype, their high-level features in the neural networks may still resemble each other, since their shapes look similar. For categories with few entries, equal augmentation means too many simulations for a single light curve, which may cause overfitting on samples of small-size classes when training the neural networks. Employing fewer simulations and applying class weights may partly overcome the overfitting problem; however, this is still a trade-off between large-size and small-size classes, and it makes a limited contribution to the overall performance.
Considering the above issues, we develop an ensemble learning method for neural networks to tackle data imbalance, as depicted in Figure 2. Like the classical ensemble manner of "bagging" (Breiman, 1996), our
approach is to build several sub-datasets from the training data and then train a neural network on each sub-dataset. The classification result is an average of all networks' outputs. When setting up sub-datasets, we apply random undersampling for large-size categories while implementing Gaussian Process augmentation for small-size categories, ensuring every category has an equal and moderate size. Notice that the augmentation procedure generates different synthetic light curves for each sub-dataset.
Besides the benefit of handling data imbalance, the ensemble learning method has its well-known advantage of improving performance. Our "bagging-like" approach also takes this advantage to reach higher classification accuracy at the expense of training multiple networks. However, there is no need to worry about the overall computation cost: we can train every network for only a few epochs by choosing a large learning rate and then let the ensemble procedure combine these "weak learners" into a strong model.
We split the whole dataset into a training set, a validation set and a test set by a ratio of \(6:1:3\) with no overlapping. The training set is used to build 10 sub-datasets with 1875 light curves for each class. Note that these two hyperparameters are not optimized, as the computational cost would be high; they are merely reasonable values chosen to exhibit the effectiveness of the ensemble operation. The validation set is augmented so that every category's size equals that of the largest one. We do not augment the test set, in order to evaluate the classification at a realistic degree of imbalance (actually the imbalance degree of the dataset, since we do not know the natural distribution of classes). In practice, a neural network trained by this augmentation-based ensemble approach will give the same classification result for a natural light curve and its simulations.
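A schematic sketch of the sub-dataset construction and prediction averaging; `augment` is a placeholder for the GP generator of Section 2, and the trained models are assumed to expose a Keras-style `predict`.

```python
import numpy as np

def make_sub_dataset(samples_by_class, per_class, augment, rng):
    # Undersample majority classes, GP-augment minority classes, so that
    # every class contributes exactly `per_class` light curves.
    subset = []
    for label, samples in samples_by_class.items():
        if len(samples) >= per_class:
            idx = rng.choice(len(samples), per_class, replace=False)
            picked = [samples[i] for i in idx]
        else:
            picked = list(samples)
            while len(picked) < per_class:
                picked.append(augment(samples[rng.integers(len(samples))]))
        subset += [(s, label) for s in picked]
    return subset

def ensemble_predict(models, x):
    # "Bagging-like" combination: average the softmax outputs.
    return np.mean([m.predict(x) for m in models], axis=0)
```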
## 4 Multi-input Neural Networks
Neural networks have become popular in astronomical data mining for their convenience and high performance without feature engineering; usually recurrent neural networks (RNNs) are used for sequential data and convolutional neural networks (CNNs) for image data. As for light curves, RNNs became a popular method since Naul et al. (2018) demonstrated that RNNs could easily handle their characteristic of irregularly sampled time series. Meanwhile, Szklenar et al. (2022) proved that CNNs could also behave well when classifying variable stars by plotting phase-folded light curves as images. These approaches provide different choices for data with different sequence lengths: RNNs usually deal with sequences of no longer than several hundred data points, while CNNs need as many observations as possible to make the plotted image show distinct patterns. We try both kinds of neural networks to classify CRTS variable stars' light curves, finding that the RNN performs much better than the CNN approach because the typical sequence lengths are merely 100-300. Therefore we choose the RNN as the basic structure of our neural network model.

Figure 1: Examples of GP regression and synthetic light curves on several classes. For these six variable stars, the top panel plots the natural light curve with error bars and its GP model, and the bottom panel plots a corresponding synthetic light curve with synthetic error bars. The mean functions of the GP models are presented in solid lines, and the modelled uncertainties are illustrated in filled blue regions.
Apart from the light curve itself, there is generally extra information that can be utilized as features in classification, such as the period and the variation amplitude (and colours in surveys other than CRTS). These numerical features demand a proper way of being combined with the RNN structure. We design a multi-input neural network to make full use of these pieces of information, as shown in Figure 3. Hereafter we call this network the RNN-based multi-input neural network.
We can easily combine the RNN and CNN structures for better performance within this multi-input neural network architecture, as exhibited in Figure 4. Although using a CNN alone is less effective than using an RNN alone on the CRTS data, the participation of the CNN in the compound architecture may provide an additional perspective for high-level feature extraction. We apply a 2-stage training procedure to optimize the compound multi-input neural network, as specified in Section 4.2.
We deploy our neural network models on Keras, a Python deep learning API built over TensorFlow (Abadi et al., 2015).
### RNN-based multi-input neural network
As shown in Figure 3, the RNN structure takes as input a sequence of vectors \((\Delta t,mag,error)\). To pre-process the light curves, we transform the sampling time points \(t\) into time intervals \(\Delta t\) and normalize the magnitudes of each light curve to have zero mean and unit standard deviation. A masking layer is applied after the sequence input in order to cope with the different lengths of light curves, which requires padding the input sequences with zeros until they reach the same length.
The central part of this RNN structure is two bidirectional Gated Recurrent Unit (GRU) layers: the first returns a sequence of vectors, and the second returns an embedding vector. The GRU is a widely used architecture that helps retain information over a long sequence and makes it possible to extract morphological features from the entire light curve. We then employ two fully connected layers (also named dense layers) to reduce the dimensionality of the embedding vector.
We also apply two dense layers after the input layer for the numerical data, embedding it into a vector. This idea is motivated by the heuristic that a high-dimensional representation may give more weight to the numerical input when it is concatenated with the high-level features of the RNN part. After the concatenation, a dense layer with softmax output gives the probability of the input belonging to each class.
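As a concrete illustration, the architecture can be written in Keras as follows. This is a sketch only: the layer widths, the number of numerical features and the class count are our assumptions here, since the exact sizes are specified in Figure 3 (the dropout layers are explained in the next paragraph).

```python
from tensorflow.keras import layers, models

seq_in = layers.Input(shape=(None, 3), name="sequence")     # (dt, mag, error)
x = layers.Masking(mask_value=0.0)(seq_in)                  # skip zero padding
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.GRU(64))(x)                 # embedding vector
x = layers.Dropout(0.4)(x)                                  # after the 2nd GRU
x = layers.Dense(32, activation="relu")(x)
x = layers.Dropout(0.4)(x)                                  # after the 1st dense
x = layers.Dense(16, activation="relu")(x)

num_in = layers.Input(shape=(2,), name="numerical")         # e.g. period, amplitude
y = layers.Dense(16, activation="relu")(num_in)
y = layers.Dense(16, activation="relu")(y)

merged = layers.Concatenate()([x, y])
out = layers.Dense(11, activation="softmax")(merged)        # one unit per class (count illustrative)
model = models.Model([seq_in, num_in], out)
```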
To mitigate overfitting, we adopt two kinds of regularization: dropout layers and the so-called "label smoothing" method. Dropout layers are attached after the second GRU layer and the first dense layer. They take effect by randomly dropping neurons and their connections during training. We use a dropout rate of 0.4, i.e. 40 per cent of the neurons are dropped.
Figure 2: The augmentation-based ensemble learning process.
Label smoothing replaces the one-hot labels of the training samples with soft labels, assigning \(1-\alpha\) to the true class and \(\alpha\) to the other classes, where \(\alpha\) is a small number (0.1 in our case). This technique discourages the neural network from becoming over-confident in its classification results.
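In Keras, label smoothing can be applied through the loss function without editing the labels by hand. Note that the built-in option spreads \(\alpha\) uniformly over all classes, a close variant of the scheme described above; the optimizer settings shown are those reported in the hyperparameter section, and this is a sketch rather than the training script itself.

```python
import tensorflow as tf

loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)  # alpha = 0.1
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=loss, metrics=["accuracy"])
```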
### Compound multi-input neural network
Figure 4 shows the architecture of our compound multi-input neural network, which represents one light curve with two different inputs: a sequence and an image. The RNN and numerical-input parts are identical to those of the RNN-based multi-input neural network. The image input is a \(128\times 128\)-pixel single-channel image on which we plot a phase-folded light curve. The structure of the CNN part is the same as in Szklenar et al. (2022); we only change the output of the last dense layer to a 32-dimensional vector for proper concatenation with the outputs of the other two parts of the network.
To train this network, we apply a 2-stage procedure. First, we ignore the image input, train an RNN-based multi-input neural network on the same dataset and save the weights of the best model. These weights are then loaded into the compound network layer by layer, and the weights of the RNN part are fixed before training, i.e. the RNN part is set to be untrainable. This 2-stage procedure aims to make the CNN part a supplement for high-level features rather than a redundant structure. The regularization during training of the compound network is the same as for the RNN-based network.
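A sketch of the 2-stage procedure in Keras, assuming the shared layers carry the same names in both models; `rnn_model` and `compound` are our placeholder names, and `loss` is the smoothed loss defined earlier.

```python
# Stage 1: train rnn_model as usual and keep the best weights.
# Stage 2: copy the shared weights into the compound network and freeze them.
for layer in rnn_model.layers:
    try:
        target = compound.get_layer(layer.name)   # match layers by name
    except ValueError:
        continue                                  # layer exists in only one model
    target.set_weights(layer.get_weights())
    target.trainable = False                      # fix the RNN part
compound.compile(optimizer="adam", loss=loss)     # recompile after freezing
```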
## 5 Implementation and Results
We implement our classification methods on a computer with an NVIDIA GeForce RTX 3090 GPU. Every training epoch takes about 1 minute, and the test process takes about 3 minutes. The computational cost of the Gaussian Process augmentation is comparable to that of the entire training and validation process.
### Evaluation Metrics
We adopt the confusion matrix, the balanced accuracy and the macro F1 score as metrics to evaluate the performance of the imbalanced-data classification. To normalize the confusion matrix for better exhibition, we divide each row by the total number of objects per class, so that the diagonal values become the \(Recall\) of every class. The confusion matrix can also be normalized by columns to show the \(Precision\) of each category. The balanced accuracy is defined as the average of the \(Recall\) obtained on each class, hence the average of the diagonal values of the normalized confusion matrix. The macro \(F1\) score is the average over classes of the per-class \(F1\) score, the harmonic mean of \(Precision\) and \(Recall\), as follows
\[F1=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{7}\]
The higher the macro F1 score, the better the classification result.
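These metrics correspond directly to standard scikit-learn calls; the following is a sketch of how they can be computed, not the evaluation code used here.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, confusion_matrix, f1_score

def evaluate(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred).astype(float)
    recall_view = cm / cm.sum(axis=1, keepdims=True)     # rows sum to 1: Recall
    precision_view = cm / cm.sum(axis=0, keepdims=True)  # columns sum to 1: Precision
    bal_acc = balanced_accuracy_score(y_true, y_pred)    # mean of diagonal Recall
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    return recall_view, precision_view, bal_acc, macro_f1
```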
### Hyperparameter setting
Most of the hyperparameters are depicted in Figures 3-4, leaving only the batch size and the learning rate of the Adam optimizer. These two hyperparameters are always tuned together. We choose a relatively small batch size of 32 to obtain better generalization for the neural network model and a relatively large learning rate of 0.001 to reduce the computational cost. We also try lower learning rates and careful scheduling strategies, finding that the resulting improvement is smaller than the effect of different random initializations of the network weights.
Figure 3: The architecture of the RNN-based multi-input neural network.
Considering the ability of the ensemble learning method to integrate weak learners into a strong learner, it is rational to make this trade-off between performance and computation cost.
### Results of the RNN-based multi-input neural network
The training step of the RNN-based model takes ten epochs on each sub-dataset, hence 100 epochs for the entire ensemble learning process. Figure 5 shows the confusion matrix of the classification results on the test data compared to two other methods: one applies equal augmentation, i.e. augmenting each category of the training set to the size of the largest category; the other augments the small-sized classes to a moderate size (1875 in our implementation) and employs class weights during training. The macro F1 scores of the ensemble technique, the equal augmentation, and the augmentation with class weights are 0.71, 0.66 and 0.67, respectively. The ensemble technique is thus clearly superior to equal augmentation and to augmentation with class weights.
### Result of the compound multi-input neural network
Although training a single RNN-based or CNN-based multi-input neural network does not take much time, the compound neural network is harder to train, taking 100 epochs on every sub-dataset to find the optimal model. Table 2 lists the balanced accuracy of the results given by our two networks trained on each sub-dataset, with classification scores on both the validation and test sets. The final ensemble learning result of the compound network is displayed in Figure 6, where we mark the variation of each class's \(Recall\) compared to the RNN-based network.
The overall balanced accuracy of the compound model improves slightly, from 85.7 per cent to 86.2 per cent, compared to the RNN-based model, while the macro \(F1\) score increases from 0.71 to 0.75. The relatively significant improvement in the macro \(F1\) score stems from the advance in \(Precision\), depicted in Figure 7 by the confusion matrix normalized to the \(Precision\) view. The compound model classifies the large-sized categories more accurately at the cost of slightly decreasing the \(Recall\) of the small-sized ones, which reduces misclassification for the majority of the samples.
### 10-fold cross-test
The performance of a classification algorithm on a dataset usually varies with different partitions into training and test sets. To reliably demonstrate the capacity of our method, we carry out a 10-fold cross-test. We split the original dataset into ten equal-sized, non-overlapping parts. Each is used as a test
Figure 4: The architecture of the compound multi-input neural network.
set in a training-and-evaluation process, while the others form the training and validation sets, so the process is repeated ten times. The augmentation and ensemble learning operations are implemented separately in each of the ten processes.
Here we only perform the cross-test for the RNN-based multi-input neural network, since the compound model improves only slightly on \(Recall\). Figure 8 depicts the result. Each matrix element is the median of the corresponding values in the ten confusion matrices. The superscript and the subscript indicate the deviation of the second-best and second-worst values, respectively. On the imbalanced test set, the relatively large deviation for
\begin{table}
\begin{tabular}{c c c c} \hline \hline RNN val & Compound val & RNN test & Compound test \\ (\%) & (\%) & (\%) & (\%) \\ \hline
80.91 & **84.29** & 81.90 & **84.97** \\
82.68 & **83.49** & 82.62 & **84.29** \\
82.19 & **84.62** & 83.43 & **85.10** \\
81.90 & **82.19** & **82.78** & 79.92 \\
82.82 & **83.54** & 82.67 & **82.72** \\
81.57 & **83.69** & 80.74 & **81.25** \\
83.00 & **85.33** & 83.92 & **85.34** \\
84.61 & **84.97** & 83.39 & **84.77** \\
80.74 & **83.24** & 78.62 & **84.30** \\
83.13 & **83.20** & **79.73** & 78.95 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The balanced accuracy of models trained on each of the ten sub-datasets, given by two different neural networks of the RNN-based and compound networks. Both the classification score on validation and test sets are offered.
Figure 5: The confusion matrices of the ensemble RNN-based model compared to the other two approaches. The matrices are normalized to the \(Recall\) view. (a): classification result of the ensemble RNN-based model. (b): the result of the equal augmentation approach. (c): the result of moderate augmentation with class weights.
Figure 6: The confusion matrix of the ensemble compound multi-input neural network, marked with the variation of each class’s \(Recall\) compared to the RNN-based network.
small-sized classes can arise from the misclassification of only one or two samples.
## 6 Discussion
Considering all types of RR Lyrae stars as a whole, the compound neural network achieves a total \(Recall\) of 96.7 per cent and a total \(Precision\) of 97.1 per cent. Periodic variable stars other than rotational stars rarely contaminate the RR Lyrae star samples. However, some normal RRab stars are misclassified as RR Lyrae stars with the Blazhko effect or as ACEP stars. Due to the significant data imbalance, these misclassified samples constitute a large proportion of the predicted samples of these two small-sized categories. Similarly, the misclassified samples from RRc stars greatly affect the purity of RRd stars.
For rotational stars, the compound model achieves a \(Recall\) of 66.9 per cent and a \(Precision\) of 42.2 per cent. Some of these stars can be confused with the contact and semi-detached binary (Ecl) stars. Moreover, the misclassified rotational stars markedly contaminate the small-sized Cep-II class. The Ecl class achieves a \(Recall\) of 80.5 per cent and a \(Precision\) of 94.2 per cent. The misclassified Ecl samples also contaminate the detached binary stars (EA), because there is no clear division between these two types. The \(Recall\) and \(Precision\) of EA stars are 96.1 per cent and 83.3 per cent, respectively.
The long period variable stars (LPV) and the \(\delta\)-Scuti stars are relatively distinct from the other types and have \(Recall\) of 99.5 per cent and 97.7 per cent, respectively. However, they can be polluted by the misclassified samples of other large-sized classes, which decreases \(Precision\) to 90.1 per cent and 89.6 per cent for LPV and \(\delta\)-Scuti, respectively.
For the two types of cepheids, the total \(Recall\) is 93.3 per cent while the total \(Precision\) is 52.5 per cent. The ACEP stars have a \(Recall\) of 97.8 per cent, while Cep-II stars have 73.3 per cent. They are mainly confused with each other, but are also dramatically contaminated by the misclassified samples of other large-sized classes.
## 7 Conclusions
In this paper, we present an ensemble learning approach based on data augmentation for classifying periodic variable stars by their light curves using deep learning on imbalanced data. We apply a Gaussian Process to generate artificial light curves with uncertainties for the small-sized classes and undersample the large-sized classes, setting up balanced sub-datasets of the training set. Training models on these sub-datasets avoids overfitting on the small-sized classes, and the ensemble result shows a performance improvement.
We design two kinds of neural network architectures for the task: the RNN-based multi-input model and the compound model combining RNN and CNN structures.
Figure 7: The confusion matrices normalized to the \(Precision\) view. (a): the result of the RNN-based model. (b): the result of the compound model. On the imbalanced test set, the misclassified samples of the large-sized classes can easily dominate the predicted samples of the small-sized categories.
Figure 8: The 10-fold cross-test result for the RNN-based multi-input neural network. The matrix is normalized to the \(Recall\) view. Each matrix element is the median of the corresponding values in the ten confusion matrices. The superscript and the subscript indicate the deviation of the second-best and second-worst values, respectively.
These multi-input models take both the light curve and additional numerical features as input. On the CRTS variable star data, the macro \(F1\) score on the imbalanced test set reaches 0.71 and 0.75 for the RNN-based and compound models, respectively.
Our ensemble learning approach can easily cooperate with different deep learning models since it is a data-level technique. Our attempt to combine CNN and RNN structures shows that using different representations of light curves together in one model can yield higher performance. This kind of compound neural network architecture is flexible for time-series sky surveys with different light curve lengths. The methods put forward here will contribute to better classification of variable sources with time series data in future projects (e.g. LSST), performing the multi-class classification in one step with high performance, and will also shed light on imbalanced classification with multimodal data.
## Acknowledgements
This paper is funded by the National Natural Science Foundation of China (Grant Nos.12273076, 12203077, 12133001, U1831126 and 11873066), the Science Research Grants from the China Manned Space Project (Nos. CMS-CSST-2021-A04 and CMS-CSST-2021-A06), and Natural Science Foundation of Hebei Province (No.A2018106014).
| Research in time-domain astronomy is advancing rapidly with ongoing and planned large photometric surveys, such as the Vera C. Rubin Observatory project (LSST). Hundreds of billions of variable sources call for improved automatic classification algorithms for light curves, among which periodic variable stars are a frequent subject of study. Periodic variable stars comprise many categories and exhibit high class imbalance, which poses a challenge for algorithms, including deep learning methods. We design two kinds of neural network architectures to classify the periodic variable stars in Data Release 2 of the Catalina Survey: one is a multi-input recurrent neural network (RNN), and the other is a compound network combining the RNN with a convolutional neural network (CNN). To cope with the class imbalance, we use a Gaussian Process to generate, for data augmentation, …
2309.04355 | Value-Compressed Sparse Column (VCSC): Sparse Matrix Storage for
Redundant Data | Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression
formats for sparse matrices. However, both CSC and COO are general purpose and
cannot take advantage of any of the properties of the data other than sparsity,
such as data redundancy. Highly redundant sparse data is common in many machine
learning applications, such as genomics, and is often too large for in-core
computation using conventional sparse storage formats. In this paper, we
present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and
(2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of
high redundancy within a column to further compress data up to 3-fold over COO
and 2.25-fold over CSC, without significant negative impact to performance
characteristics. IVCSC extends VCSC by compressing index arrays through delta
encoding and byte-packing, achieving a 10-fold decrease in memory usage over
COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data
show that VCSC and IVCSC can be read in compressed form with little added
computational cost. These two novel compression formats offer a broadly useful
solution to encoding and reading redundant sparse data. | Skyler Ruiter, Seth Wolfgang, Marc Tunnell, Timothy Triche Jr., Erin Carrier, Zachary DeBruine | 2023-09-08T14:24:40 | http://arxiv.org/abs/2309.04355v1 | # Value-Compressed Sparse Column (VCSC): Sparse Matrix Storage for Redundant Data
###### Abstract
Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression formats for sparse matrices. However, both CSC and COO are general purpose and cannot take advantage of any of the properties of the data other than sparsity, such as data redundancy. Highly redundant sparse data is common in many machine learning applications, such as genomics, and is often too large for in-core computation using conventional sparse storage formats. In this paper, we present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and (2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of high redundancy within a column to further compress data up to 3-fold over COO and 2.25-fold over CSC, without significant negative impact to performance characteristics. IVCSC extends VCSC by compressing index arrays through delta encoding and byte-packing, achieving a 10-fold decrease in memory usage over COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data show that VCSC and IVCSC can be read in compressed form with little added computational cost. These two novel compression formats offer a broadly useful solution to encoding and reading redundant sparse data.
\({}^{1}\)Grand Valley State University
1 Campus Drive
Allendale, Michigan, 49401, USA
\({}^{2}\)Van Andel Institute
333 Bostwick Ave NE
Grand Rapids, Michigan, 49503, USA
## 1 Introduction
Sparse data is mostly zero or missing, and is often encoded in sparse matrices that avoid explicit storage of these values. Sparse matrices are abundant in many domains that involve scientific computing, machine learning, and data engineering. In these domains, software priorities are often a combination of memory usage and fast compute, with these goals usually being at odds with one another.
Historically, most improvements to sparse matrix formats have prioritized compute optimizations over compression ratio. However, as the size of datasets continue to grow, the inability of matrices in conventional sparse storage formats to be processed in-core will be a primary bottleneck for computation.
General purpose sparse matrix formats, such as Coordinate (COO) or Compressed Sparse Column (CSC), offer compression with reasonably fast compute. Specifically, COO stores the matrix in triplet format, storing a row-index, column-index, and value for each nonzero value. As depicted in Figure 1, for each nonzero value, CSC (CSR) stores the value and the row-index (column-index), along with the pointer offsets to the start of each column (row). While popular, COO, CSC, and CSR fail to leverage
specific characteristics of the data, such as significant redundancy in nonzero values, common in count data and discrete distributions.
Massive sparse datasets such as those used in genomics often contain highly redundant values. Take, for example, an \(18082\times 897734\) single-cell transcriptomics dataset from 10x Genomics containing the counts of genes in single cells [1]. This matrix is \(92\%\) sparse and contains approximately \(1.3\) billion nonzero values. Despite the large number of nonzero values, there are only about \(7,000\) unique values. This type of data is critical for biological research, but existing data structures are memory inefficient, failing to leverage the high redundancy of the values. For instance, as shown in Table 1, in COO format this dataset requires approximately \(20\) gigabytes (GB), with CSC reducing the memory requirement by only \(25\%\) relative to COO. As genomics data continues to grow, memory limitations will become increasingly prohibitive to in-core analysis and massive data integration.
In this paper, we introduce two novel sparse matrix formats that aim to address this limitation of conventional sparse matrix formats on highly redundant data: (1) Value-Compressed Sparse Column (VCSC) and (2) Index- and Value-Compressed Sparse Column (IVCSC). Using VCSC, the size of the previously mentioned 10x Genomics dataset is reduced by \(74\%\) compared to COO and \(65\%\) compared to CSC. IVCSC enables further compression, reducing the size over COO and CSC by \(91\%\) and \(88\%\), respectively. These formats are implemented in C++ and available on Github ([https://github.com/Seth-Wolfgang/IVSparse](https://github.com/Seth-Wolfgang/IVSparse)).
This paper is organized as follows: Related work is discussed in section 2, methods underlying VCSC and IVCSC are discussed in section 3, experimental results are presented in section 4, and a conclusion and discussion is given in section 5.
## 2 Related Work
A variety of compressed matrix formats exist, for both sparse and dense matrices, with most formats focusing on enabling fast compute. As is typical with sparse matrix formats, we limit our discussion of related work to only lossless compressed matrix formats.
### Sparse Matrix Compression
A popular approach is to create a data structure that groups data into blocks [2, 3, 4, 5]. Block storage formats often take advantage of natural patterns in the data to speed up Sparse Matrix-Vector Multiplication (SpMV), utilizing this structure for better cache access [3, 4], optimizing for distributed systems [2], or for better memory bandwidth via data compression [5].
The authors of [6] utilize a form of data compression aimed at achieving faster SpMV computations. Specifically, index data is delta-encoded and decompressed to enable faster computation on sparse data with distinct local density patterns [6].
In [7], two modifications to further compress CSR, namely CSR Delta Unit (CSR-DU) and CSR Value Indexed (CSR-VI) are presented. CSR-DU compresses the column indices and row pointers in CSR by breaking up the matrix into units, applying
delta encoding over each unit with flags to store when a unit starts a new row. CSR-VI compresses values by storing unique values in a lookup table, reducing the memory footprint for redundant data but necessitating the storage of references for each nonzero value. While similar in idea to our approach, [7] attempts to optimize SpMV performance through compression, whereas we focus on optimizing compression with limited performance impact on common operations.
### Dense Matrix Compression
While most existing sparse matrix compression formats fail to take advantage of redundancy, many dense data compression algorithms are designed to capitalize on redundancy. For example, Huffman encoding constructs a dictionary of keys where more common symbols have smaller keys [8]. Lempel-Ziv-Welch (LZW) [9] compression is similar to Huffman, but targets redundancy by compressing repetitive patterns. LZW excels on data where repeated patterns are prevalent, albeit at the cost of increased storage for unique patterns. Run-length Encoding (RLE) is another compression technique that encodes consecutive identical symbols as a single count-value pair. RLE's compression ratio directly correlates with the extent of consecutive
Figure 1: Comparison of Sparse Matrix Storage Formats
redundant values in the data.
## 3 Methods
### Value-Compressed Sparse Column (VCSC) Format
VCSC takes advantage of the per-column redundancy of values by storing each unique value only once per column in which it occurs. For each column, VCSC stores three arrays, one for unique values, one for value counts, and one for row indices. The entries of the values array are the unique values in the column. The corresponding entries of the counts array are the number of times each value occurs in the column. The entries of the indices array are the row indices of each occurrence. These row indices are ordered first by value and within each value ordered in ascending order of row index. An example of VCSC format for a single column of a sparse matrix is shown in the middle panel of Figure 1.
By reordering the row indices first by value, we eliminate the need to store references to each element, as is necessary in CSC-VI. While ordering first by value, then by row index significantly improves compression for highly redundant data, traversal through the rows in a column is no longer guaranteed to be performed sequentially.
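The layout of a single VCSC column can be illustrated with a short Python sketch. The actual implementation is C++; the names and example values below are ours, and this only demonstrates the format.

```python
import numpy as np

def vcsc_column(rows, vals):
    """Build the three VCSC arrays for one column's nonzeros."""
    order = np.lexsort((rows, vals))   # sort by value first, then by row index
    rows, vals = np.asarray(rows)[order], np.asarray(vals)[order]
    values, counts = np.unique(vals, return_counts=True)
    return values, counts, rows        # unique values, counts, grouped indices

# rows=[0, 2, 3, 5], vals=[7, 7, 2, 7]  ->  values=[2, 7], counts=[1, 3],
# indices=[3, 0, 2, 5]: the single 2 at row 3, then the three 7s at rows 0, 2, 5.
```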
To evaluate the memory usage of VCSC, we first consider the memory usage of CSC which is given by
\[\text{CSC}_{\text{size}}=\underbrace{val_{\text{size}}*nnz}_{\text{bytes for nonzero vals}}+\underbrace{idx_{\text{size}}*nnz}_{\text{bytes for indices}}+\underbrace{idx_{\text{size}}*(ncol+1)}_{\text{bytes for col pointers}}, \tag{1}\]
where \(nnz\) is the number of nonzeros and \(val_{\text{size}}\) and \(idx_{\text{size}}\) are the byte sizes for values and indices, respectively. In contrast, the memory usage of VCSC is given by
\[\text{VCSC}_{\text{size}}=\sum_{i=1}^{nCols}\underbrace{(val_{\text{size}}*nUniq _{i}}_{\text{bytes for values}}+\underbrace{idx_{\text{size}}*nUniq_{i}}_{\text {bytes for counts}}+\underbrace{idx_{\text{size}}*nnz_{i}}_{\text{bytes for indices}}+ \underbrace{idx_{\text{size}}}_{\text{len}}), \tag{2}\]
where \(nUniq_{i}\) is the number of unique values in column \(i\), and \(nnz_{i}\) is the number of nonzeros in column \(i\). Unlike CSC (and CSC-VI), the only component of Equation 2 which grows at a fixed rate with the number of nonzeros in a column is the memory for the indices.
VCSC can be useful for compressing data with high within-column value-redundancy, even if it is not sparse. For instance, VCSC can compress a large dense matrix containing only a few unique values. However, as is common when using sparse matrix formats for relatively dense matrices, random access and computations would be significantly slower than dense storage.
### Index- and Value-Compressed Sparse Column (IVCSC) Format
Whereas VCSC compresses just values, IVCSC further compresses VCSC by also compressing the indices. For each column, IVCSC stores a single array. This array
contains sections, where each section is a unique value, followed by the index width, followed by the row indices where that value occurs, followed by a delimiter (a zero) to indicate the end of the row indices for that value. Within a single unique value, the indices are compressed by positive-delta encoding, as shown in the bottom pane of Figure 1, and then byte-packed.
By positive-delta encoding the row indices, the magnitude of the stored indices is reduced. Byte-packing the encoded indices discards leading bytes that are zero, further reducing the storage required for each index, while still allowing efficient traversal through the indices (which would not be the case with bit-packing). Depending on the redundancy and density of the data, it is often possible to represent indices for frequently occurring values with a single byte.
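The index compression can be sketched as follows. This is our illustration, not the library code; in particular, the real format must also disambiguate a leading row index of 0 from the zero delimiter, an edge case this sketch ignores.

```python
import numpy as np

def encode_indices(rows):
    """Positive-delta encode and byte-pack the sorted row indices of one value."""
    deltas = np.diff(rows, prepend=0)                          # positive deltas
    width = max(1, (int(deltas.max()).bit_length() + 7) >> 3)  # bytes per index
    packed = b"".join(int(d).to_bytes(width, "little") for d in deltas)
    return width, packed + (0).to_bytes(width, "little")       # zero = delimiter

# rows=[2, 5, 9] -> deltas=[2, 3, 4], width=1, packed bytes: 02 03 04 00
```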
As with VCSC, traversal through the rows in a column is not necessarily sequential. Furthermore, traversal through a column in IVCSC requires decoding the positive-delta-encoded indices and checking for end-of-value delimiters.
The memory usage of IVCSC is given by
\[\text{IVCSC}_{\text{size}}=\sum_{i=1}^{nCols}\left(\underbrace{8}_{\text{len}}+ \underbrace{nUniq_{i}*(val_{\text{size}}+1)}_{\text{bytes for value and idx width}}+\underbrace{\sum_{j=1}^{nUniq_{i}}(nnz_{j}+1)*idxWid_{j}}_{\text{ bytes for encoded indices and delim}}\right), \tag{3}\]
where the 8 bytes (denoted by len) are used to store the size of the data array, \(nnz_{j}\) is the number of times the unique value \(j\) appears in the column, and \(idxWid_{j}\) is the byte width of that value's indices after positive-delta encoding and byte-packing. Comparing Equation 2 and Equation 3, one can see that the bytes for storing the value data are similar, with the main difference being that the counts of unique values are replaced with delimiters, which are slightly smaller in most use cases. The effect of index compression appears in the term of Equation 3 for the encoded indices and delimiters.
IVCSC is well-suited for very large, highly redundant matrices. As with VCSC, IVCSC can be useful for compressing dense matrices with highly redundant values. In comparison to VCSC, IVCSC further prioritizes compression at increased computation and traversal time.
## 4 Experimental Results
### Setup and Implementation
All benchmarking is performed on a dedicated workstation computer running on a Ryzen 9 5950x (Zen 3) with a max clock speed of 5.1GHz, 512KB L1 cache (64KB per core), 8MB L2 cache (512KB per core) and 64MB shared L3 cache. This machine has 64-bit Ubuntu 22.04.1 LTS installed and all programs were compiled with GCC version 11.3.0 and the flag -O2. OpenMP was disabled for all benchmarking.
We test the compression ratio on a variety of datasets that represent a range of real-world use-cases, including best and worst case examples. For timing results relating to the performance of our formats, we benchmark against the CSC format of the Eigen C++ library, a widely used matrix and linear algebra package [10].
We present timing results for three common matrix operations: multiplication of a sparse matrix by a scalar (scalar multiplication), sparse matrix-vector multiplication (SpMV), and sparse matrix-matrix multiplication (SpMM). Additionally, we present timing results for iterator traversal, which is the fundamental cost of many sparse-dense BLAS-like operations, as well as construction time from a COO matrix. Each timing benchmark is given a cold start, which is not timed, and the results are then reported as the mean of ten timed iterations using high_resolution_clock of the C++ chrono library. To isolate as many variables as possible, and because we benchmark the data structure rather than our implementation of BLAS-like routines, we implement the same naive algorithm for SpMV and SpMM for the Eigen CSC matrix and for our formats. For all benchmarks, rows and columns are stored as 4-byte integers and values are stored as 8-byte doubles (excluding any positive-delta encoding and byte-packing).
### Memory Usage
In order to quantify the efficiency of our format on redundant data, for all columns with nonzero elements we define the redundancy of the \(i\)-th column as
\[r_{i}=\left\{\begin{array}{cl}1-\frac{nUniq_{i}}{nnz_{i}},&\text{if }nUniq_{i}>1\\ 1,&\text{otherwise}\end{array}\right.. \tag{4}\]
Figure 2: Comparison of memory required for VCSC, IVCSC, COO and CSC (as a ratio over dense storage required) for simulated random \(10000\times 100\) matrices.
This value is averaged over all columns with nonzero elements, giving the mean matrix redundancy (MMR).
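Equation 4 and the MMR can be computed directly from a CSC matrix; a sketch using SciPy follows (not the benchmark code; the function name is ours).

```python
import numpy as np
from scipy.sparse import csc_matrix

def mmr(A: csc_matrix) -> float:
    r = []
    for j in range(A.shape[1]):
        col = A.data[A.indptr[j]:A.indptr[j + 1]]   # nonzeros of column j
        if col.size == 0:
            continue                                # Eq. 4 covers nonzero columns only
        n_uniq = np.unique(col).size
        r.append(1 - n_uniq / col.size if n_uniq > 1 else 1.0)
    return float(np.mean(r))
```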
For the benchmarks in Figure 2a, a single matrix is generated at the beginning of a session, and its values are then modified to change the redundancy in further runs. Figure 2a shows the compression ratio of CSC, VCSC, and IVCSC as a function of MMR on a matrix with 90% sparsity. Compared to dense storage, COO and CSC require 20% and 15% of the dense memory, respectively. VCSC and IVCSC are able to compress by more than COO in all cases and by more than CSC at an MMR greater than 0.33 and 0.09, respectively.
Figure 2b shows the compression ratio over dense matrix memory usage for COO, CSC, VCSC, and IVCSC as a function of sparsity on a matrix fixed at 90% MMR. Regardless of sparsity, VCSC and IVCSC use less memory than a dense representation of the same matrix on highly redundant data. CSC and COO both require more memory than the dense representation until a sparsity of approximately 33% and 49%, respectively. In the worst case, at 0% sparsity, VCSC and IVCSC use approximately 64% and 13% of the memory required for dense storage, respectively.
We test on four real world datasets which represent a wide range of use cases. Of these four, three are conducive to memory compression using our two methods. Additionally, we test on two simulated datasets, representing the ideal and worst case scenarios. For each of these datasets, the dimensions, number of nonzeros, sparsity, and MMR are given in Table 1.
We use a single-cell transcriptomics dataset from [1] as a representative example of data that follows a zero-inflated negative binomial counts distribution. Table 1 shows that VCSC and IVCSC compress this dataset to 25.96% and 8.94% of the COO memory footprint, respectively.
We use the large Web of Science dataset obtained from [11], and processed using CountVectorizer with default parameters [12], as a representative example of a bag-of-words model. Table 1 shows that VCSC and IVCSC compress this dataset to 32.80% and 15.83% of the COO memory footprint, respectively.
The MovieLens dataset, obtained from [13], is a representative example of discrete, ordinal data, containing 10 possible ratings between 0 and 5. Table 1 shows that VCSC and IVCSC compress this dataset to 32.86% and 17.83% of the COO memory footprint, respectively.
The PR02R dataset was obtained from SuiteSparse [14] and is representative of performance on comparable computational fluid simulation matrices. Table 1 shows
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Dataset** & **Dimensions** & **Nonzeros** & **Sparsity** & **MMR** & **COO Size (GB)** & **CSC \%** & **VCSC \%** & **IVCSC \%** \\ \hline Single-cell & \(18082\times 897734\) & \(1.30e9\) & \(91.92\%\) & \(.9875\) & \(20.85\) & \(75.02\) & \(25.96\) & \(8.94\) \\ Web of Science & \(46985\times 124836\) & \(5.41e6\) & \(99.66\%\) & \(.7155\) & \(0.09\) & \(75.58\) & \(32.80\) & \(15.83\) \\ MovieLens & \(237187\times 100000\) & \(2.85e7\) & \(97.04\%\) & \(.6162\) & \(0.46\) & \(75.88\) & \(32.86\) & \(17.83\) \\ PR02R & \(161070\times 161070\) & \(8.19e6\) & \(99.97\%\) & \(.0054\) & \(0.12\) & \(75.49\) & \(103.47\) & \(97.61\) \\ Sim Binary & \(1000000\times 1000\) & \(9.99e7\) & \(90.00\%\) & \(1\) & \(1.60\) & \(75.00\) & \(25.00\) & \(6.25\) \\ Sim Unique & \(1000000\times 1000\) & \(9.99e7\) & \(90.00\%\) & \(0\) & \(1.60\) & \(75.00\) & \(100.00\) & \(104.61\) \\ \hline \end{tabular}
\end{table}
Table 1: Memory usage of methods on real and simulated large datasets. CSC, VCSC, and IVCSC storage costs are given as a percentage of COO memory footprint.
that VCSC increases the memory footprint to 103.47% and IVCSC decreases the memory footprint to 97.61%.
A simulated binary matrix (Sim Binary), having an MMR of 1, represents the best-case scenario for compression using our formats. Table 1 shows that VCSC and IVCSC compress this dataset to 25.00% and 6.25% of the COO memory footprint, respectively. The simulated unique-value matrix (Sim Unique) has an MMR of 0 and represents the worst-case scenario for compression using our formats. In this case, Table 1 shows that VCSC offers no change to the memory footprint, while IVCSC increases it to 104.61%.
### Computational Performance
Computational performance benchmarking results are shown in Figure 3. Because scalar multiplication only needs to loop over the unique values in VCSC, element-wise operations are performed more quickly than in CSC when the data is redundant (Figure 3a). For sparse matrix-vector (SpMV) and sparse matrix-matrix (SpMM) operations, VCSC is marginally slower than Eigen, and IVCSC is 2-4 fold slower (Figure 3b).
Constructor time is measured as the time to construct the Eigen (CSC), VCSC, and IVCSC matrices from an underlying COO matrix. At low MMR, the construction time for VCSC and IVCSC is significantly slower, and at high MMR, the construction time for both VCSC and IVCSC approaches the Eigen construction time (Figure 4). However, in absolute terms, the construction time is negligible relative to almost any compute task and the benefits of in-core rather than distributed computing.
Iterator traversal time is measured as the time necessary to fully iterate through the sparse matrix. As shown in Figure 5, VCSC iterator traversal time is comparable to Eigen at high redundancy, whereas IVCSC is 2-4 fold slower.
Figure 3: Benchmarking results for common BLAS-like matrix operations on a 90% sparse \(1000000\times 10000\) matrix.
## 5 Conclusion and Future Work
In this paper we presented two novel compression formats, VCSC and IVCSC, for matrices with highly redundant values. Testing showed that both VCSC and IVCSC offer considerable compression over CSC and COO on data with high MMR in exchange for a very small cost to iterator speed.
One disadvantage of our method, particularly of IVCSC, is the slower iterator traversal, which results in increased compute time. However, the up to 3-fold and 10-fold decrease in size for VCSC and IVCSC, respectively, enables in-core processing of large matrices that would otherwise exhaust RAM on most workstations and necessitate much slower distributed or disk operations.
This work lays the foundation for future research on hybrid sparse matrix formats, such as a hybrid CSC-VCSC structure that uses VCSC to store redundant values and CSC to store non-redundant values for any given column. Additionally, distributed memory and SIMD parallelization of both VCSC and IVCSC could be beneficial to large scale machine learning applications.
We are actively testing VCSC and IVCSC on larger matrices, benchmarking additional operations, and adding further optimizations to address known performance bottlenecks.
## 6 Acknowledgements
This work was funded by a grant from the Chan Zuckerberg Initiative Single Cell Biology Data Insights DI-000-0287 (to Z.D., T.T., S.R., S.W.) and a Grand Valley State University Kindschi Fellowship (to S.W.). | Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression formats for sparse matrices. However, both CSC and COO are general-purpose formats and cannot exploit other properties of the data, such as data redundancy. Highly redundant sparse data is common in many machine learning applications, such as genomics, and is often too large for in-core computation using conventional sparse storage formats. In this paper, we present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and (2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC exploits high redundancy within a column to compress data up to 3-fold over COO and 2.25-fold over CSC without significantly affecting performance characteristics. IVCSC extends VCSC by further compressing the index arrays…
2309.14146 | Examining Temporal Bias in Abusive Language Detection | The use of abusive language online has become an increasingly pervasive
problem that damages both individuals and society, with effects ranging from
psychological harm right through to escalation to real-life violence and even
death. Machine learning models have been developed to automatically detect
abusive language, but these models can suffer from temporal bias, the
phenomenon in which topics, language use or social norms change over time. This
study aims to investigate the nature and impact of temporal bias in abusive
language detection across various languages and explore mitigation methods. We
evaluate the performance of models on abusive data sets from different time
periods. Our results demonstrate that temporal bias is a significant challenge
for abusive language detection, with models trained on historical data showing
a significant drop in performance over time. We also present an extensive
linguistic analysis of these abusive data sets from a diachronic perspective,
aiming to explore the reasons for language evolution and performance decline.
This study sheds light on the pervasive issue of temporal bias in abusive
language detection across languages, offering crucial insights into language
evolution and temporal bias mitigation. | Mali Jin, Yida Mu, Diana Maynard, Kalina Bontcheva | 2023-09-25T13:59:39 | http://arxiv.org/abs/2309.14146v1 | # Examining Temporal Bias in Abusive Language Detection
###### Abstract
The use of abusive language online has become an increasingly pervasive problem that damages both individuals and society, with effects ranging from psychological harm right through to escalation to real-life violence and even death. Machine learning models have been developed to automatically detect abusive language, but these models can suffer from temporal bias, the phenomenon in which topics, language use or social norms change over time. This study aims to investigate the nature and impact of temporal bias in abusive language detection across various languages and explore mitigation methods. We evaluate the performance of models on abusive data sets from different time periods. Our results demonstrate that temporal bias is a significant challenge for abusive language detection, with models trained on historical data showing a significant drop in performance over time. We also present an extensive linguistic analysis of these abusive data sets from a diachronic perspective, aiming to explore the reasons for language evolution and performance decline. This study sheds light on the pervasive issue of temporal bias in abusive language detection across languages, offering crucial insights into language evolution and temporal bias mitigation.
Department of Computer Science, The University of Sheffield, UK
{m.jin, y.mu, d.maynard, k.bontcheva}@sheffield.ac.uk
## Introduction
The increasing use of social media platforms has given rise to a pervasive problem of online abusive language, which can cause harm to individuals and lead to societal polarization. In recent years, researchers have developed a huge variety of machine learning models that can automatically detect abusive language Mishra et al. (2019); Aurpa, Sadik, and Ahmed (2022); Das and Mukherjee (2023); Alrashidi, Jamal, and Alkathlan (2023). However, these models may be subject to temporal bias, which can lead to a decrease in the accuracy of abusive language detection models, potentially allowing abusive language to be undetected or falsely detected.
Temporal bias arises from differences in populations and behaviors over time Olteanu et al. (2019). In natural language processing (NLP), it can result from various issues. Temporal concept drift refers to the problem of language evolving over time Zhao et al. (2022). Languages change as new meanings develop for existing words and new words and topics come into use over time. Models trained on data from an earlier period can perform worse on chronologically newer data as they are unable to recognize new topics or linguistic features Lukes and Sogaard (2018); Vidgen et al. (2019); Mu et al. (2023). Previous work has examined temporal bias in various tasks such as named entity recognition Derczynski et al. (2016), sentiment analysis Lukes and Sogaard (2018) and rumour detection Mu et al. (2023).
In online abuse detection, words and expressions considered acceptable in the past may have an abusive or offensive connotation now due to changing language or societal norms Wich et al. (2022); McGillivray et al. (2022). Temporal bias also occurs when the abusive content fluctuates with the latest trends, popular topics or breaking news. As online discussion evolves with new developments, certain topics and forms of abuse might gain prominence while others become less prevalent. For example, in 2020 a fraudulently altered video was circulated on Twitter purporting to show Al Jazeera journalist Ghada Oueiss naked in a jacuzzi, as part of an orchestrated attack designed to discredit her Posetti et al. (2021). The video and other photos were distributed with messages alleging she was an alcoholic, drug-addicted prostitute, which in turn engendered a large number of hateful messages connected with the alleged jacuzzi incident, a topic not typically associated with abuse.
Previous work identified temporal bias in an Italian hate speech data set associated with immigrants Florio et al. (2020). However, that work did not explore the temporal factors affecting predictive performance from a multilingual perspective. In this paper, we explore temporal bias in 5 different abusive data sets that span varying time periods, in 4 languages (English, Spanish, Italian, and Chinese). Specifically, we investigate the following core research questions:
* _RQ1:_ How does the magnitude of temporal bias vary across different data sets such as language, time span and collection methods?
* _RQ2:_ What type of language evolution causes the temporal bias in our data sets and how?
* _RQ3:_ Could domain adaptation models, large language models (LLMs) or a more robust data set help to mitigate the temporal bias in abusive language detection?
To answer these questions, we compare the predictive
performance between random and chronological data splits across data sets in different languages and with different temporal coverage. We also experiment with different transformer-based pre-trained language models (PLMs) using the original data set and a filtered data set. Finally, we present an in-depth analysis to investigate the factors for performance degradation.
## Related Work
### Bias in NLP
Bias refers to the presence of systematic and unfair favouritism or prejudice. In various contexts, bias can manifest as a skewed representation or inaccurate judgments that unfairly advantage or disadvantage certain individuals or groups [1]. Bias can arise from various sources such as data selection, annotation processes, models and research design. These biases can potentially lead to unfair or discriminatory outcomes through NLP applications [13]. For instance, biased language models might generate discriminatory content or fail to accurately understand and respond to underrepresented languages. Consequently, addressing and mitigating bias in NLP has become a critical research endeavour. Researchers are exploring techniques to measure and mitigate bias across diverse domains and languages [22, 23, 24]. Common debiasing methods include data reweighing and resampling, debiasing word embeddings, counterfactual data augmentation and bias fine-tuning [15, 16].
### Bias in Abusive Language Detection
Previous work has focused on identifying and mitigating different forms of social bias in abusive language detection, such as gender bias [20], dialect bias (e.g. African-Americans English) [17, 21, 22] and different forms of identity bias (e.g. transgender, black) [15, 16]. Moreover, Elsafoury et al. (2022) measured systematic offensive stereotyping bias (i.e., associating slurs or profane terms with specific groups of people, especially marginalized people) in different word embeddings.
However, little attention has been paid to temporal bias in abusive language detection. One exception is the work of Florio et al. (2020), who identified temporal bias in an Italian hate speech data set associated with immigrants. They investigated the impact of data size and time span on temporal robustness using two strategies, namely a sliding window model and an incremental model. Their results showed that adding training data temporally closer to the testing set greatly improved the performance, but simply increasing the size of the training data did not lead to performance improvement. They also found that offensive language in online contexts undergoes rapid changes in topics over different time periods. Moreover, McGillivray et al. (2022) made use of time-dependent lexical features to detect abusive language effectively by training on smaller and older data. To facilitate this, they obtained a list of words that underwent semantic change (i.e. acquired or lost an offensive meaning between 2019 and 2020). Their results showed that semantic change impacts abusive language detection and that detection can be improved by taking this change into account instead of depending on large labeled data sets. However, both works restricted themselves to a single data set and a single language, without exploring other languages.
### Temporal Bias in Classification Tasks
Temporal bias occurs in classification tasks due to the variation and evolution of data patterns over time. This temporal variation can pose difficulties for machine learning models as patterns learned from one time period may not be applicable in another. Temporal bias was assessed in various classification tasks such as rumour detection [18], stance detection [19] and multi-label classification tasks related to legislation and biomedicine [13]. Mu et al. (2023) found that domain-adapted pre-trained language models are less sensitive to time and thus are beneficial to temporal gap mitigation; while Chalkidis and Sogaard (2022) proposed group-robust algorithms to reduce the temporal bias in multi-label classification. Moreover, Alkhalifa, Kochkina, and Zubiaga (2023) investigated the impact of word representations and machine learning model choice on temporal performance of various classification tasks such as stance detection and sentiment analysis.
## Data
We study two widely used English abusive data sets (_WASEEM_ and _FOUNTA_). We also study a Chinese data set (_JIANG_), a Spanish data set (_PEREIRA_), and an Italian data set (_SANGUINETTI_), in order to explore the impact of temporality on different languages. We choose these data sets because the creation time of each post is provided or accessible (via tweet IDs). Details of the data sets are shown in Table 1.
**WASEEM** (Waseem and Hovy, 2016) is an English abusive data set focusing on sexism and racism. They collect the tweets by manually searching common terms related to religious, sexual, gender, and ethnic minorities, and by using the public Twitter search API. They combine these two methods to ensure that non-offensive tweets containing clearly or potentially offensive words are also obtained. The annotations are created manually by experts and then reviewed by an additional gender studies expert. We merge the original _sexism_ and _racism_ labels into a single _abusive_ label, and rename the _neither_ label as _non-abusive_.
**FOUNTA** (Founta et al., 2018) is an English data set collected from Twitter containing two types of online abuse expressions: abusive and hateful. They randomly collect and sample the data, using text analysis and machine learning techniques to create a boosted set of tweets that are likely to belong to the two abusive classes. The data is then
annotated by crowdsourced workers. Similar to Leonardelli et al. (2021), we map the four labels in the data set onto a binary abusive or non-abusive label: we exclude tweets labeled as _spam_, merge the _abusive_ and _hateful_ labels into _abusive_, and rename the _normal_ label as _non-abusive_.
**JIANG** (Jiang et al., 2022) is a Chinese sexism data set collected from Sina Weibo (a Chinese microblogging platform). They first collect gender-related Weibo posts by searching keywords such as 'feminism' and 'gender discrimination'. They then extract the comments linked to these posts and filter them to produce the final data set, which is annotated by three PhD students.
**PEREIRA** (Pereira-Kohatsu et al., 2019) is a Spanish hate speech data set annotated by experts. They randomly collect the data using the Twitter REST API and filter it using seven dictionaries, six of which represent different types of hate speech (e.g., race, gender), while the last one contains generic insults.
**SANGUINETTI** (Sanguinetti et al., 2018) is an Italian hate speech data set targeting immigrants, Roma and Muslims. They obtain the tweets by selecting a set of neutral keywords related to each target. The data is annotated by a team of both expert and crowdsourced annotators.
### Data Filtering
Since there is no time information or tweet content in the FOUNTA and SANGUINETTI data sets, we re-obtain the tweets and their creation times using the Twitter Academic API, based on the provided tweet IDs. Given the provided tweet IDs and related texts in the PEREIRA corpus, we use them directly without re-collecting the data to avoid data loss, as Twitter IDs are time-ordered1. For all data sets, we remove duplicates and any tweets with no creation time information.
Footnote 1: [https://developer.twitter.com/en/docs/twitter-ids](https://developer.twitter.com/en/docs/twitter-ids)
### Data Splits
We divide the data into training and testing sets using two strategies, namely random splits and chronological splits. The statistics of each data set are shown in Table 2. Two of the data sets cover only a short period (FOUNTA contains many tweets but covers only 10 days, while PEREIRA covers 10 months but is fairly small in size), whereas the other data sets span two or more years.
#### Random Splits
We randomly split the data sets into training and testing sets and keep class distribution the same as the original data sets.
#### Chronological Splits
We adopt a stratified chronological split strategy following the method in Mu, Bontcheva, and Aletras (2023). We first sort the abusive and non-abusive texts separately in chronological order. Then, we extract the first 70% of posts from abusive and non-abusive sets separately and combine them as the training set. Similarly, we combine the last 15% of posts from abusive and non-abusive sets as the testing set. The middle part of the two sets is merged into the validation set. In this way, the distribution of labels in each set is consistent with the original data.
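A sketch of this stratified chronological split, assuming each data set is held in a pandas DataFrame with `created_at` and `label` columns (the column and function names are our own, not from the original implementation):

```python
import pandas as pd

def chrono_split(df, train=0.7, test=0.15):
    parts = {"train": [], "val": [], "test": []}
    for _, group in df.groupby("label"):           # stratify by class
        g = group.sort_values("created_at")        # oldest to newest
        n = len(g)
        n_tr, n_te = int(n * train), int(n * test)
        parts["train"].append(g.iloc[:n_tr])       # earliest 70%
        parts["test"].append(g.iloc[n - n_te:])    # latest 15%
        parts["val"].append(g.iloc[n_tr:n - n_te]) # middle 15%
    return {k: pd.concat(v).reset_index(drop=True) for k, v in parts.items()}
```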
## Predictive Models
**LR** We use Logistic Regression with bag-of-words features and L2 regularization as our baseline (LR).
**BERT** (Bidirectional Encoder Representations from Transformers; Devlin et al. 2018) is a transformer-based Vaswani et al. (2017) language model, pre-trained on large corpora such as the English Wikipedia and the Google Books corpus. During pre-training, it uses a technique called masked language modeling (MLM), in which some of the words in the input text are randomly masked and the model aims to predict them from the context Devlin et al. (2018). We fine-tune the BERT model for abusive language detection by adding an output layer with a softmax activation function.
#### RoBERTa
is an extension of BERT trained on more data with different hyperparameters and has achieved better performance in multiple classification tasks Liu et al. (2019). We fine-tune RoBERTa in a similar way to BERT.
#### RoBERTa-hate-speech
This domain adaptation model2 is trained on 11 English data sets for hate and toxicity based on the RoBERTa-base model Vidgen et al. (2020).
Footnote 2: [https://rb.gy/k5x9t](https://rb.gy/k5x9t)
#### OA
We use the OpenAssistant (OA) 30B model developed by LAION-AI, which fine-tunes the LLaMA (Large Language Model Meta AI; Touvron et al., 2023) 30B model using the OA dataset. Since the original LLaMA model is
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Dataset** & **Language** & **Source** & **Time** & **Size** & **Labels** \\ \hline Waseem and Hovy (2016) & English & Twitter & 07-04-2013 - 06-01-2016 (33 months) & 16,914 & neither, sexism, racism \\ \hline Founta et al. (2018) & English & Twitter & 30-03-2017 - 08-04-2017 (10 days) & 80,000 & normal, spam, abusive, hateful \\ \hline Jiang et al. (2022) & Chinese & Weibo & 06-04-2012 - 26-06-2020 (8 years) & 8,969 & sexism, not sexism \\ \hline Pereira-Kohatsu et al. (2019) & Spanish & Twitter & 04-02-2017 - 22-12-2017 (10 months) & 6,000 & hate speech, not hate speech \\ \hline Sanguinetti et al. (2018) & Italian & Twitter & 26-02-2015 - 25-04-2017 (26 months) & 6,928 & hate speech, not hate speech \\ \hline \end{tabular}
\end{table}
Table 1: Data sets details.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Dataset** & **Training** & **Validation** & **Testing** & **All** \\ \hline WASEEM & 12,214 & 2,156 & 2,536 & 16,906 \\ FOUNTA & 27,368 & 5,683 & 4,830 & 37,881 \\ \hline JIANG & 6,335 & 1,118 & 1,316 & 8,769 \\ PEREIRA & 4,335 & 765 & 900 & 6,000 \\ SANGUINETTI & 2,861 & 595 & 506 & 3,962 \\ \hline \end{tabular}
\end{table}
Table 2: Data sets statistics.
not fully open-source, we obtain the xor weights from HuggingFace3 and apply 8-bit quantisation techniques via BitsAndBytes [16] to decrease the inference memory requirements. We use OA for zero-shot classification where we provide the model with a sequence of texts and a prompt that describes what we want our model to do.
Footnote 3: [https://huggingface.co/OpenAssistant](https://huggingface.co/OpenAssistant)
Footnote 4: [https://github.com/fxsjy/jieba](https://github.com/fxsjy/jieba)
## Experimental Setup
Tweet Pre-Processing For all data sets, we replace username mentions and hyperlinks with the placeholder tokens \(<\)USER\(>\) and \(<\)URL\(>\), respectively. For the Chinese data set, we use Jieba4, a Chinese text segmentation tool, to tokenize the texts.
Footnote 5: [https://huggingface.co/roberta-base](https://huggingface.co/roberta-base)
Hyperparameters For all the English data sets, we use RoBERTa-base5; for the data sets in other languages, we use bert-base-chinese6, bert-base-spanish-wwm-cased7 and bert-base-italian-cased8, respectively, which are BERT-base models trained on large corpora of the corresponding language. We fine-tune all models with learning rate \(l\) = 3e-6, selected from \(l\in\) {1e-4, 1e-5, 5e-6, 3e-6, 1e-6, 1e-7}. The batch size is set to 32 and the maximum sequence length to 128. All experiments are performed on an NVIDIA Titan RTX GPU with 24GB memory. We follow the official guidelines9 to run the 30B OA model on a local server with two NVIDIA A100 GPUs.
Footnote 6: [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese)
Footnote 7: [https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
Footnote 8: [https://huggingface.co/dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased)
Training and EvaluationWe split the data sets into training, validation and testing sets with a ratio of 70:15:15. During training, we choose the model with the smallest validation loss value over 12 epochs. We run all models five times with different random seeds for both random and chronological split strategies. We report predictive performance using the average Accuracy, Precision, Recall and macro-F1 scores. For OA, we only input the prompt (i.e. _identify if the following text is abusive or non-abusive_) and the same testing sets using two data split strategies.
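The two split strategies can be sketched as follows (a minimal sketch; the `timestamp` column name is an assumption):

```python
import pandas as pd


def split_dataset(df: pd.DataFrame, strategy: str, seed: int = 0):
    """70:15:15 train/validation/test split, random or chronological."""
    if strategy == "random":
        df = df.sample(frac=1.0, random_state=seed)
    else:  # "chronological": oldest posts for training, newest for testing
        df = df.sort_values("timestamp")  # assumed column name
    df = df.reset_index(drop=True)
    n = len(df)
    return (df.iloc[: int(0.70 * n)],
            df.iloc[int(0.70 * n): int(0.85 * n)],
            df.iloc[int(0.85 * n):])
```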
## Results
The predictive results are shown in Table 3 (English data sets)10 and Table 4 (data sets in Chinese, Spanish and Italian). Values in the _Performance Drop_ column are calculated by subtracting the results of chronological splits from that of random splits, where \(\downarrow\) indicates a positive value and \(\uparrow\) indicates a negative value. In other words, performance drop refers to the performance decreases using chronological splits compared to random splits with the same model.
\begin{table}
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Splits**} & \multicolumn{4}{c|}{**JIANG**} & \multicolumn{4}{c|}{**PEREIRA**} & \multicolumn{4}{c|}{**SANGUINETTI**} \\ \cline{3-14} & & Acc & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 \\ \hline \multirow{3}{*}{**LR**} & _Random_ & 76.14 & 73.24 & 72.81 & 73.01 & 77.00 & 70.35 & 70.95 & 70.64 & 86.22 & 73.93 & 77.19 & 75.36 \\ \cline{2-14} & _Chronological_ & 71.50 & 68.28 & 68.76 & 68.49 & 80.67 & 76.08 & 69.86 & 71.83 & 85.21 & 71.71 & 71.71 & 71.71 \\ \cline{2-14} & Performance Drop & 4.64\(\downarrow\) & 4.96\(\downarrow\) & 4.05\(\downarrow\) & 4.52\(\downarrow\) & 3.67\(\uparrow\) & 5.73\(\uparrow\) & 1.09\(\downarrow\) & **1.19\(\uparrow\)** & 1.01\(\downarrow\) & 2.22\(\downarrow\) & 5.48\(\downarrow\) & **3.65\(\downarrow\)** \\ \hline \multirow{3}{*}{**BERT**} & _Random_ & 80.68 & 78.95 & 76.65 & 77.52 & 80.67 & 75.30 & 72.31 & 73.44 & 88.07 & 78.09 & 72.69 & 74.85 \\ \cline{2-14} & _Chronological_ & 78.66 & 76.28 & 77.80 & 76.81 & 82.78 & 83.15 & 69.72 & 72.67 & 84.87 & 70.22 & 63.08 & 65.13 \\ \cline{2-14} & Performance Drop & 2.02\(\downarrow\) & 2.67\(\downarrow\) & 1.15\(\uparrow\) & **0.71\(\downarrow\)** & 2.11\(\uparrow\) & 7.85\(\uparrow\) & 2.59\(\downarrow\) & 0.77\(\downarrow\) & 3.20\(\downarrow\) & 7.87\(\downarrow\) & 9.61\(\downarrow\) & 9.72\(\downarrow\) \\ \hline \end{tabular}
\end{table}
Table 4: Model predictive performance on a Chinese, Spanish and Italian data set using random and chronological splits. The smallest performance drops (or rise) across models are in bold.
\begin{table}
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Splits**} & \multicolumn{4}{c|}{**WASEEM**} & \multicolumn{4}{c|}{**FOUNTA**} \\ \cline{3-10} & & Acc & P & R & F1 & Acc & P & R & F1 \\ \hline \multirow{3}{*}{**LR**} & _Random Splits_ & 81.94 & 79.27 & 79.08 & 79.18 & 92.54 & 83.66 & 84.69 & 84.16 \\ \cline{2-10} & _Chronological Splits_ & 74.88 & 76.93 & 62.69 & 63.15 & 93.26 & 85.56 & 85.28 & 85.42 \\ \cline{2-10} & Performance Drop & 7.06\(\downarrow\) & 2.33\(\downarrow\) & 16.39\(\downarrow\) & 16.03\(\downarrow\) & 0.72\(\uparrow\) & 1.90\(\uparrow\) & 0.59\(\uparrow\) & **1.26\(\uparrow\)** \\ \hline \multirow{3}{*}{**RoBERTa**} & _Random Splits_ & 85.73 & 84.10 & 82.65 & 83.26 & 94.95 & 90.98 & 86.43 & 88.49 \\ \cline{2-10} & _Chronological Splits_ & 76.77 & 80.54 & 65.20 & 66.33 & 94.81 & 91.16 & 85.45 & 87.99 \\ \cline{2-10} & Performance Drop & 8.96\(\downarrow\) & 3.56\(\downarrow\) & 17.45\(\downarrow\) & 16.93\(\downarrow\) & 0.14\(\downarrow\) & 0.18\(\uparrow\) & 0.98\(\downarrow\) & 0.50\(\downarrow\) \\ \hline \multirow{3}{*}{**RoBERTa-hate-speech**} & _Random Splits_ & 89.20 & 87.50 & 87.82 & 87.64 & 96.42 & 93.16 & 91.11 & 92.09 \\ \cline{2-10} & _Chronological Splits_ & 81.58 & 85.99 & 72.21 & 74.71 & 96.07 & 92.03 & 90.79 & 91.39 \\ \cline{2-10} & Performance Drop & 7.62\(\downarrow\) & 1.51\(\downarrow\) & 15.61\(\downarrow\) & **12.93\(\downarrow\)** & 0.35\(\downarrow\) & 1.13\(\downarrow\) & 0.32\(\downarrow\) & 0.70\(\downarrow\) \\ \hline \multirow{3}{*}{**OA**} & _Random Splits_ & 64.47 & 68.96 & 70.88 & 64.26 & 80.43 & 68.11 & 81.93 & 70.54 \\ \cline{2-10} & _Chronological Splits_ & 72.36 & 72.53 & 75.89 & 71.48 & 80.75 & 68.24 & 81.83 & 70.77 \\ \cline{2-10} & Performance Drop & 7.89\(\uparrow\) & 3.57\(\uparrow\) & 5.01\(\uparrow\) & 7.22\(\uparrow\) & 0.32\(\uparrow\) & 0.13\(\uparrow\) & 0.10\(\downarrow\) & 0.23\(\uparrow\) \\ \hline \end{tabular}
\end{table}
Table 3: Model predictive performance on English data sets using random and chronological splits. The smallest F1 performance drop (or rise) across models is in bold.
Random vs. chronological splitsIn general, we observe performance degradation using chronological splits compared to random splits across all pretrained language models (PLMs). This is in line with previous work on other classification tasks such as document classification [1], stance detection [12] and rumour detection [12]. Furthermore, the longer the time span, the greater the performance degradation. For the data sets with long time spans, we observe a 16.93\(\downarrow\) F1 on WASEEM using RoBERTa and a 9.72\(\downarrow\) F1 on SANGUINETTI using BERT; while for the data sets with short time spans, we observe only a 0.50\(\downarrow\) F1 on FOUNTA using RoBERTa and a 0.77\(\downarrow\) F1 on PEREIRA using BERT.
However, although the performance of LR is not as good as that of PLMs, it has a smaller performance drop (or even performance rise) on data sets with small time spans (e.g., 1.26\(\uparrow\) F1 on FOUNTA compared with 0.50\(\downarrow\) F1 using RoBERTa).
Interestingly, we observe only a slight performance drop on the data set of JIANG (0.71\(\downarrow\) F1 using BERT) despite the eight-year time span. This may be due to the differences in the expression of abusive language online in Chinese and English (JIANG vs. WASEEM) or different collection methods between these two data sets. Another speculation is that JIANG only focuses on sexist abuse (sexism or not) which is one of the domains of abusive language. In this case, it covers fewer topics than other abusive data sets, which makes the performance less affected by temporalities (we will further investigate it in the following section).
Vanilla vs. domain adaptation modelsWe compare the vanilla RoBERTa model with the domain adaptation model (RoBERTa-hate-speech) on two English data sets. We found that RoBERTa-hate-speech not only outperforms RoBERTa across two data sets using both random and chronological splits as expected but also has a smaller performance drop on WASEEM (12.93\(\downarrow\)), where tweets span three years. This suggests that domain adaptation models can help mitigate temporal bias in abusive language detection, especially over long time spans. However, there are no domain-specific models for other languages, suggesting that further efforts are needed to develop such models.
Zero-Shot ClassificationSince OA was trained after the collection periods of WASEEM (Waseem and Hovy 2016) and FOUNTA (Founta et al. 2018), we hypothesize that the difference in predictive results between the two data split strategies using OA will be negligible (e.g. smaller than 1). The performance drop on FOUNTA is as expected (0.23\(\uparrow\) F1); while the F1 performance on WASEEM using chronological splits is 7.22 higher than using random splits. We speculate that the large performance difference between these two splitting strategies on WASEEM is due to more explicit abusive content in the testing set under chronological splits, as temporalities are less likely to be an influencing factor for OA. To investigate this, we calculate the swearing rates (the percentage of tweets containing at least one swear word among all tweets) of these two testing sets using an English swearword list from Wiktionary (words considered taboo and vulgar or offensive)11. The swearing rate of WASEEM using random and chronological splits is 5.60% and 8.40%; while that of FOUNTA is 4.64% and 5.51% respectively. Based on the results of the two English data sets, the performance of OA is more likely to be influenced by the explicitness of abusive expressions than by temporal factors. However, more abusive data sets are needed to draw a more robust conclusion.
Footnote 11: [https://en.wiktionary.org/wiki/Category:English_swear_words](https://en.wiktionary.org/wiki/Category:English_swear_words)
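The swearing rate can be computed with a simple whitespace-tokenised sketch (the tokenisation is an assumption; matching sub-word variants would require a more careful implementation):

```python
def swearing_rate(tweets, swear_words):
    """Percentage of tweets containing at least one listed swear word."""
    vocab = {w.lower() for w in swear_words}
    hits = sum(any(tok in vocab for tok in t.lower().split()) for t in tweets)
    return 100.0 * hits / len(tweets)
```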
We further explore whether temporal bias has a greater influence on abusive texts or non-abusive texts. Table 5 shows the performance of each class as well as the overall performance on five data sets using their best-performing models (_RoBERTa-hate-speech_ for English data sets and _BERT_ for other language data sets). In general, the performance drop in abusive classes is larger than that in non-abusive classes. Also, the larger the time span of the data sets, the greater the difference in performance degradation between abusive and non-abusive classes (e.g. F1 1.8\(\uparrow\) vs. 3.8\(\downarrow\) for PEREIRA with a ten-month time span and F1 1.6\(\downarrow\) vs. 17.8\(\downarrow\) for SANGUINETTI with a two-year time span). However, JIANG (Jiang et al. 2022) is an exception where F1 scores of abusive classes increase by 1.2. We also notice that the degradation of precision for non-abusive content is larger than that of recall using chronological splits (e.g. 3.2\(\downarrow\) precision and 0.4\(\downarrow\) recall in SANGUINETTI); while for abusive content, the performance drop in precision and recall is reversed (e.g. 5.4\(\uparrow\) precision and 43.8\(\downarrow\) recall in WASEEM). This indicates that by using chronological splits, non-abusive texts are more likely to be detected; fewer abusive texts can be detected but the detected ones are more likely to be correct.
## Analysis
### Text Similarities
We hypothesize that the drop in performance is due to a larger difference between training and testing sets using chronological splits. To verify this, we use three methods to
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Random Split} & \multicolumn{3}{c|}{Chronological Split} \\ \hline & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline \multicolumn{7}{|c|}{WASEEM} \\ \hline Non-abusive & 92.6 & 91.6 & 92.0 & 77.6 (15.0\(\downarrow\)) & 97.4 (5.8\(\uparrow\)) & 86.6 (5.4\(\downarrow\)) \\ \hline Abusive & 82.6 & 84.0 & 83.0 & 88.0 (5.4\(\uparrow\)) & 40.2 (43.8\(\downarrow\)) & 55.6 (27.4\(\downarrow\)) \\ \hline Overall & 87.5 & 87.8 & 87.6 & 86.9 (0.6\(\downarrow\)) & 72.2 (15.6\(\downarrow\)) & 74.7 (12.9\(\downarrow\)) \\ \hline \multicolumn{7}{|c|}{FOUNTA} \\ \hline Non-abusive & 97.6 & 98.2 & 98.0 & 96.3 (1.3\(\downarrow\)) & 98.0 (0.2\(\downarrow\)) & 97.0 (1.0\(\downarrow\)) \\ \hline Abusive & 88.6 & 83.8 & 86.4 & 86.2 (2.4\(\downarrow\)) & 77.4 (6.4\(\downarrow\)) & 81.6 (4.8\(\downarrow\)) \\ \hline Overall & 93.2 & 91.1 & 92.1 & 92.0 (1.2\(\downarrow\)) & 90.8 (0.3\(\downarrow\)) & 91.4 (0.7\(\downarrow\)) \\ \hline \multicolumn{7}{|c|}{JIANG} \\ \hline Non-abusive & 83.2 & 88.8 & 85.8 & 86.2 (3.0\(\uparrow\)) & 80.6 (8.2\(\downarrow\)) & 83.2 (2.6\(\downarrow\)) \\ \hline Abusive & 74.6 & 64.2 & 69.0 & 66.2 (8.4\(\downarrow\)) & 74.8 (10.6\(\uparrow\)) & 70.2 (1.2\(\uparrow\)) \\ \hline Overall & 79.0 & 76.7 & 77.5 & 76.3 (2.7\(\downarrow\)) & 77.8 (1.1\(\uparrow\)) & 76.8 (0.7\(\downarrow\)) \\ \hline \multicolumn{7}{|c|}{PEREIRA} \\ \hline Non-abusive & 85.0 & 89.6 & 87.4 & 82.8 (2.2\(\downarrow\)) & 97.0 (7.4\(\uparrow\)) & 89.2 (1.8\(\uparrow\)) \\ \hline Abusive & 65.6 & 55.0 & 59.6 & 83.6 (18.0\(\uparrow\)) & 42.4 (12.6\(\downarrow\)) & 55.8 (3.8\(\downarrow\)) \\ \hline Overall & 75.3 & 72.3 & 73.4 & 83.2 (7.9\(\uparrow\)) & 69.7 (2.6\(\downarrow\)) & 72.7 (0.7\(\downarrow\)) \\ \hline \multicolumn{7}{|c|}{SANGUINETTI} \\ \hline Non-abusive & 91.4 & 95.0 & 93.2 & 88.2 (3.2\(\downarrow\)) & 94.6 (0.4\(\downarrow\)) & 91.6 (1.6\(\downarrow\)) \\ \hline Abusive & 64.8 & 50.4 & 56.8 & 52.2 (12.6\(\downarrow\)) & 31.4 (19.0\(\downarrow\)) & 39.0 (17.8\(\downarrow\)) \\ \hline Overall & 78.1 & 72.7 & 74.9 & 70.2 (7.9\(\downarrow\)) & 63.1 (9.6\(\downarrow\)) & 65.1 (9.8\(\downarrow\)) \\ \hline \end{tabular}
\end{table}
Table 5: Model predictive performance of each class as well as the overall performance using random and chronological splits.
calculate text similarities: (a) Jaccard similarity coefficient; (b) DICE coefficient [14] and (c) overlap coefficient (OC).
**Jaccard similarity coefficient** is defined as the size of the intersection divided by the size of the union of two sets, A and B,
\[J(A,B)=\frac{|A\cap B|}{|A\cup B|} \tag{1}\]
**DICE coefficient** is defined as twice the size of the intersection divided by the sum size of two sets, A and B,
\[DICE(A,B)=\frac{2*|A\cap B|}{|A|+|B|} \tag{2}\]
**Overlap coefficient** is defined as the size of the intersection divided by the smaller size of the two sets, A and B,
\[OC(A,B)=\frac{|A\cap B|}{\min(|A|,|B|)} \tag{3}\]
where A and B denote the sets of distinctive words from the training and test sets, respectively, and \(|A\cap B|\) and \(|A\cup B|\) indicate the number of distinctive words in the intersection and union of the two sets. When the two sets share no vocabulary, all three coefficients are zero; when the sets are identical, all three are 1.
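The three coefficients translate directly into set operations on the distinctive-word vocabularies, e.g.:

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)


def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b))


def overlap(a: set, b: set) -> float:
    return len(a & b) / min(len(a), len(b))


train_vocab, test_vocab = {"you", "are", "nice"}, {"you", "are", "mean"}
print(jaccard(train_vocab, test_vocab),   # 0.5
      dice(train_vocab, test_vocab),      # 0.667
      overlap(train_vocab, test_vocab))   # 0.667
```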
Table 6 shows the similarity coefficients between training and testing sets using _random_ and _chronological_ splits. Firstly, we notice that values from all three similarity measures drop across all data sets, indicating that using chronological splits leads to a larger difference between training and testing sets. Secondly, the longer the time span of a data set, the larger the similarity drop. For example, OC of WASEEM (three years) drops 0.044 while that of FOUNTA (ten days) drops 0.004. Also, there tends to be a positive correlation between the magnitude of similarity reduction and the performance drop. However, considering the minor decline (a drop of 0.71 F1) in the predictive performance of JIANG (eight years), the text similarity drop is not consistent (e.g. OC drops 0.31). This can be explained by the fact that text similarity calculation is granular down to words, while topics might be limited (in number and variety) in a sexist data set (i.e. JIANG).
### Feature Analysis

We further compare the most indicative terms in the testing sets of WASEEM and JIANG, because their time spans are both long (three years vs. eight years) while their predictive performance drops vary widely (16.93 vs. 0.71 F1).
For WASEEM, most abusive tweets in the testing set using chronological splits involve an Australian TV show, My Kitchen Rules (MKR) (e.g. _#mkr_, _#cuntandandre_, _#kateandre_, _kat_, _andre_, _annie_), which is one of the queried terms for data collection. Our speculation is that the discussion about this show began to emerge during the later timeframe of the data set (within the time covered by the testing set when using chronological splits). However, there are hardly any new topics in the testing set when using random splits (e.g. _countless_, _lower_, _forget_).
For JIANG, testing sets using both split strategies mainly contain basic or gender-related terms (e.g. _more_, _not_, _misogyny_, _female manager_) and do not involve terms related to specific events. This is also correlated to how they collect the data: searching gender-related keywords such as 'feminism' and 'gender discrimination' for sexist content instead of using specific events as keywords. This suggests that collecting data using generic terms as keywords instead of terms associated with current hot events is likely to introduce less temporal bias.
### Topic Distribution
We also explore topic distribution over time across two English data sets. We first use a topic modelling technique, BERTopic12, to extract the 10 most important topic groups in a data set. Then we manually remove repeated or commonly used words (e.g. 'this','said') in these topic groups and combine similar groups into one group (e.g. combining 'women','men','she', and 'girls' into _gender-related_ group). The generated topic groups of each data set are shown as follows13:
Footnote 12: [https://github.com/MaartenGr/BERTopic](https://github.com/MaartenGr/BERTopic)
Footnote 13: We also try to extract topics of data sets with other language using BERTopic but the results are not good.
**WASEEM:** Group 1: {_sexist_, _women_, _men_, _bitch_, _her_, _she_, _girls_, _female_, _woman_, _notsexist_}; Group 2: {_kat_, _mkr_, _face_, _mkr2015_, _karma_}; Group 3: {_drive_, _drivers_, _driving_, _driver_}; Group 4: {_blondes_, _blonde_, _pretty_, _hot_, _dumb_}; Group 5: {_israel_, _hamas_, _palestinians_, _israelis_, _palestinian_, _palestine_, _gays_, _destroy_, _muslims_}; Group 6: {_sports_, _announcers_, _commentators_, _announcer_, _football_, _stand_, _commentator_}; Group 7: {_feminism_, _feminists_, _feminist_, _equality_, _movement_, _hypocrisy_, _rights_, _emma_, _modern_}; Group 8: {_funny_, _comedians_, _comedian_, _jokes_}.
**FOUNTA:** Group 1: {_trump_, _president_, _obama_, _voted_, _republicans_, _idiot_}; Group 2: {_nigga_, _niggas_}; Group 3: {_hate_, _bitch_, _bad_, _fucking_, _bitches_, _she_}; Group 4: {_syria_, _assad_, _syrian_, _chemical_, _trump_, _missiles_, _attack_, _obama_, _war_, _refugees_}; Group 5: {_pizza_, _eat_, _pineapple_, _digusting_, _food_, _home_, _taco_}; Group 6: {_wrestlemania_, _wwe_, _match_, _raw_, _wrestlemania33_}.
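The extraction step before our manual cleaning and merging can be sketched as follows (the `load_preprocessed_tweets` helper is a hypothetical placeholder for the data-loading code):

```python
from bertopic import BERTopic

docs = load_preprocessed_tweets("WASEEM")  # hypothetical helper

topic_model = BERTopic(nr_topics=10)       # keep the 10 largest topic groups
topics, probs = topic_model.fit_transform(docs)

# Inspect the top words of each topic prior to manual cleaning/merging
for topic_id in topic_model.get_topic_info()["Topic"]:
    if topic_id != -1:                     # -1 is BERTopic's outlier topic
        print(topic_id, [word for word, _ in topic_model.get_topic(topic_id)])
```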
Figure 1 shows the topic distributions over time of these two data sets. For WASEEM, Group 2 (MKR TV show related ), 5 (race and religion related) and 7 (feminism related) appear only after 2015, which is also the starting time of the testing data set using chronological splits [1]. This results in the models barely seeing these words in the training set and a lack of knowledge in these three topics during training, especially for Group 2. Thus, it would be easier for models to fail when predicting text involving these topics using chronological splits. All topic groups are evenly distributed in FOUNTA except for Group 6 (wrestling match related). However, Topic Group 6 rarely appears in the testing set using chronological splits [1], which is less likely to influence the performance.
### Filtered Data Set
Figure 1: Topic distribution over time.

We explore whether removing words related to specific topics or events will enhance the robustness of the models when predicting abusive content. We hypothesize that removing these words will hurt model performance only slightly while narrowing the gap between random and chronological splits. We experiment with WASEEM as its performance drop leaves the most room for reduction. We filter the data set by excluding three types of words: (1) words in all eight groups extracted by BERTopic (**D1**); (2) words selected by attention mechanisms (**D2**); and (3) the union of the words extracted by (1) and (2) (**D3**). For (2), we first use the RoBERTa-hate-speech model to produce attention scores that represent a probability distribution over each text. We then manually remove topic-related tokens among the top five tokens with the highest probability in each abusive tweet. Most of the removed tokens are names or hashtags related to the cooking TV show.
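For (2), the token-selection step can be sketched as below (a minimal sketch: aggregating the last-layer attention paid by the [CLS] token, averaged over heads, is one reasonable reading of "attention scores" rather than the exact procedure):

```python
import torch


def top_attention_tokens(text, tokenizer, model, k=5):
    """Return the k tokens receiving the most [CLS] attention (last layer)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        out = model(**enc, output_attentions=True)
    # out.attentions[-1]: (batch, heads, seq, seq); average over heads and
    # take the attention distribution of the first query position ([CLS])
    scores = out.attentions[-1].mean(dim=1)[0, 0]
    top_ids = scores.topk(k).indices
    return tokenizer.convert_ids_to_tokens(enc["input_ids"][0][top_ids])
```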
The results on the filtered data sets are shown in Table 9. As in the previous experiment, we run each method five times. First, all three strategies for removing topic-related words hurt performance in most cases, especially for chronological splits (e.g. 87.64 vs. 84.75 F1 using random splits, 74.71 vs. 72.11 F1 using chronological splits). However, the performance on D2 using chronological splits improves by 0.32 F1. Second, using more robust data sets leads to smaller performance drops. We achieve the smallest performance drop (9.65\(\downarrow\) F1) using D3. Also, using D2 achieves a comparable performance drop while only slightly hurting performance. This suggests that filtering out specific topic-related words in a data set (i.e. a more robust data set) helps reduce temporal bias.
### Error Analysis
Additionally, we perform an error analysis on two data sets containing sexist abuse, WASEEM and JIANG, using chronological splits. For WASEEM, we found that most errors happen when content involves the TV show (MKR). Also, when names from the show are mentioned, it is easy for models to misclassify the texts as non-abusive. We guess this is because the model cannot associate names in the testing set with male, female (gender-related) or abusive if it has not seen those names in the training set. However, the annotators of this data set have prior knowledge of this TV show and its characters. Thus, they are able to classify dissatisfaction or hatred toward specific characters as _sexist_. In the following two examples, tweets belonging to _abusive_ are misclassified as _non-abusive_ (names are highlighted in bold)14:
Footnote 14: Note that WASEEM is originally a sexist and racist data set, so other abusive content will be labeled as neither (_non-abusive_ in our paper).
T1: _**Kat** on #mkr is such a horrible person.. I wish **Kat** and **Andre** would just get eliminated already._
T2: _#MKR-I am seriously considering not watching just because I have to see **Kats** face. God. I want to slap it with a spatula!_
However, when gender-related words also appear in the content, models are more likely to classify them correctly. The following tweets are correctly classified as _abusive_:
T3: _#katandandre gaaaaaah I just want to slap **her** back to WA #MKR_
T4: _#MKR **Girls**, thank you for filling the slapper quotient on this years series... we no longer have a need for **bitchy blondes**! Au Revoir!_
For JIANG, it is easy for models to fail to understand the actual meaning of a text without knowing traditional Chinese cultural viewpoints related to gender and marriage (e.g. some people value sons more than daughters). The following text, which belongs to _abusive_ (originally labelled _sexism_), is wrongly classified as _non-abusive_:
T5: _[Chinese-language example tweet; the original text is garbled in the source]_
## Limitations
This work aims to investigate the impact and causes of temporalities across different abusive data sets. In our work, we could only evaluate a limited number of data sets that provide time information (e.g. 2 English ones, 2 data sets spanning more than 3 years), which limits control experiments for more sound comparisons. Also, all debiasing methods can only be applied to English abusive data sets due to the imperfect implementation of the required techniques in other languages (i.e. domain adaptation models, BERTopic, OA). Moreover, our studies on temporal bias only explore topic changes and lack a comprehensive understanding of language evolution over time.
## Conclusion
In this work, we investigate the impact of temporal bias on abusive language detection. We compare the predictive results using two data split methods (i.e. random and chronological splits) across different data sets (_RQ1_). The results indicate that temporal bias has a larger influence on data sets with larger time spans that were collected using keywords, especially specific event-related keywords. Language (or culture) may also be a factor, but due to insufficient data sets we cannot draw concrete conclusions. We also conduct extensive analysis including text similarities, feature analysis and topic distribution to explore the causes of temporalities (_RQ2_). We found that performance degradation is mostly caused by topic changes in our data sets. To provide a complete answer to _RQ3_, we filter a data set by removing topic-related words that appear in abusive texts. The predictive results suggest that using domain adaptation models and LLMs, and training on a more robust data set, can effectively reduce temporal bias in abusive language detection.
In the future, we plan to study temporal bias patterns in abusive data sets across different languages or platforms, aiming to understand the importance of considering the specific nature of the target variable when collecting the data sets and developing models. It can also be expanded to other text classification tasks.
## Ethics Statement
This work has received ethical approval from our Research Ethics Committee. All datasets are acquired either through the URLs provided in the original papers or by requesting them from the respective authors. Note that we did not gather any fresh data from Twitter for this study. Additionally, we can verify that the data has been completely anonymized prior to its utilization in the Language Model Inference process.
## Acknowledgements
This research is supported by a UKRI grant ES/T012714/1 ("Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online").
| The use of abusive language online has become a serious problem, harming individuals and adversely affecting society as a whole, with consequences ranging from psychological harm to real-world violence. Machine learning models have been developed to detect abusive language automatically, but these models suffer from temporal bias. Temporal bias refers to the way topics, language use, and social norms change over time. This work aims to investigate the impact of temporal bias across different languages and to explore mitigation methods. We evaluate the performance of models on abusive data sets drawn from different time periods. Our results show that temporal bias degrades the performance over time of models trained on past data. In addition, through extensive linguistic analysis of these abusive data sets, we examine language evolution and performance decline from a temporal perspective. |
2310.20380 | Dropout Strategy in Reinforcement Learning: Limiting the Surrogate
Objective Variance in Policy Optimization Methods | Policy-based reinforcement learning algorithms are widely used in various
fields. Among them, mainstream policy optimization algorithms such as TRPO and
PPO introduce importance sampling into policy iteration, which allows the reuse
of historical data. However, this can also lead to a high variance of the
surrogate objective and indirectly affects the stability and convergence of the
algorithm. In this paper, we first derived an upper bound of the surrogate
objective variance, which can grow quadratically with the increase of the
surrogate objective. Next, we proposed the dropout technique to avoid the
excessive increase of the surrogate objective variance caused by importance
sampling. Then, we introduced a general reinforcement learning framework
applicable to mainstream policy optimization methods, and applied the dropout
technique to the PPO algorithm to obtain the D-PPO variant. Finally, we conduct
comparative experiments between D-PPO and PPO algorithms in the Atari 2600
environment, and the results show that D-PPO achieved significant performance
improvements compared to PPO, and effectively limited the excessive increase of
the surrogate objective variance during training. | Zhengpeng Xie, Changdong Yu, Weizheng Qiao | 2023-10-31T11:38:26 | http://arxiv.org/abs/2310.20380v3 | # Dropout Strategy in Reinforcement Learning: Limiting the Surrogate Objective Variance in Policy Optimization Methods
###### Abstract
Policy-based reinforcement learning algorithms are widely used in various fields. Among them, mainstream policy optimization algorithms such as TRPO and PPO introduce importance sampling into policy iteration, which allows the reuse of historical data. However, this can also lead to a high variance of the surrogate objective and indirectly affects the stability and convergence of the algorithm. In this paper, we first derived an upper bound of the surrogate objective variance, which can grow quadratically with the increase of the surrogate objective. Next, we proposed the dropout technique to avoid the excessive increase of the surrogate objective variance caused by importance sampling. Then, we introduced a general reinforcement learning framework applicable to mainstream policy optimization methods, and applied the dropout technique to the PPO algorithm to obtain the D-PPO variant. Finally, we conduct comparative experiments between D-PPO and PPO algorithms in the Atari 2600 environment, and the results show that D-PPO achieved significant performance improvements compared to PPO, and effectively limited the excessive increase of the surrogate objective variance during training.
Deep Reinforcement Learning, policy optimization, importance sampling, actor-critic, proximal policy optimization, surrogate objective variance, dropout strategy
## I Introduction
Deep Reinforcement Learning (DRL) is a machine learning approach that combines deep learning and reinforcement learning to address end-to-end sequential decision-making problems. In DRL, an agent interacts with the environment and explores through trial and error to learn an optimal policy. In recent years, a number of DRL algorithms have been widely used in various fields, including board games [1, 2, 3], video games [5, 6, 12, 27], autonomous driving [7, 8], intelligent control [9, 10, 11], and so on. DRL has emerged as one of the hottest research topics in artificial intelligence.
During the development of DRL, scholars have proposed and improved many representative methods, which can be summarized into two categories: 1) value-based and 2) policy-based DRL algorithms. Value-based DRL algorithms originated from Deep Q-Networks (DQN) [12], which approximates the action value function \(Q(s,a)\) using a deep neural network and updates the network through the Bellman equation using historical data. Subsequent scholars have made a series of improvements to DQN. For example, Schaul _et al._[14] proposed the prioritized experience replay technique, which prioritizes the use of data with larger TD errors for network updates, improving the learning efficiency. van Hasselt _et al._[16] proposed the Double Q-learning algorithm, which mitigates the problem of overestimating the action value function \(Q(s,a)\) in DQN by introducing a target network in the update process. Wang _et al._[17] proposed the Dueling Network, which decomposes the action value function \(Q(s,a)\) into a state value function \(V(s)\) and an advantage function \(A(s,a)\), with the two sharing the same convolutional layers during training. The Dueling Network can better estimate the contribution of different actions to the state, improving the learning efficiency of DQN. Bellemare _et al._[18] modeled the distribution of the action value function \(Q(s,a)\) in DQN to avoid losing information about its distribution during the training process, thereby improving the performance of the DQN algorithm. Fortunato _et al._[19] proposed NoisyNet, which adds random noise to the fully connected layers of the network to enhance the exploration and robustness of the model. Finally, Hessel _et al._[20] integrated all the excellent variants of DQN and used a multi-step learning approach to calculate the error, resulting in the Rainbow DQN, which achieved state-of-the-art performance.
Unlike value-based reinforcement learning methods, policy-based reinforcement learning methods directly learn a policy network that outputs a probability distribution of actions based on the input state, and randomly samples an action from it. Therefore, it can effectively solve the problem of high-dimensional continuous action spaces. The policy-based reinforcement learning algorithm originated from the Reinforcement algorithm proposed by Sutton _et al._[21], which uses Monte Carlo (MC) method to approximate the policy gradient estimation. Subsequently, many policy optimization methods were proposed [24, 41], with TRPO and PPO being the most representative ones. The Trust Region Policy Optimization (TRPO) algorithm is proposed by Schulman _et al._[25], which introduces the trust region approach in optimization theory to policy optimization, representing the policy update process as a constrained optimization problem. By limiting the KL divergence between the old and new policies, the TRPO
algorithm limits the change in policy parameters in the update process. They also provided theoretical guarantees that the TRPO algorithm has monotonic improvement performance, which makes it more robust to hyperparameter settings and more stable than traditional policy gradient methods. However, the TRPO algorithm needs to solve a constrained optimization problem in each round of update, which leads to a significant computational overhead and makes it unsuitable for solving large-scale reinforcement learning problems. Schulman _et al._[26] proposed the proximal policy optimization (PPO) algorithm, which has two main variants, namely PPO-CLIP and PPO-PENALTY. PPO-PENALTY also uses the KL divergence between old and new policies as a constraint, but treats it as a penalty term in the objective function rather than a hard constraint, and automatically adjusts the penalty coefficient during policy updates to ensure constraint satisfaction. PPO-CLIP does not use KL divergence, but introduces a special ratio clipping function, which limits the ratio of output action probabilities between old and new policies, thus implicitly ensuring that the algorithm satisfies the constraint between old and new policies during the update process. The PPO-CLIP algorithm is widely used in many reinforcement learning environments due to its concise form, ease of implementation, and superior performance. Many subsequent studies have discussed whether the ratio clipping function in the PPO algorithm can effectively guarantee the trust region constraint [28, 29, 30, 31, 32, 33, 34]. However, at present, the equivalence between the ratio clipping function and the trust region constraint is not clear. Although the TRPO and PPO algorithms are widely used in many reinforcement learning environments, importance sampling can cause their surrogate objective variance to become very large during training, which is an urgent problem that needs to be addressed.
The process of policy optimization relies on the estimation of policy gradients, and its accuracy depends on the variance and bias of the estimate. A common approach to reducing variance is to add a value function that is independent of actions as a baseline to the policy update process. Generally, a state value function is used as the baseline, which naturally leads to the Actor-Critic (AC) [13] architecture. The introduction of a baseline can significantly reduce the variance of the policy gradient estimate. Schulman _et al._[35] proposed a technique called Generalized Advantage Estimation (GAE), which is an extension of the advantage function. GAE uses an exponentially weighted estimate of the advantage function to trade off the bias and variance of the policy gradient estimate. However, the above research is conducted for policy gradients, and there is a lack of systematic analysis and discussion on the variance of the objective function itself.
In summary, we focus on addressing the issue of excessive growth in the variance of the surrogate objective during the iterative process of introducing importance sampling strategies. Our main contributions are summarized as follows.
1. For the iterative process of introducing the importance sampling strategy, we derive the variance upper bound of its surrogate objective, and show that this upper bound approximately grows quadratically with the increase of the surrogate objective. To the best of our knowledge, we are the first to provide this upper bound.
2. A general reinforcement learning framework for traditional policy optimization methods is proposed, and the mathematical formalization of the dropout strategy is given.
3. Two feasible dropout strategies are proposed, and the feasibility of the proposed dropout strategy is explained based on the theoretical results of the surrogate objective variance.
4. Introducing the dropout technique into the PPO algorithm to obtain the D-PPO algorithm. A series of comparative experiments with the PPO algorithm in the Atari 2600 environment show that the performance of the D-PPO algorithm is superior to that of PPO, and it has a lower variance of the surrogate objective.
The remainder of this article is organized as follows: Section II introduces the policy gradient and related work, including TRPO and PPO algorithms. Section III introduces the main theoretical results, dropout technique, dropout strategy framework, and pseudo-code of D-PPO algorithm. Section IV presents the comparative experiments between D-PPO and PPO algorithms and the hyperparameter analysis of D-PPO algorithm. Section V summarizes this article and presents the conclusion.
## II Related Work
In this section, we will briefly introduce some basic concepts of reinforcement learning and two representative policy optimization methods.
### _Policy Gradient_
Reinforcement learning is generally defined by a tuple \((\mathcal{S},\mathcal{A},r,\mathcal{P},\rho_{0},\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) represent the state space and action space, \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the probability distribution of the state transition function, \(\rho_{0}\) is the initial state distribution, and \(\gamma\in[0,1]\) is the discount factor. Starting from the initial state, for each time step, the agent receives the current state \(s_{t}\), takes the action \(a_{t}\), obtains the reward \(r_{t}\) from the environment feedback, and obtains next state \(s_{t+1}\) until entering the terminal state. The action value function and state value function are defined as
\[Q^{\pi}(s_{t},a_{t}):=\mathbb{E}_{\mathcal{S}_{t+1},\mathcal{A}_{t+1}}\left[ \sum_{i=t}^{T}\gamma^{i-t}r_{i}\Bigg{|}S_{t}=s_{t},A_{t}=a_{t}\right],\]
where \(\mathcal{S}_{t}=\left\{S_{t},\ldots,S_{T}\right\}\), \(\mathcal{A}_{t}=\left\{A_{t},\ldots,A_{T}\right\}\), and the state value function is \(V^{\pi}(s_{t}):=\mathbb{E}_{a_{t}\sim\pi(\cdot|s_{t})}\left[Q^{\pi}(s_{t},a_{t})\right]\).
Policy-based reinforcement learning algorithms often require the estimation of Policy Gradient (PG), so the derivation of policy gradient is briefly introduced below. Consider the trajectory generated by an agent starting from an initial state and interacting with the environment for one full episode
\[\tau=(s_{1},a_{1},r_{1},\ldots,s_{T-1},a_{T-1},r_{T-1},s_{T}). \tag{1}\]
The goal of reinforcement learning is to maximize the expected return \(R(\tau)=\sum_{i=1}^{T}\gamma^{i-1}r_{i}\), so the expected \(R(\tau)\) for all possible trajectories is
\[J(\theta)=\mathbb{E}_{\tau\sim p(\cdot)}\left[R(\tau)\right]=\sum_{\tau}R(\tau) \cdot p(\tau), \tag{2}\]
where \(p(\tau)=\rho_{0}(s_{1})\cdot\prod_{t=1}^{T-1}\pi_{\theta}(a_{t}|s_{t})\cdot \mathcal{P}(s_{t+1}|s_{t},a_{t})\), and \(\pi_{\theta}\) is the parameterized policy network. The gradient of the objective function \(J(\theta)\) with respect to parameters \(\theta\) is obtained by
\[\begin{split}\nabla J(\theta)&=\sum_{\tau}R(\tau )\cdot\nabla p(\tau)=\sum_{\tau}R(\tau)\cdot\nabla\log p(\tau)\cdot p(\tau)\\ &=\mathbb{E}_{\tau\sim p(\cdot)}\left[R(\tau)\cdot\nabla\log p( \tau)\right]\\ &=\mathbb{E}_{\tau\sim p(\cdot)}\Bigg{\{}R(\tau)\cdot\nabla\Bigg{[} \log\rho_{0}(s_{1})+\sum_{t=1}^{T-1}\log\pi_{\theta}(a_{t}|s_{t})\\ &\hskip 28.452756pt+\log\mathcal{P}(s_{t+1}|s_{t},a_{t})\Bigg{]} \Bigg{\}}\\ &=\mathbb{E}_{\tau\sim p(\cdot)}\left[\sum_{t=1}^{T-1}R(\tau) \cdot\nabla\log\pi_{\theta}(a_{t}|s_{t})\right]\\ &\approx\frac{1}{N}\sum_{n=1}^{N}\sum_{t=1}^{T_{n}-1}R(\tau^{n}) \cdot\nabla\log\pi_{\theta}(a_{t}^{n}|s_{t}^{n}).\end{split} \tag{3}\]
We derived the basic form of the policy gradient, so that the agent can approximate the policy gradient based on the current policy network \(\pi_{\theta}\) and \(N\) episodes of interaction with the environment, which is called Monte Carlo method.
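A minimal sketch of this Monte Carlo estimator (assuming a discrete action space and a `policy` network that maps a batch of states to action logits; these interfaces are illustrative assumptions) is:

```python
import torch


def reinforce_loss(policy, trajectories, gamma=0.99):
    """Monte Carlo policy gradient of equation (3), as a loss to minimise."""
    losses = []
    for states, actions, rewards in trajectories:      # one entry per episode
        ret = sum(gamma ** i * r for i, r in enumerate(rewards))  # R(tau)
        log_probs = torch.log_softmax(policy(states), dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        losses.append(-(ret * chosen).sum())           # negate to ascend
    return torch.stack(losses).mean()                  # average over N episodes
```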
### _Trust Region Policy Optimization_
If we adopt the equation (3) directly, the training data must be re-collected under the latest policy after every parameter update, which makes the plain policy gradient method sample-inefficient. The TRPO algorithm [25] therefore introduces importance sampling into policy iteration, so that data collected by an old policy \(\pi_{\theta_{\rm old}}\) can be reused to update the current policy \(\pi_{\theta}\). The policy update is expressed as the constrained optimization problem

\[\max_{\theta}\ \mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\rm old}}(a|s)}\cdot A^{\pi_{\theta_{\rm old}}}(s,a)\right] \tag{4}\]

\[\mathrm{s.t.}\ \ \mathop{\mathbb{E}}_{s\sim\pi_{\theta_{\rm old}}}\left[D_{\rm KL}\left(\pi_{\theta_{\rm old}}(\cdot|s)\,\big{\|}\,\pi_{\theta}(\cdot|s)\right)\right]\leq\delta, \tag{5}\]

where \(\delta\) limits the KL divergence between the old and new policies, and the advantage function is defined as

\[A^{\pi_{\theta_{\rm old}}}(s,a):=Q^{\pi_{\theta_{\rm old}}}(s,a)-V^{\pi_{\theta_{\rm old}}}(s). \tag{6}\]

### _Proximal Policy Optimization_

Solving the constrained problem above in every round of update is computationally expensive. PPO-CLIP [26] removes the explicit constraint and instead clips the probability ratio

\[\rho_{\theta}(s,a):=\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\rm old}}(a|s)}, \tag{7}\]

maximizing the clipped surrogate objective

\[\mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\Big{[}\min\big{(}\rho_{\theta}(s,a)\cdot A^{\pi_{\theta_{\rm old}}}(s,a),\ \mathrm{clip}\left(\rho_{\theta}(s,a),1-\epsilon,1+\epsilon\right)\cdot A^{\pi_{\theta_{\rm old}}}(s,a)\big{)}\Big{]}, \tag{8}\]

where \(\epsilon\) is the clipping ratio, which implicitly keeps the new policy close to the old one during the update process.

## III Methodology

In this section, we first analyze the variance of the surrogate objective introduced by importance sampling, and then present the dropout strategy and the resulting D-PPO algorithm.

### _Variance of the Surrogate Objective_

For brevity, we denote the importance-weighted objective term by

\[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a):=\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\rm old}}(a|s)}\cdot A^{\pi_{\theta_{\rm old}}}(s,a), \tag{9}\]

so that the surrogate objective is \(\mathbb{E}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a)\right]\), and we write \(\mathbb{P}_{\theta_{\rm old}}(s,a)=\mathbb{P}_{\theta_{\rm old}}(s)\cdot\pi_{\theta_{\rm old}}(a|s)\) for the probability of the pair \((s,a)\) under \(\pi_{\theta_{\rm old}}\).

**Lemma 1:** The second moment of the surrogate objective satisfies

\[\mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\Big{\{}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a)\right]^{2}\Big{\}}=\sum_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\mathbb{P}_{\theta_{\rm old}}(s_{i},a_{i})\cdot\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\right]^{2},\]

which follows directly from the definition of expectation.

**Lemma 2:** The square of the surrogate objective satisfies

\[\left\{\mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a)\right]\right\}^{2}=\mathop{\mathbb{E}}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\left\{\mathbb{P}_{\theta_{\rm old}}(s_{i},a_{i})\cdot\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\right]^{2}\right\}+\mathop{\mathbb{E}}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\mathop{\mathbb{E}}_{\begin{subarray}{c}(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}\\ j\neq i\end{subarray}}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\cdot\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{j},a_{j})\right].\]
Proof:: According to the definition of expectation and equation (9), we have
\[\left\{\mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[ \mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a)\right]\right\}^{2}\] \[= \left[\mathop{\sum}_{(s,a)\sim\pi_{\theta_{\rm old}}}\mathbb{P}_{ \theta_{\rm old}}(s,a)\cdot\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\rm old}}(a|s )}\cdot A^{\pi_{\theta_{\rm old}}}(s,a)\right]^{2}\] \[= \left[\mathop{\sum}_{(s,a)\sim\pi_{\theta_{\rm old}}}\mathbb{P}_{ \theta_{\rm old}}(s)\cdot\pi_{\theta}(a|s)\cdot A^{\pi_{\theta_{\rm old}}}(s,a )\right]^{2}\] \[= \mathop{\sum}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\mathbb{P}_ {\theta_{\rm old}}(s_{i})^{2}\cdot\pi_{\theta}(a_{i}|s_{i})^{2}\cdot A^{\pi_{ \theta_{\rm old}}}(s_{i},a_{i})^{2}+\] \[\mathop{\sum}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\mathbb{P }_{\theta_{\rm old}}(s_{i})\cdot\pi_{\theta}(a_{i}|s_{i})\cdot A^{\pi_{\theta_ {\rm old}}}(s_{i},a_{i})\cdot\] \[\mathop{\sum}_{(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}}\mathbb{P }_{\theta_{\rm old}}(s_{j})\cdot\pi_{\theta}(a_{j}|s_{j})\cdot A^{\pi_{\theta _{\rm old}}}(s_{j},a_{j})\] \[= \mathop{\sum}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\mathbb{P }_{\theta_{\rm old}}(s_{i},a_{i})^{2}\cdot\frac{\pi_{\theta}(a_{i}|s_{i})^{2} }{\pi_{\theta_{\rm old}}(a_{i}|s_{i})^{2}}\cdot A^{\pi_{\theta_{\rm old}}}(s_{ i},a_{i})^{2}+\] \[\mathop{\sum}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\mathbb{P }_{\theta_{\rm old}}(s_{i},a_{i})\cdot\frac{\pi_{\theta}(a_{i}|s_{i})}{\pi_{ \theta_{\rm old}}(a_{i}|s_{i})}\cdot A^{\pi_{\theta_{\rm old}}}(s_{i},a_{i})\cdot\] \[\mathop{\sum}_{(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}}\mathbb{P }_{\theta_{\rm old}}(s_{j},a_{j})\cdot\frac{\pi_{\theta}(a_{j}|s_{j})}{\pi_{ \theta_{\rm old}}(a_{j}|s_{j})}\cdot A^{\pi_{\theta_{\rm old}}}(s_{j},a_{j})\] \[= \mathop{\mathbb{E}}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}} \mathbb{P}_{\theta_{\rm old}}(s_{i},a_{i})\cdot\left[\mathfrak{O}_{\theta_{\rm old }}^{\theta}(s_{i},a_{i})\right]^{2}+\] \[\mathop{\mathbb{E}}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}} \mathop{\mathbb{E}}_{(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O }_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\cdot\mathfrak{O}_{\theta_{\rm old }}^{\theta}(s_{j},a_{j})\right].\]
Hence, Lemma 2 is proved.
**Theorem 1:** When introducing importance sampling, the variance of the surrogate objective \(\mathbb{E}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O}_{\theta_{\rm old }}^{\theta}(s,a)\right]\) can be written as
\[\sigma_{\theta_{\rm old}}(\theta)=\mathop{\mathbb{E}}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\left\{\xi-\mathop{\mathbb{E}}_{\begin{subarray}{c}(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}\\ j\neq i\end{subarray}}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\cdot\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{j},a_{j})\right]\right\},\]
where \(\xi=[1-\mathbb{P}_{\theta_{\rm old}}(s_{i})\cdot\pi_{\theta_{\rm old}}(a_{i}| s_{i})]\cdot\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\right]^{2}\).
Proof:: According to
\[\sigma_{\theta_{\rm old}}(\theta)=\mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\Big{\{}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a)\right]^{2}\Big{\}}-\left\{\mathop{\mathbb{E}}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s,a)\right]\right\}^{2}\]
and Lemma 1-2, Theorem 1 is proved.
**Corollary 1:** When introducing importance sampling, the variance of the surrogate objective \(\mathbb{E}_{(s,a)\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O}_{\theta_{\rm old }}^{\theta}(s,a)\right]\) is bounded by
\[\sigma_{\theta_{\rm old}}(\theta)\leq\mathop{\mathbb{E}}_{(s_{i},a_{i})\sim\pi_{\theta_{\rm old}}}\Bigg{\{}\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\right]^{2}-\mathop{\mathbb{E}}_{\begin{subarray}{c}(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}\\ j\neq i\end{subarray}}\Big{[}\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\cdot\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{j},a_{j})\Big{]}\Bigg{\}}.\]
Proof:: According to
\[\xi = [1-\mathbb{P}_{\theta_{\rm old}}(s_{i})\cdot\pi_{\theta_{\rm old}}(a _{i}|s_{i})]\cdot\left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i}) \right]^{2}\] \[\leq \left[\mathfrak{O}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\right] ^{2}\]
in Theorem 1, we have the Corollary 1, which means removing the uncomputable \(\mathbb{P}_{\theta_{\rm old}}(s_{i})\).
**Explanation.** It is clear from Corollary 1 that the upper bound on the variance of the surrogate objective is mainly determined by two terms: 1) one of which is the square of the surrogate objective, which means that increasing the objective function will inevitably lead to a quadratic increase in the variance with respect to it; 2) the other is
\[\mathop{\mathbb{E}}_{(s_{j},a_{j})\sim\pi_{\theta_{\rm old}}}\left[\mathfrak{O}_{ \theta_{\rm old}}^{\theta}(s_{i},a_{i})\cdot\mathfrak{O}_{\theta_{\rm old}}^{ \theta}(s_{j},a_{j})\right], \tag{10}\]
which is subtracted from the square of the surrogate objective. Therefore, in order to reduce the variance of the surrogate objective, we mainly focus on adjusting this item from the perspective of training data.
### _Dropout Strategy and Formalization_
Now suppose that the agent interacts with the environment to obtain training data1
Footnote 1: Here we ignore the terminal states.
\[(s_{1},a_{1},r_{1}),(s_{2},a_{2},r_{2}),\ldots,(s_{n},a_{n},r_{n}), \tag{11}\]
for each data \((s_{i},a_{i},r_{i})\), we denote its corresponding expectation (10) as \(\Delta_{i}\) and perform Monte Carlo approximation, which is denoted as
\[\Delta_{i}\approx\hat{\Delta}_{i}=\mathop{\sum}_{\begin{subarray}{c}(s_{j},a_{j })\sim\pi_{\theta_{\rm old}}\\ j\neq i\end{subarray}}\left[\hat{\mathfrak{O}}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i })\cdot\hat{\mathfrak{O}}_{\theta_{\rm old}}^{\theta}(s_{j},a_{j})\right], \tag{12}\]
where \(i=1,2,\ldots,n\); \(\hat{\mathfrak{O}}_{\theta_{\rm old}}^{\theta}(s,a)=\frac{\pi_{\theta}(a|s)}{\pi_{ \theta_{\rm old}}(a|s)}\cdot\hat{A}^{\pi_{\theta_{\rm old}}}(s,a)\) and \(\hat{A}^{\pi_{\theta_{\rm old}}}(s,a)\) is an estimate of the advantage \(A^{\pi_{\theta_{\rm old}}}(s,a)\), using GAE [35] technique.
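Since \(\sum_{j\neq i}\hat{\mathfrak{O}}_{i}\hat{\mathfrak{O}}_{j}=\hat{\mathfrak{O}}_{i}\,(\sum_{j}\hat{\mathfrak{O}}_{j}-\hat{\mathfrak{O}}_{i})\), the whole batch of \(\hat{\Delta}_{i}\) values can be computed in one vectorised pass (a minimal sketch, equivalent to the matrix form given below):

```python
import torch


def delta_hat(o_hat: torch.Tensor) -> torch.Tensor:
    """Eq. (12) for all i at once: Delta_i = O_i * (sum_j O_j - O_i)."""
    return o_hat * (o_hat.sum() - o_hat)
```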
At the code level, we implement parallel computation of \(\hat{\Delta}_{i}\) through matrices, that is,

\[\begin{bmatrix}\hat{\Delta}_{1}\\ \hat{\Delta}_{2}\\ \vdots\\ \hat{\Delta}_{n}\end{bmatrix}=\begin{bmatrix}\hat{\mathfrak{O}}_{1}&\hat{\mathfrak{O}}_{1}&\cdots&\hat{\mathfrak{O}}_{1}\\ \hat{\mathfrak{O}}_{2}&\hat{\mathfrak{O}}_{2}&\cdots&\hat{\mathfrak{O}}_{2}\\ \vdots&\vdots&\ddots&\vdots\\ \hat{\mathfrak{O}}_{n}&\hat{\mathfrak{O}}_{n}&\cdots&\hat{\mathfrak{O}}_{n}\end{bmatrix}\begin{bmatrix}\hat{\mathfrak{O}}_{1}\\ \hat{\mathfrak{O}}_{2}\\ \vdots\\ \hat{\mathfrak{O}}_{n}\end{bmatrix}-\begin{bmatrix}\hat{\mathfrak{O}}_{1}^{2}\\ \hat{\mathfrak{O}}_{2}^{2}\\ \vdots\\ \hat{\mathfrak{O}}_{n}^{2}\end{bmatrix}, \tag{13}\]

where \(\hat{\mathfrak{O}}_{i}\) is shorthand for \(\hat{\mathfrak{O}}_{\theta_{\rm old}}^{\theta}(s_{i},a_{i})\). Next, we give a formal description of the dropout strategy used in this paper. Before that, we would like to first introduce some abstract mathematical definitions.
We define
\[\mathbb{D}_{\phi}^{f}(X):=\left\{x|x\in X,f(\phi(x))>0\right\}, \tag{14}\]
where \(X=\{(s_{i},a_{i},r_{i})\}_{i=1}^{n}\) is the training dataset, \(\phi:\mathcal{S}\times\mathcal{A}\times\mathbb{R}\rightarrow\mathbb{R}\) represents a certain transformation applied to each data \(x\) in the dataset \(X\), and \(f\) corresponds to a certain filtering rule for the original dataset \(X\). Therefore, \(\mathbb{D}_{\phi}^{f}\) is a formalization of the dropout strategy, which maps the original data \(X\) to a subset of it, that means, \(\mathbb{D}_{\phi}^{f}(X)\subset X\).
For example, \(\phi(x_{i})=\phi(s_{i},a_{i},r_{i})\) denotes \(\hat{\Delta}_{i}\) in this paper. As mentioned earlier, it can be seen from Corollary 1 that \(\Delta_{i}\) is subtracted from the previous term. In order to reduce the variance expectation of the surrogate objective, we want the value of \(\hat{\Delta}_{i}\) to be as large as possible, whether it is positive or negative. This means that data \(X\) is divided into two parts based on the sign of \(\hat{\Delta}_{i}\), and for both of them, we choose to dropout the data \(x_{i}\) corresponding to the relatively small \(\hat{\Delta}_{i}\) to restrict the surrogate objective variance, as shown in Fig 2.
Now suppose that the data \(X\) is divided into two parts \(X_{\phi}^{+}\) and \(X_{\phi}^{-}\) according to the sign of \(\hat{\Delta}_{i}\), that is,
\[\begin{cases}X_{\phi}^{+}=\left\{x|x\in X,\phi(x)\geq 0\right\};\\ X_{\phi}^{-}=\left\{x|x\in X,\phi(x)<0\right\}.\end{cases} \tag{15}\]
There are two ways to implement the dropout strategy: 1) one is setting a threshold \(\delta^{-}\) and \(\delta^{+}\) for \(X_{\phi}^{-}\) and \(X_{\phi}^{+}\), at this point, our dropout strategy is formalized as
\[\mathcal{D}(X)=\mathbb{D}_{\phi}^{x-\delta^{-}}(X_{\phi}^{-})\cup\mathbb{D}_{ \phi}^{x-\delta^{+}}(X_{\phi}^{+}). \tag{16}\]
However, this way is too sensitive to the setting of the hyperparameters \(\delta^{-}\) and \(\delta^{+}\), and due to orders of magnitude and other factors, it may be difficult to select a pair of \(\delta^{-}\) and \(\delta^{+}\) that is applicable to any environment.
2) The other is to fix the dropout ratio in the training dataset \(X\), which introduces the hyperparameter \(r\in[0,1]\). For a set of numbers, \(M\), We define
\[M^{[r]}\triangleq\operatorname*{arg\,min}_{m\in M}\ \left|\frac{|\mathbb{D}_{\mathbb{1}}^{x-m}(M)|}{|M|}-r\right|, \tag{17}\]
where \(\mathbb{1}(\cdot)\) represents the identity mapping, and \(|\cdot|\) represents the absolute value or the number of elements in the set. Therefore, the dropout strategy can be formalized as
\[\mathcal{D}(X)=\mathbb{D}_{|\phi|}^{x-|\phi(X_{\phi}^{-})|^{[r]}}(X_{\phi}^{-})\cup\mathbb{D}_{|\phi|}^{x-|\phi(X_{\phi}^{+})|^{[r]}}(X_{\phi}^{+}), \tag{18}\]
where \(\phi(X)=\{\phi(x)|x\in X\}\).
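A sketch of this ratio-based strategy is given below (interpreting \(r\) as the fraction dropped within each part, per the dropout ratio used in our experiments; tie handling is simplified):

```python
import torch


def dropout_mask(delta: torch.Tensor, r: float = 0.2) -> torch.Tensor:
    """Keep-mask for eq. (18): within the positive part and the negative part
    of delta separately, drop the fraction r with the smallest magnitude."""
    keep = torch.ones_like(delta, dtype=torch.bool)
    for part in (delta >= 0, delta < 0):
        mags = delta[part].abs()
        k = int(r * mags.numel())              # number of samples to drop
        if k > 0:
            threshold = mags.kthvalue(k).values
            keep[part] = mags > threshold
    return keep
```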
### _Framework and Algorithm_
Clearly, not all data can be effectively used to improve the performance of a policy network, especially in environments with sparse rewards [36]. Most of the data collected by the agent through interaction with the environment is not directly helpful for improving its policy. Therefore, it is necessary to develop a dropout rule for sample data in specific situations to improve the learning efficiency of the algorithm.
A specific dropout strategy is given in equation (18); we first present a more general dropout strategy framework, as shown in Algorithm 1, where algorithm \(\mathcal{A}\) can be any policy optimization algorithm that introduces importance sampling, such as TRPO or PPO. When algorithm \(\mathcal{A}\) is the PPO algorithm and the dropout strategy is given by (18), we obtain the pseudo-code of the D-PPO algorithm in Algorithm 2.
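A condensed sketch of one D-PPO update round is given below. It reuses `delta_hat` and `dropout_mask` from above; `gae_advantages`, `batch` and `policy.log_prob` are hypothetical placeholders for the usual PPO machinery rather than our exact implementation:

```python
import torch


def ppo_clip_loss(log_new, log_old, adv, eps=0.1):
    """Clipped surrogate objective of PPO-CLIP, negated for gradient descent."""
    ratio = torch.exp(log_new - log_old)       # pi_theta / pi_theta_old
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    return -torch.min(ratio * adv, clipped * adv).mean()


def d_ppo_update(policy, optimizer, batch, r=0.2, eps=0.1):
    adv = gae_advantages(batch)                          # hypothetical helper
    log_new = policy.log_prob(batch.states, batch.actions)
    o_hat = torch.exp(log_new - batch.log_old).detach() * adv
    keep = dropout_mask(delta_hat(o_hat), r)             # dropout strategy
    loss = ppo_clip_loss(log_new[keep], batch.log_old[keep], adv[keep], eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```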
Fig. 1: Dropout strategy and neural network structure.
## IV Experiments
In this section, we first introduce the experimental environment, Atari 2600, as well as the structure of the policy network and value network, and the hyperparameter settings. Then we compare the performance of D-PPO and PPO algorithms in different environments. Finally, we analyze the impact of the hyperparameters of D-PPO on its performance.
### _Atari 2600_
The Atari 2600 environment is a popular test platform for reinforcement learning, which was introduced by Atari in 1977 as a video game console. The environment contains various popular games such as Breakout, MsPacman and Space Invaders, as shown in Fig. 3. Since the introduction of the Deep Q-Networks (DQN) algorithm by Mnih _et al._[12], the Atari 2600 has become a standard environment for testing new reinforcement learning algorithms. Subsequently, OpenAI Gym packaged the Atari 2600 environment to create a more standardized interface and provided 57 Atari 2600 games as an environment. These games cover a variety of genres and difficulties, allowing researchers to experiment and compare different problems and algorithms.
### _Comparative Experiments_
Our policy network and value network structure are shown in Fig. 1. The input to the network is the stacking of the last four frames resized to 84 x 84 x 4. The value network and policy network share the same convolutional layer to improve learning efficiency. The network structure and common hyperparameters of the PPO and D-PPO algorithms are completely consistent. Specifically, the learning rate is set to \(2.4\times 10^{-4}\) and linearly decreases to 0, and the total number of steps of interaction with the environment is \(1\times 10^{7}\). We have eight intelligent agents that share the latest parameters and interact independently with the environment. The batch size is set to 2048, and each round of updates trains for 4 epochs, with each epoch divided into 4 mini-batches for updates. We used the GAE [35] technique to estimate the advantage function, with related parameters \(\lambda\) and \(\gamma\) set to 0.95 and 0.99. Our final loss consists of three parts, that is,
\[l=l_{p}+c_{1}\cdot l_{v}-c_{2}\cdot l_{e}, \tag{19}\]
where \(l_{p}\) and \(l_{v}\) are the losses of policy and value network, \(l_{e}\) is the entropy of policy network output, the weight coefficients are set as \(c_{1}=1\) and \(c_{2}=0.01\). The clipping ratio \(\epsilon\) is set to 0.1, and the dropout ratio \(r\) of D-PPO is set to 0.2, as shown in Table I.
The experimental results are shown in Fig. 4. It can be seen that only in the Boxing environment is the performance of the D-PPO algorithm slightly lower than that of the PPO algorithm; in all other environments, D-PPO achieves a certain performance improvement, especially in Breakout, Enduro, Gravitar
Fig. 3: Atari 2600 environments. (a) Breakout. (b) MsPacman. (c) SpaceInvaders.
and Kangaroo, where there is a significant performance improvement.
For the variance of the surrogate objective, it can be seen that the D-PPO algorithm can effectively limit the surrogate
Fig. 4: The training curves (left) and surrogate objective variances (right) for PPO and D-PPO algorithms in different environments (five sets of experiments repeated for each environment with different random seeds).
objective variance in almost all environments except for CrazyClimber. In the Breakout environment, the return of the D-PPO algorithm is much higher than that of the PPO algorithm before about 7 million steps, which also leads to a larger surrogate objective variance compared to the PPO algorithm. After 7 million steps, with the decrease of the learning rate, the surrogate objective variance of the D-PPO algorithm decreases gradually due to the dropout strategy and is significantly lower than that of the PPO algorithm. This experimental phenomenon is also very evident in the Enduro environment: after approximately 7.5 million steps, the surrogate objective variance of the D-PPO algorithm can be observed to gradually decrease and become smaller than that of the PPO algorithm. In the DemonAttack and Gravitar environments, the effectiveness of the D-PPO algorithm is even more evident: its surrogate objective variance is lower than that of the PPO algorithm at almost all time steps.
In addition, Tables II, III and IV show the average returns of the PPO and D-PPO algorithms over all time steps, over the last 0.1 million steps, and over the last 1 million steps in the corresponding environments under five random seeds. It can be seen that the D-PPO algorithm achieves stable performance improvements in all environments except for Boxing. Fig. 5 shows the box plot of all the returns of the PPO and D-PPO algorithms in the last 1 million steps in the corresponding environments under five random seeds. It can be seen that significant performance improvements were achieved in the Breakout, CrazyClimber, and Enduro environments, with smaller variance in the returns.
In general, from the experimental results, the performance of the D-PPO algorithm is superior in most environments, and it can effectively limit the excessive growth of the variance of the surrogate objective, which is a direct proof of the effectiveness of the dropout strategy.
### _Hyperparameter Analysis_
Our main question now is how to set the hyperparameter \(r\) in the D-PPO algorithm. To answer this question, in this section we conduct comparative experiments with different values of the hyperparameter \(r\) to determine the optimal one.
We selected three representative environments, namely Breakout, Enduro, and SpaceInvaders, and conducted repeated experiments on five different random seeds. The experimental results are shown in Fig. 6 and Fig. 7. From the perspective of return, Fig. 7 and the first column of Fig. 6 reflect the returns of the D-PPO algorithm under different hyperparameters
Fig. 5: The normalized returns of PPO and D-PPO algorithms in different environments during the last 1 million training steps (five sets of experiments repeated for each environment with different random seeds).
\(r\in\{0.1,0.2,0.3,0.4,0.5\}\). It can be seen that the D-PPO algorithm achieves the highest average return when \(r=0.2\) in the Breakout and Enduro environments. From the perspective of surrogate objective variance, as shown in the second column of Fig. 6, when \(r\) is set to 0.1 or 0.2, the D-PPO algorithm effectively limits the growth of the surrogate objective variance in the Breakout and Enduro environments. In addition, the third and fourth columns of Fig. 6 respectively show the average values of \(\phi(x)\) that are positive and negative in the dropped data. It can be seen that as \(r\) increases, their magnitudes gradually increase, which is intuitive and also indicates the rationality of the dropout strategy. Therefore, based on the above analysis, we recommend setting the hyperparameter \(r\) of the D-PPO algorithm to 0.2, as it achieves the highest average return in multiple environments and is able to more effectively limit the variance of the surrogate objective.
Fig. 6: The training curves corresponding to different values of \(r\) in the D-PPO algorithm under different environments. Shown are the returns, the variances of the surrogate objective, and the average values of \(\phi(x)\) that are positive and negative in the dropped data, respectively (five sets of experiments repeated for each environment with different random seeds).
Fig. 7: The box plot of the returns for the last 1 million training steps corresponding to different values of \(r\) in the D-PPO algorithm under three environments; the box with the largest mean value is highlighted in red (five sets of experiments repeated for each environment with different random seeds).
## V Conclusion
In this article, a dropout strategy framework for policy optimization methods was proposed. Under this framework, we derived an upper bound on the variance of the surrogate objective and proposed a dropout strategy to limit the excessive growth of the surrogate objective variance caused by the introduction of importance sampling. By applying the dropout strategy to the PPO algorithm, we obtained the D-PPO algorithm. We conducted comparative experiments between the D-PPO and PPO algorithms in the Atari 2600 environment to verify the effectiveness of the dropout strategy, and further discussed the setting of the hyperparameter \(r\) in D-PPO.
There is still room for improvement. The dropout strategy may also pose risks to policy optimization algorithms, as discarding some sample data to reduce the variance of the surrogate objective may also result in the dropout of important samples that could significantly improve the performance of the policy network. An interesting direction for future work is to apply the dropout strategy to a wider range of policy optimization methods and simulation environments, while trying to avoid the above situation; we will consider this as our research goal in the next stage.
Policy-based reinforcement learning algorithms are widely used in many fields. Among them, mainstream policy optimization algorithms such as TRPO and PPO introduce importance sampling into policy iteration, which makes it possible to reuse historical data. However, this can also increase the variance of the surrogate objective and thereby indirectly affect the stability and convergence of the algorithm. In this paper, we derive an upper bound on the variance of the surrogate objective, which can grow quadratically as the surrogate objective increases. We then propose a dropout technique to avoid the excessive growth of the surrogate objective variance caused by importance sampling. Subsequently, we introduce a general reinforcement learning framework applicable to mainstream policy optimization methods, and apply the dropout technique to the PPO algorithm
2309.15258 | Efficient Quasiparticle Determination beyond the Diagonal Approximation
via Random Compression | Calculations of excited states in Green's function formalism often invoke the
diagonal approximation, in which the quasiparticle states are taken from a
mean-field calculation. Here, we extend the stochastic approaches applied in
the many-body perturbation theory and overcome this limitation for large
systems in which we are interested in a small subset of states. We separate the
problem into a core subspace, whose coupling to the remainder of the system
environment is stochastically sampled. This method is exemplified on computing
hole injection energies into CO$_2$ on an extended gold surface with nearly
3000 electrons. We find that in the extended system, the size of the problem
can be compressed up to $95\%$ using stochastic sampling. This result provides
a way forward for self-consistent stochastic methods and determining Dyson
orbitals in large systems. | Annabelle Canestraight, Xiaohe Lei, Khaled Ibrahim, Vojtech Vlcek | 2023-09-26T20:33:04 | http://arxiv.org/abs/2309.15258v1 | # Efficient Quasiparticle Determination beyond the Diagonal Approximation via Random Compression
###### Abstract
Calculations of excited states in Green's function formalism often invoke the diagonal approximation, in which the quasiparticle states are taken from a mean-field calculation. Here, we extend the stochastic approaches applied in the many-body perturbation theory and overcome this limitation for large systems in which we are interested in a small subset of states. We separate the problem into a core subspace, whose coupling to the remainder of the system environment is stochastically sampled. This method is exemplified on computing hole injection energies into CO\({}_{2}\) on an extended gold surface with nearly 3000 electrons. We find that in the extended system, the size of the problem can be compressed up to 95% using stochastic sampling. This result provides a way forward for self-consistent stochastic methods and determining Dyson orbitals in large systems.
_Introduction._ Single particle states are frequently used in the study of excitation phenomena such as photoionization, electron injection, and, more generally, optical transitions[1; 2; 3; 4; 5; 6; 7; 8; 9]. The physical interpretation of such single particle states often depends on the specific type of observable[7; 10]. In particular, Dyson orbitals, which correspond to the probability amplitude distribution of a specific electron or hole excitation (i.e., quasiparticle state), are directly accessible via orbital tomography and provide insights into the relation between energies and the real-space distribution of single particle excitations[11; 12]. This has fundamental implications for chemistry - e.g., the hybridization of quasiparticles on surfaces governs the propensity for direct injection of an electron [9]. These are just a few compelling reasons to account for the physically meaningful orbital distortions, especially for problems concerning (chemical) interfaces.
In practice, however, single-particle states for interfacial systems are typically taken from Density Functional Theory (DFT) [13; 14; 15], as the cost of higher-level theory is too high. While DFT can handle extremely large systems[16], these calculations cannot, even in principle, yield quasiparticle (QP) energies or the Dyson orbitals[7; 17]. A natural and widely applied extension, especially in condensed matter problems, is the application of Many Body Perturbation Theory (MBPT) employing the Green's function formalism[7; 18; 19; 20]. In particular, the \(GW\) approximation, which truncates the correlation expansion to non-local charge density fluctuations, has emerged as arguably the most popular approach[21; 22], and higher order corrections have emerged recently[23; 24; 25; 26]. Its self-consistent solution yields both QP energies and the Dyson orbitals [27; 28; 29]. However, it is common to apply the \(GW\) approach as a one-shot correction, \(G_{0}W_{0}\), employing the Kohn-Sham Green's function \(G_{0}\) and the screened Coulomb interaction \(W_{0}\) derived from the underlying Kohn-Sham DFT solutions. Despite its approximate nature, \(G_{0}W_{0}\) often provides good estimates of band gaps[30; 31; 32; 33; 34]. The use of one-shot corrections has been largely motivated by the computational cost, which scales as \(\mathcal{O}(N^{4})\) with the number of electrons in conventional implementations[35; 36]. The computational cost has been significantly decreased, to nearly linear scaling, by stochastic sampling approaches in \(GW\) (and post-\(GW\)); thousands of states can thus be studied[37; 38; 39; 40; 41]. However, even in the stochastic \(GW\), "updating" the single-particle basis (i.e., finding the Dyson orbitals) is difficult[42] and, in practice, usually avoided[43]. Routine calculations of QP orbitals in realistic systems with thousands of electrons are still elusive. This is true even if one is, in principle, interested in treating a _small subset_ of states, as exemplified in this work (see below).
Here, we tackle this problem and present a scheme without the diagonal approximation for realistic nanoscale systems. This stochastic framework is exemplified for a CO\({}_{2}\) molecule on a large Au slab. For this problem, the surface contributions to the orbitals are sampled, drastically reducing the cost of QP calculations. This
method divides the system into a set of states in a "core" subspace, treated by standard stochastic MBPT, and a rest space, for which additional sampling is introduced. This step is combined with a search over the fixed-point solutions of the frequency-dependent QP Hamiltonian, which is independent of the basis representation and thus enables the use of random vectors.
We apply these methods to a prototypical system of a small molecule on a plasmonic surface (CO\({}_{2}\) on Au illustrated in the inset in Fig. 1). In the practical demonstration for an extended Au (111) surface with 270 atoms (2986 electrons), we found convergence in the hybridized HOMO energy with a 95% rank compression compared to evaluation in the full canonical orbital basis. This success provides a way to use costly high-level theories to study realistic chemical systems.
_Formalism._ The time-ordered Green's function (GF) contains information about the quasiparticle (QP) energy spectrum and lifetimes, and it corresponds to the probability amplitude of a QP propagation between two space-time points \(\mathbf{r},t\) and \(\mathbf{r}^{\prime},t^{\prime}\). In the Lehmann representation, it is expressed as
\[G(\mathbf{r},\mathbf{r}^{\prime},\omega)=\sum_{n}\bigg{[}\frac{\psi_{n}( \mathbf{r})\psi_{n}(\mathbf{r}^{\prime})^{*}}{\omega-\varepsilon_{n}-i\eta} \bigg{]}, \tag{1}\]
where the Dyson orbitals are obtained as \(\psi_{n}(\mathbf{r})=\langle N-1,n|\hat{\psi}(\mathbf{r})|N,0\rangle\), i.e., from the \(N\)-particle ground state \(|N,0\rangle\) and the \(n^{\text{th}}\) excited state \(|N-1,n\rangle\) of the \(N-1\) particle system, where \(\hat{\psi}(\mathbf{r})\) is the field operator. The poles of the GF are located at the QP energies, \(\varepsilon_{n}\), here corresponding to the charge removal [7]. Charge addition is treated analogously. The GF poles are conveniently expressed as solutions to a non-linear eigenvalue problem for an effective Hamiltonian obtained by downfolding interactions with the system[7]:
\[\hat{H}_{QP}(\omega)\ket{\psi}=\omega\ket{\psi} \tag{2}\]
In practice, the QP Hamiltonian is divided into a static and local term, \(H_{0}\), which typically contains all one-body contributions, while a space-time non-local portion is represented by the self-energy operator \(\hat{\Sigma}\)[22]. The latter is approximated by selected types of interaction diagrams (and their resummation). As \(\hat{\Sigma}\) is conceptually equivalent to the exchange-correlation potential applied in the Kohn-Sham density functional theory (KS DFT), the QP Hamiltonian is practically constructed as a perturbative correction on top of such a mean-field starting point:
\[\hat{H}_{QP}(\omega)=\hat{H}_{0,\text{KS}}-\hat{V}_{xc}+\hat{\Sigma}(\omega), \tag{3}\]
where \(\hat{H}_{0,\text{KS}}\) is the KS DFT Hamiltonian.
Further, the "one-shot" correction corresponds to:
\[\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)=i\int\frac{d\omega^{\prime}}{2 \pi}G_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega+\omega^{\prime})W_{0}(\mathbf{ r},\mathbf{r}^{\prime},\omega^{\prime}), \tag{4}\]
where \(G_{0}\) has poles at the DFT Kohn-Sham eigenvalues, \(\varepsilon_{0}\), and \(W_{0}\) is the screened Coulomb interaction. The self-consistency requires repeated construction of \(\Sigma\) and re-evaluation of Eq. 2; multiple flavors of self-consistent approaches have been developed [27; 28]. Typically, the convergence pattern is smooth. If the KS DFT single-particle states are close to the Dyson orbitals, the "one-shot" correction provides good estimates of QP energies, yet the quality of the mean-field eigenstates is not _a priori_ known.
A step beyond this practice is to diagonalize \(H_{QP}\) in Eq. 2 in the orbital basis, yielding Dyson orbitals (in the first iteration) and updated one-shot QP energies in the \(GW\) approximation[7]. Note that, in principle, the nonlinear problem in Eq. 2 holds for multiple values of \(\omega\) associated with satellite features [44; 23; 45]. In this work, we will focus only on the primary QP peaks, i.e., we seek a single solution to the QP Hamiltonian in the vicinity of \(\varepsilon_{0}\) and look for the fixed point solutions to \(\omega_{i}=\bra{\phi_{i}}\hat{H}_{QP}[\omega_{i}]\ket{\phi_{i}}\). Note that \(H_{QP}\) is non-Hermitian, and each QP state, in general, corresponds to \(H_{QP}\) computed at a different frequency.
In practical schemes[46; 47; 29; 43], it is common to construct a single "static" effective Hamiltonian (yielding orthogonal eigenstates). However, due to the non-linearity of this problem, it is not entirely clear at what frequency the self-energy should be evaluated. For a strongly diagonally dominant \(H_{QP}\), i.e., one where the KS DFT orbitals are, in fact, close to the Dyson orbitals, one may evaluate \(\omega_{i}\) as the fixed point solution for the diagonal entries. The remaining off-diagonal self-energy is then symmetrized, e.g., as \(\Sigma_{ij}=\frac{1}{4}\left[\Sigma_{ij}(\omega_{i})+\Sigma_{ji}(\omega_{i})+\Sigma_{ij}(\omega_{j})+\Sigma_{ji}(\omega_{j})\right]\). In this
Figure 1: Illustration of the stochastic compression technique, which samples the “rest subspace” using a set of (filtered) random vectors, here spanning the single particle states of the gold substrate.
form, it is possible to construct a static and Hermitized QP Hamiltonian. By enforcing the hermiticity of \(H_{QP}\), we impose that the resulting QP states are orthonormal. The QP energies are then purely real, corresponding to a QP with an infinite lifetime. Alternatively, one can therefore relax the latter step by taking \(\Sigma_{ij}=\frac{1}{2}\left[\Sigma_{ij}(\omega_{i})+\Sigma_{ij}(\omega_{j})\right]\).
Note that both approaches strongly depend on the basis choice. We illustrate this in detail in the Supporting Information (SI) for the acrolein molecule, for which the magnitudes of the off-diagonal terms are 98.5% smaller than the diagonal ones for the canonical KS DFT basis. The situation changes dramatically when localized (unitary transformed) orbitals are employed. Hence, depending on the construction of a single \(H_{QP}\), the resulting QP energies change by as much as 10% and translate to changes of 0.77 eV on average for acrolein.
Since our goal is to determine Dyson orbitals for a selected subspace of interest (which will be constructed from localized basis states), we avoid any approximation to the fixed point solution. In this method, the whole QP Hamiltonian is evaluated at multiple frequencies, and the QP eigenvalues are found as the fixed point solutions to Eq. 2. No assumptions are further made about the hermiticity of the Hamiltonian matrix; a graphical example of such a fixed point solution for \(H_{QP}\) is also illustrated in the SI.
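As a numerical illustration of this fixed-point search, the sketch below iterates \(\omega\mapsto E(\omega)\) for the eigenvalue branch of \(H_{QP}(\omega)\) closest to the mean-field starting energy; the callable `H_of_omega` is an assumed interface to the self-energy construction.

```python
import numpy as np

def qp_fixed_point(H_of_omega, eps0, tol=1e-6, max_iter=100):
    """Find omega = E(omega) for the H_QP eigenvalue branch nearest eps0.

    H_of_omega : callable returning the (non-Hermitian) H_QP matrix
                 evaluated at a given real frequency (an assumed interface)
    eps0       : mean-field (KS) energy used as the starting guess
    """
    omega = eps0
    for _ in range(max_iter):
        evals = np.linalg.eigvals(H_of_omega(omega))
        # Track the (generally complex) branch closest to the current omega;
        # only its real part defines the next frequency.
        e = evals[np.argmin(np.abs(evals - omega))]
        if abs(e.real - omega) < tol:
            return e.real
        omega = e.real
    return omega
```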
_Stochastic Compression of QP states._ When studying a large system with a subspace of particular interest, it is prohibitively expensive to employ all \(M\) electronic states. It is also insufficient to assume that the Hamiltonian matrix takes a block-diagonal form due to the coupling between the subspace and its orthogonal complement. To handle such a case, we propose a method of stochastic matrix compression where a portion of the \(H_{QP}\) matrix is represented by a set of random vectors. These vectors sample a large portion of the Hilbert space, which overall contributes to the QP shift and affects the Dyson orbitals, but for which each individual single particle state has only a limited contribution.
As illustrated in Fig. 1, we separate the "core subspace" spanned by \(N_{c}\) deterministic states, \(\{\phi^{c}\}\) (e.g., the original KS DFT eigenstates), and the remainder spanned by \(N_{s}\) stochastic states \(\{\zeta\}\), constructed as random linear combinations of the KS states that are orthogonal to the \(\{\phi^{c}\}\) set: \(\left|\zeta\right\rangle=\sum_{i\notin\{\phi^{c}\}}c_{i}\left|\phi_{i}\right\rangle\). In the final step, the individual random states are orthogonalized via the Gram-Schmidt process. Because this change of basis is guaranteed to be a unitary transformation of the Hamiltonian matrix, when the whole system is diagonalized, the resulting eigenstates will be the same. When the Hamiltonian matrix is truncated in this new stochastic basis, the coupling of each stochastic state to the core subspace represents the subspace interaction with the full environment. In this way, we have "compressed" the information of the whole system environment into a single state. Given that the fixed point solution is basis independent (as illustrated in Fig. S.3), if the total number of states \(N_{c}+N_{s}\) is the same as the dimension of \(H_{QP}\), \(M\), we necessarily obtain the same QP energies. For fewer random states, \(N_{c}+N_{s}<M\), the computation is less expensive. Note that the QP energy has a finite statistical error, which decreases as \(1/\sqrt{N_{s}}\) with the number of states sampling the off-diagonal self-energy contributions. As we show below, the convergence of the QP energies is smooth. Further, note that instead of the canonical single particle states in the above equation, we achieve further speedup if an already preselected (filtered) subset of states (orthogonal to \(\{\phi^{c}\}\)) is used in the construction of \(\left|\zeta\right\rangle\).
_Results._ We now demonstrate the method practically for the CO\({}_{2}\) molecule on the Au substrate, for which we intend to extract the energies of quasi-hole states on the molecule (i.e., corresponding to the charge removal energies from CO\({}_{2}\) on the surface). We first construct a minimal example that we can solve entirely and illustrate how stochastic sampling smoothes the convergence of the QP energies. Later, we show a realistic example with nearly 3,000 electrons, which cannot be easily solved without the sampling methodology.
We will demonstrate the success of our stochastic sampling method on a minimal system of CO\({}_{2}\) on a bilayer of 8 gold atoms. This system contains only 52 occupied states, which we also treat explicitly. Note that, in principle, the hybridization extends beyond merely the occupied manifold, but to illustrate the methodology, we consider only the rotation within the occupied subspace. To see the surface-induced changes, we calculate the QP states for a CO\({}_{2}\) molecule in a vacuum (\(N=8\)) and for the minimal composite system (\(N=52\)). We find that the seven lowest valence states of the molecule shift in energy when the substrate is included, but the eigenvectors (orbitals) do not change in response to the gold substrate.
In contrast, the HOMO state behaves differently: no single state corresponds to the molecular HOMO (either the canonical DFT or Dyson orbitals computed at the \(G_{0}W_{0}\) level). Instead, there are multiple _hybridized_ states sufficiently localized on the molecule, whose eigenvalues lie within a small range of energies. We aim to characterize them and, consequently, to find a characteristic QP energy for this distribution of HOMO QP for the CO\({}_{2}\) molecule on Au.
We thus define a "core subspace" comprising the states with the most molecular character. In practice, they are identified based on projection onto localized (unitary transformed) orbitals centered on CO\({}_{2}\), e.g., using the molecular reconstruction technique which is applied here[48; 41]. The corresponding projection value is:
\[P_{i}=\sum_{j}|\left\langle\xi_{j}|\phi_{i}\right\rangle|^{2} \tag{5}\]
Here, \(\{\left|\xi\right\rangle\}\) and \(\{\left|\phi\right\rangle\}\) are the sets of transformed (localized) and canonical KS DFT states respectively. Each KS state with \(P\) greater than a chosen threshold is included in the core region. This preselection separates the "core" subspace from the rest.
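A minimal sketch of this preselection, assuming the localized orbitals \(\xi_{j}\) and canonical KS states \(\phi_{i}\) are available as columns of matrices in a common basis:

```python
import numpy as np

def select_core(xi, phi, threshold):
    """Core-subspace selection via the projections of Eq. (5).

    xi        : (n_basis, n_loc) localized orbitals centered on CO2
    phi       : (n_basis, n_ks) canonical KS states
    threshold : states with P_i above this value enter the core subspace
    """
    overlaps = xi.conj().T @ phi                 # <xi_j | phi_i>
    P = np.sum(np.abs(overlaps) ** 2, axis=0)    # P_i of Eq. (5)
    core = np.flatnonzero(P > threshold)
    rest = np.flatnonzero(P <= threshold)
    return core, rest, P
```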
We now track the fixed point HOMO QP solution with the number of states considered in the \(H_{QP}\), i.e., we gradually add more states outside of the core subspace. The molecular HOMO is hybridized with many of the surface states. We thus define a single energy for this state by taking its mean value, constructed by weighting by the projection onto the HOMO of CO\({}_{2}\) in a vacuum. The results are shown by the green color in Fig. 2. The left-most point represents only the core space, containing 12 orbitals corresponding to 23% of the entire \(H_{QP}\). The size of the problem is increased by adding states depending on their distance from the KS DFT HOMO energy, as one would expect that the hybridization of states will be small for energetically distant states. This does not produce a smooth convergence (green line in Fig. 2) as some surface hybridization is due to Au orbitals that are far from the core subspace.
To demonstrate the stochastic approach, we now instead sample the remaining KS states using random vectors:
\[\ket{\zeta}=\frac{1}{\sqrt{N_{e}}}\sum_{j=1}^{N_{e}}w_{j}e^{i\theta_{j}}\ket{ \phi_{j}} \tag{6}\]
Here \(\theta_{j}\in[0,2\pi]\) is randomly chosen, and \(N_{e}\) is the number of "rest" states used with weights \(w_{j}\). Note that we can either sample all remaining states evenly (\(w_{j}=1\,\forall j\)) or, more generally, consider a random selection from a distribution within the sampled subspace (determined by \(P_{i}\) in Eq. 5), as we show later.
Once we have obtained the set \(\{\zeta\}\), we randomly draw \(N_{s}\) of them, and the fixed point solution is then found for \(H_{QP}\) with the dimension \(N_{c}+N_{s}\). The results for \(N_{c}=12\) and variable \(N_{s}\) are shown in Fig. 2 and exhibit a monotonic and smooth convergence towards the asymptotic value (obtained with the entire 52 occupied states). The stochastic sampling was repeated ten times for each step with a different set of \(N_{s}\) random vectors; the standard deviations are indicated in the plot, and they naturally disappear in the complete basis limit. For instance, for \(N_{s}=20\), i.e., 62% of the entire system, we see a difference of \(0.057\pm 0.19\) eV between the mean HOMO QP energies. For an increased core space, \(N_{c}=15\), we see that the HOMO QP value converges similarly, i.e., the size of the core space does not change the convergence profile significantly. For \(N_{s}=20\) (i.e., 32.5% compression of the matrix rank), the resulting spectrum mean falls within 100 meV of the value obtained from the diagonalization of the full matrix.
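The compression itself can be sketched as follows: random vectors of the form of Eq. 6 are drawn over the rest space, orthonormalized (a QR decomposition implements the Gram-Schmidt step), and used to truncate \(H_{QP}\) evaluated at a fixed frequency; in the full scheme this is repeated within the fixed-point scan. All shapes and interfaces are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_basis(n_rest, n_s, weights=None):
    """Draw N_s random vectors of Eq. (6) over the n_rest 'rest' states."""
    w = np.ones(n_rest) if weights is None else weights
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_rest, n_s))
    zeta = w[:, None] * np.exp(1j * theta) / np.sqrt(n_rest)
    q, _ = np.linalg.qr(zeta)   # Gram-Schmidt orthonormalization
    return q                    # columns are orthonormal |zeta> vectors

def compress(H, core_idx, rest_idx, n_s, weights=None):
    """Truncate H_QP (in the KS basis) to the core (+) stochastic subspace."""
    zeta = stochastic_basis(len(rest_idx), n_s, weights)
    n_c = len(core_idx)
    # Rectangular transformation: core KS states kept deterministically,
    # the rest of the space sampled stochastically.
    U = np.zeros((H.shape[0], n_c + n_s), dtype=complex)
    U[core_idx, :n_c] = np.eye(n_c)
    U[np.ix_(rest_idx, range(n_c, n_c + n_s))] = zeta
    return U.conj().T @ H @ U
```

Since the stochastic columns have support only on the rest-space indices, they are automatically orthogonal to the core columns, so the QR step only needs to orthonormalize the random vectors among themselves.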
Without any prior knowledge or arbitrary truncation of the KS states, we can capture molecule-surface hybridization effects by employing stochastic states representing the substrate environment. This description is systematically improvable by increasing both \(N_{c}\) and \(N_{s}\). In general, the cost reduction provided by the stochastic sampling is due to circumventing the summation over many states that contribute either similarly or very little to the expectation values in question[49]. For a small system such as the one used here, the amount of compression is less significant as most of the states contribute to the QP HOMO energy.
We now turn to a realistic large-scale system for which such a calculation would not be possible with standard methods. Here, we study a CO\({}_{2}\) molecule on an extended Au-111 surface of 270 atoms, containing 2986 electrons. The system is treated analogously to the minimal example: we selected core subspaces of 15 and 25 states. Due to the molecule-surface hybridization, \(N_{c}=15\) is the minimal size of the core space identified for this particular problem. Next, the stochastic sampling uses a filtered distribution in Eq. 6, in which we consider a linear combination of states that are sufficiently localized on the molecule. In practice, this step determines the sampled subspace, which is restricted to states with \(P\) greater than a selected threshold, \(P_{T}\). Here we consider two cases, \(P_{T}=10^{-3}\) and \(P_{T}=5\times 10^{-4}\).
From Fig. 3 we can see that the HOMO energy converges with only 5% of the total number of states used[50]. For slightly increased selectivity (i.e., lower projection threshold \(P\)), the stochastic sampling of the hybridization converges similarly. Further, the size of the core subspace does not significantly impact the convergence rate: when \(N_{c}=25\) with the filtering threshold of \(P_{T}=5\times 10^{-4}\), the curve matches that of the \(N_{c}=15\) for the same value of \(P_{T}\). This suggests that the size of the core subspace can be decreased, possibly at the expense of using more stochastic samplings.
Finally, note that when the orbital re-hybridization is used at the \(G_{0}W_{0}\) level, the HOMO QP energy moves down in energy by more than 1 eV. Since approximate semilocal KS DFT is known to suffer from overdelocalization,
Figure 2: **Hybridized HOMO Convergence (Minimal System)**: Core sizes of \(N_{c}=12\) and \(N_{c}=15\) are used, with the remaining states sampled with equal weight. In contrast, adding the states by energy (\(\varepsilon\)-ordered) demonstrates the lack of smooth convergence. The gray-shaded region shows where the spectrum converges within 0.1 eV.
it is expected that the physical Dyson orbitals are more localized than the canonical KS DFT eigenstates. In turn, stronger localization of HOMO is typically associated with its energy decrease[9]. These observations are thus in line with what the MBPT should accomplish and underline the need for more appropriate treatment of surface phenomena.
_Outlook._ The rapid convergence of the QP energies with \(N_{s}\) implies that, when we stochastically sample the matrix, aided by preselection and filtering, we can represent the full QP spectrum for a molecule that hybridizes with an extended surface using less than 5% of the system. The \(H_{QP}\) matrix size is thus compressed by 95%. This is largely due to the significant "redundancy" of information encoded in the individual single-particle states, and the stochastic approach samples all of them (or a large filtered portion) simultaneously through random vectors. The approach presented here will enable the treatment of large-scale interfacial problems and opens the door for efficient self-consistent stochastic MBPT.
## Acknowledgements
The development (A.C., X.L., and V.V.) of the stochastic compression technique was supported by the NSF CAREER award (DMR-1945098). The implementation and numerical testing (A.C., K.I., and V.V.) is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Basic Energy Sciences, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number DE-SC0022198. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0020089.
Calculations of excited states in the Green's function formalism usually employ the diagonal approximation, in which the quasiparticle states are obtained from a mean-field calculation. Here, we extend the stochastic approaches applied in many-body perturbation theory and overcome this limitation for large systems in which we are interested in only a small subset of states. The problem is divided into a core subspace, whose coupling to the remaining system environment is sampled stochastically. As an example, this method is applied to computing hole injection energies for CO$_2$ on an extended gold surface with nearly 3000 electrons. In this extended system, the size of the problem can be compressed by up to 95% using stochastic sampling. These results point the way toward self-consistent stochastic methods and the determination of Dyson orbitals in large systems.
2310.00050 | Scalar boson emission from a magnetized relativistic plasma | We investigate the differential emission rate of neutral scalar bosons from a
highly magnetized relativistic plasma. We show that three processes contribute
at the leading order: particle splitting ($\psi\rightarrow \psi+\phi $),
antiparticle splitting ($\bar{\psi} \rightarrow \bar{\psi}+\phi $), and
particle-antiparticle annihilation ($\psi + \bar{\psi}\rightarrow \phi $). This
is in contrast to the scenario with zero magnetic field, where only the
annihilation processes contribute to boson production. We examine the impact of
Landau-level quantization on the energy dependence of the rate and investigate
the angular distribution of emitted scalar bosons. The differential rate
resulting from both (anti)particle splitting and annihilation processes are
typically suppressed in the direction of the magnetic field and enhanced in
perpendicular directions. Overall, the background magnetic field significantly
amplifies the total emission rate. We speculate that our model calculations
provide valuable theoretical insights with potentially important applications. | Jorge Jaber-Urquiza, Igor A. Shovkovy | 2023-09-29T18:00:03 | http://arxiv.org/abs/2310.00050v2 | # Scalar boson emission from a magnetized relativistic plasma
###### Abstract
We investigate the differential emission rate of neutral scalar bosons from a highly magnetized relativistic plasma. We show that three processes contribute at the leading order: particle splitting (\(\psi\to\psi+\phi\)), antiparticle splitting (\(\bar{\psi}\to\bar{\psi}+\phi\)), and particle-antiparticle annihilation (\(\psi+\bar{\psi}\to\phi\)). This is in contrast to the scenario with zero magnetic field, where only the annihilation processes contribute to boson production. We examine the impact of Landau-level quantization on the energy dependence of the rate and investigate the angular distribution of emitted scalar bosons. The differential rates resulting from both (anti)particle splitting and annihilation processes are typically suppressed in the direction of the magnetic field and enhanced in perpendicular directions. Overall, the background magnetic field significantly amplifies the total emission rate. We speculate that our model calculations provide valuable theoretical insights with potentially important applications.
## I Introduction
The properties of matter under extreme conditions, where relativistic effects play a profound role, are a source of great fascination. This fascination is not surprising, as such extreme conditions naturally occur in the early Universe and in stars [1; 2; 3; 4; 5]. However, replicating these conditions in a laboratory setting is exceptionally challenging. The most promising efforts in this direction involve heavy-ion experiments conducted at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN [6; 7]. In these experiments, heavy ions are accelerated to sufficiently high energies to produce tiny volumes of quark-gluon plasma (QGP) [8; 9]. Although this new state of matter has a very brief lifetime and is likely far from equilibrium, some of its properties can still be deduced [10; 11; 12].
Over the last two decades, significant progress has been made in understanding the properties of hot QGP [13; 14]. The emerging picture can be summarized as follows. Heavy-ion collisions generate matter with high energy density, which is initially far from equilibrium. Due to strong interaction, this matter rapidly approaches a quasi-equilibrium QGP state. Furthermore, it behaves almost like a perfect hydrodynamic fluid, undergoing expansion, cooling, and eventual hadronization [15; 16; 17; 18]. The resulting hadrons carry its remnants to the detectors, enabling one to unveil the properties of hot QGP.
The QGP produced in heavy-ion collisions not only possesses an extremely high temperature but also carries a strong magnetic field [19; 20; 21; 22] and exhibits high vorticity [23; 24; 25]. Theoretical investigations indicate that both the magnetic field and vorticity can modify the observed properties of QGP [26; 27; 28; 29; 30]. Of particular significance are the observables linked to electromagnetic probes, as they convey information about the plasma's properties across all stages of its evolution [31].
In this paper, we will study the differential production rate of neutral scalar bosons within a strongly magnetized relativistic plasma. Previously, an attempt to address this problem was undertaken in Ref. [32]. However, only simplified kinematics with momenta of scalar bosons parallel to the magnetic field was considered. Another related study on the scalar boson decay at zero temperature and weak field was reported in Ref. [33]. In both instances, the constraints imposed by kinematics allowed only for the contribution of particle-antiparticle annihilation processes to the absorptive part of the self-energy (or boson decay). Herein, we undertake a comprehensive approach, removing all constraints on kinematics, permitting arbitrary magnetic field strengths, and incorporating the thermal effects of the plasma to address this problem in its entirety.
At first glance, this problem may not have direct phenomenological implications for QGP in heavy-ion collisions. After all, there are no known spin-zero particles to be emitted from a relativistic plasma. Nevertheless, we believe that this problem has theoretical value. By comparing the results with the emission of photons [34; 35; 36; 37; 38] and dileptons [39; 40; 41; 42; 43; 44; 45; 46; 47; 48] (i.e., spin-one channel) studied previously, we can get insights into the impact of particle spin on emission
rates and angular distributions. This hypothetical scenario also extends our understanding of the fundamental laws of physics and their potential applications in other fields.
For example, neutral scalar bosons often appear in dark matter [49; 50; 51] and inflationary models [52]. Moreover, their properties, when modified by a nonzero temperature and primordial magnetic fields, can have cosmological implications [53; 54]. In the end, our goal is to refine our theoretical tools and expand the frontiers of scientific knowledge. Not every thought experiment or hypothetical scenario leads to a discovery, but more often than not, it provides fresh insights and perspectives.
The paper is organized as follows. We introduce the model of magnetized plasma with a single flavor of fermion species, coupled to a neutral scalar field via a Yukawa-type interaction, in Sec. II. There, we also define the differential emission rate in terms of the imaginary part of the scalar-boson self-energy. The general expression for the self-energy at nonzero temperature is obtained in Sec. III. In the derivation, we utilize the Landau-level representation for the fermion propagator, which allows us to extract an analytical expression for the imaginary part of the self-energy in the form of a convergent series. In Sec. IV, the corresponding result is used to calculate the differential emission rate of scalar bosons from a magnetized plasma. We study in detail the angular dependence of the emission rate, as well as analyze the partial contributions due to splitting (i.e., \(\psi\rightarrow\psi+\phi\) and \(\bar{\psi}\rightarrow\bar{\psi}+\phi\)) and annihilation (i.e., \(\psi+\bar{\psi}\rightarrow\phi\)) processes. A discussion of the main findings and a summary of the results are given in Sec. V. For comparison, the bosonic self-energy in the zero magnetic field limit is presented in Appendix A.
## II Model
For simplicity, we consider a model of magnetized plasma with a single flavor of fermion species \(\psi\). By assumption, the fermions interact with the neutral scalar field \(\phi\) via a Yukawa interaction. The corresponding Lagrangian density reads
\[\mathcal{L}=\bar{\psi}\left(i\gamma^{\mu}D_{\mu}-m\right)\psi+\frac{1}{2} \partial^{\mu}\phi\partial_{\mu}\phi-\frac{1}{2}M^{2}\phi^{2}-g\phi\bar{\psi }\psi, \tag{1}\]
where \(m\) and \(M\) are the masses of the fermion and scalar particles, \(g\) is the Yukawa coupling, and \(q\) is the electric charge of the fermion. The covariant derivative is defined as usual, i.e., \(D^{\mu}\equiv\partial^{\mu}+iqA^{\mu}(x)\), where \(A^{\mu}(x)\) is an Abelian gauge field, capturing the effect of a background magnetic field \(\mathbf{B}\). The corresponding field strength tensor is given by \(F^{\mu\nu}=\partial^{\mu}A^{\nu}(x)-\partial^{\nu}A^{\mu}(x)\). Without loss of generality, we will assume that the magnetic field points along the \(z\) axis and use the following Landau gauge: \(A^{\mu}(x)=-yB\delta_{1}^{\mu}\). The explicit form of the strength tensor reads \(F^{\mu\nu}=-\varepsilon^{0\mu\nu 3}B\). Here we use the conventional definition of the contravariant coordinates, i.e., \(x^{\mu}=(t,x,y,z)\), and the Minkowski metric \(g_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\).
The differential thermal emission rate of scalar bosons from the corresponding plasma is given by
\[\frac{d^{3}R}{d^{3}k}=-\frac{n_{B}(\Omega)}{(2\pi)^{3}\Omega}\mathrm{Im}\left[ \Sigma^{R}(\Omega,\mathbf{k})\right], \tag{2}\]
where \(\Omega=\sqrt{M^{2}+|\mathbf{k}|^{2}}\) is the (on shell) scalar particle energy, \(n_{B}(\Omega)=1/\left[e^{\Omega/T}-1\right]\) is the Bose-Einstein distribution function, and \(\Sigma^{R}(\Omega,\mathbf{k})\) is the retarded self-energy of the scalar field. At leading order in coupling, the latter is determined by the one-loop Feynman diagram in Fig. 1, where the solid and dashed lines represent fermions and bosons, respectively. Because of the background magnetic field, the fermion propagators are labeled by the longitudinal momenta and the Landau-level indices.
Note that, in view of the detailed balance, the expression in Eq. (2) can represent either the emission or the absorption rate per unit volume. However, the total emission (or absorption) rate can be also affected by the system size, if the latter is comparable to or larger than the mean free path \(l_{\phi}\) of the scalar bosons with energy \(\Omega\). For simplicity, we will ignore the corresponding effects below. If needed, however, they could be incorporated approximately by separating the surface layers of depth \(l_{\phi}\) from the rest of the plasma. The rate in Eq. (2) is valid only for the surface layers. The emission from the inner parts is approximately vanishing.
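For orientation, the rate formula in Eq. (2) maps directly onto code once the imaginary part of the retarded self-energy is available; here it is passed in as an assumed callable.

```python
import numpy as np

def emission_rate(im_sigma_R, k_vec, M, T):
    """Differential rate d^3R/d^3k of Eq. (2) for a scalar of mass M.

    im_sigma_R : callable (Omega, k_vec) -> Im[Sigma^R], assumed given
    """
    k = np.linalg.norm(k_vec)
    Omega = np.sqrt(M**2 + k**2)         # on-shell boson energy
    n_B = 1.0 / np.expm1(Omega / T)      # Bose-Einstein factor
    return -n_B * im_sigma_R(Omega, k_vec) / ((2 * np.pi) ** 3 * Omega)
```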
Figure 1: Leading order one-loop Feynman diagram for the scalar self-energy.
In view of the rotational symmetry of a magnetized plasma about the magnetic field direction, the differential rate is independent of the azimuthal angle \(\phi\) (which is measured in \(xy\)-plane from the positive \(x\)-axis). Taking this fact into account, we derive the following expression for the total rate integrated over all directions:
\[\frac{dR}{dk}=-\int_{0}^{\pi}\frac{k^{2}n_{B}(\Omega)}{(2\pi)^{2}\Omega}\text{ Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]\sin\theta d\theta, \tag{3}\]
where the polar angle \(\theta\) is measured from the positive \(z\)-axis towards the \(xy\)-plane. In other words, the transverse and the longitudinal components of the boson momentum are \(k_{\perp}=k\sin\theta\) and \(k_{z}=k\cos\theta\), respectively. By rewriting the rate in terms of the boson energy, we have
\[\frac{dR}{d\Omega}=-\int_{0}^{\pi}\frac{k\,n_{B}(\Omega)}{(2\pi)^{2}}\text{Im} \left[\Sigma^{R}(\Omega,\mathbf{k})\right]\sin\theta d\theta. \tag{4}\]
In order to characterize the angular profile of emission, we will utilize the following definition of the ellipticity parameter:
\[v_{2}=-\frac{\int_{0}^{\pi}\left(d^{3}R/d^{3}k\right)\cos(2\theta)d\theta}{ \int_{0}^{\pi}\left(d^{3}R/d^{3}k\right)d\theta}, \tag{5}\]
which is analogous to the one used in heavy-ion physics but expressed in terms of a different angular coordinate. An extra negative sign in the definition ensures that a positive value of ellipticity (\(v_{2}>0\)) describes an oblate emission profile, i.e., stronger average emission in the directions perpendicular to the magnetic field (or, in heavy-ion physics language, in the reaction plane). A negative value of ellipticity (\(v_{2}<0\)) implies a prolate emission profile, i.e., stronger average emission in the directions parallel to the magnetic field (or, in heavy-ion physics language, perpendicularly to the reaction plane).
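Given the differential rate on a grid of polar angles, the ellipticity of Eq. (5) reduces to a simple quadrature, e.g., with the trapezoidal rule:

```python
import numpy as np

def ellipticity(theta, rate):
    """v2 of Eq. (5): theta grid on [0, pi], rate = d^3R/d^3k(theta)."""
    num = np.trapz(rate * np.cos(2.0 * theta), theta)
    den = np.trapz(rate, theta)
    return -num / den   # v2 > 0: oblate (perpendicular-enhanced) emission
```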
## III One-loop self-energy
In the presence of a background magnetic field, translation symmetry is broken in the plane perpendicular to the magnetic field. As a consequence, the transverse momenta are not good quantum numbers for classifying fermionic states. This fact is also reflected in the structure of the fermion propagator, which takes the following form in coordinate space [55]:
\[S(x,y)=\exp\left(-iq\int_{y}^{x}A_{\mu}(x)dx^{\mu}\right)\bar{S}(x-y), \tag{6}\]
where the first factor is the so-called Schwinger's phase. Formally, this phase is the only part that breaks the translation symmetry. The second factor, \(\bar{S}(x-y)\), is a translation invariant part of the propagator. Its explicit form will be given below.
In coordinate space, the one-loop self-energy of the scalar field is given by
\[\Sigma(x-y)=ig^{2}\text{Tr}\left[\bar{S}(x-y)\bar{S}(y-x)\right], \tag{7}\]
see Fig. 1, where the trace runs over the Dirac indices. Note that, at this leading order in coupling, it is determined only by the translation invariant part of the fermion propagator \(\bar{S}(x-y)\).
It should not be surprising that the dependence of \(\bar{S}(x)\) on the transverse and longitudinal spatial coordinates (i.e., \(\mathbf{r}_{\perp}\) and \(z\), respectively) is very different. Unlike translations in the \(xy\)-plane, translations in the \(z\) direction are part of the remaining symmetry in the problem. In other words, the corresponding longitudinal momentum \(k_{z}\) is a good quantum number. Thus, it is convenient to use the following mixed representation for the translation invariant part of the propagator:
\[\bar{S}(x)=\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2}}\tilde{S}(p_{\parallel}; \mathbf{r}_{\perp})e^{-ip_{\parallel}\cdot x_{\parallel}}, \tag{8}\]
where, by definition, \(x_{\parallel}=(t,z)\), \(\mathbf{r}_{\perp}=(x,y)\), and \(p_{\parallel}=(p_{0},p_{z})\) is the longitudinal momentum. The explicit form of \(\tilde{S}(p_{\parallel};\mathbf{r}_{\perp})\) reads [27]
\[\tilde{S}(p_{\parallel};\mathbf{r}_{\perp})=i\frac{e^{-\zeta/2}}{2\pi\ell^{2}} \sum_{n=0}^{\infty}\frac{\tilde{D}\left(p_{\parallel};\mathbf{r}_{\perp} \right)}{p_{\parallel}^{2}-m^{2}-2n|qB|} \tag{9}\]
with the shorthand notation \(\zeta\equiv|\mathbf{r}_{\perp}|^{2}/(2\ell^{2})\) and
\[\tilde{D}\left(p_{\parallel};\mathbf{r}_{\perp}\right)\equiv\left(p\!\!\!/_{ \parallel}+m\right)\left[\mathcal{P}_{+}L_{n}\left(\zeta\right)+\mathcal{P}_{- }L_{n-1}\left(\zeta\right)\right]-i\frac{\mathbf{r}\!\!\!/_{\perp}}{\ell^{2} }L_{n-1}^{1}\left(\zeta\right), \tag{10}\]
where \(\ell\equiv 1/\sqrt{|qB|}\) is the magnetic length, \(L_{n}(z)\) are the Laguerre polynomials, \(L_{n}^{a}(z)\) are the generalized Laguerre polynomials, and \(\mathcal{P}_{\pm}\equiv\frac{1}{2}\left(1\pm i\mathrm{sign}\left(qB\right) \gamma^{1}\gamma^{2}\right)\) are the spin projectors along the magnetic field direction.
After substituting the expression for \(\tilde{S}(x)\) into the definition of self-energy in Eq. (7) and performing the Fourier transform, we derive the following momentum representation:
\[\Sigma(k)=ig^{2}\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2}}\int d^{2}\mathbf{r} _{\perp}e^{-i\mathbf{r}_{\perp}\cdot\mathbf{k}_{\perp}}\mathrm{Tr}\left[ \tilde{S}(p_{\parallel};\mathbf{r}_{\perp})\tilde{S}(p_{\parallel}-k_{ \parallel};-\mathbf{r}_{\perp})\right]. \tag{11}\]
By using the fermion propagator in Eq. (9) and performing the trace over the Dirac indices, we obtain the following expression for the scalar self-energy:
\[\Sigma(k) = -\frac{ig^{2}}{2\pi^{2}\ell^{4}}\int\frac{d^{2}p_{\parallel}}{(2 \pi)^{2}}\int d^{2}\mathbf{r}_{\perp}e^{-i\mathbf{r}_{\perp}\cdot\mathbf{k}_{ \perp}}e^{-\zeta} \tag{12}\] \[\times \sum_{n,n^{\prime}}\frac{\left(m^{2}+p_{\parallel}\cdot(p-k)_{ \parallel}\right)\left[L_{n}(\zeta)L_{n^{\prime}}(\zeta)+L_{n-1}(\zeta)L_{n^{ \prime}-1}(\zeta)\right]-\frac{2\mathbf{r}_{\perp}^{2}}{\ell^{4}}L_{n-1}^{1}( \zeta)L_{n^{\prime}-1}^{1}(\zeta)}{\left(p_{\parallel}^{2}-m^{2}-2n|qB|\right) \left((p_{\parallel}-k_{\parallel})^{2}-m^{2}-2n^{\prime}|qB|\right)}.\]
The integration over the transverse spatial coordinates can be performed exactly using the same approach as in Refs. [37; 38]. The result reads
\[\Sigma(k)=-i\frac{g^{2}}{\pi\ell^{2}}\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2} }\sum_{n,n^{\prime}}\frac{\left(m^{2}+p_{\parallel}\cdot(p-k)_{\parallel} \right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi)+\mathcal{I}_{0}^{n-1,n^{ \prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{2}^{n-1,n^{\prime}-1}( \xi)}{\left(p_{\parallel}^{2}-m^{2}-2n|qB|\right)\left((p_{\parallel}-k_{ \parallel})^{2}-m^{2}-2n^{\prime}|qB|\right)}. \tag{13}\]
where \(\xi\equiv(k_{\perp}\ell)^{2}/2\) and the two new functions are
\[\mathcal{I}_{0}^{n,n^{\prime}}(\xi) \equiv (-1)^{n+n^{\prime}}e^{-\xi}L_{n}^{n^{\prime}-n}(\xi)L_{n^{\prime }}^{n-n^{\prime}}(\xi), \tag{14}\] \[\mathcal{I}_{2}^{n,n^{\prime}}(\xi) \equiv 2(n^{\prime}+1)(-1)^{n+n^{\prime}}e^{-\xi}L_{n}^{n^{\prime}-n}( \xi)L_{n^{\prime}+1}^{n-n^{\prime}}(\xi). \tag{15}\]
To take thermal effects into account, we introduce the Matsubara frequencies through the imaginary time formalism. Then, replacing the fermion energy \(p_{0}\to i\omega_{k}=2i\pi(k+1)T\) and the boson energy with the bosonic counterpart, i.e., \(k_{0}\to i\Omega_{m}=2i\pi mT\), the corresponding finite-temperature scalar self-energy reads
\[\Sigma(i\Omega_{m},\mathbf{k})=\frac{g^{2}T}{\pi\ell^{2}}\sum_{k=-\infty}^{ \infty}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime}}\frac{\left(m^{2}+p_{ \parallel}\cdot(p-k)_{\parallel}\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}( \xi)+\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}} \mathcal{I}_{2}^{n-1,n^{\prime}-1}(\xi)}{\left(i\omega_{k})^{2}-p_{z}^{2}-m^{2 }-2n|qB|\right)\left((i\omega_{k}-i\Omega_{m})^{2}-(p_{z}-k_{z})^{2}-m^{2}-2n^ {\prime}|qB|\right)}, \tag{16}\]
where the shorthand notation \(p_{\parallel}\cdot(p-k)_{\parallel}\) stands for \(i\omega_{k}(i\omega_{k}-i\Omega_{m})-p_{z}(p_{z}-k_{z})\). Computing the sum over the Matsubara frequencies, we derive the following expression for the self-energy:
\[\Sigma(i\Omega_{m},\mathbf{k}) = \frac{g^{2}}{\pi\ell^{2}}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime }}\sum_{\eta,\lambda=\pm 1}\frac{n_{F}\left(E_{n,p_{z}}\right)-n_{F}\left(\lambda E_{n^{ \prime},p_{z}-k_{z}}\right)}{4\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}} \left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+i\eta\Omega_{m}\right)} \tag{17}\] \[\times \bigg{[}\left(\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}+m^{2} -p_{z}\left(p_{z}-k_{z}\right)\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi)+ \mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{ 2}^{n-1,n^{\prime}-1}(\xi)\bigg{]},\]
where \(E_{n,p_{z}}\equiv\sqrt{p_{z}^{2}+m^{2}+2n|qB|}\) and \(E_{n^{\prime},p_{z}-k_{z}}\equiv\sqrt{(p_{z}-k_{z})^{2}+m^{2}+2n^{\prime}|qB|}\) are the Landau level energies, and \(n_{F}(\Omega)=1/\left(e^{\Omega/T}+1\right)\) is the Fermi-Dirac distribution function. In the derivation we used the following general result:
\[T\sum_{k=-\infty}^{\infty}\frac{i\omega_{k}(i\omega_{k}-i\Omega_{m})\mathcal{Y}+ \mathcal{Z}}{\left[(i\omega_{k})^{2}-a^{2}\right]\left[(i\omega_{k}-i\Omega_{m })^{2}-b^{2}\right]}=\sum_{\eta,\lambda=\pm 1}\frac{n_{F}(a)-n_{F}(\lambda b)}{4\lambda ab\left(a- \lambda b+\eta i\Omega_{m}\right)}\left[\lambda ab\mathcal{Y}+\mathcal{Z} \right]. \tag{18}\]
To obtain the self-energy in Minkowski space, we need to perform a suitable analytic continuation in Eq. (17). The retarded expression for the self-energy is obtained by replacing \(i\Omega_{m}\rightarrow\Omega+i\epsilon\):
\[\Sigma^{R}(\Omega,\mathbf{k}) = \frac{g^{2}}{\pi\ell^{2}}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime} }\sum_{\eta,\lambda=\pm 1}\frac{n_{F}\left(E_{n,p_{z}}\right)-n_{F}\left(\lambda E_{n^{ \prime},p_{z}-k_{z}}\right)}{4\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}} \left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+\eta\Omega+i\eta\epsilon \right)} \tag{19}\] \[\times \bigg{[}\left(\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}+m^{2} -p_{z}\left(p_{z}-k_{z}\right)\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}( \xi)+\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}} \mathcal{I}_{2}^{n-1,n^{\prime}-1}(\xi)\bigg{]},\]
where \(\epsilon\rightarrow+0\).
It should be noted that the expression for the self-energy in Eq. (19) contains both vacuum and thermal contributions. While the latter is finite, the former has an ultraviolet divergence. Therefore, one has to regularize it in order to proceed with the calculation. Fortunately, only the real part of the self-energy is divergent. The imaginary part, which appears in the definition of the emission rate, is finite.
### Absorptive part of the self-energy
From the expression for the retarded self-energy in Eq. (19), one can extract the imaginary part by using the well-known Sokhotski formula, see Eq. (100). The corresponding result reads
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right] = -\frac{g^{2}}{\ell^{2}}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime} }\sum_{\eta,\lambda=\pm 1}\frac{n_{F}\left(E_{n,p_{z}}\right)-n_{F}\left( \lambda E_{n^{\prime},p_{z}-k_{z}}\right)}{4\eta\lambda E_{n,p_{z}}E_{n^{ \prime},p_{z}-k_{z}}}\delta\left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z} }+\eta\Omega\right) \tag{20}\] \[\times \bigg{[}\left(\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}+m^{2} -p_{z}\left(p_{z}-k_{z}\right)\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi) +\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I} _{2}^{n-1,n^{\prime}-1}(\xi)\bigg{]}.\]
Note that the Dirac \(\delta\)-function inside the integrand enforces the following energy conservation relation:
\[E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+\eta\Omega=0. \tag{21}\]
The imaginary part of the self-energy (20) is an odd function of the scalar energy \(\Omega\). Without loss of generality, therefore, we will assume that \(\Omega>0\) from this point onward.
Depending on the choice of signs of \(\lambda\) and \(\eta\), the energy conservation relation (21) represents one of the three possible processes involving particle and/or antiparticle states with Landau indices \(n\) and \(n^{\prime}\). Two of them correspond to particle-splitting processes involving fermions (\(\lambda=1\) and \(\eta=-1\)) or antifermions (\(\lambda=1\) and \(\eta=1\)). In Fig. 2, they are represented by the diagrams in panels (a) and (b), respectively. The third process (\(\lambda=-1\) and \(\eta=-1\)) corresponds to the fermion-antifermion annihilation, represented by the diagram in panel (c) of Fig. 2. When \(\Omega\) is positive, there are no physical processes associated with the fourth combination of signs, i.e., \(\lambda=-1\) and \(\eta=1\). This is clear since the energy conservation equation (21) has no real solutions in this case.
The necessary and sufficient conditions for having real-valued solutions to the energy conservation equation (21) are given as follows:
\[\psi\rightarrow\psi+\phi: \sqrt{\Omega^{2}-k_{z}^{2}}\leq k_{-}\ \ \mathrm{and}\ \ n>n^{\prime}, \tag{22}\] \[\bar{\psi}\rightarrow\bar{\psi}+\phi: \sqrt{\Omega^{2}-k_{z}^{2}}\leq k_{-}\ \ \mathrm{and}\ \ n<n^{\prime},\] (23) \[\psi+\bar{\psi}\rightarrow\phi: \sqrt{\Omega^{2}-k_{z}^{2}}\geq k_{+}, \tag{24}\]
Figure 2: Feynman diagrams for the three processes involving a scalar boson and fermion states in the Landau levels \(n\) and \(n^{\prime}\): (a) particle splitting \(\psi\rightarrow\psi+\phi\), (b) antiparticle splitting \(\bar{\psi}\rightarrow\bar{\psi}+\phi\), (c) particle-antiparticle annihilation \(\psi+\bar{\psi}\rightarrow\phi\).
for the three types of processes. Here we introduced the following shorthand notation for the transverse momenta thresholds:
\[k_{\pm}\equiv\bigg{|}\sqrt{m^{2}+2n|qB|}\pm\sqrt{m^{2}+2n^{\prime}|qB|}\bigg{|}, \tag{25}\]
which depend on the Landau-level indices \(n\) and \(n^{\prime}\). The constraints for \(\Omega\) are identical for the two particle-splitting processes in Eqs. (22) and (23), except for the restrictions on the Landau-level indices. However, they are very different from the kinematic constraint for the annihilation process in Eq. (24). The requirements \(n>n^{\prime}\) and \(n<n^{\prime}\) in Eqs. (22) and (23), respectively, ensure that the initial Landau level state lies above the final one.
By solving the energy conservation relation (21), we find the following pair of analytical solutions for the longitudinal momentum:
\[p_{z}^{(\pm)}\equiv\frac{k_{z}}{2}\left[1+\frac{2(n-n^{\prime})|qB|}{\Omega^{ 2}-k_{z}^{2}}\pm\frac{\Omega}{|k_{z}|}\sqrt{\left(1-\frac{k_{-}^{2}}{\Omega^{ 2}-k_{z}^{2}}\right)\left(1-\frac{k_{+}^{2}}{\Omega^{2}-k_{z}^{2}}\right)} \right]. \tag{26}\]
Note that these are exactly the same as in the case of dilepton emission [48], provided the dilepton invariant mass is replaced with the scalar boson mass. Nevertheless, as we will see below, the rate and the angular distribution of scalar emission will be very different.
By using the analytical solutions in Eq. (26), we can also obtain the corresponding fermion Landau-level energies,
\[E_{n,p_{z}}\Big{|}_{p_{z}^{(\pm)}} = -\frac{\eta\Omega}{2}\left[1+\frac{2(n-n^{\prime})|qB|}{\Omega^{ 2}-k_{z}^{2}}\pm\frac{|k_{z}|}{\Omega}\sqrt{\left(1-\frac{k_{-}^{2}}{\Omega^{ 2}-k_{z}^{2}}\right)\left(1-\frac{k_{+}^{2}}{\Omega^{2}-k_{z}^{2}}\right)} \right], \tag{27}\] \[E_{n^{\prime},p_{z}-kz}\Big{|}_{p_{z}^{(\pm)}} = \frac{\lambda\eta\Omega}{2}\left[1-\frac{2(n-n^{\prime})|qB|}{ \Omega^{2}-k_{z}^{2}}\mp\frac{|k_{z}|}{\Omega}\sqrt{\left(1-\frac{k_{-}^{2}}{ \Omega^{2}-k_{z}^{2}}\right)\left(1-\frac{k_{+}^{2}}{\Omega^{2}-k_{z}^{2}} \right)}\right]. \tag{28}\]
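The thresholds (25) and the roots (26) are straightforward to tabulate numerically; the following sketch returns the longitudinal momenta for a given channel, or `None` in the kinematically forbidden window \(k_{-}<\sqrt{\Omega^{2}-k_{z}^{2}}<k_{+}\) (assuming \(k_{z}\neq 0\)):

```python
import numpy as np

def thresholds(n, npr, m, qB):
    """k_- and k_+ of Eq. (25) for Landau indices n and n' (npr)."""
    a = np.sqrt(m**2 + 2 * n * qB)
    b = np.sqrt(m**2 + 2 * npr * qB)
    return abs(a - b), a + b

def pz_roots(Omega, kz, n, npr, m, qB):
    """Longitudinal momenta p_z^(+-) of Eq. (26); sketch for kz != 0."""
    s = Omega**2 - kz**2
    if s <= 0.0 or kz == 0.0:
        return None
    km, kp = thresholds(n, npr, m, qB)
    disc = (1.0 - km**2 / s) * (1.0 - kp**2 / s)
    if disc < 0.0:
        return None   # forbidden window: Eq. (21) has no real solution
    base = 1.0 + 2.0 * (n - npr) * qB / s
    root = (Omega / abs(kz)) * np.sqrt(disc)
    return 0.5 * kz * (base + root), 0.5 * kz * (base - root)
```

Note that `disc >= 0` covers both regimes at once: for \(\sqrt{s}\leq k_{-}\) both factors are negative (the splitting processes), while for \(\sqrt{s}\geq k_{+}\) both are positive (annihilation).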
Having explicit analytical solutions for the longitudinal momentum, we can now rewrite the Dirac \(\delta\)-function in Eq. (20) as follows:
\[\delta\left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+\eta\Omega\right)= \sum_{s=\pm}\frac{2E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}}{\sqrt{\left(\Omega^{ 2}-k_{z}^{2}-k_{-}^{2}\right)\left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)}} \delta\left(p_{z}-p_{z}^{(s)}\right). \tag{29}\]
Finally, by integrating over \(p_{z}\) in Eq. (20), we derive the expression for the imaginary part of the scalar boson self-energy in the form of a convergent series over Landau levels:
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right] = \frac{g^{2}}{2\pi\ell^{2}}\sum_{n>n^{\prime}}^{\infty}\frac{ \theta\left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)-\theta\left(k_{-}^{2}+k_{z} ^{2}-\Omega^{2}\right)}{\sqrt{\left(\Omega^{2}-k_{z}^{2}-k_{-}^{2}\right) \left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)}}h\left(n,n^{\prime}\right) \tag{30}\] \[\times\bigg{[}\left(\left(n+n^{\prime}\right)|qB|-\frac{1}{2} \left(\Omega^{2}-k_{z}^{2}\right)+2m^{2}\right)\left(\mathcal{I}_{0}^{n,n^{ \prime}}(\xi)+\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2} }\mathcal{I}_{2}^{n-1,n^{\prime}-1}(\xi)\bigg{]}\] \[+ \frac{g^{2}}{4\pi\ell^{2}}\sum_{n=0}^{\infty}\frac{\theta\left( \Omega^{2}-k_{z}^{2}-4m^{2}-8n|qB|\right)}{\sqrt{\left(\Omega^{2}-k_{z}^{2} \right)\left(\Omega^{2}-k_{z}^{2}-4m^{2}-8n|qB|\right)}}h_{0}\left(n\right)\] \[\times\bigg{[}\left(2n|qB|-\frac{1}{2}\left(\Omega^{2}-k_{z}^{2} \right)+2m^{2}\right)\left(\mathcal{I}_{0}^{n,n}(\xi)+\mathcal{I}_{0}^{n-1,n- 1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{2}^{n-1,n-1}(\xi)\bigg{]}.\]
Here we introduced the following functions made of the Fermi-Dirac distributions:
\[h\left(n,n^{\prime}\right) \equiv 2-\sum_{s_{1},s_{2}=\pm}n_{F}\left(\frac{\Omega}{2}+s_{1}\frac{ \Omega\left(n-n^{\prime}\right)|qB|}{\Omega^{2}-k_{z}^{2}}+s_{2}\frac{|k_{z}|}{ 2\left(\Omega^{2}-k_{z}^{2}\right)}\sqrt{\left(\Omega^{2}-k_{z}^{2}-k_{-}^{2} \right)\left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)}\right), \tag{31}\] \[h_{0}\left(n\right) \equiv h(n,n)=2-2\sum_{s_{2}=\pm}n_{F}\left(\frac{\Omega}{2}+s_{2} \frac{|k_{z}|}{2}\sqrt{1-\frac{4\left(m^{2}+2n|qB|\right)}{\Omega^{2}-k_{z}^{2} }}\right). \tag{32}\]
Notice that the second term in Eq. (30) is the contribution due to annihilation processes with \(n=n^{\prime}\).
The expression for the imaginary part of self-energy (30) is one of the main analytical results of this study. By substituting it into the definition in Eq. (2), we can calculate the differential emission rate of neutral bosons from a magnetized plasma. The corresponding numerical results will be presented and analyzed in the next section.
Note that the general structure of the expression in Eq. (30) resembles the photon polarization tensor obtained in Ref. [38]. However, there are some profound differences. Unlike spin-one photons, the bosons are spin-zero particles in the model at hand. As we discuss later in detail, the spinless nature of bosons strongly affects the angular dependence of the self-energy and, in turn, the corresponding angular distribution of boson emission. For example, the differential rate due to particle-splitting processes will be suppressed in the direction parallel to the magnetic field. In the case of photons, such emission was not only allowed but played a dominant role at small energies.
Before concluding this subsection, it is instructive to consider a simplified kinematic regime with \(\mathbf{k}_{\perp}=0\) (i.e., for \(\theta=0\) or \(\theta=\pi\)). It is the only case that was analyzed previously in the literature, see Ref. [32]. It corresponds to scalar boson emission in the direction of the magnetic field. Substituting \(\mathbf{k}_{\perp}=0\), the general result for the self-energy in Eq. (30) reduces to
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{0},k_{z})\right]=-\frac{g^{2}}{8 \pi\ell^{2}}\frac{\left(\Omega^{2}-k_{z}^{2}-4m^{2}\right)}{\sqrt{\Omega^{2} -k_{z}^{2}}}\sum_{n=0}^{\infty}\alpha_{n}\frac{\theta\left(\Omega^{2}-k_{z}^{2 }-4m^{2}-8n|qB|\right)}{\sqrt{\Omega^{2}-k_{z}^{2}-4m^{2}-8n|qB|}}h_{0}\left(n \right), \tag{33}\]
where we introduced \(\alpha_{n}=2-\delta_{n,0}\) and took into account that \(\lim_{\xi\to 0}\left[\mathcal{I}_{0}^{n,n^{\prime}}(\xi)\right]=\delta_{n,n^{ \prime}}\) and \(\lim_{\xi\to 0}\left[\mathcal{I}_{2}^{n,n^{\prime}}(\xi)\right]=2(n+1)\delta_{n, n^{\prime}}\)[38]. Compared to the general result in Eq. (30), this expression for the self-energy is much simpler. More importantly, from a physics viewpoint, the kinematics of allowed processes is very restrictive at \(\mathbf{k}_{\perp}=0\). In particular, no one-to-two particle-splitting processes contribute in this case at all. Only the particle-antiparticle annihilation processes do (and only if \(M>2m\)). Since the same does not hold at any nonzero \(\mathbf{k}_{\perp}\), such a simplified regime is an exceptional outlier. Furthermore, as we will see in the next section, the particle-splitting processes contribute substantially to the total emission rate in some kinematic regimes.
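For readers who wish to reproduce the numbers, a minimal numerical sketch of Eq. (33) is given below (in Python). It is an illustration only, not part of the original derivation; it assumes the standard convention for the magnetic length, \(\ell^{2}=1/|qB|\), and the parameter values are arbitrary.

```python
import numpy as np

def n_F(E, T):
    # Fermi-Dirac occupation number
    return 1.0 / (np.exp(E / T) + 1.0)

def im_sigma_parallel(Omega, kz, T, m=1.0, qB=4.0, g=1.0):
    """Im[Sigma^R(Omega, 0, kz)] from Eq. (33): at k_perp = 0 only the
    n = n' annihilation channels contribute, and the theta function
    truncates the Landau-level sum automatically."""
    X = Omega**2 - kz**2
    if X <= 4.0 * m**2:
        return 0.0
    total, n = 0.0, 0
    while X - 4.0*m**2 - 8.0*n*qB > 0.0:
        alpha_n = 1.0 if n == 0 else 2.0       # alpha_n = 2 - delta_{n,0}
        root = np.sqrt(1.0 - 4.0*(m**2 + 2.0*n*qB) / X)
        h0 = 2.0 - 2.0*(n_F(Omega/2 + abs(kz)*root/2, T)
                        + n_F(Omega/2 - abs(kz)*root/2, T))  # Eq. (32)
        total += alpha_n * h0 / np.sqrt(X - 4.0*m**2 - 8.0*n*qB)
        n += 1
    # 1/ell^2 = |qB| (assumed convention for the magnetic length)
    return -g**2 * qB / (8.0*np.pi) * (X - 4.0*m**2) / np.sqrt(X) * total

print(im_sigma_parallel(Omega=10.0, kz=2.0, T=5.0))
```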
### Zero magnetic field limit
Here we verify that the result for the self-energy in Eq. (30) is consistent with the known zero-field limit. For our purposes, it is sufficient to consider only the case with \(\mathbf{k}_{\perp}=0\).
To consider the limit of vanishing magnetic field in Eq. (33), we introduce a continuous variable \(v=2n|qB|\) and replace the sum over \(n\) with an integral over \(v\). Then, we have
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]=-\frac{g^{2}}{8\pi} \frac{\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)}{\sqrt{\Omega^{2}-| \mathbf{k}|^{2}}}\theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\int_{0 }^{v_{0}}\frac{dv}{\sqrt{v_{0}-v}}\left[1-\sum_{s_{2}=\pm}n_{F}\left(\frac{ \Omega}{2}+s_{2}\frac{|\mathbf{k}|\sqrt{v_{0}-v}}{\sqrt{\Omega^{2}-|\mathbf{k }|^{2}}}\right)\right], \tag{34}\]
where the upper limit of integration is \(v_{0}=\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)/4\). In the last expression, we also replaced \(|k_{z}|\) with \(|\mathbf{k}|\) in view of the Lorentz symmetry, which is restored in the absence of a magnetic field.
After introducing the new integration variable \(u=|\mathbf{k}|\sqrt{v_{0}-v}/\sqrt{\Omega^{2}-|\mathbf{k}|^{2}}\), we obtain
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]=-\frac{g^{2}}{4\pi}\frac{\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)}{|\mathbf{k}|}\theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\int_{0}^{u_{0}}du\left[1-\sum_{s_{2}=\pm}n_{F}\left(\frac{\Omega}{2}+s_{2}u\right)\right], \tag{35}\]
where
\[u_{0}=\frac{|\mathbf{k}|\sqrt{\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}}}{2\sqrt{ \Omega^{2}-|\mathbf{k}|^{2}}}. \tag{36}\]
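The remaining thermal integral is elementary,

\[\int_{0}^{u_{0}}du\left[1-n_{F}\!\left(\frac{\Omega}{2}+u\right)-n_{F}\!\left(\frac{\Omega}{2}-u\right)\right]=u_{0}+T\ln\frac{1+e^{-E_{+}/T}}{1+e^{-E_{-}/T}},\]

which follows from the antiderivative \(\int du\,n_{F}(a\pm u)=\mp T\ln\left(1+e^{-(a\pm u)/T}\right)+\mathrm{const}\) with \(a=\Omega/2\) and \(E_{\pm}=\Omega/2\pm u_{0}\).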
Finally, after integrating over \(u\), we derive
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]=-\frac{g^{2}}{8\pi}\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\left[\frac{\sqrt{\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}}}{\sqrt{\Omega^{2}-|\mathbf{k}|^{2}}}+\frac{2T}{|\mathbf{k}|}\ln\frac{1+e^{-E_{+}/T}}{1+e^{-E_{-}/T}}\right]\theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right). \tag{37}\]
Note that \(E_{\pm}\equiv\Omega/2\pm u_{0}\) coincide with the definitions in Eq. (110) in Appendix A. The final result for the imaginary part of self-energy in Eq. (37) also agrees with the \(B=0\) expression given in Eq. (111).
When the scalar bosons are on the mass shell, i.e., \(\Omega^{2}=M^{2}+|{\bf k}|^{2}\), one has
\[\mathrm{Im}\left[\Sigma^{R}({\bf k})\right]\Big{|}_{\Omega^{2}=M^{2}+|{\bf k}|^ {2}}=-\frac{g^{2}}{8\pi}\left(M^{2}-4m^{2}\right)\left[\frac{\sqrt{M^{2}-4m^{2 }}}{\sqrt{M^{2}}}+\frac{2T}{|{\bf k}|}\ln\frac{1+e^{-E_{+}/T}}{1+e^{-E_{-}/T}} \right]\theta\left(M^{2}-4m^{2}\right). \tag{38}\]
As we see, this expression is nonvanishing only when \(M^{2}\geq 4m^{2}\). From a physics viewpoint, it indicates that the annihilation processes are the only ones contributing. It is not surprising since one-to-two particle-splitting processes (\(\psi\rightarrow\psi+\phi\) and \(\bar{\psi}\rightarrow\bar{\psi}+\phi\)) are forbidden without a background magnetic field. The latter is evident when considering the process in the rest frame of the boson. (Curiously, such one-to-two processes may be allowed when the masses of the initial and final fermions are different [56].) In the case of a nonzero magnetic field, in contrast, particle-splitting processes are allowed because the momentum conservation constraint in the plane perpendicular to the field is relaxed.
## IV Numerical results
Here, we use the imaginary part of self-energy derived in the previous section to analyze the differential emission rate of neutral bosons from a magnetized plasma. Because of an elaborate expression in Eq. (30) and the complications due to the sum over Landau levels, the angular dependence of the rate in Eq. (2) is hard to comprehend. Therefore, here we study it with the help of numerical methods.
In the model at hand, two qualitatively different regimes exist. They are determined by the value of the scalar boson mass \(M\), which can be either greater or less than the fermion-antifermion threshold \(2m\). In the subthreshold regime (\(M<2m\)), no scalar boson production can occur without a background magnetic field at the leading order in coupling. The situation changes when \(B\neq 0\). The annihilation becomes possible when the scalar boson energy exceeds the threshold of \(2m\). More interestingly, the boson production via particle-splitting processes is allowed in the whole range of energies \(\Omega>M\).
Below, we will study both regimes by considering the following two representative cases: \(M=3m\) (suprathreshold) and \(M=m/3\) (subthreshold). In each case, we will study the angular dependence of the rate in detail for several representative values of the magnetic field and temperature. As we will see, the behavior of the differential rates will be very different, especially at small values of the polar angle \(\theta\).
To reduce the number of free parameters and simplify the analysis, we will express all dimensionful quantities in units of the fermion mass \(m\). We will consider two different values of the magnetic field, i.e., \(|qB|=(2m)^{2}\) (moderate field) and \(|qB|=(5m)^{2}\) (strong field), and two different temperatures, i.e., \(T=5m\) and \(T=15m\), that correspond to moderately relativistic and ultrarelativistic plasmas, respectively. Without loss of generality, we will use the Yukawa coupling \(g=1\) in numerical calculations below.
When calculating numerically the imaginary part of self-energy (30), one needs to sum over Landau-level indices \(n\) and \(n^{\prime}\). Since the corresponding double series is convergent, one may truncate the summation at a sufficiently large finite \(n_{\mathrm{max}}\). Its value will be determined by the largest energy scale in the problem, which will be set by either the temperature or the scalar boson energy \(\Omega\). The latter will be varied in a range from \(\Omega=M\) up to about \(\Omega\simeq 35m\) (for \(|qB|=4m^{2}\)) and \(\Omega\simeq 90m\) (for \(|qB|=25m^{2}\)). Thus, from general considerations, one should include at least a sufficient number of Landau levels to open the phase space for the processes with the largest energies. This leads to the following bound from below:
\[n_{\mathrm{max}}\gtrsim\left[\mathrm{max}\left\{\frac{T^{2}}{2|qB|},\frac{ \Omega^{2}}{2|qB|}\right\}\right], \tag{39}\]
where the square brackets represent the integer part.
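As a quick illustration (not from the original text), the bound in Eq. (39) can be tabulated for the parameter sets used below; the helper function and its safety factor `margin` are hypothetical conveniences.

```python
def landau_cutoff(T, Omega, qB, margin=1.0):
    """Lower bound on n_max from Eq. (39); margin > 1 pads the
    estimate when checking convergence of the truncated sum."""
    return int(margin * max(T**2, Omega**2) / (2.0 * qB))

# parameter sets quoted in the text (all quantities in units of m = 1)
for qB, Omega_max in [(4.0, 35.0), (25.0, 90.0)]:
    for T in (5.0, 15.0):
        print(f"|qB|={qB:5.1f}, T={T:5.1f}: n_max >= "
              f"{landau_cutoff(T, Omega_max, qB)}")
```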
### Moderate magnetic field, \(|{\bf qB}|=4{\bf m^{2}}\)
Let us start the study of the differential rate as a function of the angular coordinate \(\theta\) in the case of a moderately strong magnetic field \(|qB|=(2m)^{2}\). To achieve a high angular resolution, we will use the discretization steps of \(\Delta\theta=\pi/(2n_{\theta})\) with \(n_{\theta}=10^{3}\). The direction along the magnetic field corresponds to \(\theta=0\), and the perpendicular direction is \(\theta=\pi/2\). There is no need to consider \(\theta>\pi/2\), as the corresponding rates can be obtained using the symmetry with respect to mirror reflection in the \(xy\)-plane. Indeed, such a symmetry remains unbroken in the presence of a constant background magnetic field.
Representative numerical results for the differential rates are shown in Fig. 3 for two fixed temperatures, i.e., \(T=5m\) (left panels) and \(T=15m\) (right panels), as well as two choices of the scalar boson mass, i.e., \(M=3m\) (top panels)
and \(M=m/3\) (bottom panels). Different lines correspond to different energies of neutral scalar bosons. They satisfy the mass-shell condition: \(\Omega=\sqrt{M^{2}+k_{\perp}^{2}+k_{z}^{2}}\), where \(k_{\perp}=k\sin\theta\) and \(k_{z}=k\cos\theta\).
By comparing the results for two different temperatures in the left and right panels of Fig. 3, we see that the rates tend to grow with temperature, as expected. In the case of \(M=3m\), the growth is relatively weak at first, when the energy exceeds the threshold \(\Omega\gtrsim M\) only slightly. It becomes more pronounced at higher values of energy. From a different perspective, the average rates decrease with increasing scalar boson energy. However, one sees a substantial suppression only after the energy exceeds the plasma's temperature. To a large degree, such behavior is dominated by the annihilation processes.
It is worth noting that the contribution of the lowest Landau level to the total rate remains relatively modest across the whole range of scalar boson energies. It plays a significant role only in the suprathreshold case (\(M=3m\)) at small temperatures, when the scalar boson's energy is only slightly higher than its minimum value \(\Omega_{\rm min}=M\). This observation underscores the limitations of the so-called lowest Landau level approximation, which is often employed to obtain simple estimates in the strong field regime. As we see, in relativistic plasmas, relying on such an approximation would yield unreliable results.
The growth of rates with increasing temperature is more pronounced in the case of a subthreshold scalar boson mass, i.e., \(M=m/3\), as seen from the two bottom panels of Fig. 3. The qualitative behavior is also different, especially at small values of the polar angle \(\theta\). To understand this subthreshold regime, it is important to remember that the scalar production is possible only because of a nonzero magnetic field. Since \(M<2m\), neither annihilation nor (anti)particle-splitting processes can occur at \(\theta=0\), see Eq. (33) and related discussion. This is in drastic difference to the suprathreshold case in the two top panels of Fig. 3.
For both temperatures and both values of the scalar mass, the differential rates tend to grow on average as a function of \(\theta\). It implies that the scalar bosons are emitted predominantly in the directions perpendicular to the magnetic field. We can easily visualize the corresponding emission profiles using the polar plots in Fig. 4. According to our convention, the magnetic field points upwards. The six individual panels show the polar plots for emission rates of bosons with several fixed energies and the two mass choices: \(M=3m\) (top panels) and \(M=m/3\) (bottom panels). The red lines represent the partial contributions of the annihilation rates, the green lines represent the particle-splitting rates, and the blue lines give the total rates. We show the results only for one temperature, \(T=15m\). The results for another
Figure 3: Neutral scalar boson differential production rates for several different energies and two fixed temperatures: \(T=5m\) (left panels) and \(T=15m\) (right panels). The magnetic field is \(|qB|=4m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
temperature (\(T=5m\)) are qualitatively similar but have different magnitudes and contain slightly different admixture of annihilation and particle-splitting processes. Their relative contributions will become clear when we discuss the total rates below.
As seen from Fig. 4, both annihilation (red lines) and particle-splitting (green lines) processes tend to provide higher rates of the scalar boson production in the directions perpendicular to the magnetic field. While having similar butterfly-shaped profiles, the relative magnitudes of the two types of contributions vary with model parameters. In the suprathreshold case \(M=3m\), annihilation dominates at almost all energies. In the subthreshold case \(M=m/3\), however, the particle-splitting processes contribute more at small energies, but annihilation overtakes them at large energies. It is interesting to draw attention to the special case of \(\Omega=1.5m\) when the boson mass is \(M=m/3\), which falls into the subthreshold regime with \(M<\Omega<2m\). In this case, particle-splitting processes are the only ones contributing to the total rate. It is the reason why the corresponding (bottom left) panel in Fig. 4 has only blue lines visible. (Technically, the green lines, with the exact same profile, hide behind the blue ones.)
Let us now turn to the total rate \(dR/d\Omega\) integrated over all angular directions, as defined in Eq. (4). It describes the production rate (per unit time and unit volume) of scalar bosons with energies between \(\Omega\) and \(\Omega+d\Omega\). Unlike the differential rate, its expression contains an extra power of momentum, which accounts for the available phase space. Clearly, such a phase space collapses when \(\Omega\) approaches \(M\) from above. Then, the rate \(dR/d\Omega\) should also vanish when \(\Omega\to M\). We will see below that this is indeed the case. The extra power of the momentum in the definition will also explain why \(dR/d\Omega\) does not start decreasing with energy until \(\Omega\) becomes several times the plasma's temperature.
For the case of the moderately strong field \(|qB|=4m^{2}\), the corresponding rates as functions of the energy are summarized in the two left panels of Fig. 5. The other two panels on the right show the ellipticity measure \(v_{2}\) for the scalar boson emission, formally defined by Eq. (5). In all the panels, the color coding represents temperature, with the blue for \(T=5m\) and the red for \(T=15m\). In addition to the total rates (filled circles) shown in the panels on the left, we also display the separate partial contributions due to annihilation (open diamonds) and particle-splitting
Figure 4: The angular profile of the scalar boson production rates for several different energies and fixed temperature \(T=15m\). The magnetic field is \(|qB|=4m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels). Each panel contains separate contributions due to annihilation (red lines) and particle-splitting (green lines) processes, as well as the total rates (blue lines).
(open squares) processes. For guiding the eye, we connected the points with different lines: solid (total rate), dotted (annihilation part) and dashed (particle-splitting part), respectively. For comparison, the dash-dotted lines represent the rates in the limit of the vanishing magnetic field. As we argued before, such a limit is meaningful only for \(M=3m\) (suprathreshold case). For subthreshold scalar mass \(M=m/3\), the rates vanish without a magnetic field.
The rates for all model parameters represented in Fig. 5 share many similar features. Overall, they have a tendency to grow with increasing temperature. It is easy to understand since the number densities of both occupied positive-energy states and unoccupied negative-energy states increase with temperature. The availability of (anti)particles in such states, in turn, opens the phase space for all relevant processes producing scalar bosons. On the other hand, as a function of energy, the rates grow at first, reach a maximum value around \(\Omega\sim 1.7T\), and then decrease. After passing the peak value, the behavior at high energies gradually approaches an exponential asymptote, i.e., \(dR/d\Omega\sim\exp{(-\Omega/T)}\).
By comparing the partial contributions of different types of processes in the two left panels of Fig. 5, we see that it is the annihilations rather than the particle splittings that dominate at sufficiently large energies. The interplay of the two is more subtle at low energies, where the relative contributions depend on the scalar boson mass. For the suprathreshold mass, \(M=3m\), the annihilation is more likely to dominate the total rate for (almost) all energies. For the subthreshold mass, \(M=m/3\), on the other hand, the particle-splitting processes give larger contributions in a range of small energies, \(\Omega\lesssim 1.7T\). Still, even for \(M=m/3\), the annihilation eventually takes over at higher energies.
Now let us turn to the results for the ellipticity parameter \(v_{2}\), shown in the two right panels of Fig. 5. In general, as we see, the values of \(v_{2}\) are positive and relatively large. At high energies, typical values of \(v_{2}\) are of the order of 0.2 to 0.3. The values tend to go down with increasing temperature, though. There are some qualitative differences between the cases of \(M=3m\) (suprathreshold) and \(M=m/3\) (subthreshold), especially in the range of small energies, i.e., \(\Omega\lesssim 1.7T\). In particular, for \(M=3m\), the ellipticity parameter \(v_{2}\) shows a wide range of variations at small energies. It can even take negative values. These variations come from a finite number of quantum transitions between Landau levels that produce large threshold spikes in some directions and, thus, dramatically affect \(v_{2}\). In contrast, for \(M=m/3\), the ellipticity parameter tends to grow by a factor of two or more with decreasing energy from \(\Omega=2m\) down to \(\Omega=m/3\). Recall that, in this energy range, many particle-splitting processes contribute. They
Figure 5: The total rates and ellipticity of scalar boson emission from a magnetized plasma at two different temperatures: \(T=5m\) (blue lines) and \(T=15m\) (red lines). The magnetic field is \(|qB|=4m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
do not allow scalar boson production in the direction \(\theta=0\) and, thus, tend to give large \(v_{2}\).
### Strong magnetic field, \(|\mathbf{qB}|=\mathbf{25m^{2}}\)
Now let us consider the case of a strong field, i.e., \(|qB|=(5m)^{2}\). As in the previous subsection, we will start from the representative numerical results for the differential rates as functions of the angular coordinate \(\theta\). The rates for several fixed values of the scalar boson energy are displayed in Fig. 6. It includes four panels with the results for two different temperatures, \(T=5m\) (left panels) and \(T=15m\) (right panels), and two different scalar boson masses, \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
The strong field results in Fig. 6 are qualitatively similar to those in the weaker field, presented earlier in Fig. 3. As before, the rates generally grow with temperature. Also, their dependence on the angular coordinate \(\theta\) is similar too: (i) on average, the rates tend to increase with \(\theta\) and (ii) the functional dependence around \(\theta=0\) changes in the same way when one goes from the suprathreshold (\(M=3m\)) to the subthreshold (\(M=m/3\)) scalar boson mass. By comparing the results in Figs. 3 and 6, we also find that the rates are considerably higher in the case of stronger field.
The emission profiles and relative contributions of the annihilation and particle-splitting processes in the case of strong field, \(|qB|=25m^{2}\), remain about the same as in the weaker field, \(|qB|=4m^{2}\). Several representative profiles with characteristic butterfly shapes are displayed in six polar plots in Fig. 7. For the scalar mass \(M=3m\), the angular distribution of emission is particularly simple at small energies. One such case, for \(\Omega=6m\), is displayed in the top left panel of Fig. 7. At such low energy, the only allowed annihilation processes are those between the lowest Landau levels. As a result, the corresponding rate visualized by the red line has a very smooth profile. Interestingly, it is one of those special cases when the annihilation has a slightly higher probability of producing the scalar boson in the direction parallel to the magnetic field. Nevertheless, the particle-splitting processes overcompensate due to their much higher probability to produce scalar bosons in the directions perpendicular to the magnetic field.
There are no surprises in the case of the subthreshold boson mass, \(M=m/3\). When \(M<\Omega<2m\), again only the particle-splitting processes contribute. It explains why only the blue-line profile is shown in the bottom left panel of Fig. 7, which corresponds to \(\Omega=1.5m\). With increasing the energy, the role of annihilation processes grows and they
Figure 6: Neutral scalar boson differential production rates for several different energies and two fixed temperatures: \(T=5m\) (left panels) and \(T=15m\) (right panels). The magnetic field is \(|qB|=25m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
eventually dominate the total rate even for the subthreshold values of the boson mass. In fact, the emission profiles and relative contributions of different processes become very similar at large energies irrespective of the boson mass.
For the case of \(|qB|=25m^{2}\), the total rates \(dR/d\Omega\) integrated over the angular directions are shown in the two left panels of Fig. 8. The two right panels contain the data for the ellipticity measure \(v_{2}\) of the scalar boson production. As before, the results for the lower temperature, \(T=5m\), are represented by the blue lines and those for the higher temperature, \(T=15m\), are represented by the red lines. Additionally, the filled circles are used as plot markers for the total rate, the open diamonds for annihilation contributions, and the open squares for particle-splitting processes. In the suprathreshold case, \(M=3m\), we show additionally the zero-field rates, represented by the dash-dotted lines. (Recall that no nontrivial zero-field limit exists in the subthreshold case with \(M=m/3\).)
The energy dependence of the total rates in Fig. 8 is very similar to the weaker field case in Fig. 5. The rates grow with increasing the temperature. The dependence on the scalar boson energy vaguely resembles the black body radiation: the rates grow from zero to its maximum value around the energy \(\Omega\sim 1.7T\) and then decrease by gradually approaching the exponential asymptote, \(dR/d\Omega\sim\exp{(-\Omega/T)}\).
The relative contributions of the annihilation and particle-splitting processes can be read off from Fig. 8 too. While particle-splittings dominate in a range of small energies, \(\Omega\lesssim 1.7T\), the annihilation overwhelms the total rate at high energies, \(\Omega\gtrsim 1.7T\). In the case of larger (smaller) scalar mass \(M=3m\) (\(M=m/3\)), the corresponding switch of the two regimes occurs at slightly lower (higher) energies. Such a correlation is not surprising since the relative role of annihilation processes is larger (smaller) in the suprathreshold (subthreshold) case.
The ellipticity measure \(v_{2}\) of the scalar boson production is again positive and relatively large. Its values are in the same ballpark of \(0.2\) to \(0.3\). As in the case of the weaker field, \(v_{2}\) gets slightly suppressed with increasing the temperature. The prominent differences between the cases of \(M=3m\) (suprathreshold) and \(M=m/3\) (subthreshold) appear only at small energies, i.e., \(\Omega\lesssim 1.7T\).
Figure 7: The angular profile of the scalar boson production rates for several different energies and fixed temperature \(T=5m\). The magnetic field is \(|qB|=25m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels). Each panel contains separate contributions due to annihilation (red lines) and particle-splitting (green lines) processes, as well as the total rates (blue lines).
## V Conclusions
In this paper, we have derived an analytic expression for the imaginary (absorptive) part of the scalar boson's self-energy within a strongly magnetized relativistic plasma. The model we consider involves a neutral scalar field that interacts with charged fermions through a Yukawa-type coupling. We use the expression for the imaginary part of self-energy to calculate the differential production rate of scalar bosons. In view of the principle of detailed balance, this same quantity also determines the absorption rate of scalar bosons in the magnetized plasma.
As evident from the explicit expression we have derived, the production rate is determined by three distinct types of processes: particle-splitting (\(\psi\rightarrow\psi+\phi\)), antiparticle-splitting (\(\bar{\psi}\rightarrow\bar{\psi}+\phi\)), and particle-antiparticle annihilation (\(\psi+\bar{\psi}\rightarrow\phi\)). All such processes have been thoroughly analyzed, with careful consideration given to the effects of Landau-level quantization of charged fermions. In the context of a high-temperature relativistic plasma (i.e., \(T\gtrsim\sqrt{|qB|}\)), our findings reveal that a large number of Landau levels contributes to the rate. In essence, this implies that one cannot rely on the commonly employed lowest Landau level approximation even when the magnetic field is very strong compared to the scale set by the fermion mass.
The energy dependence of the rates exhibits a resemblance to a black body spectrum, featuring a peak at an intermediate energy level comparable to the plasma's temperature. In our study of several representative cases, we have found that the peak typically occurs at approximately \(\Omega\simeq 1.7T\). Also, the rates grow with increasing temperature. The influence of thermal effects can be readily understood. As the temperature rises, the number of occupied positive-energy states and unoccupied negative-energy states grows. It leads to a larger phase space for all the processes contributing to the scalar boson production.
The rates also exhibit growth with an increasing magnetic field, but the underlying physics is more subtle. One key aspect is the substantial relaxation of momentum conservation constraints provided by the background field. The case in point is the production of bosons through (anti)particle-splitting processes, which are prohibited in the absence of a magnetic field. Additionally, the high degeneracy of Landau levels likely plays a role in enhancing scalar boson production. As in the case of magnetic catalysis [57], one may argue that such degeneracy increases the average density of quantum states near small energies. In the case of a hot plasma, this effect translates into an increased
Figure 8: The total rates and ellipticity of scalar boson emission from a magnetized plasma at two different temperatures: \(T=5m\) (blue lines) and \(T=15m\) (red lines). The magnetic field is \(|qB|=25m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
phase space for annihilation processes. By comparing the results for two representative field strengths, \(\left|qB\right|=4m^{2}\) and \(\left|qB\right|=25m^{2}\), as well as for \(B=0\), we see that the presence of a magnetic field enhances the average rates.
We also studied in detail the dependence of the differential production rate on the angular coordinate and the scalar boson energy. The butterflylike emission profiles indicate a higher likelihood of boson production in directions perpendicular to the magnetic field. This preference for perpendicular emission is reflected in the ellipticity measure, denoted as \(v_{2}\), which typically assumes positive values in the range of \(0.2\) to \(0.3\) at high scalar boson energies. At small energies, on the other hand, the values of \(v_{2}\) exhibit greater variability due to energy quantization of the low-lying Landau-level states. In this regime, isolated energy thresholds can lead to abrupt changes in the \(v_{2}\) values, rendering this characteristic less informative and of limited utility.
As stated in the Introduction, we do not try to address phenomenological applications in this study. Nevertheless, we cannot help but note that our findings regarding the production (or decay) rate of scalar bosons may have important implications for cosmology. In particular, they suggest that the primordial magnetic field might exert an even stronger influence on the magnetic warm inflation scenario than previously reported in Refs. [53; 54]. Indeed, now we can fully substantiate the claim that the presence of the magnetic field significantly amplifies the total boson decay rate. Furthermore, the rate far exceeds the contribution from the lowest Landau level, which was employed as an estimate in Ref. [54].
###### Acknowledgements.
The visit of J. J.-U. to Arizona State University was supported by the Universidad Nacional Autonoma de Mexico through Facultad de Ciencias, CGEP-AANILD, and DGAPA-UNAM under Grant No. PAPIIT-IN108123. The work of I. A. S. was supported by the U.S. National Science Foundation under Grant No. PHY-2209470.
## Appendix A Zero magnetic field
In this appendix, for comparison purposes, we derive the imaginary part of the scalar boson self-energy in the limit of vanishing magnetic field. Similar results at nonzero temperature can be found in the literature, e.g., see Refs. [56; 58].
At the leading order, the scalar boson self-energy is given by
\[\Sigma(k)=ig^{2}\int\frac{d^{4}p}{(2\pi)^{4}}\mathrm{Tr}\left[S(p)S(p-k)\right], \tag{101}\]
which is the momentum space representation of a definition analogous to Eq. (7). In the absence of a background field, the fermion propagator reads
\[S(p)=i\frac{p\!\!\!/+m}{p^{2}-m^{2}+i\epsilon}. \tag{102}\]
After calculating the Dirac trace and replacing the energy integration with the Matsubara sum, we derive
\[\Sigma\left(i\Omega_{m},\mathbf{k}\right)=4g^{2}T\sum_{n=-\infty}^{\infty}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{i\omega_{n}\left(i\omega_{n}-i\Omega_{m}\right)-\mathbf{p}\cdot\left(\mathbf{p}-\mathbf{k}\right)+m^{2}}{\left[\left(i\omega_{n}\right)^{2}-E_{p}^{2}\right]\left[\left(i\omega_{n}-i\Omega_{m}\right)^{2}-E_{p-k}^{2}\right]}, \tag{103}\]
where we have introduced the notation for the fermion energies \(E_{p}=\sqrt{\mathbf{p}^{2}+m^{2}}\) and \(E_{p-k}=\sqrt{(\mathbf{p}-\mathbf{k})^{2}+m^{2}}\).
The zero-field result above is analogous to Eq. (16) in the main text. Similarly, we use Eq. (18) to compute the Matsubara sum and arrive at the following result:
\[\Sigma^{R}\left(\Omega,\mathbf{k}\right)=g^{2}\sum_{\eta,\lambda=\pm 1}\int \frac{d^{3}p}{(2\pi)^{3}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{ p-k}\right)}{\lambda E_{p}E_{p-k}\left(E_{p}-\lambda E_{p-k}+\eta\Omega+i \eta\epsilon\right)}\left[\lambda E_{p}E_{p-k}-\mathbf{p}\cdot\left(\mathbf{ p}-\mathbf{k}\right)+m^{2}\right], \tag{104}\]
where we performed the analytical continuation to Minkowski space by replacing \(i\Omega_{m}\longrightarrow\Omega+i\epsilon\). To separate the real and imaginary parts, we utilize the Sokhotski formula,
\[\frac{1}{E_{p}-\lambda E_{p-k}+\eta\Omega+i\eta\epsilon}=\mathcal{P}\frac{1}{E_{p}-\lambda E_{p-k}+\eta\Omega}-i\eta\pi\delta\left(E_{p}-\lambda E_{p-k}+\eta\Omega\right). \tag{105}\]
Then, the imaginary part of the self-energy is given by
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right]=-g^{2}\pi\sum_{\eta,\lambda=\pm 1}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\eta\lambda E_{p}E_{p-k}}\left[\lambda E_{p}E_{p-k}-\mathbf{p}\cdot\left(\mathbf{p}-\mathbf{k}\right)+m^{2}\right]\delta\left(E_{p}-\lambda E_{p-k}+\eta\Omega\right). \tag{106}\]
The remaining integration over the loop momenta can be performed by switching to spherical coordinates,
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right] = -g^{2}\pi\sum_{\eta,\lambda=\pm 1}\int_{0}^{\infty}\int_{-1}^{1}\int_{0}^{2\pi}\frac{\mathbf{p}^{2}\ dp\ dx\ d\varphi}{(2\pi)^{3}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\eta\lambda E_{p}E_{p-k}} \tag{107}\] \[\times\left[\lambda E_{p}E_{p-k}-\mathbf{p}^{2}+|\mathbf{p}||\mathbf{k}|x+m^{2}\right]\delta\left(E_{p}-\lambda E_{p-k}+\eta\Omega\right)\] \[= -g^{2}\pi\sum_{\eta,\lambda=\pm 1}\int_{0}^{\infty}\int_{-1}^{1}\frac{\mathbf{p}^{2}\ dp\ dx}{(2\pi)^{2}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\eta\lambda E_{p}E_{p-k}}\] \[\times\left[\lambda E_{p}E_{p-k}-\mathbf{p}^{2}+|\mathbf{p}||\mathbf{k}|x+m^{2}\right]|E_{p}+\eta\Omega|\frac{\delta\left(x-x_{0}\right)}{|\mathbf{p}||\mathbf{k}|},\]
where we used the properties of the Dirac \(\delta\)-function and took into account the following solution to the energy-conservation equation:
\[x_{0}=-\frac{\Omega^{2}-|\mathbf{k}|^{2}+2\eta E_{p}\Omega}{2|\mathbf{p}||\mathbf{k}|}. \tag{108}\]
Changing the integration variable from \(p\) to energy \(E_{p}\), we derive
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right]=-\frac{g^{2}\pi}{(2\pi)^{2}}\int_{E_{-}}^{E_{+}}dE_{p}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(E_{p}-\Omega\right)}{|\mathbf{k}|}\left(2m^{2}-\frac{\Omega^{2}-|\mathbf{k}|^{2}}{2}\right)\Theta\left(\Omega-E_{p}\right)\Theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right), \tag{109}\]
where the integration limits are defined by
\[E_{\pm}\equiv\frac{\Omega}{2}\pm\frac{|\mathbf{k}|}{2}\sqrt{1-\frac{4m^{2}}{\Omega^{2}-|\mathbf{k}|^{2}}}. \tag{110}\]
These were obtained by requiring that \(-1<x_{0}<1\). After integrating over the energy, the final result reads
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right]=-\frac{g^{2}}{8\pi}\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\left[\sqrt{1-\frac{4m^{2}}{\Omega^{2}-|\mathbf{k}|^{2}}}+\frac{2T}{|\mathbf{k}|}\ln\left(\frac{1+e^{-\beta E_{+}}}{1+e^{-\beta E_{-}}}\right)\right]\Theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right). \tag{111}\]
We investigate the differential emission rate of neutral scalar bosons from a strongly magnetized relativistic plasma. We show that three types of processes play a defining role: particle splitting ($\psi\rightarrow \psi+\phi $), antiparticle splitting ($\bar{\psi} \rightarrow \bar{\psi}+\phi $), and particle-antiparticle annihilation ($\psi + \bar{\psi}\rightarrow \phi $). This is in contrast to the zero-field case, where only the annihilation process contributes to boson production. To understand the underlying physics, we assess the effects of Landau-level quantization and study the energy dependence of the rate, as well as the angular distribution of the emitted scalar bosons. The differential emission rates obtained from both (anti)particle-splitting and annihilation processes are suppressed in the direction of the magnetic field,
2309.12537 | Run-and-tumble oscillator: moment analysis of stationary distributions | When it comes to active particles, even an ideal-gas model in a harmonic
potential poses a mathematical challenge. An exception is a run-and-tumble
model (RTP) in one-dimension for which a stationary distribution is known
exactly. The case of two-dimensions is more complex but the solution is
possible. Incidentally, in both dimensions the stationary distributions
correspond to a beta function. In three-dimensions, a stationary distribution
is not known but simulations indicate that it does not have a beta function
form. The current work focuses on the three-dimensional RTP model in a harmonic
trap. The main result of this study is the derivation of the recurrence
relation for generating moments of a stationary distribution. These moments are
then used to recover a stationary distribution using the Fourier-Legendre
expansion. | Derek Frydel | 2023-09-21T23:29:00 | http://arxiv.org/abs/2309.12537v1 | # Run-and-tumble oscillator: moment analysis of stationary distributions
###### Abstract
When it comes to active particles, even an ideal-gas model in a harmonic potential poses a mathematical challenge. An exception is a run-and-tumble model (RTP) in one-dimension for which a stationary distribution is known exactly. The case of two-dimensions is more complex but the solution is possible. Incidentally, in both dimensions the stationary distributions correspond to a beta function. In three-dimensions, a stationary distribution is not known but simulations indicate that it does not have a beta function form. The current work focuses on the three-dimensional RTP model in a harmonic trap. The main result of this study is the derivation of the recurrence relation for generating moments of a stationary distribution. These moments are then used to recover a stationary distribution using the Fourier-Lagrange expansion.
## I Introduction
An ideal gas of active particles in a harmonic trap at first glance appears to be a simple toy model with ready solutions and useful insights. The fact that such solutions are still lacking, or in the making, highlights that active matter, even at the most basic level, is a challenge, and that experimentation with alternative formulations is needed and justified.
In this work we focus on stationary marginal distributions, which we designate as \(p\), of run-and-tumble particles (RTP) in a harmonic trap. In one [1; 2; 3; 4] and two dimensions [5], those distributions have a beta-function form. In three dimensions, no exact expression for a distribution is available. Instead of obtaining an expression for \(p\) directly, we calculate the moments of that distribution. The moments are generated by a recurrence relation obtained by transforming the Fokker-Planck equation. A stationary distribution \(p\) is then recovered from those moments using the Fourier-Legendre series expansion.
The main analysis in this work is carried out for a system at zero temperature and for a harmonic potential in a single direction \(u=\frac{1}{2}Kx^{2}\) (embedded in a higher dimension). This makes the analysis simpler since the system is effectively one-dimensional. To extend the results to finite temperatures, we use the convolution construction [5; 6], which is equivalent to Gaussian smearing of a distribution at zero temperature. It also turns out that the moments of a stationary distribution for the potential \(u=\frac{1}{2}Kx^{2}\) can be related to the moments of a stationary distribution for an isotropic potential \(u=\frac{1}{2}Kr^{2}\). This permits us to extend our analysis in a straightforward way to isotropic potentials.
To place the current work in a larger context, we mention a number of previous contributions to active particles in a harmonic potential. An extension of the RTP model in 1D and zero temperature to three discrete swimming velocities was considered in [3]. The RTP model in two dimensions (2D) with four discrete swimming velocities was investigated in [7]. A stationary distribution of active Brownian particles in 2D and for a finite temperature, represented as a series expansion, was considered in [8]. Dynamics of active Brownian particles (ABP) was recently investigated in [11]. A unifying approach to theoretical treatment of ABP and AOUP models in a harmonic trap was carefully investigated in [10]. Rare events in the context of active particles in a harmonic potential were considered in [12]. Active particles in harmonic chains were considered in [13; 14; 15]. Experimental realizations of active particles in a harmonic trap are found in [16], using acoustic traps, and in [17], using optical tweezers. Entropy production rate of active particles in a harmonic trap was considered in [6; 18; 19; 20; 21; 22].
While an exact analysis of the RTP and ABP models in a harmonic trap can be challenging, the active Ornstein-Uhlenbeck particle (AOUP) model is more straightforward. The AOUP model has been developed to capture the behavior of a passive particle in a bath of active particles [23; 9]. Stationary distributions of this model in a harmonic trap have a Gaussian functional form, the same as that for passive Brownian particles, but with an effective temperature. Theoretical aspects of the AOUP model have been extensively investigated in [24; 25].
This paper is organized as follows. In Sec. (II) we consider RTP particles in a harmonic trap in 1D, examining distributions in position and velocity space to identify the presence of "nearly immobile" particles. In Sec. (III) we consider the RTP particles in a harmonic trap \(u=\frac{1}{2}Kx^{2}\) embedded in 2D. In Sec. (IV) we consider RTP particles embedded in 3D. By transforming the Fokker-Planck equation, we obtain a recurrence relation for generating moments of the stationary distribution. From the moments we then recover distributions using the Fourier-Legendre expansion. In Sec. (V) we extend the previous results to finite temperatures and then in Sec. (VI) to isotropic harmonic potentials. In Sec. (VII) we summarize the work and provide concluding remarks.
## II RTP particles in 1D
We start with the simple case: RTP particles in a harmonic trap \(u=\frac{1}{2}Kx^{2}\) in 1D [1; 2; 3; 4; 5]. Apart from examining stationary distributions in velocity space, Sec. (II.1), the section reviews previously derived results.
In one-dimension, swimming orientations are limited to two values, \(v_{swim}=\mp v_{0}\) and the Fokker-Planck formulation
yields two coupled differential equations [5]:
\[\dot{p}_{+} =\frac{\partial}{\partial x}\left[\left(\mu Kx-v_{0}\right)p_{+} \right]+\frac{1}{2\tau}\left(p_{-}-p_{+}\right)\] \[\dot{p}_{-} =\frac{\partial}{\partial x}\left[\left(\mu Kx+v_{0}\right)p_{-} \right]+\frac{1}{2\tau}\left(p_{+}-p_{-}\right), \tag{1}\]
where \(p_{+}\) and \(p_{-}\) are the distributions of particles with forward and backward orientations, \(\tau\) is the persistence time (which determines the average time a particle persists in a given direction), and \(\mu\) is the mobility. No thermal fluctuations are taken into account.
In a stationary state, \(\dot{p}_{\pm}=0\), and in dimensionless units the two equations become
\[0 =\frac{\partial}{\partial z}\left[\left(z-1\right)p_{+}\right]+ \frac{\alpha}{2}\left(p_{-}-p_{+}\right)\] \[0 =\frac{\partial}{\partial z}\left[\left(z+1\right)p_{-}\right]+ \frac{\alpha}{2}\left(p_{+}-p_{-}\right). \tag{2}\]
where
\[z=\frac{\mu Kx}{v_{0}},\]
is the dimensionless distance and
\[\alpha=\frac{1}{\tau\mu K}, \tag{3}\]
is the dimensionless rate of orientational change. Note that in one dimension the new direction of motion is chosen at the rate \(\alpha/2\) (rather than \(\alpha\)). The reason for this is that at the instant a particle changes its direction, there is a probability of \(1/2\) that it will select the same orientation. This problem does not arise in higher dimensions, where \(\alpha\) is the actual rate at which a particle changes its orientation. This should be kept in mind when we later compare the results for different dimensions.
The two coupled equations in Eq. (2) can be combined into a single differential equation for the total distribution \(p=p_{+}+p_{-}\),
\[0=(2-\alpha)zp-(1-z^{2})p^{\prime}, \tag{4}\]
which when solved yields [1; 2; 3; 4; 5]
\[p=A(1-z^{2})^{\frac{\alpha}{2}-1}, \tag{5}\]
and where the normalization factor, that assures \(\int_{-1}^{1}dz\,p(z)=1\), is given by
\[A=\frac{\Gamma\left(\frac{\alpha}{2}+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{\alpha}{2}\right)}. \tag{6}\]
Note that in the absence of thermal fluctuations \(p\) is defined on \([-1,1]\) as a result of a swimming velocity having a fixed magnitude, which restricts how far a particle can move away from a trap center.
The distribution \(p\) in Eq. (5) can be either concave, with a majority of particles accumulated at the trap borders as a result of slow orientational change, or convex, with a majority of particles concentrated around a trap center as a result of fast orientational change. The crossover between the two behaviors occurs at \(\alpha=2\), at which point \(p\) is uniform on the interval \([-1,1]\).
In addition to \(p\), it is possible to obtain distributions for a specific swimming direction:
\[p_{\pm}=\frac{A}{2}\left(1\pm z\right)\left(1-z^{2}\right)^{\frac{\alpha}{2}- 1}. \tag{7}\]
The expression above can be verified if inserted into Eq. (2).
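A quick symbolic check (a sketch using sympy; the normalization constant \(A\) drops out of the homogeneous equations) confirms this:

```python
import sympy as sp

z, alpha = sp.symbols('z alpha', positive=True)

# Eq. (7) with the normalization constant A set to 1
p_plus  = sp.Rational(1, 2) * (1 + z) * (1 - z**2)**(alpha/2 - 1)
p_minus = sp.Rational(1, 2) * (1 - z) * (1 - z**2)**(alpha/2 - 1)

# residuals of the two stationary equations (2)
res_plus  = sp.diff((z - 1) * p_plus,  z) + alpha/2 * (p_minus - p_plus)
res_minus = sp.diff((z + 1) * p_minus, z) + alpha/2 * (p_plus - p_minus)

print(sp.simplify(res_plus), sp.simplify(res_minus))  # expect: 0 0
```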
### distribution in \(w\)-space
For slow rates of orientational change, that is, for \(\alpha<2\), the accumulation of particles near the trap border takes the form of a divergence at \(z=\pm 1\), see Eq. (5). That divergence can be linked to the presence of "nearly immobile" particles accumulated at the trap border.
The existence of "nearly immobile" particles can be verified from a velocity distribution, manifested as a divergence at \(v=0\). In the overdamped regime, the two contributions to a velocity are a swimming velocity plus a contribution of a linear force of a harmonic trap, \(v=-\mu Kx\pm v_{0}\), in the dimensionless units given by
\[w=-z\pm 1, \tag{8}\]
where \(w=v/v_{0}\) is the dimensionless velocity.
A distribution in \(w\)-space can be inferred from a positional distribution in Eq. (7) by applying the change of variables suggested by Eq. (8). For particles with forward orientation, the substitution \(z=-w+1\) into \(p_{+}(z)\) leads to
\[p_{w}^{+}=\frac{A}{2}w^{\frac{\alpha}{2}-1}(2-w)^{\frac{\alpha}{2}},\ \ \ \mbox{defined on}\ \ \ 0<w<2.\]
The reason why the distribution for the forward swimming velocity is defined on \([0,2]\) can be understood from Eq. (8) and the fact that \(z\) is defined on \([-1,1]\). Given that \(p_{w}^{-}(w)=p_{w}^{+}(-w)\), a complete distribution defined on \([-2,2]\) is
\[p_{w}=\frac{A}{2}|w|^{\frac{\alpha}{2}-1}(2-|w|)^{\frac{\alpha}{2}}. \tag{9}\]
The divergence at \(w=0\) signals the presence of "nearly immobile" particles. We characterize these particles as "nearly immobile" since \(\lim_{e\to 0}\int_{-e}^{e}dw\,p_{w}=0\), which implies that there are no particles with zero velocity. Rather, there are particles whose velocity slowly converges to zero without ever attaining it.
Comparing Eq. (9) with Eq. (5) confirms that divergences in both distributions appear/disappear for the same value of \(\alpha\). This coextensiveness implies that the "nearly immobile" particles are concentrated around the trap borders at \(z=\pm 1\). Only at \(z=\pm 1\) can a particle attain zero velocity, and since \(\lim_{e\to 0}\int_{1-e}^{1}dz\,p(z)=\lim_{e\to 0}\int_{-1}^{-1+e}dz\,p(z)=0\), no particle can reach this position, except for \(\alpha=0\).
In Fig. (1) we plot \(p\) and \(p_{w}\) for three values of \(\alpha\). At the crossover, \(\alpha=2\), both distributions reduce to simple shapes: \(p\) is flat and \(p_{w}\) is triangular. Then for \(\alpha<2\), both distributions develop divergences.
### moment analysis
In this section, we briefly analyze the even moments \(\langle z^{2n}\rangle\) of the distribution \(p(z)\) in Eq. (5). (Odd moments are zero due to the even symmetry of \(p\).) The moments can be calculated directly from \(p\) using \(\langle z^{2n}\rangle=\int_{-1}^{1}dz\,p(z)z^{2n}\):
\[\langle z^{2n}\rangle=\frac{\Gamma\left(n+\frac{1}{2}\right)\Gamma\left( \frac{1}{2}+\frac{\alpha}{2}\right)}{\sqrt{\pi}\,\Gamma\left(\frac{1}{2}+ \frac{\alpha}{2}+n\right)},\quad\text{for }n=1,2,\ldots \tag{10}\]
The moments are monotonically decreasing with increasing \(n\). The infinite sum \(\sum_{n=0}^{\infty}\langle z^{2n}\rangle\) can be evaluated exactly and is given by
\[\sum_{n=0}^{\infty}\langle z^{2n}\rangle=\begin{cases}\frac{\alpha-1}{\alpha- 2}&\text{if }\alpha>2\\ \infty&\text{if }\alpha\leq 2,\end{cases} \tag{11}\]
where at the crossover the sum is seen to diverge. By representing the infinite sum by its generating function, we can connect this behavior to the divergence in \(p\):
\[\sum_{n=0}^{\infty}\langle z^{2n}\rangle=\left\langle\frac{1}{1-z^{2}}\right\rangle.\]
As particles accumulate at the borders, \(z\rightarrow\pm 1\) and the above expression diverges. This explains the divergence of the sum in Eq. (11).
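Both the moment formula and the value of the sum are easy to verify numerically; a minimal sketch (using log-gamma to avoid overflow) reads:

```python
from math import lgamma, log, exp, pi

def moment_1d(n, alpha):
    """Even moment <z^(2n)> from Eq. (10)."""
    return exp(lgamma(n + 0.5) + lgamma(0.5 + alpha/2)
               - 0.5 * log(pi) - lgamma(0.5 + alpha/2 + n))

alpha = 6.0
partial_sum = sum(moment_1d(n, alpha) for n in range(2000))
print(partial_sum, (alpha - 1) / (alpha - 2))  # both close to 1.25
```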
## III RTP oscillator in 2D
We next consider an RTP oscillator with 1D geometry, \(u=\frac{1}{2}Kx^{2}\), but embedded in 2D space. To set up the problem, we start with a Fokker-Planck equation for RTP particles in an arbitrary confinement:
\[\hat{\rho}=-\nabla\cdot\left[\left(\mu\mathbf{F}+v_{0}\mathbf{n}\right)\rho \right]+\hat{L}\rho, \tag{12}\]
where \(\mathbf{n}\) is the unit vector designating an orientation of the swimming velocity \(v_{swim}=v_{0}\mathbf{n}\), in 2D defined as \(\mathbf{n}=(\cos\theta,\sin\theta)\), where \(\theta\) is the angle of the orientation. The evolution of \(\mathbf{n}\) is governed by the operator \(\hat{L}\) given by
\[\hat{L}\rho=\frac{1}{\tau}\left[-\rho+\frac{1}{2\pi}\int_{0}^{2\pi}d\theta\, \rho(x,\theta)\right].\]
The two terms imply that particles with a given orientation \(\theta\) vanish at the rate \(\tau^{-1}\) and reappear at the same rate with a new, uniformly distributed orientation, the source term being proportional to the marginal distribution
\[p=\int_{0}^{2\pi}d\theta\,\rho(x,\theta).\]
Note that \(\int_{0}^{2\pi}d\theta\,\hat{L}\rho=0\). This condition is necessary if the total number of particles is to be conserved.
For \(u=\frac{1}{2}Kx^{2}\), the external force is \(\mathbf{F}=-Kx\mathbf{e}_{x}\). In a steady-state, only the component of \(\mathbf{v}_{swim}\) in the \(x\) direction is relevant, \(\mathbf{v}_{swim}\cdot\mathbf{e}_{x}=v_{0}\cos\theta\). This results in an effectively one-dimensional system governed by the following stationary Fokker-Planck equation:
\[0=\frac{\partial}{\partial z}\left[\left(z-\cos\theta\right)\rho\right]- \alpha\rho+\frac{\alpha}{2\pi}p, \tag{13}\]
given in dimensionless units.
The above Fokker-Planck equation can be interpreted as representing an RTP model in 1D with a continuous distribution of swimming velocities, which constitutes a generalized RTP model [26]. For a truly 1D RTP model, the distribution of swimming velocities is \(P\propto\delta(v_{swim}+v_{0})+\delta(v_{swim}-v_{0})\). The Fokker-Planck equation in Eq. (13) instead represents a system with the following distribution of swimming velocities: \(P\propto 1/\sqrt{v_{0}^{2}-v_{swim}^{2}}\). See Appendix A in [27].
There is no straightforward procedure to reduce Eq. (13) to a differential equation for \(p\), but it is possible to infer it from the moments of \(p\). (How to calculate such moments will be demonstrated when we analyze a system in 3D.) Because the moment formula in 2D was determined to have a similar structure to that in 1D, it was, in turn, possible to infer that a differential equation for \(p\) should have the same structure as that for a system in 1D; see Eq. (4). For 2D, the differential equation for \(p\) was determined to be [5]
\[0=(1-2\alpha)zp-(1-z^{2})p^{\prime}, \tag{14}\]
where the solution is a beta distribution
\[p(z)=A(1-z^{2})^{\alpha-\frac{1}{2}}, \tag{15}\]
and where the normalization constant is given by
\[A=\frac{\Gamma(\alpha+1)}{\sqrt{\pi}\Gamma\left(\alpha+\frac{1}{2}\right)}. \tag{16}\]
### distribution in \(w\)-space
For a system embedded in 2D, the velocity component in the \(x\)-direction is \(v=-\mu Kx+v_{0}\cos\theta\), in reduced units given
Figure 1: Stationary distributions in \(z\)- and \(w\)-space for different values of \(\alpha\). Divergence emerges at the crossover \(\alpha=2\) and is linked to the presence of immobile particles at trap borders.
by
\[w=-z+\cos\theta. \tag{17}\]
Compare this with Eq. (8).
The distribution \(p_{w}\) can be obtained from Eq. (13) by substituting for \(p\) the expression given in Eq. (15), followed by the change of variables \(z=-w+\cos\theta\) and integration over all orientations. This yields an inhomogeneous first-order differential equation
\[(1-\alpha)p_{w}+wp_{w}^{\prime}=-\frac{\alpha}{2\pi}I, \tag{18}\]
where
\[I(w)=2A\int_{-1}^{1-w}ds\,\left[1-s^{2}\right]^{\alpha-\frac{1}{2}}\left[1-(s +w)^{2}\right]^{-\frac{1}{2}}. \tag{19}\]
The solution is a distribution defined on \([-2,2]\)
\[p_{w}=\frac{\alpha}{2\pi}|w|^{\alpha-1}\left[\int_{|w|}^{2}dw^{\prime}\,w^{ \prime-\alpha}I(w^{\prime})\right]. \tag{20}\]
Although \(p_{w}\) has a more complicated form compared to that in Eq. (9) for a system in 1D, its general structure remains similar. The divergence at \(w=0\) comes from the factor \(|w|^{\alpha-1}\), which signals the existence of nearly immobile particles for \(\alpha<1\) and suggests a crossover at \(\alpha=1\). This, however, is not corroborated by \(p\) in Eq. (15), in which case divergences at the trap border emerge only for \(\alpha<1/2\).
The reason why divergences in \(p\) disappear at a lower value of \(\alpha\) is the averaging procedure used to obtain \(p\) from \(\rho\). Even if a distribution \(\rho\) exhibits divergences up to \(\alpha=1\), the averaging procedure \(p=\int_{0}^{2\pi}d\theta\,\rho\) smooths those divergences and effectively makes them disappear for \(\alpha\geq 1/2\). In Appendix (A) we analyze distributions \(\rho\) in more detail to back up these claims.
In Fig. (2) we plot a number of different distributions \(p_{w}\) for different values of \(\alpha\) calculated using Eq. (20), where the integral is evaluated numerically. Those distributions are compared with those obtained from simulations. Simulations were carried out using the Euler method for updating particle positions:
\[x(t+\Delta t)=x(t)+\left[v_{0}\cos\theta(t)-\mu Kx(t)\right]\Delta t,\]
with a new orientation \(\theta(t)\) selected after a waiting time drawn randomly from the exponential distribution \(P\propto e^{-t/\tau}\). For comparison, in Fig. (2) we also plot the corresponding distributions \(p\) below each \(p_{w}\).
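A minimal version of such a simulation, written in the reduced units of the text (\(\mu K=v_{0}=1\), so \(\tau=1/\alpha\); particle numbers and time steps are illustrative), is sketched below. Histograms of \(z\) and \(w=-z+\cos\theta\) should then reproduce the profiles of Eqs. (15) and (20).

```python
import numpy as np

def simulate_rtp_2d(alpha, n_part=50_000, t_max=20.0, dt=5e-3, seed=0):
    """Euler integration of z' = cos(theta) - z, with tumbling times
    drawn from the exponential distribution of mean 1/alpha."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_part)
    cos_th = np.cos(2.0 * np.pi * rng.random(n_part))
    t_next = rng.exponential(1.0 / alpha, n_part)
    t = 0.0
    while t < t_max:
        z += (cos_th - z) * dt
        t += dt
        tumble = t_next <= t
        k = int(tumble.sum())
        if k:
            cos_th[tumble] = np.cos(2.0 * np.pi * rng.random(k))
            t_next[tumble] = t + rng.exponential(1.0 / alpha, k)
    return z, -z + cos_th        # positions z and velocities w

z, w = simulate_rtp_2d(alpha=1.5)
p_w, edges = np.histogram(w, bins=80, range=(-2, 2), density=True)
```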
Unlike the distributions in Fig. (1) for the system in 1D, the peak at \(w=0\) does not fall to zero for \(\alpha\) above the crossover. We can calculate the height of \(p_{w}\) at \(w=0\) from Eq. (18) by setting \(w\) to zero. This yields \(p_{w}(0)=\frac{1}{2\pi}\frac{\alpha}{\alpha-1}I(0)\), which simplifies to
\[p_{w}(0)=\frac{1}{\alpha-1}\left[\frac{4^{\alpha}}{\pi}\frac{\alpha!\alpha!} {(2\alpha)!}\right]^{2}. \tag{21}\]
Rather than suddenly falling to zero for \(\alpha>1\), the peak height at \(w=0\) approaches zero algebraically as a function of \(\alpha\).
For half-integer values of \(\alpha\), it is possible to obtain exact expressions for \(p_{w}\). Those expressions are derived in Appendix (B).
Note that the crossover in 2D occurs at \(\alpha=1\) while that in 1D occurs at \(\alpha=2\). Yet if we recall the discussion below Eq. (3), the actual rate for a system in 1D is not \(\alpha\) but \(\alpha/2\). Therefore, considering the actual rates, the crossover in both dimensions occurs at the same rate.
### moment analysis
The moments of the distribution \(p(z)\) in Eq. (15) are obtained directly from the formula \(\langle z^{2n}\rangle=\int_{-1}^{1}dz\,p(z)z^{2n}\):
\[\langle z^{2n}\rangle=\frac{\Gamma\left(n+\frac{1}{2}\right)\Gamma(\alpha+1) }{\sqrt{\pi}\Gamma(\alpha+n+1)}. \tag{22}\]
The moments generate a monotonically decreasing sequence whose infinite sum is
\[\sum_{n=0}^{\infty}\langle z^{2n}\rangle=\left\langle\frac{1}{1-z^{2}}\right\rangle =\begin{cases}\frac{2\alpha}{2\alpha-1}&\text{if }\alpha>\frac{1}{2}\\ \infty&\text{if }\alpha\leq\frac{1}{2}.\end{cases} \tag{23}\]
The sum diverges at \(\alpha=\frac{1}{2}\). This is linked to divergences in \(p\) and does not represent a crossover. An actual crossover is determined from the behavior of \(p_{w}\).
## IV RTP particles in 3D: linear harmonic trap
For RTP particles in a harmonic trap in 3D, there is no available solution for a stationary distribution \(p\). Instead of trying to obtain such a distribution directly, in this section we focus on how to obtain exact expressions for the moments of
Figure 2: Distributions \(p_{w}\) for different values of \(\alpha\). Exact distributions (represented by lines) are obtained from Eq. (20). The circular symbols represent simulation data points. In addition, below each plot for \(p_{w}\) we plot a corresponding distribution \(p\) to emphasize qualitatively different behavior.
\(p\). The results of this section are the most important results of this work.
### moment analysis
The stationary Fokker-Planck equation for RTP particles in a harmonic potential \(u=\frac{K}{2}x^{2}\) embedded in 3D is obtained from Eq. (12) for the force \(\mathbf{F}=-Kx\mathbf{e}_{x}\) and
\[\hat{L}\rho=\frac{1}{\tau}\left[-\rho+\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin \theta\,\rho(x,\theta)\right].\]
The resulting equation in reduced units is
\[0=\frac{\partial}{\partial z}\left[(z-\cos\theta)\,\rho\right]-\alpha\rho+ \frac{\alpha}{2}p, \tag{24}\]
with the marginal distribution defined as
\[p(z)=\int_{0}^{\pi}d\theta\,\sin\theta\,\rho(z,\theta). \tag{25}\]
The Fokker-Planck equation in Eq. (24) can be transformed into the following recurrence relation:
\[A_{l,m}=\frac{\alpha}{l+\alpha}A_{l,0}A_{0,m}+\frac{l}{l+\alpha}A_{l-1,m+1} \tag{26}\]
where
\[A_{l,m}=\langle z^{l}\cos^{m}\theta\rangle,\]
and the angular brackets indicate an averaging procedure defined as \(\langle\dots\rangle=\int_{-1}^{1}dz\int_{0}^{\pi}d\theta\,\rho\sin\theta(\dots)\).
The recurrence relation reflects the structure of the differential equation from which it was obtained: it is first order, linear, with variable coefficients. The relation was obtained by multiplying the Fokker-Planck equation by \(z^{l}\cos^{m}\theta\) followed by integration \(\int_{-1}^{1}dz\int_{0}^{\pi}d\theta\,\sin\theta\) and written in its final form using integration by parts.
Since \(A_{l,0}=\langle z^{l}\rangle\), solving the recurrence relation would permit us to obtain moments. The initial condition of the recurrence relation is provided by the terms \(A_{0,m}\) which are easily evaluated
\[A_{0,m}=\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin\theta\cos^{m}\theta=\begin{cases} \frac{1}{m+1}&\text{if $m$ even}\\ 0&\text{if $m$ odd}.\end{cases}\]
The recurrence relation cannot be solved for an arbitrary \(A_{l,m}\). Nonetheless, it is possible to reduce the relation to another recurrence relation in terms of \(A_{2n,0}=\langle z^{2n}\rangle\) only:
\[\langle z^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\frac{\langle z^{2k} \rangle}{2n-2k+1}\frac{(2k+1)_{\alpha-1}}{(2n+1)_{\alpha-1}}. \tag{27}\]
where \((x)_{n}=\frac{\Gamma(x+n)}{\Gamma(x)}\) is the rising factorial (Pochhammer symbol).
The recurrence relation in Eq. (27) is the central result of this section. Although it does not provide an exact expression for an arbitrary moment, it provides an analytically tractable procedure for obtaining such an expression recursively. A number of initial even moments generated from the recurrence relation in Eq. (27) are given in Table (1).
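The recurrence is straightforward to implement symbolically. A minimal sketch (assuming SymPy) that reproduces the entries of Table (1):

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)

def poch(x, n):
    # Pochhammer symbol (x)_n = Gamma(x+n)/Gamma(x)
    return sp.gamma(x + n) / sp.gamma(x)

def moments_3d(nmax):
    """Even moments <z^{2n}> from the recurrence in Eq. (27)."""
    m = [sp.Integer(1)]                      # <z^0> = 1
    for n in range(1, nmax + 1):
        s = sum(m[k] / (2*n - 2*k + 1)
                * poch(2*k + 1, a - 1) / poch(2*n + 1, a - 1)
                for k in range(n))
        m.append(sp.cancel(sp.gammasimp(a / (2*n) * s)))
    return m

for mom in moments_3d(2)[1:]:
    # prints 1/(3*(alpha+1)) and (5*alpha+18)/(15*(alpha+1)*(alpha+2)*(alpha+3))
    print(mom)
```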
By examining Table (1), we can verify that for \(\alpha=0\) the moments reduce to the simple general formula \(\langle z^{2n}\rangle=\frac{1}{2n+1}\) that can be linked to a uniform distribution, for which \(\langle z^{2n}\rangle=\frac{1}{2}\int_{-1}^{1}dz\,z^{2n}\). This means that for finite \(\alpha\), \(p\) can only be convex, which, in turn, implies the absence of divergences at the trap borders.
To understand how a flat distribution arises for \(\alpha=0\), note that for \(\alpha=0\) all particles are immobile and trapped at \(z=\cos\theta\), where the swimming velocity and the velocity due to the harmonic potential cancel one another. As a result, \(\rho=\frac{1}{2}\delta(z-\cos\theta)\). Averaged over all orientations, this yields:
\[\lim_{\alpha\to 0}p(z)=\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin\theta\,\delta(z-\cos \theta)=\frac{1}{2}. \tag{28}\]
The averaging procedure completely smooths out the delta distribution.
In Table (2) we compare the second and fourth moments calculated for different dimensions. For 1D and 2D, these moments are obtained from the formulas in Eq. (10) and Eq. (22). The tendency is that an increased dimensionality reduces the value of a given moment. The best way to understand this reduction is to think of each system as one-dimensional with different distributions of swimming velocities (for a potential \(u=\frac{K}{2}x^{2}\) we actually consider the projection of a swimming velocity along the \(x\)-axis and not the true swimming velocity).
\begin{table}
\begin{tabular}{l l l} \hline \hline & \(\langle z^{2}\rangle\) & \(\langle z^{4}\rangle\) \\ \hline
1D & \(\frac{1}{1+\alpha}\) & \(\frac{3}{(1+\alpha)(3+\alpha)}\) \\
2D & \(\frac{1}{2}\frac{1}{1+\alpha}\) & \(\frac{1}{4}\frac{3}{(1+\alpha)(2+\alpha)}\) \\
3D & \(\frac{1}{3}\frac{1}{1+\alpha}\) & \(\frac{1}{15}\frac{18+5\alpha}{(1+\alpha)(2+\alpha)(3+\alpha)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Second and fourth moments of the distribution \(p\) in different dimensions.
\begin{table}
\begin{tabular}{l} \hline \hline
\(\langle z^{2}\rangle=\frac{1}{3}\frac{1}{1+\alpha}\) \\
\(\langle z^{4}\rangle=\frac{1}{15}\frac{18+5\alpha}{(1+\alpha)(2+\alpha)(3+\alpha)}\) \\
\(\langle z^{6}\rangle=\frac{1}{63}\frac{1080+378\alpha+35\alpha^{2}}{(1+\alpha)(2+\alpha)(3+\alpha)(4+\alpha)(5+\alpha)}\) \\
\(\langle z^{8}\rangle=\frac{1}{135}\frac{75600+28404\alpha+3780\alpha^{2}+175\alpha^{3}}{(1+\alpha)(2+\alpha)\cdots(7+\alpha)}\) \\
\(\langle z^{10}\rangle=\frac{1}{99}\frac{3265920+1259280\alpha+193644\alpha^{2}+13860\alpha^{3}+385\alpha^{4}}{(1+\alpha)(2+\alpha)\cdots(9+\alpha)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Moments of a stationary distribution \(p\) obtained from the recurrence relation in Eq. (27).
### distribution in \(w\)-space
To obtain a distribution in \(w\)-space we follow a similar procedure to that used in Sec. (III.1). We transform Eq. (24) using the change of variables \(z=-w+\cos\theta\). The resulting equation is then integrated over all orientations. The procedure yields the first order inhomogeneous differential equation:
\[0=(1-\alpha)p_{w}+wp_{w}^{\prime}+\frac{\alpha}{2}\int_{-1}^{1-w}ds\,p(s), \tag{29}\]
for which the solution is
\[p_{w}=\frac{\alpha}{2}|w|^{\alpha-1}\left[\int_{|w|}^{2}dw^{\prime}\,w^{ \prime-\alpha}\int_{-1}^{1-w^{\prime}}ds\,p(s)\right]. \tag{30}\]
The solution permits us to obtain \(p_{w}\) from \(p\). The difference between this result and that in Eq. (20) is that here we do not have the exact expression for \(p\).
Even without knowing \(p\), Eq. (29) can be used to calculate \(p_{w}(0)\) by setting \(w=0\). This yields
\[p_{w}(0)=\frac{1}{2}\frac{\alpha}{\alpha-1}. \tag{31}\]
The expression diverges at \(\alpha=1\), indicating a divergence in \(p_{w}\) and the presence of nearly immobile particles. Compared with a similar result in Eq. (21) for a system in 2D, we see that the divergence occurs at the same value of \(\alpha\). This means that the location of the crossover is independent of the system dimension.
In Fig. (3) we plot \(p_{w}\) for different values of \(\alpha\) obtained using Eq. (30). The integrals are calculated numerically and \(p\) is calculated from the moments as explained in the next section.
### recovering \(p\) from moments
The recurrence relation in Eq. (27) permits fast computation of an arbitrary number of even moments of \(p\). In this section we present a procedure for recovering a distribution \(p\) from the moments based on the Fourier-Legendre expansion,
\[p(z)=\sum_{n=0}^{\infty}a_{n}P_{2n}(z), \tag{32}\]
where \(P_{m}\) are Legendre (orthogonal) polynomials. Like \(p\), Legendre polynomials are defined on \([-1,1]\). Due to even symmetry of \(p\), only even Legendre polynomials \(P_{2n}\) are required:
\[P_{2n}=2^{2n}\sum_{k=0}^{n}\frac{z^{2k}\,\Gamma\left(k+n+\frac{1}{2}\right)}{ \left(2k\right)!(2n-2k)!\Gamma\left(k-n+\frac{1}{2}\right)}. \tag{33}\]
The coefficients \(a_{n}\) in Eq. (32) can be determined from the orthogonality relation \(\int_{-1}^{1}dz\,P_{n}P_{m}=\frac{2}{2n+1}\delta_{mn}\) which leads to
\[a_{n}=\frac{4n+1}{2}\int_{-1}^{1}dz\,P_{2n}(z)p(z), \tag{34}\]
and in combination with Eq. (33) yields
\[a_{n}=\frac{4n+1}{2}\left[2^{2n}\sum_{k=0}^{n}\frac{\left\langle z^{2k} \right\rangle\Gamma\left(k+n+\frac{1}{2}\right)}{\left(2k\right)!(2n-2k)! \Gamma\left(k-n+\frac{1}{2}\right)}\right]. \tag{35}\]
The expansion in Eq. (32) together with the coefficients in Eq. (35) provides an exact formula for recovering \(p\) in terms of moments obtained from Eq. (27). Initial coefficients \(a_{n}\) are listed in Table (3).
Note that by setting \(\alpha=0\), \(a_{0}=1/2\) and all the remaining coefficients \(a_{n}\) are zero. This implies a uniform distribution in agreement with the results in Eq. (28) for the same limit.
Recovered distributions obtained from a truncated Fourier-Legendre series \(p=\sum_{n=0}^{N_{c}}a_{n}P_{2n}(z)\) for \(N_{c}=10\) are shown in Fig. (4).
The truncated series shows very good agreement with \(p\) obtained from simulations. The larger the \(\alpha\), the fewer terms of the expansion are needed. It is generally recognized that delta and square distributions are not well approximated by the series, but since only the distribution at \(\alpha=0\) is flat (and does not need to be approximated), this limitation does not apply to our situation.
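A minimal numerical sketch of this reconstruction (assuming NumPy and SciPy; the input moments are evaluated at a fixed \(\alpha\), here \(\alpha=1\) from Table (1)):

```python
import numpy as np
from math import factorial
from scipy.special import eval_legendre, gamma

def fl_coefficients(moments):
    """Coefficients a_n of Eq. (35); moments[k] = <z^{2k}> at fixed alpha."""
    N = len(moments) - 1
    a = np.empty(N + 1)
    for n in range(N + 1):
        a[n] = 0.5*(4*n + 1)*4**n*sum(
            moments[k]*gamma(k + n + 0.5)
            / (factorial(2*k)*factorial(2*n - 2*k)*gamma(k - n + 0.5))
            for k in range(n + 1))
    return a

def p_truncated(z, a):
    """Truncated Fourier-Legendre series of Eq. (32)."""
    return sum(an*eval_legendre(2*n, z) for n, an in enumerate(a))

a = fl_coefficients([1.0, 1/6, 23/360])   # <z^0>, <z^2>, <z^4> at alpha = 1
print(p_truncated(np.linspace(-1, 1, 5), a))
```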
Figure 3: Distributions \(p_{w}\) calculated from \(p\) using numerically evaluated Eq. (30). The height of the peak at \(w=0\) for \(\alpha>1\) is given by Eq. (31).
\begin{table}
\begin{tabular}{c} \hline
\(a_{0}=\frac{1}{2}\) \\
\(a_{1}=-\frac{5}{4}\frac{\alpha}{\alpha+1}\) \\
\(a_{2}=-\frac{3}{16}\frac{16\alpha-24\alpha^{2}-9\alpha^{3}}{(\alpha+1)(\alpha+2)(\alpha+3)}\) \\
\(a_{3}=-\frac{13}{96}\frac{288\alpha-496\alpha^{2}+120\alpha^{3}+120\alpha^{4}+15\alpha^{5}}{(\alpha+1)(\alpha+2)(\alpha+3)(\alpha+4)(\alpha+5)}\) \\
\(a_{4}=-\frac{17}{768}\frac{55296\alpha-105984\alpha^{2}+49280\alpha^{3}+8512\alpha^{4}-6720\alpha^{5}-1680\alpha^{6}-105\alpha^{7}}{(\alpha+1)(\alpha+2)(\alpha+3)(\alpha+4)(\alpha+5)(\alpha+6)(\alpha+7)}\) \\ \hline \end{tabular}
\end{table}
Table 3: Coefficients for the Fourier-Legendre series in Eq. (32).
## V Thermal fluctuations
Up to this point, all the analysis and results were done at zero temperature. To incorporate thermal fluctuations, we use the known result that, for particles in a harmonic trap, a stationary distribution of active particles in a thermal bath can be represented as a convolution [5]; for a system with 1D geometry it is given by
\[p_{T}(x)=\int dx^{\prime}\,p(x^{\prime})p_{eq}(x-x^{\prime}), \tag{36}\]
where \(p_{eq}(x)=\sqrt{\frac{\mu K}{2\pi D}}e^{-\mu Kx^{2}/2D}\) is the Boltzmann distribution of passive particles in a harmonic trap, and \(p\) is the stationary distribution at zero temperature.
A convolution construction applies to any combination of independent random processes [28; 29]. Generally, however, confinement leads to correlations between random processes, as it gives rise to nonlinear terms in the Langevin equation, even if these processes are originally independent. The exception is a harmonic potential, whose force is linear and therefore does not introduce nonlinear terms. See Appendix (C) for further discussion regarding this point.
Using Eq. (36), the moments of the distribution \(p_{T}\) are defined as
\[\langle z^{2n}\rangle_{T}=\int_{-\infty}^{\infty}dz\,z^{2n}\int_{-1}^{1}dz^{\prime}\,p(z^{\prime})p_{eq}(z-z^{\prime}), \tag{37}\]
assuming dimensionless units. Then using the identity \(z^{2n}=\left[(z-z^{\prime})+z^{\prime}\right]^{2n}\) together with binomial expansion, Eq. (37) yields
\[\langle z^{2n}\rangle_{T}=\sum_{k=0}^{n}\frac{(2n)!}{(2k)!(2n-2k)!}\langle z^ {2n-2k}\rangle\langle z^{2k}\rangle_{eq}. \tag{38}\]
And since moments \(\langle z^{2k}\rangle_{eq}\) can be calculated using the Boltzmann distribution, we finally get
\[\langle z^{2n}\rangle_{T}=\sum_{k=0}^{n}\frac{(2n)!B^{k}}{2^{k}k!(2n-2k)!} \langle z^{2n-2k}\rangle, \tag{39}\]
where \(B=\frac{\mu KD}{v_{0}^{2}}\) is the dimensionless diffusion constant.
Note that the moments for a finite temperature are given as an expansion in terms of moments at zero temperature. Since all terms in the expansion are positive, the effect of temperature is to increase the value of all the moments.
Using Eq. (38), the initial moments are given by
\[\langle z^{2}\rangle_{T} =\langle z^{2}\rangle+\langle z^{2}\rangle_{eq}\] \[\langle z^{4}\rangle_{T} =\langle z^{4}\rangle+6\langle z^{2}\rangle\langle z^{2}\rangle_{ eq}+\langle z^{4}\rangle_{eq}\] \[\langle z^{6}\rangle_{T} =\langle z^{6}\rangle+15\langle z^{4}\rangle\langle z^{2}\rangle _{eq}+15\langle z^{2}\rangle\langle z^{4}\rangle_{eq}+\langle z^{6}\rangle_{eq}\]
Note that the two contributions of the second moment are completely additive. Using Eq. (39), the initial moments in the actual units are given by
\[\langle x^{2}\rangle_{T} =\langle x^{2}\rangle+\frac{k_{B}T}{K}\] \[\langle x^{4}\rangle_{T} =\langle x^{4}\rangle+6\langle x^{2}\rangle\frac{k_{B}T}{K}+3\left(\frac{k_{B}T}{K}\right)^{2}\] \[\langle x^{6}\rangle_{T} =\langle x^{6}\rangle+15\langle x^{4}\rangle\frac{k_{B}T}{K}+45\langle x^{2}\rangle\left(\frac{k_{B}T}{K}\right)^{2}+15\left(\frac{k_{B}T}{K}\right)^{3}\]
where we used \(D=\mu k_{B}T\). This result shows the contribution of the temperature more clearly.
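A minimal sketch of Eq. (39) in plain Python (the list `m0` holds the zero-temperature even moments, e.g. generated from the recurrence in Eq. (27) at a fixed \(\alpha\)):

```python
from math import factorial

def thermal_moments(m0, B):
    """Finite-temperature even moments via Eq. (39).

    m0[k] = <z^{2k}> at zero temperature; B = mu*K*D/v0^2 is the
    dimensionless diffusion constant."""
    mT = []
    for n in range(len(m0)):
        mT.append(sum(factorial(2*n)*B**k
                      / (2**k*factorial(k)*factorial(2*n - 2*k))
                      * m0[n - k] for k in range(n + 1)))
    return mT

# e.g. zero-T moments at alpha = 1 (Table 1) with B = 0.1:
print(thermal_moments([1.0, 1/6, 23/360], B=0.1))
```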
## VI Isotropic harmonic trap
The previous analysis was done for a harmonic potential in a single direction, \(u=\frac{1}{2}Kx^{2}\), and it is not clear how and if the obtained results apply to an isotropic potential \(u=\frac{1}{2}Kr^{2}\). In this section we extend the previous results to such an isotropic potential.
To establish a relation between moments for a linear potential, \(\langle x^{2n}\rangle\), and the moments of an isotropic potential, \(\langle r^{2n}\rangle\), we consider first the Boltzmann distribution, \(p_{eq}(r)\propto e^{-\mu Kr^{2}/2D}\), to see how the respective moments are related in this case. For an arbitrary dimension \(d\), the moments are easily evaluated and are given by
\[\langle r^{2n}\rangle=\left(\frac{2D}{\mu K}\right)^{n}\frac{\Gamma\left(\frac {d}{2}+n\right)}{\Gamma\left(\frac{d}{2}\right)}. \tag{42}\]
Note that \(\langle x^{2n}\rangle=\langle r^{2n}\rangle_{d=1}\) so that we can write
\[\langle x^{2n}\rangle=\left(\frac{2D}{\mu K}\right)^{n}\frac{\Gamma\left( \frac{1}{2}+n\right)}{\Gamma\left(\frac{1}{2}\right)}.\]
This permits us to represent Eq. (42) as
\[\langle r^{2n}\rangle=\frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(n+\frac{d}{ 2}\right)}{\Gamma\left(\frac{d}{2}\right)\Gamma\left(n+\frac{1}{2}\right)} \langle x^{2n}\rangle, \tag{43}\]
which establishes a relation between \(\langle x^{2n}\rangle\) and \(\langle r^{2n}\rangle\).
The relation in Eq. (43) was derived by considering the equilibrium distribution. We next verify that the same relation applies for RTP particles. Since we know that a stationary distribution of RTP particles in an isotropic harmonic potential in 2D is [5]
\[p(s)=\frac{\alpha}{\pi}(1-s^{2})^{\alpha-1}, \tag{44}\]
where we introduce a dimensionless variable \(s=\mu Kr/v_{0}\), we can calculate the moments \(\langle s^{2n}\rangle=\int_{0}^{1}ds\,2\pi s\,p(s)s^{2n}\), which can then be compared with the moments in Eq. (22) for a linear harmonic potential. The comparison recovers the formula in Eq. (43) for \(d=2\).
The verification of Eq. (43) for \(d=3\) is more intricate and the details are relegated to Appendix (D). But it leads to the following relation
\[\langle r^{2n}\rangle=(2n+1)\langle x^{2n}\rangle \tag{45}\]
which also agrees with Eq. (43) for \(d=3\). Consequently, the relation in Eq. (43) is general and applies to passive and RTP particles in a harmonic potential.
Combining the relation in Eq. (45) with the recurrence relation in Eq. (27), we next get the recurrence relation for the moments of a stationary distribution \(p(s)\) in an isotropic harmonic potential in 3D
\[\langle s^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\frac{\langle s^{2k} \rangle}{2n-2k+1}\frac{(2k+2)_{\alpha-2}}{(2n+2)_{\alpha-2}}. \tag{46}\]
The central result of this section is Eq. (43), which establishes a relation between moments \(\langle r^{2n}\rangle\) and \(\langle x^{2n}\rangle\). This relation is then used to determine the recurrence relation in Eq. (46).
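The chain Eq. (43) \(\to\) Eq. (46) can be checked symbolically. A minimal sketch (assuming SymPy) that verifies Eq. (45) for the first two moments against the Table (1) expressions:

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)
poch = lambda x, n: sp.gamma(x + n)/sp.gamma(x)   # Pochhammer symbol

def moments_iso(nmax):
    """<s^{2n}> from the isotropic recurrence in Eq. (46)."""
    m = [sp.Integer(1)]
    for n in range(1, nmax + 1):
        s = sum(m[k]/(2*n - 2*k + 1)
                * poch(2*k + 2, a - 2)/poch(2*n + 2, a - 2)
                for k in range(n))
        m.append(sp.cancel(sp.gammasimp(a/(2*n)*s)))
    return m

ms = moments_iso(2)
mz = [1, 1/(3*(a + 1)), (18 + 5*a)/(15*(a + 1)*(a + 2)*(a + 3))]  # Table (1)
for n in (1, 2):    # check <s^{2n}> = (2n+1) <z^{2n}>, Eq. (45)
    assert sp.simplify(ms[n] - (2*n + 1)*mz[n]) == 0
```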
### recovering \(p\) from moments
To recover a distribution \(p(s)\) for an isotropic harmonic trap in 3D from the moments in Eq. (46), we are going to use the Fourier-Legendre expansion, as was done in Sec. (IV.3). However, since the normalized function is \(4\pi s^{2}p(s)\), we expand this quantity rather than \(p(s)\). The resulting expansion is
\[4\pi s^{2}p(s)=2\sum_{n=0}^{\infty}a_{n}P_{2n}(s). \tag{47}\]
The factor \(2\) in front of the sum comes from the fact that \(p(s)\) is defined on \([0,1]\) while the polynomials \(P_{n}\) are defined on \([-1,1]\).
The coefficients \(a_{n}\) in the expansion are the same as those in Eq. (35) but defined in terms of \(\langle s^{2n}\rangle\):
\[a_{n}=\frac{4n+1}{2}\left[2^{2n}\sum_{k=0}^{n}\frac{\langle s^{2k}\rangle \,\Gamma\left(k+n+\frac{1}{2}\right)}{(2k)!(2n-2k)!\,\Gamma\left(k-n+\frac{1}{ 2}\right)}\right]. \tag{48}\]
Fig. (5) compares distributions \(p(s)\) obtained using the truncated Fourier-Legendre expansion with those obtained from a simulation.
The plots indicate that for \(\alpha>1\) the distributions \(p(s)\) vanish at \(s=1\), confirming \(\alpha=1\) to be a point of crossover.
## VII Summary and conclusion
The central result of this work is the recurrence relation for generating moments of a stationary distribution \(p\) for RTP particles in a harmonic trap in three dimensions. As there is no available exact expression for \(p\) in this dimension, this approach provides an alternative route with analytical tractability.
For the potential \(u=\frac{1}{2}Kx^{2}\) the recurrence relation in dimensionless parameters is given in Eq. (27). This result is specific for a system embedded in 3D space but it can be generalized to any dimension. A generalized form, valid for any dimension \(d\), is given by
\[\langle z^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\langle z^{2k}\rangle \frac{(2k+1)_{\alpha-1}}{(2n+1)_{\alpha-1}}c_{2n-2k}, \tag{49}\]
where \(c_{2n}=A_{0,2n}\) and is given by
\[c_{2n}=\frac{\Gamma\left(\frac{d}{2}\right)\Gamma\left(n+\frac{1}{2}\right)} {\Gamma\left(\frac{1}{2}\right)\Gamma\left(n+\frac{d}{2}\right)}. \tag{50}\]
Note that the parameter of dimensionality only enters via the coefficient \(c_{2n}\).
The general formula in Eq. (49) can be verified. For \(d=3\), it recovers the result in Eq. (27). For \(d=1\) and \(d=2\), Eq. (49) can be solved for \(\langle z^{2n}\rangle\), with solutions found to agree with Eq. (10) and Eq. (22).
Using the relation in Eq. (43), we can obtain a similar general recurrence relation for the moments of a stationary distribution for an isotropic harmonic potential:
\[\langle s^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\langle s^{2k}\rangle\frac {(2k+2)_{\alpha-2}}{(2n+2)_{\alpha-2}}c_{2n-2k}. \tag{51}\]
The general recurrence formulas in Eq. (49) and Eq. (51) permit us to better understand the role of the system dimension. The fact that in \(d=3\) the recurrence relation cannot be solved implies a more complex functional form of \(p(z)\). This might help to explain why in this dimension there is no simple corresponding differential equation for \(p(z)\)[5]. The idea of a function without a corresponding differential equation was first put forward in 1887 by Holder [30], who considered the Euler gamma function. In 1900, Hilbert conjectured that the Riemann zeta function is another example [31]. In the 1920s this conjecture was proven to be correct [32], and in 2015 it was shown that the Riemann zeta function formally satisfies an infinite-order linear differential equation [33].
Another important aspect of this work is the determination of a crossover at \(\alpha=1\), regardless of the system dimension. The importance of a crossover is that it indicates when to expect the presence of "nearly immobile" particles accumulated near a trap border. Since \(\alpha=\frac{1}{\tau\mu K}\) is the ratio of two time scales, the persistence time \(\tau\) during which an active particle retains its orientation, and the time a particle needs to reach a trap border \(1/\mu K\), the crossover value gives a way to predict the shape of a distribution once we know \(\alpha\). If a typical persistence time for an E. coli is \(\tau\sim 1\,\mathrm{s}\) and a typical velocity \(v_{0}=40\,\mu\mathrm{m}\,\mathrm{s}^{-1}\)[34], then we should expect the trap size to be \(v_{0}/\mu K\lesssim 40\,\mu\mathrm{m}\) in order to see accumulation of particles at a trap border.
The most obvious extension of the "recurrence" method is to apply it to other types of active particles, for example, the ABP model. Since the Fokker-Planck equation for the ABP system is different, one expects a different recurrence relation. It is not clear if the methodology is extendable to other types of external potentials or to simple interacting systems such as the Kuramoto model [35], known to undergo a phase transition, or the one-dimensional asymmetric exclusion process model [36; 37].
###### Acknowledgements.
D.F. acknowledges financial support from FONDECYT through grant number 1201192.
## VIII Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A exact distributions \(\rho\) for RTP particles in a harmonic trap in 2D
In this section we solve the Fokker-Planck equation in Eq. (13) for \(\rho\) by substituting for \(p\) the expression in Eq. (15). This permits us to posit Eq. (13) as a first order inhomogeneous differential equation:
\[0=(z-\cos\theta)\rho^{\prime}+(1-\alpha)\rho+\frac{\alpha A}{2\pi}(1-z^{2})^{ \alpha-\frac{1}{2}}. \tag{52}\]
By multiplying the above equation by the integrating factor
\[e^{\int_{-1}^{z}dy\frac{1-\alpha}{y-\cos\theta}}=\left(\frac{\cos\theta+1}{\cos\theta-z}\right)^{\alpha-1},\]
the solution can be represented as
\[\rho_{L}=A\frac{\alpha}{2\pi}\left|\cos\theta-z\right|^{\alpha-1}\int_{-1}^{z }dz^{\prime}\left(1-z^{\prime 2}\right)^{\alpha-\frac{1}{2}}\left|\cos\theta-z^{ \prime}\right|^{-\alpha}, \tag{53}\]
for the domain \(-1<z<\cos\theta\), and
\[\rho_{R}=A\frac{\alpha}{2\pi}\left|\cos\theta-z\right|^{\alpha-1}\int_{z}^{1 }dz^{\prime}\left(1-z^{\prime 2}\right)^{\alpha-\frac{1}{2}}\left|\cos\theta-z^{ \prime}\right|^{-\alpha}, \tag{54}\]
for the domain \(\cos\theta<z<1\).
The difference between the solutions in each domain lies in the limits of an integral. The limits ensure that the distribution in Eq. (53) vanishes at \(z=-1\) and the distribution in Eq. (54) vanishes at \(z=1\). Except for \(\rho(z,0)\) and \(\rho(z,\pi)\), for any other orientation \(\theta\), \(\rho\) vanishes at both \(z=\pm 1\). Divergence in \(\rho\) comes from the pre-factor \(\left|\cos\theta-z\right|^{\alpha-1}\). This implies that the divergence for each \(\rho\) is localized at \(z=\cos\theta\) and the crossover corresponds to \(\alpha=1\) -- as verified by the behavior of \(p_{w}\).
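For reference, Eqs. (53) and (54) can be evaluated by direct quadrature. A minimal sketch (assuming NumPy and SciPy; valid away from \(z=\cos\theta\), where the prefactor diverges for \(\alpha<1\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rho_2d(z, theta, alpha):
    """Numerical evaluation of Eqs. (53)-(54) by direct quadrature."""
    A = gamma(alpha + 1) / (np.sqrt(np.pi) * gamma(alpha + 0.5))
    c = np.cos(theta)
    f = lambda s: (1 - s**2)**(alpha - 0.5) * abs(c - s)**(-alpha)
    lo, hi = (-1, z) if z < c else (z, 1)    # rho_L or rho_R branch
    return A * alpha / (2*np.pi) * abs(c - z)**(alpha - 1) * quad(f, lo, hi)[0]

print(rho_2d(0.3, np.pi/3, 2.0))
```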
The height of the distribution at \(z=\cos\theta\) for \(\alpha>1\), when a divergence disappears, can easily be evaluated from Eq. (52):
\[\rho(z=\cos\theta,\theta)=\frac{A}{2\pi}\frac{\alpha}{\alpha-1}(1-\cos^{2} \theta)^{\alpha-\frac{1}{2}}. \tag{55}\]
Note that the point \(z=\cos\theta\) does not represent a maximal value of \(\rho\) for \(\alpha>1\). The actual peak is shifted toward \(z=0\) as shown in Fig. (6).
The solutions in Eq. (53) and Eq. (54) for \(\cos\theta=0\) become
\[\rho\propto\left|z\right|^{\alpha-1}\left(1-z^{2}\right)^{\alpha+\frac{1}{2} }{}_{2}F_{1}\left(\frac{\alpha+1}{2},\frac{2\alpha+1}{2},\frac{2\alpha+3}{2},1-z^{2}\right), \tag{56}\]
and for \(\cos\theta=\pm 1\)
\[\rho\propto\left(1-z^{2}\right)^{\alpha-1}(1\pm z)^{\frac{3}{2} }{}_{2}F_{1}\left(\frac{1}{2},\frac{2\alpha+1}{2},\frac{2\alpha+3}{2},\frac{1 \pm z}{2}\right). \tag{57}\]
Both solutions are for the full domain \([-1,1]\).
It is interesting to compare the distributions given above with the distributions for the three-state RTP model in [3]. The three-state RTP model is an extension of the RTP model in 1D considered in Sec. (II) that includes a zero swimming velocity, \(v_{swim}=-v_{0},0,v_{0}\). The resulting stationary distribution \(p\) has three divergences at \(z=-1,0,1\). The divergences at different positions correspond to different swimming velocities; for example, the divergence at \(z=0\) is linked to particles with zero velocity. The exact solutions for \(p_{\pm}\) and \(p_{0}\) are expressed in terms of hypergeometric functions, similar to the solutions in Eq. (56) and Eq. (57). This suggests that the analytical complexity quickly rises if we move away from the two-state model.
In Sec. (III.1) we calculated distributions in \(w\)-space for all particles, that is, averaged over all orientations. But knowing \(\rho\), it is now possible to calculate distributions in \(w\)-space corresponding to a given orientation. To obtain such distributions, we transform \(\rho\) using the change of variables \(z=-w+\cos\theta\).
The distributions \(\rho(w,\theta)\) are plotted in Fig. (7).
Note that all divergences, regardless of the orientation of motion, are at \(w=0\), which signals the presence of nearly immobile particles. For \(\alpha>1\), nearly immobile particles disappear, as manifested by the disappearance of divergences. Another observation is that for \(\alpha>1\) all the peaks, originally at \(w=0\), start to shift away from \(w=0\) toward \(w\to\cos\theta\), that is, the value of the swimming velocity of particles with orientation \(\theta\). Only distributions with orientation \(\theta=\pm\pi/2\) remain centered around \(w=0\).
Note that the domain of a distribution \(\rho(w,\theta)\) depends on \(\cos\theta\) and is given by \(w\in(-1+\cos\theta,1+\cos\theta)\).
## Appendix B exact results for \(p_{w}\)
For half integer values of \(\alpha\), the integral in Eq. (20) can be evaluated exactly. For the first three values, \(\alpha=\frac{1}{2},\frac{3}{2},\frac{5}{2}\), the distribution can be represented as
\[p_{w}=\frac{a_{\alpha}}{\pi}\sqrt{\frac{2-w}{w}}+b_{\alpha}\left[\frac{\sin^{ -1}(1-w)}{2\pi}+\frac{1}{4}\right], \tag{45}\]
where \(a_{\alpha}\) and \(b_{\alpha}\) are polynomials given in Table (4).
Fig. (8) compares these analytical results with simulation data points.
## Appendix C Convolution of probability distributions
To show explicitly that the convolution of probability distributions works for particles in a harmonic potential \(u=Kr^{2}/2\) whose dynamics is governed by \(N\) independent forces \(\mathbf{f}_{i}\), we consider the Langevin equation.
Figure 8: Distributions \(p_{w}\) for different values of \(\alpha\). Exact distributions (represented by lines) are obtained from Eq. (45). The circular symbols represent simulation data points.
Figure 6: Distributions \(\rho(z,\theta)\) for three swimming orientations: \(\cos\theta=0,\frac{1}{2},1\) (red, blue, green points, respectively). Each distribution integrates to \(\int_{-1}^{1}dz\,\rho(z,\theta)=\frac{1}{2\pi}\). All circular symbols represent simulation data points, and the lines represent exact results obtained using Eq. (53) and Eq. (54).
Figure 7: Distributions \(\rho(w,\theta)\) for three swimming orientations: \(\cos\theta=0,\frac{1}{2},1\) (red, blue, green points, respectively). All circular symbols represent simulation data points, and the lines represent exact results obtained using Eq. (53) and Eq. (54) after the change of variables \(z=-w+\cos\theta\).
For an unconfined system the Langevin equation is
\[\dot{\mathbf{r}}=\mu\sum_{i=1}^{N}\mathbf{f}_{i}, \tag{10}\]
where \(\langle\mathbf{f}_{i}(t)\cdot\mathbf{f}_{j}(t^{\prime})\rangle=f_{i}^{2}\delta_{ij}\,\tau_{i}^{-1}e^{-|t-t^{\prime}|/\tau_{i}}\), assuming exponential memory such that in the limit \(\tau_{i}\to 0\), \(\tau_{i}^{-1}e^{-|t-t^{\prime}|/\tau_{i}}\rightarrow\delta(t-t^{\prime})\). For particles in a harmonic trap the Langevin equation is
\[\dot{\mathbf{r}}=\mu\sum_{i=1}^{N}\mathbf{f}_{i}-\mu K\mathbf{r}, \tag{11}\]
and can be solved to yield
\[\mathbf{r}=\mu\sum_{i=1}^{N}\int_{-\infty}^{t}dse^{-\mu K(t-s)}\mathbf{f}_{i}( s). \tag{12}\]
Using the above solution, the Langevin equation in Eq. (11) can be represented as
\[\dot{\mathbf{r}}=\mu\sum_{i=1}^{N}\mathbf{\tilde{f}}_{i} \tag{13}\]
where the new forces are defined as
\[\mathbf{\tilde{f}}_{i}=\mathbf{f}_{i}-\mu K\int_{-\infty}^{t}dse^{-\mu K(t-s) }\mathbf{f}_{i}(s). \tag{14}\]
The new forces can be shown to be independent:
\[\langle\mathbf{\tilde{f}}_{i}(t)\cdot\mathbf{\tilde{f}}_{j}(t^{\prime}) \rangle=f_{i}^{2}\delta_{ij}M(t-t^{\prime}), \tag{15}\]
where \(M\) is a resulting new memory function.
## Appendix D computation of moments for an isotropic harmonic potential in 3D
In this section we provide an explicit verification of the relation in Eq. (43) for the RTP particles in a harmonic trap in 3D. We start with the stationary FP equation
\[0=\nabla\cdot[(\mu K\mathbf{r}-v_{0}\mathbf{n})\,\rho]+\hat{L}\rho,\]
where the operator \(\hat{L}\) for the RTP model in 3D is given by
\[\hat{L}\rho=\frac{1}{\tau}\left[-\rho+\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin \theta\rho(x,\theta)\right].\]
Since \(\rho\) depends on the relative orientation of the vectors \(\mathbf{r}\) and \(\mathbf{n}\), we fix \(\mathbf{n}\) along the \(x\)-axis, \(\mathbf{n}=\mathbf{e}_{x}\), and the FP equation becomes
\[0=\mu K\mathbf{r}\cdot\nabla\rho+\mu K\rho(\nabla\cdot\mathbf{r})-v_{0}\frac {\partial\rho}{\partial x}+\hat{L}\rho. \tag{16}\]
In spherical coordinates: \(\frac{\partial\rho}{\partial x}=\cos\theta\frac{\partial\rho}{\partial r}- \frac{\sin\theta}{r}\frac{\partial\rho}{\partial\theta}\), \(\mathbf{r}\cdot\nabla\rho=r\frac{\partial\rho}{\partial r}\), and \(\nabla\cdot\mathbf{r}=d\). The FP equation in reduced units \(s=\mu Kr/v_{0}\) can now be expressed
\[0=s\rho^{\prime}+3\rho-\cos\theta\rho^{\prime}+\frac{\sin\theta}{s}\frac{ \partial\rho}{\partial\theta}-\alpha\rho+\frac{\alpha}{2}p, \tag{17}\]
where
\[p=\int_{0}^{\pi}d\theta\,\sin\theta\,\rho(s,\theta).\]
By multiplying the above equation by \(s^{l}\cos^{m}\theta\) and then integrating it as \(4\pi\int_{0}^{\infty}dss^{2}\int_{0}^{\pi}d\theta\,\sin\theta\), we get the following recurrence relation:
\[B_{l,m}=\frac{\alpha}{l+\alpha}B_{l,0}B_{0,m}+\frac{l-m}{l+\alpha}B_{l-1,m+1} +\frac{m}{l+\alpha}B_{l-1,m-1}. \tag{18}\]
where \(B_{l,m}=\langle s^{l}\cos^{m}\theta\rangle\), and the initial condition is \(B_{0,0}=1\). The recurrence relation in Eq. (18) is used to solve for \(B_{2n,0}=\langle s^{2n}\rangle\) that satisfies the relation
\[\langle s^{2n}\rangle=(2n+1)\langle z^{2n}\rangle, \tag{19}\]
where \(\langle z^{2n}\rangle\) is defined in Eq. (27). Consequently, the relation in Eq. (43) is verified for the RTP model.
| For active particles, even the ideal-gas model in a harmonic potential poses mathematical challenges. The exception is the run-and-tumble particle (RTP) model in one dimension, for which the stationary distribution is known exactly. In two dimensions a solution is also possible; there, too, the stationary distribution corresponds to a beta function. In three dimensions the stationary distribution is not known, but simulation results indicate that it is not of the beta-function type. This work focuses on the RTP model in a harmonic trap in three dimensions. Its main result is the derivation of a recurrence relation for generating the moments of the stationary distribution; these moments are then used to recover the stationary distribution via a Fourier-Legendre expansion. |
2309.06542 | In-plane structure of the electric double layer in the primitive model
using classical density functional theory | The electric double layer (EDL) has a pivotal role in screening charges on
surfaces as in supercapacitor electrodes or colloidal and polymer solutions.
Its structure is determined by correlations between the finite-sized ionic
charge carriers of the underlying electrolyte and, this way, these correlations
affect the properties of the EDL and of applications utilizing EDLs. We study
the structure of EDLs within classical density functional theory (DFT) in order
to uncover whether a structural transition in the first layer of the EDL that
is driven by changes in the surface potential depends on specific particle
interactions or has a general footing. This transition has been found in
full-atom simulations. Thus far, investigating the in-plane structure of the
EDL for the primitive model (PM) using DFT proved a challenge. We show here
that the use of an appropriate functional predicts the in-plane structure of
EDLs in excellent agreement with molecular dynamics (MD) simulations. This
provides the playground to investigate how the structure factor within a layer
parallel to a charged surface changes as function of both the applied surface
potential and its separation from the surface. We discuss pitfalls in properly
defining an in-plane structure factor and fully map out the structure of the
EDL within the PM for a wide range of electrostatic electrode potentials.
However, we do not find any signature of a structural crossover and conclude
that the previously reported effect is not fundamental but rather occurs due to
the specific force field of ions used in the simulations. | Peter Cats, Andreas Härtel | 2023-09-12T19:35:12 | http://arxiv.org/abs/2309.06542v2 | In-plane structure of the electric double layer in the primitive model using classical density functional theory
###### Abstract
The electric double layer (EDL) has a pivotal role in screening charges on surfaces as in supercapacitor electrodes or colloidal and polymer solutions. Its structure is determined by correlations between the finite-sized ionic charge carriers of the underlying electrolyte and, this way, these correlations affect the properties of the EDL and of applications utilizing EDLs. We study the structure of EDLs within classical density functional theory (DFT) in order to uncover whether a structural transition in the first layer of the EDL that is driven by changes in the surface potential depends on specific particle interactions or has a general footing. This transition has been found in full-atom simulations. Thus far, investigating the in-plane structure of the EDL for the primitive model (PM) using DFT proved a challenge. We show here that the use of an appropriate functional predicts the in-plane structure of EDLs in excellent agreement with molecular dynamics (MD) simulations. This provides the playground to investigate how the structure factor within a layer parallel to a charged surface changes as function of both the applied surface potential and its separation from the surface. We discuss pitfalls in properly defining an in-plane structure factor and fully map out the structure of the EDL within the PM for a wide range of electrostatic electrode potentials. However, we do not find any signature of a structural crossover and conclude that the previously reported effect is not fundamental but rather occurs due to the specific force field of ions used in the simulations.
## I Introduction
Electrolytes can be found almost anywhere, and therefore attract a lot of interest across a wide variety of disciplines [1; 2; 3; 4; 5; 6]. They have been studied for more than a century [7; 8; 9; 10], but there is still uncharted territory to discover [11; 12; 13]. A particular focus lies on investigating the electric double layer (EDL), _i.e._ the electrode-electrolyte interface, where surface charges are screened by mobile ionic charges from the electrolyte. This ability of ions to screen each other is closely related to structure formation in the EDL that is driven by an interplay between electrostatic and steric particle interactions [14; 15]. Thus, understanding EDLs implies understanding their structure, which we study in this work.
The structure of a liquid electrolyte can be seen from its particle's and charge's distributions. These density profiles measure the local density and, in the electrolyte's bulk, they do not show any structure, thus, they are constant. This changes in the vicinity of a flat electrode, where the flat boundary induces layering and ordering of particles and charges. Here, density profiles are typically resolved perpendicular to the flat electrode, reflecting translational symmetry along the wall. Classical density functional theory (DFT) predicts these perpendicular density profiles of ions very well [16; 17; 18; 19; 20; 21]. In this manuscript we take another perspective and focus on the in-plane structure of the EDL, _i.e._ the distribution of ions parallel to the interface.
Merlet and co-workers [22] studied the in-plane structure of the EDL for a model capacitor consisting of liquid butylmethylimidazolium hexafluorophosphate and graphite electrodes. They performed molecular dynamics (MD) simulations using sophisticated force fields and found hexagonal in-plane ordering for certain electrode potentials by considering an "in-plane" structure factor. We will discuss possible pitfalls in introducing a well-defined in-plane structure factor in this work. To clarify whether this ordering effect is a consequence of the specific force field or a more fundamental property that also occurs in the primitive model of electrolytes, the same system has been mapped onto the primitive model a few years later and the in-plane structure has been studied by employing both DFT and MD [23]. For their DFT study, however, the authors used a rather simple mean-field description that did not perform well across almost all parameters considered. The question, therefore, remains whether the observed ordering effect can be found in the primitive model of electrolytes as well and whether DFT, when using a more sophisticated approach, can accurately predict the in-plane structure of the EDL in the primitive model. These questions will be answered shortly.
In this work, we first precisely define an in-plane structure factor in section II. In section III we then introduce the physical systems of interest and the model we use. We consider the same system as studied in Ref. [23], but we now employ a DFT functional constructed from the mean-spherical approximation (MSA) that has been proven to work well for primitive-model aqueous electrolytes at room temperature [24; 18; 20]. Using this functional, we calculate the density profiles and the in-plane structure factor for our system and discuss their agreement with results from previous DFT work and MD simulations in section IV. In section V we present structure factors across the whole
EDL for a wide region of electrode potentials and discuss structural changes that depend on the potential, as found previously [22]. We conclude in section VI.
## II The structure factor
The structure factor for a multi-component inhomogeneous system can be defined as \(S_{ij}(\vec{k})=\langle\frac{1}{N}\rho_{\mathbf{k}}^{i}\rho_{-\mathbf{k}}^{j}\rangle\) with the Fourier components \(\rho_{\mathbf{k}}^{i}=\sum_{n=1}^{N_{i}}\exp(-i\mathbf{k}\cdot\mathbf{r}_{n})\) of the microscopic density. Equivalently, it can be written in the form [25]
\[S_{ij}(\mathbf{k})=\frac{N_{i}}{N}\delta_{ij}+ \tag{1}\] \[\frac{1}{N}\int\mathrm{d}\mathbf{r}\int\mathrm{d}\mathbf{r}^{ \prime}\rho_{i}(\mathbf{r})h_{ij}(\mathbf{r},\mathbf{r}^{\prime})\rho_{j}( \mathbf{r}^{\prime})\exp(-i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}^{\prime})),\]
where \(N_{i}\) is the number of particles of species \(i\) in the system, \(N=\sum_{j}N_{j}\) is the total number of particles, \(\rho_{j}(\mathbf{r})\) is the density profile of species \(j\) as function of the position \(\mathbf{r}\), and \(h_{ij}(\mathbf{r},\mathbf{r}^{\prime})\) is the total correlation function between particles of species \(i\) and \(j\) located at positions \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\), respectively. The structure factor depends on a wave vector \(\mathbf{k}\) that is the scattering vector if \(S\) describes the scattering of a beam. In the bulk with constant bulk density \(\rho_{i}(\mathbf{r})\equiv\rho_{i}=N_{i}/V\) in a volume \(V\), Eq. (1) reduces to
\[S_{ij}(k)=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij}+\rho_{j}\int \mathrm{d}\mathbf{r}\,h_{ij}(r)e^{-i\mathbf{k}\cdot\mathbf{r}}\right], \tag{2}\]
where \(\rho_{\mathrm{tot}}=\sum_{i}\rho_{i}\), \(r=|\mathbf{r}|\), and \(k=|\mathbf{k}|\) is the radial wave number in spherical coordinates.
As we aim to study in-plane structure in the presence of a flat wall, it is useful to adapt Eq. (2) to a cylindrical geometry with coordinates \((u,\theta,z)\) that respect the presence of a wall perpendicular to the \(z\) direction. With \(q\) denoting the radial wave number corresponding to the radial in-plane coordinate \(u\) in cylindrical coordinates and \(k_{z}\) the wave number in the \(z\)-direction, Eq. (2) can equivalently be written as
\[S_{ij}(q,k_{z})=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij}+\rho_{j }\int_{-\infty}^{\infty}\mathrm{d}z\,\hat{h}_{ij}(z,q)e^{-ik_{z}z}\right], \tag{3}\]
where we defined the Hankel-transformed total correlation function
\[\hat{h}_{ij}(z,q)=\int_{0}^{2\pi}\mathrm{d}\theta\int_{0}^{\infty}\mathrm{d} u\,u\,h_{ij}(u,z)e^{-iqu\cos\theta}, \tag{4}\]
which is a Fourier-transformed version of the total correlation function \(h_{ij}(\mathbf{r})\) in only two dimensions.
In a homogeneous isotropic bulk system, one can argue that
\[S_{ij}(k) =S_{ij}(q=k,k_{z}=0)\equiv S_{ij}^{*}(q); \tag{5}\] \[h_{ij}(r) =h_{ij}(u=r,z=0)\equiv h_{ij}^{*}(u), \tag{6}\]
_i.e._ it does not matter in which direction one is looking. However, one can use these relations and combine Eqs. (5) and (3) to find
\[S_{ij}^{*}(q)=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij}+\rho_{j} \int_{-\infty}^{\infty}\mathrm{d}z\,\hat{h}_{ij}(z,q)\right] \tag{7}\]
in bulk.
### Structure Factor in the Plane
Let us now confine the integration limits in Eq. (7) and consider a slab of thickness \(L^{\prime}\) in the \(z\)-direction around (any) \(z_{0}\) in the bulk, _i.e._\(z\in[z_{0}-L^{\prime}/2,z_{0}+L^{\prime}/2]\). Then Eq. 7 becomes
\[S_{ij}^{*}(q;L^{\prime})=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij} +\rho_{j}\int_{-L^{\prime}/2}^{L^{\prime}/2}\mathrm{d}z\,\hat{h}_{ij}(z+z_{0}, q)\right]. \tag{8}\]
This equation represents the in-plane structure factor calculated only within the slab. Clearly, in the limit
\[\lim_{L^{\prime}\to\infty}S_{ij}^{*}(q;L^{\prime})=S_{ij}(k), \tag{9}\]
_i.e._\(S_{ij}^{*}(q;L^{\prime})\) converges to the bulk structure factor \(S_{ij}(k)\), as expected. In a similar fashion, we can confine the integration limits in Eq. (1). For this purpose, we first restrict the particle number to the same slab and replace \(N_{i}\) and \(N\) by \(n_{i}\) and \(n_{\mathrm{tot}}\), respectively, where
\[n_{i}(z_{0},L^{\prime})=\int_{z_{0}-L^{\prime}/2}^{z_{0}+L^{\prime}/2}\mathrm{d }z\,\rho_{i}(z) \tag{10}\]
is the number of particles of species \(i\) in the slab of thickness \(L^{\prime}\) centered around \(z_{0}\) per unit area, and \(n_{\mathrm{tot}}(z_{0},L^{\prime})=\sum_{i}n_{i}(z_{0},L^{\prime})\). Then, confining the integration limits in the \(z\)-direction, the in-plane structure factor for an inhomogeneous system (still homogeneous in \(x\) and \(y\) directions) reads
\[S_{ij}^{*}(q,k_{z};z_{0},L^{\prime})=\frac{n_{i}(z_{0},L^{\prime})}{n_{\mathrm{ tot}}(z_{0},L^{\prime})}\delta_{ij}+\frac{1}{n_{\mathrm{tot}}(z_{0},L^{ \prime})}\int_{z_{0}-L^{\prime}/2}^{z_{0}+L^{\prime}/2}\mathrm{d}z\int_{z_{0}-L ^{\prime}/2}^{z_{0}+L^{\prime}/2}\mathrm{d}z^{\prime}\rho_{i}(z)\hat{h}_{ij}(z, z^{\prime},q)\rho_{j}(z^{\prime})e^{-ik_{z}(z-z^{\prime})}, \tag{11}\]
where \(\hat{h}_{ij}(z,z^{\prime},q)\) is the inhomogeneous form of \(\hat{h}_{ij}(z,q)\), because in bulk \(\hat{h}(z,z^{\prime},q)=\hat{h}(z-z^{\prime},q)\equiv\hat{h}(z,q)\). Important to notice is that Eq. (11) is the three-dimensional structure factor determined only in a finite slab of thickness \(L^{\prime}\) around \(z_{0}\). Taking the limit of vanishing \(L^{\prime}\) causes the integral to vanish, _i.e._ the integral is naturally dependent on the integration limits. Naturally, taking the limit of \(L^{\prime}\) to infinity returns Eq. (1).
In order to make practical use of Eq. (11), we first need to simplify it. Motivated by Eq. (5) and (8), we further study the case in which one sets \(k_{z}=0\). We have in mind the idea of a test particle placed at \(z_{0}\) in the center of the slab. Accordingly, we replace the first density profile in the integrand by \(\rho_{i}(z)=n_{i}(z_{0},L^{\prime})\delta(z-z_{0})\). By doing so, we can reduce Eq. (11) to
\[S^{*}_{ij}(q;z_{0},L^{\prime})= \frac{n_{i}(z_{0},L^{\prime})}{n_{\rm tot}(z_{0},L^{\prime})}H_{ij}(q;z_{0},L^{\prime}); \tag{12}\] \[H_{ij}(q;z_{0},L^{\prime})= \delta_{ij}+\int_{z_{0}-L^{\prime}/2}^{z_{0}+L^{\prime}/2}{\rm d}z\,\hat{h}_{ij}(z_{0},z,q)\rho_{j}(z), \tag{13}\]
which in the limit \(z_{0}\to\infty\) (in bulk) is identical to Eq. (8). The quantity \(H_{ij}\), the normalized structure factor, is introduced for convenience, which will become clear in section IV and in appendix B. Note that Eq. (12) is an approximation of Eq. (11), made for practical purposes.
### The Total Correlation Function in DFT
In order to calculate any of the previous in-plane structure factors, one has to have access to the total correlation function \(\hat{h}_{ij}(z,z^{\prime},q)\). This quantity can be determined via the Hankel-transformed Ornstein-Zernike (OZ) equation [26; 27; 23]
\[\hat{h}_{ij}(z,z^{\prime},q)=\hat{c}_{ij}^{(2)}(z,z^{\prime},q)+ \tag{14}\] \[\sum_{k}\int{\rm d}z^{\prime\prime}\hat{h}_{ik}(z,z^{\prime\prime },q)\rho_{k}(z^{\prime\prime})\hat{c}_{kj}^{(2)}(z^{\prime\prime},z^{\prime},q),\]
in which the Hankel transform is defined by
\[\hat{f}(q) =\int_{0}^{\infty}{\rm d}u\,u\int_{0}^{2\pi}{\rm d}\theta f(u)e^ {iqu\cos\theta} \tag{15}\] \[=2\pi\int_{0}^{\infty}{\rm d}u\,uJ_{0}(qu)f(u), \tag{16}\]
with \(J_{0}(qu)\) the zeroth-order Bessel function of the first kind. Eq. (14) can be solved iteratively via Picard iterations.
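As an illustration, the scheme can be sketched in a few lines of Python (assuming NumPy and SciPy). The first helper discretizes the Hankel transform of Eq. (16) on a truncated radial grid; the second performs the Picard iteration for Eq. (14) at a single wavenumber \(q\), with a mixing parameter for stability. Grid sizes, the mixing value, and the rectangle-rule quadrature are illustrative choices, not those of our production code.

```python
import numpy as np
from scipy.special import j0

def hankel0(f_u, u, q):
    """Hankel transform of Eq. (16), truncated at u[-1]:
    f_hat(q) = 2*pi int du u J0(q u) f(u)."""
    return 2*np.pi*np.trapz(u*f_u*j0(np.outer(q, u)), u, axis=1)

def oz_picard(c2, rho, dz, mix=0.1, tol=1e-10, itmax=5000):
    """Picard iteration for the planar OZ relation, Eq. (14), at fixed q.

    c2[i][j] and the returned h[i][j] are (Nz, Nz) arrays of the
    Hankel-transformed correlations on the z-grid; rho[k] is the density
    profile of species k on the same grid."""
    ns = len(rho)
    h = [[c2[i][j].copy() for j in range(ns)] for i in range(ns)]
    for _ in range(itmax):
        err = 0.0
        for i in range(ns):
            for j in range(ns):
                # c_ij + sum_k int dz'' h_ik(z,z'') rho_k(z'') c_kj(z'',z')
                new = c2[i][j] + dz*sum(h[i][k] @ (rho[k][:, None]*c2[k][j])
                                        for k in range(ns))
                err = max(err, np.abs(new - h[i][j]).max())
                h[i][j] = (1 - mix)*h[i][j] + mix*new
        if err < tol:
            break
    return h
```

With the converged \(\hat{h}_{ij}(z_{0},z,q)\), the slab structure factor of Eq. (13) reduces to one further quadrature over \(z\).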
The question whether we can detect certain (crystalline-like) in-plane structures is already answered by Eq. (14): although we have an inhomogeneous system along the \(z\)-axis, we assume translational and rotational symmetry in the plane perpendicular to the \(z\)-axis. Hence, with this approach one is not able to find a respective order in the \(xy\)-plane. However, the approach gives access to structure in the \(xy\)-plane and, thus, would allow for detecting a precursor of an ordering transition, similar to the total correlation structure predicted by the OZ equation in a homogeneous bulk system: for hard-disk systems it is known that there is a signature in the second peak of the ("homogeneous bulk") total correlation function close to the freezing point [28; 29; 30]. In essence, when we restrict our study to only the first layer next to the surface, we have a hard-disk-like system, and the question therefore is whether we can observe these subtle signatures in the in-plane structure factor and total correlation function.
### Charge-charge and density-density structure
For convenience, and for future reference, let us define the charge-charge (ZZ) and number-number (NN) structure factors by, respectively, [25]
\[S_{\rm ZZ}=\sum_{ij}Z_{i}Z_{j}S_{ij} \tag{17}\]
and
\[S_{\rm NN}=\sum_{ij}S_{ij}, \tag{18}\]
where the summations are over the number of species.
## III Modelling the system
The system under consideration is an electrolyte at temperature \(T\) that is confined between two parallel flat hard walls located at \(z=0\) and \(z=H\) at which electrostatic surface potentials \(\Phi_{0}\) and \(\Phi_{H}\) are applied, respectively, as depicted in Fig. 1. The electrolyte is in chemical equilibrium with an ion reservoir at chemical potentials \(\mu_{\pm}\) (corresponding to bulk densities \(\rho_{0,\pm}\)) for the cations and anions, respectively [31]. The respective non-electrostatic ion-electrode interaction potential between ions of species \(i\) and the wall at \(z=0\) is
\[\beta V_{\rm ext}^{i}(z)=\begin{cases}\infty&\quad\text{for }z\leq d_{i}/2;\\ 0&\quad\text{for }z>d_{i}/2,\end{cases} \tag{19}\]
where \(\beta=1/k_{\rm B}T\) with \(k_{\rm B}\) Boltzmann's constant. The electrostatic part of the ion-electrode interactions is treated via the Poisson equation
\[\partial_{z}^{2}\Phi(z)=-\frac{\rho_{\rm Z,tot}(z)}{\varepsilon_{\rm r}\varepsilon_{0}}, \tag{20}\]
where \(\rho_{\rm Z,tot}(z)=\rho_{\rm Z}(z)+\delta(z)\sigma_{0}+\delta(z-H)\sigma_{H}\) denotes the total charge density in the system, _i.e._ the charge density of the mobile ions \(\rho_{\rm Z}(z)\) plus the surface charge densities \(\sigma_{0}\) and \(\sigma_{H}\) of the respective walls. Note already that we use an external potential that only depends on
the out-of-plane coordinate \(z\) without variations in the \(xy\)-plane.
For the electrolyte we use the Primitive Model (PM), in which the ions are modelled as charged hard spheres of diameters \(d_{j}\) residing in a continuum dielectric medium, characterized by the Bjerrum length \(\lambda_{\rm B}=\beta e^{2}/4\pi\varepsilon_{r}\varepsilon_{0}\), where \(e\) is the proton charge and \(\varepsilon_{r}\varepsilon_{0}\) the dielectric permittivity of the medium. The ion-ion interaction potential for two ions separated by a distance \(r\) is then given by
\[\beta u_{ij}^{\rm PM}(r)=\begin{cases}\infty&\text{for $r\leq d_{ij}$;}\\ Z_{i}Z_{j}\frac{\lambda_{\rm B}}{r}&\text{for $r>d_{ij}$,}\end{cases} \tag{21}\]
where \(Z_{j}\) denotes the valency of the ions of species \(j\), and \(d_{ij}=(d_{i}+d_{j})/2\) the common diameter of species \(i\) and \(j\). Specifically, we consider a system with ion diameters \(d_{-}=0.506\) nm and \(d_{+}=0.618\) nm, valencies \(Z_{\pm}=\pm 0.785\), and Bjerrum length \(\lambda_{\rm B}=4.17\) nm (\(T=400\) K and \(\epsilon_{r}=10\)), mimicking the ionic liquid of Ref. [22]. The system size we use for most of the DFT calculations is \(H=10d_{-}\), the only exception being the calculation of the density profiles when comparing to the simulation data for which we consider the system size \(H=12.32\) nm, in accordance with Ref. [23]. Typically, the density profiles in our system decay to bulk values within \(5\)\(d_{-}\) (\(\approx 2.5\) nm), as can be seen from Fig. 2.
We tackle this model using DFT with the MSAc implementation, whose details can be found in our previous works [17; 18; 19; 20] and are summarized in appendix A. In short, DFT is an exact theoretical framework to access the structure and thermodynamics of a given system in an external potential \(V_{\rm ext}^{j}\)[32]. The main object within DFT is the grand potential functional \(\Omega[\{\rho\}]\) of the densities \(\{\rho\}\), which typically is not known exactly. A main problem in DFT is setting up accurate functionals for specific systems. The MSAc functional reads
\[\Omega[\{\rho\}]=\mathcal{F}_{\rm id}[\{\rho\}]+\mathcal{F}_{\rm ex }^{\rm HS}[\{\rho\}]+\mathcal{F}_{\rm ex}^{\rm MFC}[\{\rho\}]+\mathcal{F}_{ \rm ex}^{\rm MSAc}[\{\rho\}]-A\sum_{j}\int\mathrm{d}z\,\rho_{j}(z)(\mu_{j}-V _{\rm ext}^{j}(z))-A\Phi_{0}\sigma_{0}-A\Phi_{H}\sigma_{H}, \tag{22}\]
where \(\mathcal{F}_{\rm id}[\{\rho\}]\) is the ideal Helmholtz free energy functional, \(\mathcal{F}_{\rm ex}^{\rm HS}[\{\rho\}]\) the excess Helmholtz free energy functional that deals with the hard-sphere interactions for which we use the White Bear mark II (WBII) version of fundamental measure theory (FMT) [33; 34], \(\mathcal{F}_{\rm ex}^{\rm MFC}[\{\rho\}]\) is the mean-field functional for the electrostatic (Coulombic) interactions, and \(\mathcal{F}_{\rm ex}^{\rm MSAc}[\{\rho\}]\) is the beyond mean-field functional for the electrostatic interactions that makes use of the direct correlation functions of MSA [24; 35; 36; 37; 38; 20]. The surface area \(A\) is eventually factored out and does not play a role in the DFT calculations. The grand potential functional is minimized by the equilibrium density profiles \(\rho_{j}^{\rm eq}(z)\),
\[\frac{\delta\Omega[\{\rho\}]}{\delta\rho_{j}(z)}\bigg{|}_{\rho_{j}(z)=\rho_{j}^{\rm eq}(z)}=0. \tag{23}\]
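In practice, Eq. (23) is typically solved by a damped fixed-point (Picard) iteration of the Euler-Lagrange equation \(\rho_{j}(z)\propto\exp[\beta\mu_{j}-\beta V_{\rm ext}^{j}(z)+c_{j}^{(1)}(z)]\), where \(c_{j}^{(1)}=-\beta\,\delta\mathcal{F}_{\rm ex}/\delta\rho_{j}\) is the one-body direct correlation function. A minimal sketch follows (assuming NumPy; the callable `c1`, which must also account for the electrostatic potential from the Poisson equation at every iteration, is a hypothetical placeholder, and thermal wavelengths are absorbed into \(\mu_{j}\)):

```python
import numpy as np

def picard_dft(beta_mu, beta_Vext, c1, rho0, mix=0.02, tol=1e-9, itmax=10**5):
    """Damped Picard iteration for the Euler-Lagrange equation of Eq. (23).

    beta_mu[j]: reduced chemical potential of species j;
    beta_Vext[j]: reduced external potential on the z-grid;
    c1(rho) -> list of arrays: one-body direct correlations (assumed given,
    including the mean-field and MSAc electrostatic contributions)."""
    rho = [r.copy() for r in rho0]
    for _ in range(itmax):
        c1_now = c1(rho)
        new = [np.exp(beta_mu[j] - beta_Vext[j] + c1_now[j])
               for j in range(len(rho))]
        err = max(np.abs(n - r).max() for n, r in zip(new, rho))
        rho = [(1 - mix)*r + mix*n for r, n in zip(rho, new)]
        if err < tol:
            break
    return rho
```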
From minimizing the grand potential, one only gains access to \(\rho_{i}(z)\) and no information about what happens in the plane, because all applied external potentials have translational symmetry and we do not break this symmetry in our calculations.
Figure 1: The system under consideration. Two planar hard walls are located at \(z=0\) and \(z=H\) with surface potentials \(\Phi_{0}\) and \(\Phi_{H}\), respectively. The surfaces confine an electrolyte at temperature \(T\) in chemical equilibrium with a reservoir at chemical potentials \(\mu_{\pm}\) with which it can exchange ions. The electrolyte is modelled in the Primitive Model (PM) with ions as charged hard spheres of diameters \(d_{\pm}\) that carry charges \(\pm 0.785\)e in their centre. The ions move in a continuum medium, characterized by the relative dielectric permittivity \(\varepsilon_{r}\).
In general, DFT could also predict density profiles that are inhomogeneous in the \(xy\)-plane, but the numerical calculations would be too expensive [39]. We obtain in-plane structure via the OZ equation as explained in the previous section, for which we need the direct pair-correlation function. This, however, follows from the grand potential functional via
\[c_{ij}^{(2)}(\mathbf{r},\mathbf{r}^{\prime})=-\beta\frac{\delta^{2}\mathcal{F}_{\rm ex}[\{\rho\}]}{\delta\rho_{i}(\mathbf{r})\,\delta\rho_{j}(\mathbf{r}^{\prime})} \tag{24}\] \[= c_{ij}^{(2),{\rm HS}}(\mathbf{r},\mathbf{r}^{\prime})+c_{ij}^{(2),{\rm MFC}}(\mathbf{r},\mathbf{r}^{\prime})+c_{ij}^{(2),{\rm MSAc}}(\mathbf{r},\mathbf{r}^{\prime}),\]
which naturally contains a HS contribution, a MFC contribution, and a MSAc contribution. Specifically, we are interested in the Hankel-transformed direct-correlation function (see Eq. (14)). The Hankel-transformed \(c_{ij}^{(2),{\rm HS}}(\mathbf{r},\mathbf{r}^{\prime})\) for the Rosenfeld-version of FMT is well described in Ref. [27] for a single-component system. We straightforwardly generalized that approach for the WBII-version of FMT including a tensor correction [40] to more accurately incorporate dense multicomponent systems. The MFC contribution is simply given by
\[c_{ij}^{(2),{\rm MFC}}(z,z^{\prime},q)=-2\pi\lambda_{\rm B}Z_{i}Z_{j}\frac{e^ {-q|z-z^{\prime}|}}{q}, \tag{25}\]
and for the MSAc contribution we numerically Hankel-transformed the MSAc direct correlation function, whose explicit form can, for example, be found in Ref. [19].
To summarize our approach, we construct a grand potential functional for the PM electrolyte confined between two planar hard surfaces at which we apply surface potentials. This gives us both access to the density profiles perpendicular to the surfaces \(\rho_{j}(z)\) as well as the direct correlation function \(c_{ij}^{(2)}(z,z^{\prime},u)\), where \(u\) is the in-plane radial component. Both these quantities are used to calculate the total correlation function in Hankel space \(\hat{h}_{ij}(z,z^{\prime},q)\), where \(q\) is the radial component of the wave vector in the plane. Consequently, we can calculate the in-plane structure factor \(S(q,z_{0},L^{\prime})\) within a slab of thickness \(L^{\prime}\) centered around \(z_{0}\).
## IV Success of our approach
In Ref. [23], the authors consider an ionic liquid in the PM, similar to the one studied in Ref. [22], where in-plane structure was found in simulations using specific force fields. If in-plane structure were to form, one would expect this to show up in the "in-plane" structure factor (as demonstrated in Ref. [22]). For the following discussion, we consider \(S_{ij}^{*}(q;z)\) from Eq. (12) as the in-plane structure factor and \(H_{ij}(q;z)\) from Eq. (13) as the normalized in-plane structure factor [41].
First let us show the density profiles at the positive-charged hard wall and compare those with the MD simulation data given in Ref. [23]. This comparison is presented in Fig. 2. Note here that the MD simulations were performed within the canonical ensemble, while the DFT calculations are performed within the grand-canonical ensemble. Hence, we matched the number of particles in the system and the charge on the walls within DFT with those given from the simulations. The corresponding numbers used as input parameters for DFT (the concentration \(c\) and surface potential \(\Phi_{0}\) and \(\Phi_{H}\) at the walls located at \(z=0\) and \(z=H\)) are given within each panel in Fig. 2. [42] Overall, the MD and the MSAc DFT density profiles agree reasonably well.
Let us then consider the structure factors, which are calculated in MD [23] via the definition
\[S_{ij}^{\rm MD}(\mathbf{q};z_{0},L^{\prime})=\left\langle\frac{1}{N_{\rm tot}} \sum_{n\in\Gamma_{i}}\sum_{m\in\Gamma_{j}}\exp[-i\mathbf{q}\cdot(\mathbf{r}_{ n}-\mathbf{r}_{m})]\right\rangle, \tag{26}\]
where \(N_{\rm tot}\) denotes the total number of particles within the slab \([z_{0}-L^{\prime}/2,z_{0}+L^{\prime}/2]\), and \(\Gamma_{i}\) the set of particles of species \(i\) within the slab. We define \(\mathbf{q}=(q_{x},q_{y},0)\) as the in-plane wave vector with \(q=|\mathbf{q}|\) the radial wave number defined before. Eq. (26) is comparable to the structure factor calculated from Eq. (12). To demonstrate this, we compare results from MD using Eq. (26) (circles) with MSAc using Eq. (13) (solid lines) in Fig. 3. For this purpose, we define, in accordance with Eq. (12), \(H_{ij}^{\rm MD}=\frac{N_{\rm tot}}{N_{i}}\,S_{ij}^{\rm MD}\). The blue lines depict the \(H_{++}\) component, the orange lines the \(H_{--}\) component, and the yellow lines the \(H_{+-}\) component. Note that we cannot show the \(H_{-+}\) values for MD, because they were omitted in Ref. [23] due to poor statistics. The MSAc functional performs a lot better than the MFC functional (not shown here, but see Ref. [23]), and the agreement between MD and MSAc is satisfactory. Since MD did not provide the \(S_{-+}\) components due to poor statistics, we cannot properly compare \(S_{\rm ZZ}\) and \(S_{\rm NN}\) against simulations. However, given the agreement between DFT and MD seen in Fig. 3, we presume satisfactory agreement for \(S_{\rm ZZ}\) and \(S_{\rm NN}\) as well. The corresponding charge-charge and number-number structure factors \(S_{\rm ZZ}\) and \(S_{\rm NN}\) from DFT are plotted in Fig. 4. Note the shift of the first maximum in \(S_{\rm ZZ}\), which will be discussed in section V.
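For concreteness, the slab estimator of Eq. (26) is simple to implement from raw particle coordinates. The sketch below is our own illustration (random, uncorrelated stand-in configurations; the helper S_md and all parameters are ad hoc), not the analysis code of Ref. [23].

```python
# Single-configuration estimator of Eq. (26): the in-plane partial
# structure factor of the particles inside the slab [z0-L'/2, z0+L'/2].
# For the uncorrelated stand-in data below only the self term survives on
# average, so S_ii ~ N_i/N_tot; Eq. (26) additionally implies an ensemble
# average over MD configurations.
import numpy as np

rng = np.random.default_rng(1)

def S_md(pos, i, j, qvec, z0, Lp):
    """pos: dict mapping species label -> (N, 3) array of coordinates."""
    slab = {s: r[np.abs(r[:, 2] - z0) <= Lp / 2] for s, r in pos.items()}
    ntot = sum(len(r) for r in slab.values())
    rho = {s: np.exp(-1j * (slab[s][:, :2] @ qvec)).sum() for s in slab}
    return (rho[i] * np.conj(rho[j])).real / ntot

box = 10.0
pos = {"+": rng.uniform(0, box, (2000, 3)),
       "-": rng.uniform(0, box, (2000, 3))}
q = np.array([2 * np.pi * 4 / box, 0.0])            # commensurate with the box
print(S_md(pos, "+", "+", q, z0=box / 2, Lp=2.0))   # ~ 0.5 for this ideal gas
```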
## V The structure changes with the surface potential
Now we have a well-performing formalism at hand to investigate the structure factor as a function of the surface potential \(\Phi_{0}\). In Fig. 5(a)-(d) we show the normalized in-plane structure factors \(H_{ij}(q)\) for varying surface potential \(\Phi_{0}\). The surface plots show \(H_{ij}(q)\) on the \(z\)-axis, the wave number \(q\) on the \(x\)-axis, and the surface
Figure 3: Structure factors \(H_{ij}(q)\) from DFT (MSAc functional) and MD via Eqs. (13) and (26), both for the same parameters and density profiles as given in Fig. 2; the potential increases from (a) \(\Phi_{0}=-0.0104\) V via (b) \(\Phi_{0}=0.0402\) V to (c) \(\Phi_{0}=0.2524\) V. The values \(z_{0}\) and \(L^{\prime}\) (values according to the numerical grid) are chosen such that \(z_{0}\) lies in between the contact value of the smallest ions (at \(d_{-}/2\)) and the first minimum (closest to the wall) in the total density profile (see Fig. 2). In addition, values for the prefactors \(n_{i}(z_{0},L^{\prime})/n_{\rm tot}(z_{0},L^{\prime})\) are presented (the prefactors appear in Eq. (12)).
Figure 2: Density profiles \(\rho_{i}\), normalized to bulk density \(\rho_{\rm b}\), for separations \(z\) from the wall located at \(z_{0}=0\); \(d_{-}\) is the diameter of the negatively charged ions. In blue the cation density profile \(\rho_{+}\) is given, in red the anion density profile \(\rho_{-}\), and in yellow the total density profile \(\rho_{\rm tot}\). The inset in each panel shows the corresponding system parameters, _i.e._, particle numbers \(N_{\pm}\) in the volume \(HA\) with wall separation \(H\), wall charge densities \(\sigma_{0}\) and \(\sigma_{H}\) at the respective walls, and corresponding bulk concentration \(c\) and wall potentials \(\Phi_{0}\) and \(\Phi_{H}\) used in the MSAc functional. The surface potential is smallest in (a), increased a bit in (b), and is largest in (c). The Bjerrum length is \(\lambda_{\rm B}=4.17\) nm.
potential \(\Phi_{0}\) on the \(y\)-axis. By plotting \(H_{ij}\), one emphasizes the role that the cations and anions play at a given surface potential; at positive potentials \(\Phi_{0}\gtrsim 0.5\) V the (negative) anions dominate (see \(H_{+-}\) and \(H_{--}\)), while at negative potentials \(\Phi_{0}\lesssim-0.5\) V the (positive) cations dominate (see \(H_{-+}\) and \(H_{++}\)). To exemplify this, we plot in Fig. 6 the average packing fraction of the cations and anions in the first layer at the electrode as a function of the applied surface potential \(\Phi_{0}\).
In Ref. [22], the authors found an ordered phase at small surface potentials, resulting in large values for the peak height of the structure factor. As mentioned before, we cannot find this ordered structure directly due to the construction of our theoretical approach. Nevertheless, we would expect to find precursors of an ordering transition, if it exists. In Fig. 7(a) and (b) we plot the number-number and charge-charge structure factors \(S_{\rm NN}\) and \(S_{\rm ZZ}\), respectively, as functions of the surface potential \(\Phi_{0}\) and wave number \(q\). Similarly to Ref. [22], there is a small bump in \(S_{\rm ZZ}\) at small surface potentials (\(q\approx\pi/d_{-}\)). Interestingly, the location of this maximum (or bump) in Fig. 7(b) is around \(q=0.45\cdot 2\pi/d_{-}=2\pi/(d_{-}+d_{+})\) for vanishing surface potentials, and shifts to \(q=2\pi/d_{-}\) and \(q=0.81\cdot 2\pi/d_{-}=2\pi/d_{+}\) at positive and negative potentials, respectively. The reason is that at small surface potentials, the difference in the number of cations and anions within the first layer is small (see Fig. 2(a)), and therefore, within the plane, each cation (anion) is approximately surrounded by an anion (cation) and the charge-charge structure spans two ion diameters, resulting in a peak at \(q=2\pi/(d_{-}+d_{+})\). At larger (absolute) surface potentials, the first layer gets fully filled with counterions, and therefore the charge-charge structure couples to the number-number structure, causing a peak in \(S_{\rm ZZ}\) at the inverse ion diameter \(2\pi/d_{j}\). Hence, we do find a structural change in \(S_{\rm ZZ}\) in the first layer near the electrode, from a diffuse EDL at small surface potentials to a dense-packed EDL at larger surface potentials.
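The quoted peak locations follow from simple arithmetic with the ion diameters \(d_{-}=0.506\) and \(d_{+}=0.618\) implied by the size ratio quoted below; a two-line check (units cancel in the ratios):

```python
# Peak positions of S_ZZ in units of 2*pi/d_-.
d_minus, d_plus = 0.506, 0.618

print(d_minus / (d_minus + d_plus))  # 2*pi/(d_- + d_+)  -> 0.45
print(d_minus / d_plus)              # 2*pi/d_+          -> 0.82 (quoted 0.81)
```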
It might be interesting to compare the first layer of the EDL next to the electrode with a two-dimensional system of hard disks. The size ratio between the positive and negative ions is \(d_{+}/d_{-}=0.618/0.506\approx 1.22\). For this size ratio, binary hard disks are not expected to form a crystalline-like phase [43]. However, Fig. 6 shows that at sufficiently large potentials mainly one species of particles fills the first layer at the electrode. At large absolute potentials the system exceeds the packing fraction where freezing would be expected: For monodisperse hard disks in two dimensions this is \(\eta^{\rm 2d}=0.68\) [44], which would correspond to \(\eta\approx 0.46\) for an ideal layer of hard spheres where all spheres are located exactly in the center of the slab of size \(L^{\prime}=d_{+}/2\). This is further reflected in Fig. 8, where the height of the first peak \(S_{ij}(q^{*})\) is shown for both \(S_{\rm NN}\) and \(S_{\rm ZZ}\) from Fig. 7. At sufficiently large potentials \(\Phi_{0}\) the peak exceeds values of \(2.8-3.1\), which, for hard spheres in bulk, typically happens close to the freezing transition [29]. This can be seen as a precursor of an ordering transition, as would be expected at high packing fractions.
## VI Conclusion
In continuation of previous work [22; 23], we investigated the in-plane structure of EDLs at charged electrodes. We derived the equation for the in-plane structure factor, evaluated within a slab of a certain thickness. In comparison to previous work, satisfactory agreement was found within the primitive model of electrolytes (PM) between MD simulations [23] and our approach, in which we made use of DFT with the MSAc functional. This allowed us to further explore the features of the in-plane structure factor as a function of the electrostatic surface potential. At positive potentials the structure factor is dominated by the anions, while at negative potentials it is dominated by the cations. This finding is in line with expectations and with previous work in Ref. [20], in which the same trend was shown in the differential capacitance. Less trivially, we found that for the charge-charge structure factor
Figure 4: The number-number (NN) and charge-charge (ZZ) structure factor calculated via Eqs. (18) and (17), respectively, corresponding to the structure factors shown in Fig. 3, and obtained from MSAc for the system parameters given in Figs. 2 and 3.
\(S_{\rm ZZ}\) at low surface potentials, where neither the cations nor the anions dominate, the maximum of the structure factor is located around \(2\pi/(d_{-}+d_{+})\), _i.e._ each coion is surrounded by a counterion. At larger surface potentials, however, the location of the maximum converges to \(2\pi/d_{j}\), _i.e._ the first layer is filled with only counterions. This clearly demonstrates that one is able to distinguish between a diffuse EDL and a dense-packed EDL using our approach.
The primitive model that we studied here has its known shortcomings. One of the crucial simplifications of the model is considering an implicit solvent. The reason hereof is simplicity, as the solvent molecules, in particular water molecules, have extremely complex interactions and are therefore hard to model accurately. One implication of considering an implicit solvent in our case is that the dielectric permittivity is constant throughout the system, which is certainly not true for real electrolytes near charged surfaces, where solvent molecules (and ions to some extent) have the tendency to orient along the electric field [45; 46]. However, there is no simple theory to take this effect into account. In this manuscript, we want to keep the model as simple as possible while still capturing important features of the real system. Note that the simulations by Merlet et al. [22] do not have a solvent. Another simplification in our model is not to consider explicitly an inhomogeneous surface-charge-density distribution. Doing so would require us to invoke three-dimensional DFT calculations, which would defeat the purpose of doing DFT in the first place, as three-dimensional DFT is computationally very expensive.
Overall, we explored the EDL from a novel perspective
Figure 5: (a)-(d) The normalized structure factor \(H_{ij}(q)\) for varying surface potential \(\Phi_{0}\), obtained via Eq. (13) from DFT using the MSAc functional. Note the different color schemes in each panel.
and presented how to get access to the in-plane structure within the framework of DFT. Although the in-plane structure factor is a tedious object, we showed that it can provide interesting insight into the EDL. One of the most prominent features of the bulk structure factor, which we have not yet mentioned, is that it is a measurable quantity. The main question, therefore, is whether the in-plane structure factor, studied theoretically in this manuscript, is also measurable. If so, we established a direct connection between the density profiles, which are very hard (if not impossible) to measure [47], and the measurable in-plane structure factor. This would open a doorway to a much more thorough understanding of the EDL.
## Acknowledgements
We acknowledge funding from the German Research Foundation (DFG) through Project Number 406121234. We acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through Grant No. INST 39/963-1 FUGG (bwForCluster NEMO).
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
| The electric double layer (EDL) plays an important role in screening surface charges at the electrodes of supercapacitors and at surfaces in colloidal and polymer solutions. Its structure is determined by the correlations between the finite-sized ionic charge carriers of the underlying electrolyte, which in turn affect the properties of the EDL and its applications. By analyzing the structure of the EDL with classical density functional theory (DFT), we aim to clarify whether the structural transition in the first layer of the EDL, which has been found in fully atomistic simulations, is caused by specific particle interactions or has a generic origin. To this end, we investigated the in-plane structure of the EDL within the primitive model (PM) using DFT. We show that, with a suitable functional, the in-plane structure of the EDL agrees very well with molecular dynamics (MD) simulations. |
2309.07884 | Single-soft emissions for amplitudes with two colored particles at three
loops | We compute the three-loop correction to the universal single-soft emission
current for the case of scattering amplitudes with two additional color-charged
partons. We present results valid for QCD and $\mathcal{N}=4$ super-symmetric
Yang-Mills theory. To achieve our results we develop a new integrand expansion
technique for scattering amplitudes in the presence of soft emissions.
Furthermore, we obtain contributions from single final-state parton matrix
elements to the Higgs boson and Drell-Yan production cross section at
next-to-next-to-next-to-next-to leading order (N$^4$LO) in perturbative QCD in
the threshold limit. | Franz Herzog, Yao Ma, Bernhard Mistlberger, Adi Suresh | 2023-09-14T17:30:41 | http://arxiv.org/abs/2309.07884v1 | # Single-soft emissions for amplitudes with two colored particles at three loops
###### Abstract
We compute the three-loop correction to the universal single-soft emission current for the case of scattering amplitudes with two additional color-charged partons. We present results valid for QCD and \(\mathcal{N}=4\) super-symmetric Yang-Mills theory. To achieve our results we develop a new integrand expansion technique for scattering amplitudes in the presence of soft emissions. Furthermore, we obtain contributions from single final-state parton matrix elements to the Higgs boson and Drell-Yan production cross section at next-to-next-to-next-to-next-to leading order (N\({}^{4}\)LO) in perturbative QCD in the threshold limit.
## 1 Introduction
A remarkable property of gauge theory scattering amplitudes is that they factorize in infrared limits. Infrared limits are generally characterized by soft and/or collinear momentum configurations, and typically lead to singularities or poles in the amplitude. In turn these singularities are responsible for the infrared divergences encountered in both loop and phase space integrals, which typically appear in intermediate stages of the computation of physical quantities.
The infrared limits are of great interest from both a practical as well as theoretical perspective. For one, they are an important ingredient for building infrared subtraction schemes for higher-order QCD cross section calculations [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. They are also responsible for potentially large logarithmic corrections in a variety of observables, and as such enter as crucial ingredients for resummations [15; 16; 17; 18]. Finally, infrared limits are of fundamental interest in the study of the mathematical properties of scattering amplitudes as they constrain their analytic structure. In this context, infrared limits have played an important role in the analytic bootstrap program [19; 20; 21].
In this work, we will focus on the limit of scattering amplitudes involving three colored partons in which a single external gluon momentum becomes soft. It is well known that
the all-order \(n\)-point amplitude factorizes in this limit into an \(n-1\)-point amplitude times the action of a soft current as an operator in color space on the latter. The corresponding universal factorization limit has been known for a long time at the tree level [22]. At the one-loop level the soft limit was first extracted from color-ordered amplitudes [23; 24; 25] before the full color operator structure of the soft current was uncovered [26]. The two-loop soft current was extracted to finite order in the dimensional regulator \(\epsilon\) in ref. [27] by taking the soft limit of the two-loop splitting function. These results were extended to higher orders in \(\epsilon\) in ref. [28; 29], allowing the two-loop soft current to be used in the calculation of the threshold approximation of the N\({}^{3}\)LO Higgs boson cross section [30]. The two-loop order is also the first order where the soft limit can lead to color correlations of three hard partons, and the calculation of the corresponding current was presented in ref. [31]. Beyond the single-soft emission current, the double-soft current has also been known at tree level for quite some time [32] and more recently also at the one-loop level [33; 34; 35]. Finally, the triple soft limit is known at tree-level [36; 37].
The main methods used so far for calculating soft currents have been either extractions from amplitude calculations, or direct calculations via the Wilson line/SCET formalism. In this work, we introduce a new infrared subgraph expansion approach which we employ directly at the integrand level of the full amplitude. Using this new technique we circumvent the very challenging task of computing a full three-loop scattering amplitude. An infrared-subgraph-finding algorithm, which can be seen as a generalization of the expansion-by-subgraph approach [38; 39] from Euclidean space to Minkowski space, was recently developed in the context of on-shell expansions for wide-angle scattering [40]. Here we outline how to employ the same algorithm to find the set of infrared subgraphs contributing to the soft expansion. We present a general strategy for the soft expansion at arbitrary loop order with emphasis on the single-soft case. In particular, we provide criteria to identify infrared subgraphs which are in one-to-one correspondence with regions identified by the method-of-regions approach in parametric space, as implemented in [41; 42; 43; 44; 45]. The calculation of the three-loop soft current not only represents a new result, but also serves as a proof of concept demonstrating the potential of the expansion-by-subgraph approach in a highly non-trivial example. Indeed, this approach has been employed before in next-to-soft expansions of the Drell-Yan cross section at NNLO [46], as well as at higher orders in the soft expansion in double-real-virtual and real-virtual squared corrections to the N\({}^{3}\)LO Higgs boson cross section [47; 48; 49]; however this marks the first application of a fully systematic and automated approach in momentum space.
Our approach facilitates the generation of an integrand in the soft limit of scattering amplitudes. The challenging task of actually performing the loop integration remains. To this end, we employ integration-by-parts (IBP) identities [50; 51; 52] to express our amplitudes in terms of soft master integrals. We then introduce another scale into these soft MIs by completing the square of a certain subset of propagators, which are linear in the loop momentum. The resulting integrals can be identified as _collinear_ MIs and contain the original soft MIs in their soft limits. We solve these collinear MIs via the method of differential equations [53; 54; 55; 56; 57] in terms of harmonic polylogarithms [58; 59] up to weight eight. Apart from one simple integral, we find that the soft boundary integrals are fully
determined from regularity and consistency conditions [60; 61; 62] of the system of differential equations.
The main result of this article is the extraction of three-loop QCD corrections to the single-soft emission current acting on two additional colored partons. In addition, we use our techniques to perform a computation of the single-soft limit of the stress tensor multiplet three-point form factor in \(\mathcal{N}=4\) sYM theory based on an integrand provided in ref. [63]. Our calculation explicitly confirms the principle of maximal transcendentality [64; 65] for the contribution to the single-soft emission current considered in this article. Furthermore, we use our newly obtained results for the soft current at three loops to derive contributions to the Higgs boson and Drell-Yan production cross section at N\({}^{4}\)LO in perturbative QCD due to single real emission contributions in the soft limit.
The remainder of this work is organized as follows. In section 2 we introduce notation and the main results for the three-loop single-soft current with two colored partons. We describe the steps of our calculation in section 3. In section 4 we provide a detailed description of our new integrand expansion technique. In section 5 we discuss the universal pole structure of the soft limit of our newly computed scattering amplitudes as a consistency check. Next, we discuss results for the three-loop form factor in \(\mathcal{N}=4\) sYM theory in section 6. Furthermore, we present the threshold limit of single-real-emission contributions to the Higgs boson and Drell-Yan production cross sections at N\({}^{4}\)LO in QCD perturbation theory in section 7. Finally, we conclude in section 8.
## 2 Single-soft current up to three loops in QCD
We consider a scattering amplitude \(\mathcal{A}\) in which the momentum \(q\) of a single gluon is very low-energetic (i.e. the gluon is soft). In this single-soft limit, scattering amplitudes factorize into a universal operator acting on the scattering amplitude without the soft gluon [66; 22].
\[\lim_{q\to 0}\mathcal{A}_{p_{1}p_{2}\ldots p_{n}q}=\mathbf{J}(q)\mathcal{A}_{ p_{1}p_{2}\ldots p_{n}}. \tag{1}\]
The operator \(\mathbf{J}\) is referred to as the _single-soft emission current_ and acts on the colored degrees of freedom of the scattering amplitude \(\mathcal{A}_{p_{1}p_{2}\ldots p_{n}}\). In general, this operator will correlate emissions from all color-charged particles of the scattering amplitude. In this article, we determine the contribution to the soft-emission current that correlates two color-charged particles through three loops in perturbative QCD and \(\mathcal{N}=4\) sYM theory. Our result is exact for scattering amplitudes involving only two color-charged external particles on top of the emitted soft gluon. Furthermore, our results represent an important first step in determining \(\mathbf{J}(q)\) to third order in the coupling constant.
Up to three-loop order, the single-soft emission current can be decomposed as follows.
\[\mathbf{J}(q) = \frac{ig_{S}}{C_{A}}\epsilon_{\mu}^{a}(q)\sum_{i\neq j}\left( \frac{p_{i}^{\mu}}{p_{i}\cdot q}-\frac{p_{j}^{\mu}}{p_{j}\cdot q}\right)\left[ f^{abc}\mathbf{T}_{i}^{b}\mathbf{T}_{j}^{c}K_{2}(q,p_{i},p_{j})\right. \tag{2}\] \[\left.+iC_{A}\left(d_{4A}^{abcd}K_{4A}(q,p_{i},p_{j})+n_{f}d_{4F}^ {abcd}K_{4F}(q,p_{i},p_{j})\right)\left\{\mathbf{T}_{i}^{b},\mathbf{T}_{i}^{c }\right\}\mathbf{T}_{j}^{d}\right].\]
The boldfaced notation above indicates an operator acting on the color structure of the amplitude. Note that the general structure of \({\bf J}(q)\) can be more complex if its action on amplitudes with more than two colored particles is considered [31]. We work using dimensional regularization as a framework to regulate ultraviolet and infrared singularities, with \(\epsilon\) as the dimensional regulator related to the spacetime dimension via \(d=4-2\epsilon\). The number of quark flavors is given by \(n_{f}\). The index \(i\) sums over all color-charged particles of the scattering amplitude (to which the current is applied). The factors \(K_{X}\) are scalar quantities that can be expanded in perturbative QCD as follows.
\[K_{X}(q,p_{i},p_{j})=\sum_{o=0}^{\infty}a_{S}^{o}\left(\frac{(-2qp_{i}-i0)(-2 qp_{j}-i0)}{(-2p_{i}p_{j}-i0)\mu^{2}}\right)^{-o\epsilon}K_{X}^{(o)}. \tag{3}\]
The scalar products of the momenta appearing above are equipped with an infinitesimal imaginary part, inherited from Feynman's \(i0\) prescription. It is our convention that all momenta are treated as incoming. Consequently, all scalar products are positive such that the term in the bracket in eq. (3) above introduces imaginary parts to the scattering amplitude. If one computes other configurations of scattering of particles (incoming and outgoing), then the corresponding soft-emission current can be derived by the appropriate crossing and analytic continuation according to the \(i0\) prescription indicated earlier. Above, \(a_{S}\) is related to the bare strong coupling constant \(\alpha_{S}^{0}\) by some universal factors.
\[a_{S}=\frac{\alpha_{S}^{0}}{\pi}\left(\frac{4\pi}{\mu^{2}}\right)^{\epsilon}e^ {-\gamma_{E}\epsilon},\hskip 28.452756pt\gamma_{E}=0.577216\ldots. \tag{4}\]
The coefficients \(K_{X}^{(o)}\) have been computed in perturbative QCD at one loop [23; 24; 25; 26] and two loops [27; 28; 29; 31], where only the terms \(K_{2}^{(o)}\) are non-zero. From three loops, non-vanishing contributions from \(K_{4A}^{(3)}\) and \(K_{4F}^{(3)}\) emerge. The color tensors \(d_{4A}^{abcd}\) and their contractions are defined as follows:
\[C_{4}^{R_{1}R_{2}}=d_{R1}^{abcd}d_{R2}^{abcd},\hskip 28.452756ptd_{R}^{abcd}= \frac{1}{4!}\left[\text{Tr}\big{(}T_{R}^{a}T_{R}^{b}T_{R}^{c}T_{R}^{d}\big{)}+ \text{symmetric permutations}\right]\,. \tag{5}\]
Above \(T_{R}^{a}\) are the generators in representation \(R\) of a general compact semi-simple Lie algebra; explicitly for the fundamental and adjoint representation we find:
\[T_{F,\,ij}^{a}=T_{ij}^{a}\,,\hskip 28.452756ptT_{A,\,ij}^{a}=-if^{aij}. \tag{6}\]
The labels \(A\) and \(F\) refer to the adjoint and fundamental representations respectively. For \(SU(n_{c})\) we can express the quartic Casimirs in terms of the number of colors \(n_{c}\):
\[C_{4}^{AA}=\frac{n_{c}^{2}}{24}(n_{c}^{2}-1)(36+n_{c}^{2}),\quad C_{4}^{AF}= \frac{n_{c}}{48}(n_{c}^{2}-1)(6+n_{c}^{2}),\quad C_{4}^{FF}=\frac{(n_{c}^{2}-1 )(18-6n_{c}^{2}+n_{c}^{4})}{96n_{c}^{2}}. \tag{7}\]
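The \(SU(3)\) values implied by eq. (7), \(C_{4}^{AA}=135\), \(C_{4}^{AF}=15/2\) and \(C_{4}^{FF}=5/12\), can be reproduced by brute force from the definitions in eqs. (5) and (6); below is a minimal numerical sketch (our own cross-check, assuming the standard Gell-Mann normalization \({\rm Tr}(T^{a}T^{b})=\delta^{ab}/2\); the helper d4 is ours).

```python
# Build fundamental (Gell-Mann) and adjoint SU(3) generators, form the
# symmetrized traces d_R^{abcd} of Eq. (5) and contract them; the naive
# loop over 8^4 index tuples takes a few seconds.
import itertools
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3)
lam[7][2, 2] = -2 / np.sqrt(3)
TF = lam / 2                                   # Tr(T^a T^b) = delta^{ab}/2

# f^{abc} = -2i Tr([T^a, T^b] T^c); adjoint generators (T_A^a)_bc = -i f^{abc}
tr3 = np.einsum("aij,bjk,cki->abc", TF, TF, TF)
f = (-2j * (tr3 - np.transpose(tr3, (1, 0, 2)))).real
TA = -1j * f

def d4(T):
    """Fully symmetrized trace d_R^{abcd} of Eq. (5)."""
    d = np.zeros((8,) * 4, dtype=complex)
    for idx in itertools.product(range(8), repeat=4):
        d[idx] = np.mean([np.trace(T[p] @ T[q] @ T[r] @ T[s])
                          for p, q, r, s in itertools.permutations(idx)])
    return d

dF, dA = d4(TF), d4(TA)
print(np.einsum("abcd,abcd", dA, dA).real)     # C4^AA = 135
print(np.einsum("abcd,abcd", dA, dF).real)     # C4^AF = 7.5
print(np.einsum("abcd,abcd", dF, dF).real)     # C4^FF = 0.41666...
```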
One of the main results of this article is the explicit computation of the coefficients \(K_{X}^{(o)}\) through three-loop order. These coefficients are stated here explicitly and are also provided as ancillary files to the arXiv submission of this article. Their computation will be discussed in more detail later.
\[K_{2}^{(0)} = 1.\]
\[K_{2}^{(1)} = -C_{A}\,\frac{e^{\gamma_{E}\epsilon}\,\Gamma(1-\epsilon)^{3}\Gamma(1+\epsilon)^{2}}{4\epsilon^{2}\,\Gamma(1-2\epsilon)}\] \[= C_{A}\Bigg{[}-\frac{1}{4\epsilon^{2}}-\frac{\zeta_{2}}{8}+\frac{7\zeta_{3}\epsilon}{12}+\frac{39\zeta_{4}\epsilon^{2}}{64}+\left(\frac{7\zeta_{2}\zeta_{3}}{24}+\frac{31\zeta_{5}}{20}\right)\epsilon^{3}+\left(\frac{1555\zeta_{6}}{512}-\frac{49\zeta_{3}^{2}}{72}\right)\epsilon^{4}\] \[+\left(-\frac{91}{64}\zeta_{3}\zeta_{4}+\frac{31\zeta_{2}\zeta_{5}}{40}+\frac{127\zeta_{7}}{28}\right)\epsilon^{5}+\left(-\frac{49}{144}\zeta_{2}\zeta_{3}^{2}-\frac{217\zeta_{5}\zeta_{3}}{60}+\frac{37009\zeta_{8}}{4096}\right)\epsilon^{6}\Bigg{]}+{\cal O}(\epsilon^{7}).\]
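The closed form in the first line above can be cross-checked numerically against the quoted expansion; a minimal sketch with mpmath (our own check, not part of the computation):

```python
# Expand eps^2 * K_2^(1)/C_A, which is analytic at eps = 0, and compare
# the Taylor coefficients with the zeta values quoted above.
from mpmath import mp, mpf, exp, gamma, euler, zeta, taylor

mp.dps = 30

def g(eps):
    return -exp(euler * eps) * gamma(1 - eps)**3 * gamma(1 + eps)**2 \
           / (4 * gamma(1 - 2 * eps))

c = taylor(g, 0, 4)
print(c[0], -mpf(1) / 4)        # -1/4
print(c[2], -zeta(2) / 8)       # -zeta_2/8
print(c[3], 7 * zeta(3) / 12)   # 7*zeta_3/12
print(c[4], 39 * zeta(4) / 64)  # 39*zeta_4/64
```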
\[K_{2}^{(2)} = C_{A}^{2}\Bigg{[}\frac{1}{32\epsilon^{4}}-\frac{11}{192\epsilon ^{3}}+\left(\frac{\zeta_{2}}{16}-\frac{67}{576}\right)\frac{1}{\epsilon^{2}}+ \left(-\frac{11\zeta_{2}}{192}-\frac{11\zeta_{3}}{96}-\frac{193}{864}\right) \frac{1}{\epsilon}\] \[-\frac{67\zeta_{2}}{576}+\frac{341\zeta_{3}}{288}+\frac{7\zeta_ {4}}{128}-\frac{571}{1296}\] \[+\left(-\frac{7}{96}\zeta_{3}\zeta_{2}-\frac{139\zeta_{2}}{864}+ \frac{2077\zeta_{3}}{864}+\frac{2035\zeta_{4}}{768}-\frac{247\zeta_{5}}{160}- \frac{1705}{1944}\right)\epsilon\] \[+\left(-\frac{205\zeta_{3}^{2}}{288}+\frac{341\zeta_{2}\zeta_{3} }{288}+\frac{1597\zeta_{3}}{324}-\frac{109\zeta_{2}}{324}+\frac{12395\zeta_{4 }}{2304}+\frac{5621\zeta_{5}}{480}-\frac{3307\zeta_{6}}{768}-\frac{5107}{2916 }\right)\epsilon^{2}\] \[+\left(-\frac{10571\zeta_{3}^{2}}{864}+\frac{2077\zeta_{2}\zeta_ {3}}{864}-\frac{509\zeta_{4}\zeta_{3}}{384}+\frac{37427\zeta_{3}}{3888}-\frac {2411\zeta_{2}}{3888}+\frac{41105\zeta_{4}}{3456}-\frac{219\zeta_{2}\zeta_{5}} {160}\right.\] \[+\frac{34237\zeta_{5}}{1440}+\frac{42361\zeta_{6}}{1024}-\frac{4 573\zeta_{7}}{224}-\frac{15313}{4374}\right)\epsilon^{3}\] \[+\epsilon^{4}\left(-\frac{5\zeta_{5,3}}{2}-\frac{845}{288}\zeta_ {2}\zeta_{3}^{2}-\frac{64387\zeta_{3}^{2}}{2592}+\frac{1381\zeta_{2}\zeta_{3} }{324}-\frac{63085\zeta_{4}\zeta_{3}}{1152}-\frac{29\zeta_{5}\zeta_{3}}{240}\right.\] \[\left.+\frac{226405\zeta_{3}}{11664}-\frac{14785\zeta_{2}}{11664} +\frac{119135\zeta_{2}}{5184}+\frac{5621\zeta_{2}\zeta_{5}}{480}+\frac{27187 \zeta_{5}}{540}+\frac{258017\zeta_{6}}{3072}+\frac{90101\zeta_{7}}{672}\right.\] \[\left.-\frac{1264777\zeta_{8}}{18432}-\frac{45931}{6561}\right) \Bigg{]}\] \[+n_{f}C_{A}\Bigg{[}\frac{1}{96\epsilon^{3}}+\frac{5}{288 \epsilon^{2}}+\left(\frac{\zeta_{2}}{96}+\frac{19}{864}\right)\frac{1}{ \epsilon}+\frac{5\zeta_{2}}{288}-\frac{31\zeta_{3}}{144}+\frac{65}{2592}\] \[+\left(-\frac{35\zeta_{2}}{864}-\frac{155\zeta_{3}}{432}-\frac{18 5\zeta_{4}}{384}+\frac{211}{7776}\right)\epsilon\] \[+\left(-\frac{31}{144}\zeta_{3}\zeta_{2}-\frac{367\zeta_{2}}{2592 }-\frac{497\zeta_{3}}{648}-\frac{925\zeta_{4}}{1152}-\frac{511\zeta_{5}}{240} +\frac{665}{23328}\right)\epsilon^{2}\] \[+\left(\frac{961\zeta_{3}^{2}}{432}-\frac{155\zeta_{2}\zeta_{3} }{432}-\frac{5255\zeta_{3}}{388}-\frac{3083\zeta_{2}}{7776}-\frac{8915\zeta_{ 4}}{3456}-\frac{511\zeta_{5}}{144}-\frac{3851\zeta_{6}}{512}+\frac{2059}{69 984}\right)\epsilon^{3}\] \[+\left(\frac{4805\zeta_{3}^{2}}{1296}-\frac{65\zeta_{2}\zeta_{3}}{ 648}+\frac{5735\zeta_{4}\zeta_{3}}{576}-\frac{15623\zeta_{3}}{5832}-\frac{20503 \zeta_{2}}{23328}-\frac{55225\zeta_{4}}{10368}-\frac{511\zeta_{2}\zeta_{5}}{240}\right.\] \[\left.-\frac{9917\zeta_{5}}{1080}-\frac{19255\zeta_{6}}{1536}-\frac {8191\zeta_{7}}{336}+\frac{6305}{209952}\right)\epsilon^{4}\Bigg{]}+{\cal O}( \epsilon^{5}).\]
\[K_{2}^{(3)} = C_{A}^{3}\Bigg{[}-\frac{1}{384\epsilon^{6}}+\frac{11}{768 \epsilon^{5}}+\left(\frac{119}{20736}-\frac{3\zeta_{2}}{256}\right)\frac{1}{ \epsilon^{4}}+\left(\frac{649\zeta_{2}}{13824}+\frac{\zeta_{3}}{96}-\frac{151 7}{31104}\right)\frac{1}{\epsilon^{3}} \tag{11}\]
\[+\left(\frac{2501\zeta_{2}}{41472}-\frac{2101\zeta_{3}}{6912}-\frac{14 87\zeta_{4}}{18432}-\frac{7271}{31104}\right)\frac{1}{\epsilon^{2}}\] \[+\left(\frac{11\zeta_{3}\zeta_{2}}{1152}+\frac{437\zeta_{2}}{62208 }+\frac{2575\zeta_{3}}{2304}-\frac{22583\zeta_{4}}{36864}+\frac{49\zeta_{5}}{1 60}-\frac{446705}{559872}\right)\frac{1}{\epsilon}+\frac{293\zeta_{3}^{2}}{2304}\] \[-\frac{2453\zeta_{2}\zeta_{3}}{4608}+\frac{203705\zeta_{3}}{31104} -\frac{12911\zeta_{2}}{186624}+\frac{493381\zeta_{4}}{110592}-\frac{26543\zeta_{ 5}}{3840}+\frac{445679\zeta_{6}}{442368}-\frac{8206861}{3359232}\] \[+\left(-\frac{17149\zeta_{3}^{2}}{13824}+\frac{21031\zeta_{2} \zeta_{3}}{13824}+\frac{43\zeta_{4}\zeta_{3}}{288}+\frac{2330483\zeta_{3}}{93 312}-\frac{403379\zeta_{2}}{1119744}+\frac{1228523\zeta_{4}}{55296}\right.\] \[+\frac{9773\zeta_{2}\zeta_{5}}{5760}+\frac{262597\zeta_{5}}{11520 }-\frac{25965643\zeta_{6}}{884736}+\frac{151631\zeta_{7}}{16128}-\frac{48027739 }{6718464}\right)\epsilon\] \[+\left(-\frac{469\zeta_{5,3}}{90}+\frac{10045\zeta_{2}\zeta_{3}^ {2}}{4608}-\frac{920995\zeta_{3}^{2}}{13824}+\frac{71831\zeta_{2}\zeta_{3}}{6 912}+\frac{388289\zeta_{4}\zeta_{3}}{36864}-\frac{9907\zeta_{5}\zeta_{3}}{1920}\right.\] \[+\frac{15854467\zeta_{3}}{186624}-\frac{5363867\zeta_{2}}{6718464 }+\frac{42678481\zeta_{4}}{497664}-\frac{71533\zeta_{2}\zeta_{5}}{7680}+\frac {82837\zeta_{5}}{640}\] \[+\frac{112195243\zeta_{6}}{884736}-\frac{1343045\zeta_{7}}{8064}+ \frac{3738034847\zeta_{8}}{53084160}-\frac{2482106477}{120932352}\right) \epsilon^{2}\Bigg{]}\] \[+\ C_{A}^{2}n_{f}\Bigg{[}-\frac{1}{384\epsilon^{5}}+\frac{43}{10 368\epsilon^{4}}+\left(\frac{895}{31104}-\frac{59\zeta_{2}}{6912}\right) \frac{1}{\epsilon^{3}}+\left(-\frac{31\zeta_{2}}{20736}+\frac{239\zeta_{3}}{3 456}+\frac{2603}{31104}\right)\frac{1}{\epsilon^{2}}\] \[+\left(\frac{3265\zeta_{2}}{62208}-\frac{4945\zeta_{3}}{10368}+ \frac{2437\zeta_{4}}{18432}+\frac{24169}{139968}\right)\frac{1}{\epsilon}\] \[+\frac{271\zeta_{3}\zeta_{2}}{2304}-\frac{3925\zeta_{2}}{186624}- \frac{2513\zeta_{3}}{1152}-\frac{33109\zeta_{4}}{18432}+\frac{7799\zeta_{5}}{5 760}+\frac{397699}{1679616}\] \[+\left(-\frac{4969\zeta_{3}^{2}}{6912}-\frac{1595\zeta_{2}\zeta_ {3}}{2304}-\frac{720299\zeta_{3}}{93312}-\frac{228895\zeta_{2}}{279936}-\frac {1168171\zeta_{4}}{165888}-\frac{187753\zeta_{5}}{17280}+\frac{2476865\zeta_{6} }{442368}\right.\] \[\left.-\frac{22273}{373248}\right)\epsilon+\left(\frac{404075 \zeta_{3}^{2}}{20736}-\frac{78295\zeta_{2}\zeta_{3}}{20736}-\frac{121555\zeta_ {4}\zeta_{3}}{18432}-\frac{3316207\zeta_{3}}{139968}-\frac{17477627\zeta_{2}}{3 359232}\right.\] \[\left.-\frac{15232813\zeta_{4}}{497664}+\frac{7063\zeta_{2}\zeta _{5}}{3840}-\frac{52115\zeta_{5}}{1152}-\frac{7659793\zeta_{6}}{1327104}+ \frac{13871\zeta_{7}}{448}-\frac{125652667}{60466176}\right)\epsilon^{2}\Bigg{]}\] \[+\ C_{F}C_{A}n_{f}\Bigg{[}\frac{1}{576\epsilon^{3}}+\left(\frac{ 55}{3456}-\frac{\zeta_{3}}{72}\right)\frac{1}{\epsilon^{2}}+\left(\frac{ \zeta_{2}}{384}-\frac{19\zeta_{3}}{432}-\frac{\zeta_{4}}{48}+\frac{1819}{20736 }\right)\frac{1}{\epsilon}\] \[-\frac{1}{48}\zeta_{3}\zeta_{2}+\frac{67\zeta_{2}}{2304}-\frac{13 85\zeta_{3}}{5184}-\frac{19\zeta_{4}}{288}-\frac{7\zeta_{5}}{72}+\frac{45967} {124416}\] \[+\left(\frac{17\zeta_{3}^{2}}{18}-\frac{19\zeta_{2}\zeta_{3}}{288}- \frac{50495\zeta_{3}}{31104}+\frac{3547\zeta_{2}}{13824}-\frac{16237\zeta_{4}}{2 7648}-\frac{133\zeta_{5}}{432}-\frac{101\zeta_{6}}{384}+\frac{1007179}{74696} \right)\epsilon\] 
\[+\left(\frac{323\zeta_{3}^{2}}{108}-\frac{809\zeta_{2}\zeta_{3}}{3 456}+\frac{599\zeta_{4}\zeta_{3}}{128}-\frac{1661303\zeta_{3}}{186624}+ \frac{99931\zeta_{2}}{82944}-\frac{635899\zeta_{4}}{165888}\right.\] \[+\left.-\frac{7\zeta_{2}\zeta_{5}}{48}-\frac{70417\zeta_{5}}{25920 }-\frac{1919\zeta_{6}}{2304}-\frac{49\zeta_{7}}{72}+\frac{20357263}{4478976} \right)\epsilon^{2}\Bigg{]}+\mathcal{O}(\epsilon^{3})\] \[+\ C_{A}n_{f}^{2}\Bigg{[}-\frac{1}{1296\epsilon^{4}}-\frac{5}{1944 \epsilon^{3}}+\left(-\frac{\zeta_{2}}{864}-\frac{1}{216}\right)\frac{1}{ \epsilon^{2}}+\left(-\frac{5\zeta_{2}}{1296}+\frac{65\zeta_{3}}{1296}-\frac{11 1}{2187}\right)\frac{1}{\epsilon}\]
\[+\frac{11\zeta_{2}}{432}+\frac{325\zeta_{3}}{1944}+\frac{1229\zeta_{4} }{6912}+\frac{10}{6561}+\Bigg{(}\frac{65\zeta_{3}\zeta_{2}}{864}+\frac{187\zeta_ {2}}{1458}+\frac{37\zeta_{3}}{72}+\frac{6145\zeta_{4}}{10368}\] \[+\frac{2521\zeta_{5}}{2160}+\frac{190}{6561}\Bigg{)}\epsilon+ \Bigg{(}-\frac{4225\zeta_{3}^{2}}{2592}+\frac{325\zeta_{2}\zeta_{3}}{1296}+ \frac{2632\zeta_{3}}{2187}+\frac{1058\zeta_{2}}{2187}+\frac{9355\zeta_{4}}{3456}\] \[+\frac{2521\zeta_{5}}{648}+\frac{999593\zeta_{6}}{165888}+\frac{ 6614}{59049}\Bigg{)}\epsilon^{2}\Bigg{]}.\]
\[K_{4A}^{(0)} = K_{4A}^{(1)}=K_{4A}^{(2)}=0. \tag{12}\] \[K_{4A}^{(3)} = \left(\frac{\zeta_{2}\zeta_{3}}{8}+\frac{\zeta_{5}}{16}\right) \frac{1}{\epsilon}+\frac{3\zeta_{3}^{2}}{4}-\frac{\zeta_{3}}{12}+\frac{\zeta_ {2}}{4}-\frac{55\zeta_{5}}{24}+\frac{235\zeta_{6}}{64}\] \[+\left(-\frac{77\zeta_{3}^{2}}{12}+\frac{139\zeta_{4}\zeta_{3}}{ 32}+\frac{53\zeta_{3}}{72}+\frac{13\zeta_{2}}{8}-\frac{13\zeta_{4}}{16}+\frac {187\zeta_{2}\zeta_{5}}{32}-\frac{335\zeta_{5}}{72}-\frac{55\zeta_{6}}{8}+ \frac{63\zeta_{7}}{4}\right)\epsilon\] \[+\left(-\frac{459\zeta_{5,3}}{20}+\frac{15}{8}\zeta_{2}\zeta_{3}^ {2}-\frac{469\zeta_{3}^{2}}{36}-\frac{19\zeta_{2}\zeta_{3}}{8}-\frac{77\zeta _{4}\zeta_{3}}{4}-\frac{343\zeta_{5}\zeta_{3}}{16}+\frac{665\zeta_{3}}{108}\right.\] \[\left.+\frac{97\zeta_{2}}{12}+\frac{157\zeta_{4}}{12}-\frac{55 \zeta_{2}\zeta_{5}}{16}-\frac{3163\zeta_{5}}{216}-\frac{335\zeta_{6}}{24}- \frac{1705\zeta_{7}}{16}+\frac{1306649\zeta_{8}}{5760}\right)\epsilon^{2}+ \mathcal{O}(\epsilon^{3}).\]
\[K_{4F}^{(0)} = K_{4F}^{(1)}=K_{4F}^{(2)}=0. \tag{14}\] \[K_{4F}^{(3)} = -\frac{\zeta_{2}}{2}+\frac{\zeta_{3}}{6}+\frac{5\zeta_{5}}{6}+ \left(\frac{7\zeta_{3}^{2}}{3}-\frac{47\zeta_{3}}{36}-\frac{15\zeta_{2}}{4}+ \frac{13\zeta_{4}}{8}+\frac{25\zeta_{5}}{18}+\frac{5\zeta_{6}}{2}\right)\epsilon\] \[+\left(\frac{35\zeta_{3}^{2}}{9}+\frac{19\zeta_{2}\zeta_{3}}{4}+7 \zeta_{4}\zeta_{3}-\frac{1471\zeta_{3}}{108}-\frac{239\zeta_{2}}{12}-\frac{58 9\zeta_{4}}{24}+\frac{5\zeta_{2}\zeta_{5}}{4}+\frac{1423\zeta_{5}}{108}\right.\] \[\left.+\frac{25\zeta_{6}}{6}+\frac{155\zeta_{7}}{4}\right) \epsilon^{2}+\mathcal{O}(\epsilon^{3}).\]
## 3 Calculation of the soft limit of scattering amplitudes
In order to compute three-loop corrections to the single-soft emission current, we calculate the soft limit of physical scattering amplitudes. In particular, we compute the limit of the scattering amplitude involving a Higgs boson and three gluons, as well as the limit of the amplitude involving an off-shell transverse photon, a gluon, and a quark-antiquark pair. These scattering amplitudes at three-loop order are contributions to the production cross section of a Higgs boson or vector boson in association with a jet at the LHC at N\({}^{3}\)LO in perturbative QCD, for example. For the purposes of our calculation, we work in a kinematic configuration where the color-singlet boson (with momentum \(p_{1}\)) decays to three color-charged partons (with momenta \(p_{2}\), \(p_{3}\), and \(p_{4}\)).
\[h(p_{1})\to g(p_{2})\,g(p_{3})\,g(p_{4}),\hskip 28.452756pt\gamma^{*}(p_{1}) \to q(p_{2})\,\bar{q}(p_{3})\,g(p_{4}). \tag{15}\]
These scattering amplitudes were computed through two-loop order in refs. [67; 68; 69; 70; 71; 72], and first results for planar contributions to the virtual photon amplitudes have appeared recently
in ref. [73]. However, a complete set of three-loop amplitudes is still elusive. In the case of Higgs boson scattering amplitudes, we work in the limit where the top quark is treated as infinitely massive and its degrees of freedom are integrated out [74; 75; 76; 77]. As a result, a dimension-five operator is introduced that couples [78; 79; 80; 81] the Higgs boson directly to the gluon field strength. We treat all quarks as massless and work with \(n_{f}\) light quark degrees of freedom.
We create the integrands for our desired scattering amplitudes by generating Feynman diagrams using the QGRAF program [82] and dressing them with QCD Feynman rules. To facilitate our computation, we define suitable projectors to Lorentz tensor structures whose scalar coefficients we compute in the so-called conventional dimensional regularization (CDR) scheme employing standard methods.
Once we obtain scalar integrals, it is our goal to avoid the complexity of computing full, three-loop scattering amplitudes. Instead, we develop a new technique, based on the method of regions [83], to expand the scalar integrands around the limit of the gluon momentum \(p_{4}\) becoming soft (i.e. \(p_{4}\to 0\)). We identify that the contribution to the single-soft emission current is provided by the _maximally soft regions_ of our integrals. In these regions, all the loop momenta are equally as soft as the external soft gluon with momentum \(p_{4}\). We keep only the first term in the expansion for both the Higgs boson and virtual photon amplitudes. More details regarding this expansion technique will be discussed in section 4.
Once an expanded scalar integrand is obtained, we use standard multi-loop techniques in order to integrate over all the loop momenta. First, we use integration-by-parts (IBP) identities [50; 51; 52] in the form of the Laporta algorithm [50] to relate all the soft Feynman integrals to a set of soft master integrals. These soft master integrals only depend on the external kinematics via an overall multiplicative prefactor [84]. For example, one soft integral contributing at two-loop order is given by
\[I = \int\frac{d^{d}p_{5}}{(2\pi)^{d}}\frac{d^{d}p_{6}}{(2\pi)^{d}} \frac{1}{[p_{5}^{2}][(p_{5}-p_{6})^{2}][(p_{5}-p_{6}-p_{4})^{2}]}\] \[\times \frac{1}{[2p_{2}p_{5}][-2p_{3}p_{6}][-2p_{3}p_{5}][2p_{2}p_{4}+2p_ {2}p_{6}]}.\]
All propagators involving the hard external momenta \(p_{2}\) and \(p_{3}\) were linearized in the expansion procedure. Consequently, it is now possible to read off the integer power of the dependence of the integral on \(p_{2}\) and \(p_{3}\) directly from the integrand (see for example refs. [48; 85] for details). It is exactly this property, in addition to knowing the overall energy dimension of the integral, that fixes all kinematic dependence of the integral and determines it up to a function in the space-time dimension \(d\):
\[I=(2p_{2}p_{3})^{-1+2\epsilon}(2p_{2}p_{4})^{-1-2\epsilon}(2p_{3}p_{4})^{-1-2 \epsilon}F(\epsilon). \tag{10}\]
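To make the power counting explicit for this example: the integrand depends on \(p_{2}\) only through the linearized propagators \([2p_{2}p_{5}]\) and \([2p_{2}p_{4}+2p_{2}p_{6}]\), and on \(p_{3}\) only through \([-2p_{3}p_{5}]\) and \([-2p_{3}p_{6}]\), so \(I\) is homogeneous of degree \(-2\) in both \(p_{2}\) and \(p_{3}\), while its overall mass dimension at two loops is \(2d-14=-6-4\epsilon\). The ansatz \(I=(2p_{2}p_{3})^{a}(2p_{2}p_{4})^{b}(2p_{3}p_{4})^{c}F(\epsilon)\) therefore requires

\[a+b=-2,\qquad a+c=-2,\qquad 2\,(a+b+c)=-6-4\epsilon,\]

whose unique solution \(a=-1+2\epsilon\), \(b=c=-1-2\epsilon\) reproduces the exponents in eq. (10).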
Especially at three-loop order, computing the remaining soft master integrals using straightforward integration techniques is challenging. Thus, we follow a different path and temporarily undo our soft expansion for all those propagators depending on one of the hard external momenta, \(p_{3}\).
\[\frac{1}{[-2p_{3}p_{6}][-2p_{3}p_{5}]}\ \to\ \frac{1}{[(p_{6}-p_{3})^{2}][(p_{5}-p_{3})^{2}]}. \tag{11}\]
It is now no longer possible to read off the dependence of the integral on \(p_{3}\) from the integrand, and the result will consequently be a nontrivial function of the dimensionless ratio
\[w=\frac{s_{24}}{s_{23}},\hskip 28.452756pts_{ij}=(p_{i}+p_{j})^{2}. \tag{12}\]
We now apply the method of differential equations [53; 54; 55; 56; 57] to determine our integrals as functions of \(w\). To accomplish this, we transform the differential equations into the canonical form [53] using algorithmic techniques [86]. The solutions to these differential equations can be expressed in terms of harmonic polylogarithms in \(w\) [58]. However, the differential equations determine the master integrals only up to boundary constants. To constrain them, we first compute differential equations for master integrals, undoing our soft expansion for propagators involving \(p_{2}\) and \(p_{3}\) separately. We then demand that the solutions are consistent among themselves when taking the strict soft limit \(w\to 0\). Demanding consistency of our system of differential equations, as in refs. [60] and [61], relates all the required boundary conditions to one soft master integral, which is easily computed using direct integration in parameter space. Consequently, we determine all soft master integrals through transcendental weight eight.
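As a toy illustration of this order-by-order strategy, consider a 1x1 canonical system with a dlog kernel; the example below is our own stand-in (it is not one of the actual master-integral systems), with the boundary value at \(w=1\) chosen arbitrarily for illustration:

```python
# Toy canonical-form differential equation f'(w) = eps*A(w)*f(w), solved
# iteratively in eps; each integration raises the transcendental weight
# by one, producing the logarithms and dilogarithms out of which harmonic
# polylogarithms are built. In the actual calculation the boundary data
# follow instead from regularity and consistency conditions.
import sympy as sp

w, t, eps = sp.symbols("w t epsilon", positive=True)

def A(x):                     # dlog(x/(1+x)) kernel, letters {0, -1}
    return 1 / x - 1 / (1 + x)

fn = sp.Integer(1)            # eps^0 coefficient, normalized f(1) = 1
sol = sp.Integer(1)
for n in (1, 2):              # f_n(w) = Int_1^w A(t) f_{n-1}(t) dt
    fn = sp.integrate(A(t) * fn.subs(w, t), (t, 1, w))
    sol += eps**n * sp.expand(fn)

print(sol)                    # logs at O(eps), dilogarithms at O(eps^2)
```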
The soft master integrals, which serve as building blocks for the computation of the single-soft emission current at three loops, are one of the main results of this article. In total, we compute 50 soft master integrals and label them with an index \(i\):
\[M^{i}\equiv(4\pi)^{-3\epsilon}e^{3\gamma_{\rm E}\epsilon}\left(\frac{(-s_{24} )(-s_{34})}{(-s_{23})}\right)^{3\epsilon}\int\frac{d^{d}p_{5}}{(2\pi)^{d}} \frac{d^{d}p_{6}}{(2\pi)^{d}}\frac{d^{d}p_{7}}{(2\pi)^{d}}\left.\mathcal{I}_{ i}\right|_{s_{23}=s_{24}=s_{34}=1}. \tag{13}\]
Above, we set all kinematic Mandelstam invariants to unity and remove non-rational dependence on them via a universal prefactor. Furthermore, we anticipate the renormalization of the strong coupling constant and absorb some \(\overline{\rm MS}\) scheme prefactors. The integrand \(\mathcal{I}_{i}\) is a rational function of the Lorentz invariant scalar products of the internal loop momenta \(\{p_{5},p_{6},p_{7}\}\) and the external parton momenta \(\{p_{2},p_{3},p_{4}\}\). The soft integrals are related to canonical soft master integrals (i.e. functions of pure transcendental weight) by a linear transformation of the vector of soft master integrals.
\[\vec{M}=T_{\rm can.}(\epsilon)\cdot\vec{M}_{c}. \tag{14}\]
The matrix \(T_{\rm can.}(\epsilon)\) only depends on the dimensional regulator and rational numbers. We provide the integrands \(\mathcal{I}_{i}\), the transformation matrix \(T_{\rm can.}\), and the solutions to the canonical master integrals \(M^{i}_{c}\) in ancillary files accompanying the arXiv submission of this article.
Having calculated the strict soft limit of our scattering amplitudes, we can now extract the coefficients \(K^{(o)}_{X}\) of eq. (3) by identifying
\[\lim_{\begin{subarray}{c}\text{maximally soft}\\ p_{4}\to 0\end{subarray}}\mathcal{A}_{p_{1}\to p_{2}p_{3}p_{4}}=\mathbf{J}(p_{4}) \mathcal{A}^{(0)}_{p_{1}\to p_{2}p_{3}}. \tag{15}\]
Above, \(\mathcal{A}^{(0)}\) is the tree-level scattering amplitude for the Born process \(p_{1}\to p_{2}p_{3}\), not involving the soft gluon with momentum \(p_{4}\). We find complete agreement between the extractions of the
coefficients of the single-soft emission current from our \(h\to ggg\) and \(\gamma^{*}\to q\bar{q}g\) amplitudes, which serves as a strong consistency check of our computation and of the color structure identified in eq. (2).
## 4 Regions in the soft expansion
In this section, we develop a method for the expansion of scattering amplitudes in the limit of a set of partons becoming very low-energetic (i.e. soft). The decay processes we introduced in eq. (3.1) are a specific case to which we apply this new technology. First, we introduce a general set-up which contains our expansion as a special case. Next, we explain how to identify the subgraphs which correspond to the regions of the expansion. Finally, we discuss the particular soft expansion of Feynman diagrams in our concrete setting.
To set up our expansion technique, we divide the external momenta of a Feynman graph into the following three categories:
1. \(K\) massless momenta \(p_{i}\),
2. \(L\) off-shell momenta \(q_{j}\),
3. \(M\) soft massless momenta \(l_{k}\).
We define the soft expansion of a graph as an expansion around the limit \(l_{k}\to 0\). Scalar products involving the soft momenta \(l_{k}\) are consequently much smaller than scalar products not involving any \(l_{k}\). Introducing a small parameter \(\lambda\) and a hard reference scale \(Q\), we find
\[p_{i}^{2}=0\ \ (i=1,\ldots,K),\quad q_{j}^{2}\sim Q^{2}\ \ (j=1, \ldots,L),\quad l_{k}^{2}=0\ \ (k=1,\ldots,M), \tag{23a}\] \[p_{i_{1}}\cdot p_{i_{2}}\sim Q^{2}\ \ (i_{1}\neq i_{2}),\quad p _{i}\cdot l_{k}\sim q_{j}\cdot l_{k}\sim\lambda Q^{2},\quad l_{k_{1}}\cdot l _{k_{2}}\sim\lambda^{2}Q^{2}\ \ (k_{1}\neq k_{2}). \tag{23b}\]
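These scalings are easy to verify on explicit momenta; a small numerical illustration follows (the lightlike directions below are ad hoc choices, not part of the derivation):

```python
# With hard scale Q and soft momenta of order lambda*Q, the invariants
# p.l and q.l come out of order lambda*Q^2, while l.l' is of order
# lambda^2*Q^2, as in the scaling relations above.
import numpy as np

def mink(x, y):                 # signature (+, -, -, -)
    return x[0] * y[0] - x[1:] @ y[1:]

Q, lam = 1.0, 1e-3
p  = Q * np.array([1.0, 0.0, 0.0, 1.0])        # hard and massless
q  = Q * np.array([2.0, 0.0, 0.0, 0.0])        # off-shell, q^2 ~ Q^2
l1 = lam * Q * np.array([1.0, 1.0, 0.0, 0.0])  # soft and massless
l2 = lam * Q * np.array([1.0, 0.0, 1.0, 0.0])  # soft and massless

print(mink(p, l1) / Q**2)       # ~ lambda
print(mink(q, l1) / Q**2)       # ~ lambda
print(mink(l1, l2) / Q**2)      # ~ lambda^2
```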
Our strategy for identifying the regions in the soft expansion is based on the observation that each region \(R\) (see figure 1(a)) must conform to the solutions of the Landau equations [87]. Furthermore, once all the external soft momenta \(l_{k}\) are removed from \(R\), the resulting configuration \(R^{\prime}\) (see figure 1(b)) must be a region in the on-shell expansion developed in ref. [40]. In other words, the regions in the soft expansion can be derived from those in the on-shell expansion with additional requirements. The regions in the on-shell expansion have been studied in detail in ref. [40], and in particular, a graph-finding algorithm was developed to obtain the complete list of regions from a given Feynman graph. Here, we aim to leverage this knowledge to straightforwardly identify the regions in the soft expansion. To this end, we will first review the graph-finding algorithm for the on-shell expansion and then delve into the generic configuration of regions in the soft expansion.
### The graph-finding algorithm for on-shell expansions
In the context of an on-shell expansion of wide-angle scattering, the Feynman graphs feature on-shell external momenta \(p_{1},\ldots,p_{K}\) and off-shell external momenta \(q_{1},\ldots,q_{L}\), satisfying
\[p_{i}^{2}\sim\lambda Q^{2}\ \ (i=1,\ldots,K),\quad q_{j}^{2}\sim Q^{2}\ \ (j=1,\ldots,L),\quad p_{i_{1}}\cdot p_{i_{2}}\sim Q^{2}\ \ (i_{1}\neq i_{2}). \tag{24}\]
In contrast to the soft expansion defined in eq. (10), there are no soft external momenta present here, and every on-shell momentum \(p_{i}\) is slightly off its corresponding lightcone.
A graph-finding algorithm has been provided in ref. [40], along with a rigorous proof demonstrating that this algorithm generates all the regions for the on-shell expansion. This allows us to comprehend the structures of these regions and derive them more efficiently by circumventing the approach of constructing Newton polytopes in ref. [41; 42; 43; 44].
A key concept in this graph-finding algorithm is that of _mojetic graphs_. We call a graph mojetic if it becomes _one-vertex irreducible_ after connecting all of its external edges to an auxiliary vertex. Note that a graph is called one-vertex irreducible if it remains connected after the removal of any one of its vertices. The algorithm can then be described by the following steps.
* _Step 1_: For each nearly on-shell external momentum \(p_{i}\) (\(i=1,\ldots,K\)), we draw a cut through a set of edges \(\{e_{c}\}\) such that: (i) \(\{e_{c}\}\) disconnects the graph \(G\) into two connected subgraphs, one of which, denoted by \(\widehat{\gamma}_{i}\), is attached by \(p_{i}\) only; (ii) the graph \(\gamma_{i}\equiv\widehat{\gamma}_{i}\cup\{e_{c}\}\) is mojetic.
* _Step 2_: For all possible sets \(\{\gamma_{1},\ldots,\gamma_{K}\}\), we overlay the graphs \(\gamma_{1},\ldots,\gamma_{K}\) and associate the edges \(e\in G\) to certain subgraphs as follows. If \(e\) has been assigned to two or more \(\gamma_{i}\), it belongs to the soft subgraph \(S\); if \(e\) has been assigned to exactly one \(\gamma_{i}\), it belongs to the jet subgraph \(J_{i}\); if \(e\) has not been assigned to any \(\gamma_{i}\), it belongs to \(H\). Let us also denote \(J\equiv\cup_{i=1}^{K}J_{i}\).
Figure 1: The partitioning of a generic wide-angle scattering graph into infrared subgraphs corresponding to a particular region (a) \(R\) in the soft expansion, eq. (10), and (b) \(R^{\prime}\) in the on-shell expansion, eq. (11). The doubled lines connecting different blobs represent any number of propagators.
* _Step 3_: We now require that the result obtained in _Step 2_ satisfies the following three further conditions: (i) each jet subgraph \(J_{i}\) is connected; (ii) each hard subgraph \(H\) is connected; (iii) each of the \(K\) subgraphs \(H\cup J\setminus J_{i}\) (\(i=1,\ldots,K\)) is mojetic. The region would be ruled out if any of these conditions are not satisfied.
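The mojetic test used in _Step 1_ and _Step 3_ is easy to automate; below is a minimal sketch assuming graphs are stored as networkx multigraphs with external legs ending on dedicated external nodes (the node labels and the helper is_mojetic are our own illustration, not code from ref. [40]):

```python
# A graph is mojetic if, after joining all external legs in one auxiliary
# vertex, it is one-vertex irreducible: connected and with no articulation
# point. Parallel edges do not affect vertex cuts, so the test may safely
# run on the underlying simple graph.
import networkx as nx

def is_mojetic(G, external_nodes):
    H = nx.Graph(G)                       # underlying simple graph
    H.add_edges_from(("aux", v) for v in external_nodes)
    return nx.is_connected(H) and \
        next(nx.articulation_points(H), None) is None

# one-loop bubble with a leg on each vertex: mojetic
bub = nx.MultiGraph([("v1", "v2"), ("v1", "v2"), ("v1", "e1"), ("v2", "e2")])
print(is_mojetic(bub, ["e1", "e2"]))      # True

# a bubble hanging off the line v1-v2: removing v2 cuts it off, not mojetic
tad = nx.MultiGraph([("v1", "v2"), ("v2", "v3"), ("v2", "v3"),
                     ("v1", "e1"), ("v2", "e2")])
print(is_mojetic(tad, ["e1", "e2"]))      # False
```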
Let us illustrate how this algorithm works through the following example of a \(3\times 2\) fishnet graph, which has four on-shell external momenta \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\). A choice of the graphs \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) and \(\gamma_{4}\), which satisfy the conditions outlined in _Step 1_, is shown below. Note that in each figure, the edges \(\{e_{c}\}\) are cut by the dotted curve.
[Figure omitted: four panels showing the \(3\times 2\) fishnet graph with the dotted cuts that define \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\), and \(\gamma_{4}\).]
We now specialize to the decay processes of eq. (3.1), in which the momentum \(p_{1}\) is off-shell while \(p_{2}\) and \(p_{3}\) are hard and on-shell, and the gluon momentum \(p_{4}\) is soft. The kinematic limit can be summarized as
\[p_{1}^{2}\sim Q^{2},\qquad p_{2}^{2}=p_{3}^{2}=p_{4}^{2}=0, \tag{4.5a}\] \[p_{1}\cdot p_{4}\sim p_{2}\cdot p_{4}\sim p_{3}\cdot p_{4}\sim \lambda Q^{2},\qquad p_{1}\cdot p_{2}\sim Q^{2}, \tag{4.5b}\]
which is a special case of soft expansion described in the beginning of this section in eq. (4.1).
Note that in this particular case, where \(p_{4}\) is the unique soft external momentum, additional requirements on the configurations of \(H\), \(J\), and \(S\) are needed [87], giving the following possibilities.
* If there are no internal soft propagators, then there can be at most one nontrivial\({}^{1}\) jet \(J_{i}\) (\(i=2\) or \(3\)), to which \(p_{4}\) is directly attached. In the special case that neither \(J_{2}\) nor \(J_{3}\) is nontrivial, the region is referred to as the _hard region_, where all the loop momenta are equally off shell. Footnote 1: A jet is called nontrivial if it has one or more edges.
* If there are internal soft propagators, then each component of \(S\) must be adjacent to both \(J_{2}\) and \(J_{3}\). In addition, \(p_{4}\) must enter a soft vertex. In general, such regions are depicted in figure 2, where the hard, jet, and soft subgraphs satisfy:
* the hard subgraph \(H\) is connected and attached by \(p_{1}\);
* the jet subgraphs \(J_{2}\) and \(J_{3}\) are both connected and adjacent to \(H\), and are attached by \(p_{2}\) and \(p_{3}\) respectively;
* the soft subgraph \(S\) is attached by \(p_{4}\), and each of its connected components is adjacent to both \(J_{2}\) and \(J_{3}\).
Figure 2: The general configuration of regions in the process of eq. (3.1), where there are internal soft propagators. The external momenta \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(p_{4}\) attach to \(H\), \(J_{2}\), \(J_{3}\), and \(S\), respectively.
This is illustrated below with some examples of regions (marked ✓) and non-region configurations (marked ✗).
[Figure omitted: example region and non-region configurations for the process of eq. (3.1).]
Our analysis above is sufficient to develop soft expansions for the \(p_{1}\to p_{2}p_{3}p_{4}\) process, namely, eq. (4.5). Based on the findings of ref. [87], our method can be readily extended to soft expansions for generic wide-angle scattering, eq. (4.1).
## 5 Renormalization and infrared pole structure
In this section, we briefly describe the renormalization and subtraction of singularities of the bare emission current. In general, the infrared and ultraviolet singularities of a scattering amplitude computed in perturbative QCD can be subtracted to yield a finite remainder using the following definitions.
\[\mathcal{A}_{f}(\alpha_{S}(\mu^{2}),\{p_{i}\})=\mathbf{Z}(\alpha_{S}(\mu^{2}), \{p_{i}\},\epsilon)\ \mathbf{Z}_{\rm UV}\ \mathcal{A}(\{p_{i}\},\epsilon). \tag{5.1}\]
The factor \(\mathbf{Z}_{\rm UV}\) implements the renormalization of the strong coupling constant in the \(\overline{\rm MS}\) scheme and \(\epsilon\) is the dimensional regulator related to the space-time dimension by \(d=4-2\epsilon\).
\[\mathbf{Z}_{\rm UV}\alpha_{S}=\alpha_{S}(\mu^{2})\left(\frac{\mu^{2}}{4\pi} \right)^{-\epsilon}e^{\gamma_{E}\epsilon}Z_{\alpha_{S}}. \tag{5.2}\]
The factor \(Z_{\alpha_{S}}\) is given in terms of the \(\beta\)-function [88; 89; 90; 91; 92; 93] through three loops in QCD by the following expression.
\[Z_{\alpha_{S}} = 1-\frac{\alpha_{S}(\mu^{2})}{\pi}\frac{1}{\epsilon}\beta_{0}+ \left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{2}\left(\frac{1}{\epsilon^{2}} \beta_{0}^{2}-\frac{1}{2\epsilon}\beta_{1}\right)\] \[-\left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{3}\left(\frac{1}{ \epsilon^{3}}\beta_{0}^{3}-\frac{7}{6\epsilon^{2}}\beta_{0}\beta_{1}+\frac{1} {3\epsilon}\beta_{2}\right)+\mathcal{O}(\alpha_{S}^{4}).\]
The factor \(\mathbf{Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\) is an operator in color space and implements the universal subtraction of infrared and collinear singularities of loop amplitudes [94; 95; 96; 97; 98; 99; 100; 101]. It can be expressed in terms of the _soft anomalous dimension matrix_\(\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\) by the following path ordered exponential.
\[\mathbf{Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)=\mathcal{P}\exp\left\{- \frac{1}{4}\int_{0}^{\mu^{2}}\frac{d\mu^{\prime 2}}{\mu^{\prime 2}}\mathbf{ \Gamma}(\alpha_{S}(\mu^{\prime 2}),\{p_{i}\},\epsilon)\right\}\,, \tag{5.4}\]
with
\[\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)=\sum_{i\neq j} \mathbf{T}_{i}^{a}\mathbf{T}_{j}^{a}\Gamma_{\rm cusp}(\alpha_{S}(\mu^{2}))\log \frac{-s_{ij}}{\mu^{2}}+\frac{1}{2}\sum_{i}\mathds{1}\gamma_{c}^{R_{i}}+ \mathbf{\Delta}(\alpha_{S}(\mu^{2}),\{p_{i}\}). \tag{5.5}\]
Above, \(\Gamma_{\rm cusp}\) refers to the cusp anomalous dimension [99], which is currently known exactly through four-loop order [102; 103], and approximately at five loops [104]. Furthermore, \(\gamma_{c}^{R}\) is the collinear anomalous dimension, obtained through four-loop order in refs. [105; 103]. The formula above was derived and calculated through three-loop order in ref. [94] and verified in \(\mathcal{N}=4\) super Yang-Mills theory [106] and QCD [107; 108; 109]. In ref. [101], its general structure was determined at four-loop order. The term \(\mathbf{\Delta}(\alpha_{S}(\mu^{2}),\{p_{i}\})\) is known as the correction to the dipole formula and starts at three-loop order. As the name suggests,
it contains contributions where the color operator acts on more than two color-charged external particles simultaneously for the first time. This term can be further decomposed as follows.
\[\mathbf{\Delta}(\alpha_{S}(\mu^{2}),\{p_{i}\})=\left(\frac{\alpha_{S}(\mu^{2})}{\pi} \right)^{3}\left[\mathbf{\Delta}_{3}^{(3)}+\mathbf{\Delta}_{4}^{(3)}(\{p_{i}\})\right]+ \mathcal{O}\left(\alpha_{S}^{4}\right). \tag{5.6}\]
The expression \(\mathbf{\Delta}_{4}^{(3)}(\{p_{i}\})\) is known as the quadruple correction and involves color correlations among four or more different color-charged particles. Consequently, this term will not contribute to the scattering amplitudes we consider here. The term \(\mathbf{\Delta}_{3}^{(3)}\) relates three different color-charged external particles and is explicitly given by
\[\mathbf{\Delta}_{3}^{(3)}=\frac{1}{4}C\,f_{abe}f_{cde}\sum_{i\neq j,i\neq k,j\neq k }\left\{\mathbf{T}_{i}^{a},\mathbf{T}_{i}^{d}\right\}\mathbf{T}_{j}^{b}\mathbf{ T}_{k}^{c}\,, \tag{5.7}\]
with the constant \(C=\zeta_{5}+2\zeta_{3}\zeta_{2}\). The color operators \(\mathbf{T}_{i}^{a}\) are defined below, via their actions on an outgoing quark, antiquark and gluon.
\[\mathbf{T}_{i}^{a}\epsilon(p_{i})_{b}^{\mu} = -if^{abc}\epsilon_{c}^{\mu}(p_{i}).\] \[\mathbf{T}_{i}^{a}\bar{u}_{k}(p_{i}) = T_{jk}^{a}\bar{u}_{j}(p_{i}).\] \[\mathbf{T}_{i}^{a}u_{k}(p_{i}) = -T_{kj}^{a}u_{j}(p_{i}). \tag{5.8}\]
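As an aside, the color algebra encoded in these operators is easy to verify numerically. The following sketch (our illustration, not part of the original calculation) constructs the SU(3) generators from the Gell-Mann matrices, extracts the structure constants \(f^{abc}\), and checks the quadratic Casimirs \(C_{F}=4/3\) and \(C_{A}=n_{c}=3\) that enter the expressions below.

```python
import numpy as np

# Gell-Mann matrices; fundamental generators are T^a = lambda^a / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

# Structure constants from [T^a, T^b] = i f^{abc} T^c,
# i.e. f^{abc} = -2i Tr([T^a, T^b] T^c), using Tr(T^a T^b) = delta^{ab}/2.
tr_abc = np.einsum('aij,bjk,cki->abc', T, T, T)
tr_bac = np.einsum('bij,ajk,cki->abc', T, T, T)
f = (-2j * (tr_abc - tr_bac)).real

# Quadratic Casimirs: sum_a T^a T^a = C_F * 1 in the fundamental,
# and f^{acd} f^{bcd} = C_A * delta^{ab} in the adjoint.
CF = sum(T[a] @ T[a] for a in range(8))
CA = np.einsum('acd,bcd->ab', f, f)
print(np.allclose(CF, (4 / 3) * np.eye(3)))  # True: C_F = 4/3
print(np.allclose(CA, 3 * np.eye(8)))        # True: C_A = 3
```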
We are particularly interested in scattering amplitudes involving three final-state gluons (\(ggg\)) or the combination of a final-state quark-antiquark pair and a gluon (\(q\bar{q}g\)). With the definitions above, we can now evaluate the action of the operator given in eq. (5.5) on such an amplitude.
\[\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\mathcal{A}_{ggg} \tag{5.9}\] \[=\left[-C_{A}\Gamma_{\text{cusp.}}\left(\log\frac{-s_{12}}{\mu^{2 }}+\log\frac{-s_{13}}{\mu^{2}}+\log\frac{-s_{23}}{\mu^{2}}\right)+\frac{3}{2} \gamma_{c}^{A}\right.\] \[\left.\hskip 113.811024pt-\frac{C}{8}\left(\frac{\alpha_{S}(\mu^{2 })}{\pi}\right)^{3}\left(C_{A}^{3}-24\frac{C_{4}^{AA}}{d_{A}T_{A}}\right) \right]\mathcal{A}_{ggg}.\] \[\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\mathcal{A}_{ q\bar{q}g}\] \[=\left[-\Gamma_{\text{cusp.}}\left(-(C_{A}-2C_{F})\log\frac{-s_{ 12}}{\mu^{2}}+C_{A}\log\frac{-s_{13}}{\mu^{2}}+C_{A}\log\frac{-s_{23}}{\mu^{2} }\right)\right.\] \[\left.\hskip 113.811024pt+\frac{1}{2}\gamma_{c}^{A}+\gamma_{c}^{F} -\frac{C}{8}\left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{3}\left(C_{A}^{3}- 24\frac{C_{4}^{AF}}{d_{A}T_{F}}\right)\right]\mathcal{A}_{q\bar{q}g}.\]
Note that the formulae above are valid up to three loops. The action of the soft anomalous dimension operator on our amplitudes is diagonal in color space such that the subtraction of infrared singularities becomes multiplicative. We now want to make use of the factorization introduced in eq. (2.1) in order to simplify the subtraction of infrared poles. By rewriting eq. (2.1) for finite amplitudes, we find
\[\mathbf{Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\mathbf{Z}_{\text{UV}} \mathcal{A}(\{p_{i}\},\epsilon)\]
\[=\left[{\bf Z}_{J}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\bf Z}_{\rm UV}{\bf J}(p_{4})\right]\times\left[{\bf Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\bf Z}_{\rm UV}{\cal F}_{1\to 23}{\cal A}_{1\to 23}^{(0)}\right]. \tag{5.10}\]
The action of \({\bf\Gamma}\) on the amplitudes \({\cal A}_{1\to 23}\) is given by
\[{\bf\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\cal A}_{1\to 23}=\left[-2C_{R}\Gamma_{\rm cusp.}\log\frac{-s_{12}}{\mu^{2}}+\gamma_{c}^{R}\right]{\cal A}_{1\to 23}. \tag{5.11}\]
Above, the sub- or superscript \(R\) indicates the color representations of the colored particles of \({\cal A}_{1\to 23}\). This result can now be used in order to find
\[{\bf\Gamma}_{J}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\bf Z}_{\rm UV}{\bf J}(p_{4})\] \[=\left[C_{A}\Gamma_{\rm cusp.}\left(\log\frac{-s_{12}}{\mu^{2}}-\log\frac{-s_{13}}{\mu^{2}}-\log\frac{-s_{23}}{\mu^{2}}\right)+\frac{1}{2}\gamma_{c}^{A}\right.\] \[\left.-\frac{C}{8}\left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{3}\left(C_{A}^{3}-24\frac{C_{4}^{AR}}{d_{A}T_{R}}\right)\right]{\bf Z}_{\rm UV}{\bf J}(p_{4}). \tag{5.12}\]
Next, we perform the integration over \(\mu^{\prime 2}\) in eq. (5.4) and consequently obtain the necessary ingredients to remove all infrared singularities of the single-soft emission current. Indeed, we find that our results are finite once the subtraction procedure of eq. (5.10) is complete. The fact that the poles of the soft emission current computed from the \(\gamma^{*}\to q\bar{q}g\) amplitude and the \(h\to ggg\) amplitude agree with the prediction based on the soft anomalous dimension matrix discussed here is a robust cross-check of our results.
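To illustrate how this \(\mu^{\prime 2}\) integration produces explicit poles, it is instructive to evaluate eq. (5.4) at one loop. As a minimal sketch, assume the leading-order \(d\)-dimensional running \(\alpha_{S}(\mu^{\prime 2})=\alpha_{S}(\mu^{2})\,(\mu^{\prime 2}/\mu^{2})^{-\epsilon}\) and write \(\Gamma_{\rm cusp}=(\alpha_{S}/\pi)\Gamma_{0}+\ldots\) and \(\gamma_{c}^{R}=(\alpha_{S}/\pi)\gamma_{0}^{R}+\ldots\) (the normalization of these expansion coefficients is our convention here and may differ from that used elsewhere in the text). Then

\[\int_{0}^{\mu^{2}}\frac{d\mu^{\prime 2}}{\mu^{\prime 2}}\,\alpha_{S}(\mu^{\prime 2})=-\frac{\alpha_{S}(\mu^{2})}{\epsilon}\,,\qquad\int_{0}^{\mu^{2}}\frac{d\mu^{\prime 2}}{\mu^{\prime 2}}\,\alpha_{S}(\mu^{\prime 2})\log\frac{-s_{ij}}{\mu^{\prime 2}}=\alpha_{S}(\mu^{2})\left(\frac{1}{\epsilon^{2}}-\frac{1}{\epsilon}\log\frac{-s_{ij}}{\mu^{2}}\right),\]

so that the one-loop \(\mathbf{Z}\) factor takes the form

\[\mathbf{Z}=1-\frac{1}{4}\,\frac{\alpha_{S}(\mu^{2})}{\pi}\left[\sum_{i\neq j}\mathbf{T}_{i}^{a}\mathbf{T}_{j}^{a}\,\Gamma_{0}\left(\frac{1}{\epsilon^{2}}-\frac{1}{\epsilon}\log\frac{-s_{ij}}{\mu^{2}}\right)-\frac{1}{2\epsilon}\sum_{i}\gamma_{0}^{R_{i}}\right]+\mathcal{O}(\alpha_{S}^{2})\,,\]

exhibiting the expected double (cusp) and single (collinear) poles; the higher-loop pole structure follows analogously from the running of \(\alpha_{S}\) and the higher orders of \(\mathbf{\Gamma}\).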
## 6 The soft current in \({\cal N}=4\) super Yang-Mills theory
Maximally super-symmetric Yang-Mills theory (\({\cal N}=4\) sYM) is an excellent testing ground for many aspects of four-dimensional non-abelian gauge theory. It has served countless times as a laboratory to explore perturbation theory to orders simply not accessible in realistic theories like QCD. One particular observation is that there is an interesting similarity between QCD and \({\cal N}=4\) sYM: the leading transcendental part of the perturbative expansion of certain quantities agrees between the two theories [64; 65]. This correspondence even holds true for certain form factors of operators of the stress tensor multiplet [110; 111; 112; 113]. In particular, the form factor of three on-shell states \(\Phi\) and the trace of two scalar fields \(\phi\),
\[{\cal F}_{2}=\int d^{d}x\langle\Phi_{1}\Phi_{2}\Phi_{3}|\phi^{I}(x)\phi^{I}(x)|0\rangle, \tag{6.1}\]
corresponds to the amplitude of a Higgs boson decaying to three gluons in QCD. This form factor has been of great interest to the community [114; 115; 116; 117; 118; 119; 120] and was recently computed to staggering eight-loop accuracy in the planar limit of the theory [121].
Similar to the QCD case discussed above, the soft limit of these form factors can be used to extract the soft current in \({\cal N}=4\) sYM theory. To achieve this, we start by using the integrand for the form factor determined in ref. [63] at two- and three-loop order. We then apply our integrand expansion technology and compute the first term in the maximally soft limit of this form factor. We obtain a pure function (i.e. of maximal transcendental weight) for both the two- and three-loop result. We then compare our result with the maximally soft
limit of the decay form factor of a Higgs to three gluons in QCD. Indeed, we find that these two results agree to all orders in the dimensional regulator for the leading transcendental contribution. Consequently, we determine that the single-soft emission current in \(\mathcal{N}=4\) sYM theory is identical to the leading transcendental part of the QCD results quoted above. This validates the principle of maximal transcendentality for the quantity computed here.
We find that the maximally soft limit of the \(\mathcal{F}_{2}\) form factor at three-loop order relative to its Born contribution can be cast in the following form.
\[\mathcal{F}_{2}^{(3)}/\mathcal{F}_{2}^{(0)}=\frac{a_{S}^{3}\pi^{3}i}{2^{11}\times 3^{6}\times 5\times\epsilon^{6}}\left(\frac{(-2p_{2}p_{4})(-2p_{3}p_{4})}{(-2p_{2}p_{3})\mu^{2}}\right)^{-3\epsilon}\left[C_{A}^{3}F^{(3),\mathrm{P}}+\frac{C_{4}^{AA}}{d_{A}T_{A}}F^{(3),\mathrm{NP}}\right]. \tag{6.2}\]
In the equation above, we defined two uniform transcendental functions that we present here as an integer-linear combination of our canonical soft master integrals defined in eq. (101). We would like to emphasize that the solution of the canonical soft master integrals we provide is valid only up to transcendental weight 8. In contrast, the expressions below are correct to arbitrary order of the Laurent expansion in \(\epsilon\).
\[F^{(3),\mathrm{P}} = -3996M_{c}^{50}+4032M_{c}^{49}+1350M_{c}^{48}-8640M_{c}^{47}-12960M_{c}^{46}+3807M_{c}^{45}\] \[+6156M_{c}^{43}+720M_{c}^{41}+702M_{c}^{40}+5400M_{c}^{39}-3240M_{c}^{37}-1125M_{c}^{36}-6570M_{c}^{35}\] \[+360M_{c}^{33}-12960M_{c}^{31}-540M_{c}^{28}-2376M_{c}^{26}+1890M_{c}^{25}+5184M_{c}^{24}\] \[+9720M_{c}^{23}+16200M_{c}^{21}-560M_{c}^{20}+8076M_{c}^{19}-120M_{c}^{18}-116640M_{c}^{17}\] \[-1944M_{c}^{16}-4860M_{c}^{15}-180M_{c}^{13}+103680M_{c}^{12}-936M_{c}^{11}+16200M_{c}^{10}\] \[-19440M_{c}^{9}-378M_{c}^{8}-7344M_{c}^{7}-3240M_{c}^{6}+6480M_{c}^{5}+864M_{c}^{4}-15552M_{c}^{1}. \tag{6.3}\]
\[F^{(3),\mathrm{NP}} = 95904M_{c}^{50}-96768M_{c}^{49}-32400M_{c}^{48}-103680M_{c}^{47}-91368M_{c}^{45}-147744M_{c}^{43} \tag{6.4}\] \[-16848M_{c}^{40}-129600M_{c}^{39}+77760M_{c}^{37}+27000M_{c}^{36}+157680M_{c}^{35}+4320M_{c}^{33}\] \[+12960M_{c}^{28}+57024M_{c}^{26}-45360M_{c}^{25}+62208M_{c}^{24}-233280M_{c}^{23}+67392M_{c}^{21}\] \[+13440M_{c}^{20}+75744M_{c}^{19}+2880M_{c}^{18}+46656M_{c}^{16}+4320M_{c}^{13}+22464M_{c}^{11}\] \[+77760M_{c}^{10}+466560M_{c}^{9}+9072M_{c}^{8}+176256M_{c}^{7}+77760M_{c}^{6}-155520M_{c}^{5}\] \[+10368M_{c}^{4}.\]
Above we introduced
\[d_{A}=(n_{c}^{2}-1),\hskip 28.452756ptT_{A}=n_{c}. \tag{6.5}\]
## 7 Single-real threshold contribution to the Higgs boson and Drell-Yan production cross sections at N\({}^{4}\)LO
The inclusive gluon fusion Higgs boson production cross section and the Drell-Yan production cross section of an electron-positron pair are some of the most important LHC observables. Currently, their predictions are known through N\({}^{3}\)LO in perturbative QCD [122; 123; 124; 125]. Going beyond the current state of the art is a formidable challenge and we present here a first contribution towards this step.
The LHC cross sections for the production of a virtual photon or a Higgs boson in gluon fusion in the infinite top quark mass limit are described by the following factorization formula.
\[\sigma_{B}=\tau\hat{\sigma}_{0}^{B}C_{B}^{2}\sum_{ij}f_{i}(\tau)\circ_{\tau}\eta_{ij}^{B}(\tau)\circ_{\tau}f_{j}(\tau),\hskip 28.452756ptB\in\{H,\gamma^{*}\}. \tag{7.1}\]
Above, the \(f_{i}\) are parton distribution functions (PDFs), \(\hat{\sigma}_{0}^{B}\) represents the partonic Born cross section, and we define the ratio \(\tau=Q^{2}/S\), where \(Q\) is the virtuality of the virtual photon or the mass of the Higgs boson, and \(S\) is the hadronic center-of-mass energy. The PDFs are convoluted with the partonic coefficient functions using standard Mellin convolutions indicated by the symbol \(\circ\). The partonic coefficient functions \(\eta_{ij}^{B}\) are given by
\[\eta_{ij}^{B}(z)=\frac{\mathcal{N}_{ij}}{2Q^{2}\hat{\sigma}_{0}^{B}}\sum_{m=0}^{\infty}\int d\Phi_{B+m}\mathcal{M}_{ij\to B+m}. \tag{7.2}\]
The normalization factor \(\mathcal{N}_{ij}\) depends on the initial state and is given by
\[\mathcal{N}_{gg}=\frac{1}{4(n_{c}^{2}-1)^{2}(1-\epsilon)^{2}},\hskip 28.452756pt\mathcal{N}_{q\bar{q}}=\frac{1}{4n_{c}^{2}}, \tag{7.3}\]
where \(g\), \(q\) and \(\bar{q}\) represent a gluon, quark and anti-quark respectively, and \(n_{c}\) denotes the number of colors. The coefficient \(C_{B}\) is simply unity for the production cross section of a virtual photon and equal to the Wilson coefficient [78; 79; 80; 81] for the effective field theory describing the interactions of a Higgs boson with gluons in the limit of infinitely large top quark mass [74; 75; 76; 77]. The color and spin summed squared matrix element is given by \(\mathcal{M}_{ij\to B+m}\). This squared matrix element describes the production of the desired boson \(B\) and \(m\) final state partons in the collision of initial state partons \(i\) and \(j\). In this article, we focus in particular on the contribution for one final state gluon (i.e. \(m=1\)). We refer to this contribution as the single real emission (R) contribution to the inclusive cross section. The corresponding partonic coefficient function is consequently given by
\[\eta_{ij}^{B,R}(z)=\frac{\mathcal{N}_{ij}}{2Q^{2}\hat{\sigma}_{0}^{B}}\int d\Phi_{B+1}\mathcal{M}_{ij\to B+1}. \tag{7.4}\]
We focus on the limit in which the energy of the final state parton vanishes. This limit is referred to as the production threshold, as all energy of the colliding partons is used to produce the final state boson. To parametrize this limit, we introduce the following variables
\[\bar{z}=1-z,\hskip 28.452756ptz=\frac{Q^{2}}{s}. \tag{7.5}\]
The threshold (or soft) limit is given by \(\bar{z}\to 0\). We can now exploit the factorization of scattering amplitudes introduced in eq. (2.1) to compute the threshold limit of the single-real emission partonic coefficient function.
\[\eta_{ij}^{B,R,\text{thr.}}(z)=\lim_{\bar{z}\to 0}\eta_{ij}^{B,R}(z)=\frac{\mathcal{N}_{ij}}{2Q^{2}\hat{\sigma}_{0}^{B}}\int d\Phi_{B+1}\sum_{\text{Spin,Color}}\Big{|}\mathbf{J}(p_{g})\mathcal{A}_{p_{i}p_{j}\to B}\Big{|}^{2}. \tag{7.6}\]
The result for this part of the partonic coefficient function can be expanded in the strong coupling constant (eq. (4)).
\[\eta^{B,R,{\rm thr.}}_{ij}(z)=\sum_{o=0}^{\infty}a_{S}^{o}\eta^{B,R,{\rm thr.},(o)}_{ij}(z). \tag{7.7}\]
The above single-real emission contribution to the partonic coefficient function computed through N\({}^{4}\)LO in perturbative QCD represents a major result of this article. To obtain this result through three loops in QCD, we make use of our newly derived results for the soft current (eq. (2)) and apply them to the purely virtual amplitudes of the scattering process in question. These virtual amplitudes are currently available in the literature to four-loop order [103; 126; 127; 128; 129; 130; 105; 131], even beyond what is required here. To arrive at the desired result for the partonic coefficient function through N\({}^{4}\)LO, we first perform an analytic continuation of the soft current in eq. (3) into the production region. Next, we apply the current to the purely virtual amplitudes to obtain the threshold limit of the desired scattering amplitude. Then, we interfere the soft scattering amplitude with its complex conjugate and finally perform the integration over the single emission phase space \(d\Phi_{B+1}\).
We express our results in terms of Laurent expansions in the dimensional regulator \(\epsilon\) and express threshold singularities in terms of standard Dirac delta functions and plus distributions. Their action on a test function \(f(\bar{z})\) is given by
\[f(0)=\int_{0}^{1}d\bar{z}\,\delta(\bar{z})f(\bar{z}),\hskip 28.452756pt\int_{0}^{1}d\bar{z}\left[\frac{\log^{n}\bar{z}}{\bar{z}}\right]_{+}f(\bar{z})=\int_{0}^{1}d\bar{z}\,\frac{\log^{n}\bar{z}}{\bar{z}}\left(f(\bar{z})-f(0)\right). \tag{7.8}\]
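Numerically, the action of these plus distributions can be evaluated directly from the subtracted integral above. A minimal sketch (the choice \(n=1\) and the test function \(f(\bar{z})=e^{-\bar{z}}\) are ours, purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

# Action of [log^n(zbar)/zbar]_+ on a test function f, using the
# subtracted-integrand definition: subtracting f(0) renders the
# integrand integrable at zbar -> 0.
n = 1
f = lambda zb: np.exp(-zb)
val, err = quad(lambda zb: np.log(zb) ** n / zb * (f(zb) - f(0)), 0.0, 1.0)
print(val, err)
```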
In order for our results to be usable for the computation of the N\({}^{4}\)LO production cross section, we truncate the Laurent expansion in \(\epsilon\) at \({\cal O}(\epsilon^{8-2n})\) at N\({}^{n}\)LO in QCD perturbation theory for \(n\in\{1,2,3,4\}\). Note that first approximate results for the full threshold limit of the N\({}^{4}\)LO production cross section already appeared in refs. [132; 85]. We confirm previous computations through N\({}^{3}\)LO in the literature [47; 61; 133; 134; 135; 136; 137]. We present our results in terms of computer readable files in association with the arXiv submission of this article.
## 8 Conclusion
In this article, we computed N\({}^{3}\)LO corrections to the single-soft emission current applied to amplitudes with two color-charged partons. Our result is a significant contribution to the understanding of soft factorization at N\({}^{3}\)LO in perturbative QFT and of infrared singularities at N\({}^{4}\)LO.
We achieved our results by performing a systematic expansion of three-loop scattering amplitudes involving one color-neutral boson and three partons around the limit of one gluon becoming soft. To facilitate this, we developed a new method for the systematic expansion of Feynman graphs around soft limits. We emphasize the generality of this technique and apply it here to the soft emission current as a first example.
We perform the expansion of scattering matrix elements in QCD and in maximally super-symmetric Yang-Mills theory. We observe that the results from the two different
gauge theories agree at the highest transcendentality, in accord with previous conjectures. Furthermore, we use our new results to determine the contributions to the threshold approximation of the Higgs boson and Drell-Yan production cross sections at the LHC at N\({}^{4}\)LO in perturbative QCD. To facilitate the use of our results, we make them available in terms of computer readable files associated with the arXiv submission of this article.
**Note:** During the completion of this manuscript a separate calculation of the three-loop single-soft emission current became public on the arXiv in ref. [138]. We have found complete agreement among our main results for coefficients \(K_{2}^{(3)}\) (eq. (11)), \(K_{4A}^{(3)}\) (eq. (13)) and \(K_{4F}^{(3)}\) (eq. (15)). It is worth noting that the methods employed in ref. [138] and in this article are substantially different, and obtaining matching results thus provides a robust cross-validation.
###### Acknowledgments.
We thank Lance Dixon, Einan Gardi and Stephen Jones for useful discussions. FH and YM are supported by the UKRI FLF grant "Forest Formulas for the LHC" (MR/S03479X/1) and the STFC Consolidated Grant "Particle Physics at the Higgs Centre". BM and AS are supported by the United States Department of Energy, Contract DE-AC02-76SF00515. YM would like to thank the Galileo Galilei Institute for Theoretical Physics for the hospitality and the INFN for partial support during the completion of this work.
| ```
We compute three-loop corrections to the universal single-soft emission current for scattering amplitudes with two additional color-charged partons.
We present results valid for QCD and $\mathcal{N}=4$ super Yang-Mills theory.
These results were obtained by developing a new expansion technique for the integrands of scattering amplitudes around the soft limit.
Furthermore, we obtain the single final-state parton
matrix-element contributions to the Higgs boson and Drell-Yan production cross sections
at threshold at next-to-next-to-next-to-next-to-leading order (N$^4$LO) in perturbative QCD.
``` |
2309.16751 | Secondary Whistler and Ion-cyclotron Instabilities driven by Mirror
Modes in Galaxy Clusters | Electron cyclotron waves (whistlers), are commonly observed in plasmas near
Earth and the solar wind. In the presence of nonlinear mirror modes, bursts of
whistlers, usually called lion roars, have been observed within low magnetic
field regions associated to these modes. In the intracluster medium (ICM) of
galaxy clusters, the excitation of the mirror instability is expected, but it
is not yet clear whether electron and ion cyclotron waves can also be present
under conditions where gas pressure dominates over magnetic pressure (high
$\beta$). In this work, we perform fully kinetic particle-in-cell (PIC)
simulations of a plasma subject to a continuous amplification of the mean
magnetic field $\textbf{B}(t)$ to study the nonlinear stages of the mirror
instability and the ensuing excitation of whistler and ion cyclotron (IC) waves
under ICM conditions. Once mirror modes reach nonlinear amplitudes, both
whistler and IC waves start to emerge simultaneously, with sub-dominant
amplitudes, propagating in low-$\textbf{B}$ regions, and quasi-parallel to
$\textbf{B}(t)$. We show that the underlying source of excitation is the
pressure anisotropy of electrons and ions trapped in mirror modes with
loss-cone type distributions. We also observe that IC waves play an essential
role in regulating the ion pressure anisotropy at nonlinear stages. We argue
that whistler and IC waves are a concomitant feature at late stages of the
mirror instability even at high-$\beta$, and therefore expected to be present
in astrophysical environments like the ICM. We discuss the implications of our
results for collisionless heating and dissipation of turbulence in the ICM. | Francisco Ley, Ellen G. Zweibel, Drake Miller, Mario Riquelme | 2023-09-28T18:00:00 | http://arxiv.org/abs/2309.16751v1 | # Secondary Whistler and Ion-cyclotron Instabilities driven by Mirror Modes in Galaxy Clusters
###### Abstract
Electron cyclotron waves (whistlers), are commonly observed in plasmas near Earth and the solar wind. In the presence of nonlinear mirror modes, bursts of whistlers, usually called lion roars, have been observed within low magnetic field regions associated to these modes. In the intracluster medium (ICM) of galaxy clusters, the excitation of the mirror instability is expected, but it is not yet clear whether electron and ion cyclotron waves can also be present under conditions where gas pressure dominates over magnetic pressure (high \(\beta\)). In this work, we perform fully kinetic particle-in-cell (PIC) simulations of a plasma subject to a continuous amplification of the mean magnetic field \(\textbf{B}(t)\) to study the nonlinear stages of the mirror instability and the ensuing excitation of whistler and ion cyclotron (IC) waves under ICM conditions. Once mirror modes reach nonlinear amplitudes, both whistler and IC waves start to emerge simultaneously, with sub-dominant amplitudes, propagating in low-**B** regions, and quasi-parallel to \(\textbf{B}(t)\). We show that the underlying source of excitation is the pressure anisotropy of electrons and ions trapped in mirror modes with loss-cone type distributions. We also observe that IC waves play an essential role in regulating the ion pressure anisotropy at nonlinear stages. We argue that whistler and IC waves are a concomitant feature at late stages of the mirror instability even at high-\(\beta\), and therefore expected to be present in astrophysical environments like the ICM. We discuss the implications of our results for collisionless heating and dissipation of turbulence in the ICM.
Plasma astrophysics(1261) -- Intracluster medium(858) -- High energy astrophysics(739) -- Extragalactic magnetic fields(507)
## 1 Introduction
Several classes of astrophysical plasmas display fully developed turbulent states and weak collisionality, in the sense that the particles' mean free path is several orders of magnitude larger than the typical radius at which they gyrate around the ambient magnetic field. These two characteristics alone can make the transport properties and global evolution of the astrophysical environment in question difficult to model and dependent on the local evolution at particle scales. A detailed study of the behavior of these plasmas at the kinetic level therefore becomes a necessity.
This is the case for the intracluster medium of galaxy clusters (ICM). The ICM is a hot, magnetized (Bonafede, A. et al. (2010)), weakly collisional and turbulent (Schuecker, P. et al. (2004); Zhuravleva et al. (2014); Hitomi Collaboration et al. (2016)) gas in the plasma state where the thermal pressure greatly exceeds the magnetic pressure (\(\beta\equiv 8\pi P/B^{2}\sim 10-100\), where \(P\) is the isotropic thermal pressure and \(B\) the magnetic field strength). In these conditions, departures from thermodynamic equilibrium, such as pressure anisotropies, are easy to achieve. For example, slow amplification of the magnetic field increases the particle kinetic energy perpendicular to the magnetic field in such a way that the magnetic moment (or, equivalently, the magnetic flux through the particle gyro-orbit) remains constant, leading to an excess of perpendicular pressure \(P_{\perp}\) over parallel pressure \(P_{\parallel}\). However, pressure anisotropy cannot grow unchecked. Pressure anisotropies can easily excite microinstabilities such as the mirror, firehose, ion-cyclotron and whistler instabilities (Schekochihin et al. (2005); Schekochihin & Cowley (2006)). The back reaction of these instabilities on the particles can maintain the pressure anisotropy near its marginally unstable value, and they are thought to play an important role in several aspects of ICM transport and heating (Kunz et al. (2011); Berlok et al. (2021); Drake et al. (2021); Perrone & Latter (2022a,b); Ley et al. (2023); Tran et al. (2023)).
In a similar vein, the solar wind and some regions of the Earth's magnetosheath and magnetosphere host plasmas that are also collisionless and turbulent. Even though the plasma \(\beta\) is lower than in the ICM (\(\beta_{i}\sim 1-10\), \(\beta_{e}\sim 1\)), we encounter some similarities. In particular, the plasma is also pressure anisotropic, and the same microinstabilities mentioned above are found to be present, usually in their fully developed, nonlinear stage (Bale et al. (2009)). Particularly important to this work is the presence of the mirror instability (Chandrasekhar et al. (1958); Rudakov & Sagdeev (1961); Hasegawa (1969); Southwood & Kivelson (1993); Kivelson & Southwood (1996); Pokhotelov et al. (2002, 2004)) and its interplay with the whistler and (potentially) ion-cyclotron instabilities (Gary (1992); Gary & Wang (1996)). An example of this interplay has been observed in these space plasmas and termed whistler lion roars.
Whistler lion roars are short bursts of right-hand polarized waves, with frequencies below the electron cyclotron frequency (\(\omega_{c,e}\)), commonly observed in the Earth's magnetosheath and magnetosphere (Smith et al. (1969); Tsurutani et al. (1982); Baumjohann et al. (1999); Breuillard et al. (2018); Giagkiozis et al. (2018); Kitamura et al. (2020); Zhang et al. (2021)), and are therefore identified as whistler waves. They have also been observed in Saturn's magnetosheath (Pisa et al. (2018)) and the solar wind. They appear in regions of locally low magnetic field strength (magnetic troughs, or magnetic holes) within larger-scale magnetic fluctuations. These magnetic troughs are usually identified as structures produced by mirror instability modes, which are able to trap electrons with low parallel velocity within these regions due to the aforementioned invariance of the magnetic moment (Southwood & Kivelson (1993)).
Several mechanisms have been proposed to explain the excitation of whistler lion roars. They usually invoke the pressure anisotropy \(P_{\perp,e}>P_{\parallel,e}\) that electrons generate while trapped inside the magnetic troughs (\(P_{\perp,e}\) and \(P_{\parallel,e}\) are, respectively, the electron pressure perpendicular and parallel with respect to the local magnetic field **B**). Other mechanisms have also been proposed involving counter-propagating electron beams inside these regions, and butterfly distributions in pitch-angle (Zhang et al. (2021); Jiang et al. (2022)). As the waves propagate out from the magnetic troughs, they are thought to interact with electrons, regulating the number of trapped electrons inside magnetic troughs and also the global anisotropy of electrons in the magnetosheath. In this way, there would be a causal connection between an ion-scale mirror instability and an electron-scale whistler instability at nonlinear stages, providing valuable insight into the interaction of mirror modes with electrons.
The question arises as to whether a similar interplay can be expected in the ICM. Such behavior would imply a more complex scenario in which several microinstabilities are causally connected and coexist with each other, opening several channels of turbulent energy dissipation and leading to much richer dynamics.
The mirror instability and its consequences have been extensively studied using particle-in-cell (PIC) simulations of moderate- and high-\(\beta\) plasmas, both hybrid (Kunz et al. (2014); Melville et al. (2016); Arzamasskiy et al. (2023)) and fully kinetic (Sironi & Narayan (2015); Riquelme et al. (2015, 2016); Ley et al. (2023)), up to nonlinear stages. Consistent with early theoretical works (Southwood & Kivelson (1993); Kivelson & Southwood (1996)), it has been demonstrated that mirror modes are efficient in trapping ions inside regions of low magnetic field strength during their secular growth (Kunz et al. (2014)). When mirror modes reach amplitudes of order \(\delta B/B\sim 1\), they reach a saturated stage and the ions eventually undergo scattering, allowing them to escape. This trapping process is similar for electrons, and it has been shown to have important consequences for the electron viscosity and thermal conduction of the plasma (Riquelme et al. (2016); Roberg-Clark et al. (2016, 2018)). Interestingly, Riquelme et al. (2016) reported the observation of whistler waves in the nonlinear, saturated stages of mirror modes in their simulations, along with ion-cyclotron (IC) waves, although they did not pinpoint the cause of the excitation.
In this work, we use PIC simulations to investigate the nonlinear stages of the mirror instability at moderate and high \(\beta\), focusing on the abovementioned excitation of whistler and IC waves. We observe that, indeed, both right-hand and left-hand polarized, quasi-parallel-propagating waves are excited at the end of the mirror's secular growth and during its saturated stage, and we provide evidence that their excitation mechanism is associated with the pressure anisotropy of electrons and ions within the magnetic troughs of mirror modes. The right- and left-handed circular polarization of these waves leads to their identification as electron-cyclotron (i.e. whistler) and ion-cyclotron (IC) waves. We also provide some additional discussion about their nature. We describe the interaction of these waves with electrons and ions, and their effect on the regulation of the pressure anisotropy at late stages.
This paper is organized as follows. Section 2 describes our simulation setup and the runs we perform. Section 3 presents our simulation results, starting from the excitation of the mirror instability, an early whistler burst, and then the late excitation of the electron and ion cyclotron waves at nonlinear stages of the mirror instability. We also detail the mechanism by which these cyclotron waves are excited during the saturated stage of mirror modes, by tracking ions and electrons throughout the simulations, and we describe the subsequent interaction of these waves with the ions and electrons at late stages. In section 4 we discuss the dependence of our results on the mass ratio used in our simulations and show that they are fairly insensitive to it. In section 5 we present results of simulations at different initial ion plasma beta, and show that these cyclotron waves are also present at lower and higher betas. Finally, we discuss the implications of our work in the context of galaxy clusters and present our conclusions in section 6.
## 2 Simulation Setup
We perform fully kinetic, 2.5D particle-in-cell (PIC) simulations using TRISTAN-MP (Buneman (1993); Spitkovsky (2005)), in which we continuously shear a collisionless, magnetized plasma composed of ions and electrons (Riquelme et al. (2012)). The magnetic field is initially spatially uniform and starts pointing along the \(x\)-axis. A shear velocity field is imposed with \(\textbf{v}=-sx\hat{y}\) (red arrows in fig. 1), where \(x\) is the distance along the \(x\)-axis and \(s\) is a constant shear rate. We solve the PIC system of equations using shearing coordinates, as implemented in Riquelme et al. (2012) (The suitability of this approach to studying ion Larmor scale phenomena is also discussed in Riquelme et al. (2015)). The conservation of magnetic flux implies that the \(y\)-component of the magnetic field **B** evolves as \(dB_{y}/dt=-sB_{0}\), whereas \(dB_{x}/dt=0\) and \(dB_{z}/dt=0\). The action of the shear then
continuously amplifies the magnetic field strength such that its magnitude evolves as \(B(t)=B_{0}\sqrt{1+s^{2}t^{2}}\).
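As a quick consistency check, this evolution can be reproduced by integrating \(dB_{y}/dt=-sB_{0}\) numerically (a sketch; the values of \(s\) and \(B_{0}\) below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flux conservation under the shear v = -s*x*yhat: dBy/dt = -s*B0,
# with Bx = B0 and Bz = 0 fixed, so |B|(t) = B0*sqrt(1 + (s*t)^2).
s, B0 = 1.0, 1.0
sol = solve_ivp(lambda t, By: [-s * B0], (0.0, 3.0), [0.0], dense_output=True)
t = np.linspace(0.0, 3.0, 7)
Bmag = np.sqrt(B0**2 + sol.sol(t)[0] ** 2)
print(np.allclose(Bmag, B0 * np.sqrt(1 + (s * t) ** 2)))  # True
```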
In our simulations, ions and electrons are initialized with Maxwell-Juttner distributions (the relativistic generalization of the Maxwell-Boltzmann distribution, Juttner (1911)) with equal initial temperatures \(T_{i}^{\text{init}}=T_{e}^{\text{init}}\), and \(k_{B}T_{i}^{\text{init}}/m_{i}c^{2}\) between \(0.01\) and \(0.02\). The physical parameters of our simulations are the initial temperature of ions and electrons (\(T_{i}^{\text{init}}=T_{e}^{\text{init}}\)), the initial ion plasma beta, \(\beta_{i}^{\text{init}}\), the mass ratio between ions and electrons \(m_{i}/m_{e}\), and the ratio between the initial ion cyclotron frequency and the shear frequency, \(\omega_{c,i}^{\text{init}}/s\), that we call the "scale-separation ratio". The numerical parameters in our simulations are the number of macroparticles per cell, \(N_{\text{ppc}}\), the plasma skin depth in terms of grid point spacing, \(c/\sqrt{\omega_{p,e}^{2}+\omega_{p,i}^{2}}/\Delta x\), and the domain size in terms of the initial ion Larmor radius, \(L/R_{L,i}^{\text{init}}\), where \(R_{L,i}^{\text{init}}=v_{\text{th},i}/\omega_{c,i}^{\text{init}}\) and \(v_{\text{th},i}^{2}=k_{B}T_{i}/m_{i}\). These physical and numerical parameters are listed in Table 1. We fix \(c/\sqrt{\omega_{p,e}^{2}+\omega_{p,i}^{2}}/\Delta x=3.5\) in the simulations presented in Table 1.
In the bulk of the paper we discuss a representative, fiducial simulation with \(m_{i}/m_{e}=8\), \(\beta_{i}^{\text{init}}=20\) (thus \(\beta^{\text{init}}=\beta_{i}^{\text{init}}+\beta_{e}^{\text{init}}=40\)) and \(\omega_{c,i}^{\text{init}}/s=800\) (simulation b20m8w800 in Table 1, highlighted in boldface). We vary the above parameters in a series of simulations, all listed in Table 1. Importantly, given the available computational capabilities, performing a simulation with the realistic mass ratio \(m_{i}/m_{e}=1836\) becomes prohibitively expensive. Therefore, a range of values of the ion-to-electron mass ratio is presented in order to ensure that our results do not strongly depend on this parameter. The effects of varying these parameters are discussed in sections 4 and 5.
In the absence of a scattering mechanism and/or collisions, the ion and electron magnetic moments \(\mu_{j}\equiv p_{\perp,j}^{2}/(2m_{j}B)\) and longitudinal action \(\mathcal{J}_{j}\equiv\oint p_{j,\parallel}d\ell\) are adiabatic invariants (\(p_{\perp,j}\) and \(p_{\parallel,j}\) are the components of the momentum of a particle of species \(j\) perpendicular and parallel to the local magnetic field, respectively, and \(j=i,e\)), and therefore are conserved as the system evolves, provided that the variation of **B** is sufficiently slow compared to the particle cyclotron frequencies; in our case, \(s\ll\omega_{c,j}\), where \(\omega_{c,j}=eB/m_{j}c\) is the cyclotron frequency of particles of species \(j\), \(c\) is the speed of light, and \(e\) is the magnitude of the electric charge.
The continuous amplification of the magnetic field **B** implies that the particles' adiabatic invariance drives a pressure anisotropy in the plasma such that \(P_{\perp,j}>P_{\parallel,j}\). In the very early stages of the simulation, we expect the evolution of \(P_{\perp,j}\) and \(P_{\parallel,j}\) to be dictated by the double-adiabatic scalings (Chew et al. (1956)). Soon after this stage, however, the pressure anisotropy acts as a free energy source in the plasma and is able to excite several kinetic microinstabilities after surpassing their excitation thresholds, which are proportional to \(\beta^{-\alpha}\), \((\alpha\sim 0.5-1)\)(Hasegawa (1969); Gary & Lee (1994); Gary & Wang (1996)). These microinstabilities break the adiabatic invariants and act upon the pressure anisotropy to regulate the anisotropy growth in the nonlinear stages.
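For reference, the double-adiabatic scalings mentioned above follow from the conservation of \(\mu_{j}\) and \(\mathcal{J}_{j}\) and read (Chew et al. (1956))

\[\frac{d}{dt}\left(\frac{P_{\perp,j}}{n_{j}B}\right)=0\,,\qquad\frac{d}{dt}\left(\frac{P_{\parallel,j}\,B^{2}}{n_{j}^{3}}\right)=0\,,\]

so that at constant density, as in our incompressible shear, \(P_{\perp,j}\propto B\) and \(P_{\parallel,j}\propto B^{-2}\), giving \(P_{\perp,j}/P_{\parallel,j}=(B(t)/B_{0})^{3}=(1+s^{2}t^{2})^{3/2}\) in the absence of scattering.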
In our simulations, and given our initial physical parameters (namely, \(\beta_{i}^{\text{init}}\equiv 8\pi P_{i}^{\text{init}}/B^{2\text{init}}=20\)), we expect the dominant instability to be the mirror instability. Mirror modes are purely growing (i.e. zero real frequency), with the fastest growing modes propagating highly obliquely with respect to the mean magnetic field. Their most unstable wavenumbers satisfy \(k_{\perp}R_{L,i}\sim 1\), where \(R_{L,i}\) is the ion Larmor radius. This instability presents Landau resonances with particles of very small parallel momentum, \(p_{\parallel}\approx 0\), that become trapped in between mirror modes, and contribute to regulating the pressure anisotropy.
In addition to the mirror instability, we also observe wave activity that we associate with the ion-cyclotron (Gary (1992)) and whistler (Gary & Wang (1996)) instabilities at ion and electron scales, respectively, during the late stages of our simulations. Ion cyclotron (IC) modes are left circularly polarized and have real frequency below the ion-cyclotron frequency \(\omega_{c,i}\), with modes of maximum growth
| Run | \(\beta_{i}^{\text{init}}\) | \(m_{i}/m_{e}\) | \(\omega_{c,i}^{\text{init}}/s\) | \(k_{B}T/m_{i}c^{2}\) | \(N_{\text{ppc}}\) | \(L/R_{L,i}^{\text{init}}\) |
| --- | --- | --- | --- | --- | --- | --- |
| **b20m8w800** | **20** | **8** | **800** | **0.02** | **600** | **54** |
| b20m32w800 | 20 | 32 | 800 | 0.01 | 300 | 50 |
| b20m64w800 | 20 | 64 | 800 | 0.01 | 200 | 40 |
| b40m8w800 | 40 | 8 | 800 | 0.02 | 300 | 49 |
| b2m8w800 | 2 | 8 | 800 | 0.02 | 300 | 68 |

Table 1: Simulation list. The physical parameters of the simulations are: the initial ion plasma beta \(\beta_{i}^{\text{init}}\equiv 8\pi P_{i}^{\text{init}}/B^{2}\), where \(P_{i}^{\text{init}}\) is the initial ion pressure; the mass ratio between ions and electrons \(m_{i}/m_{e}\); and the magnetization \(\omega_{c,i}^{\text{init}}/s\). The numerical parameters are the number of particles per cell \(N_{\text{ppc}}\) and the domain size in terms of the initial ion Larmor radius \(L/R_{L,i}^{\text{init}}\). Our fiducial simulation is highlighted in bold.
Figure 1: The evolution of the simulation domain. Panel \(a\): Initially, the box is straight, the magnetic field is initialized pointing in the \(\hat{x}\) direction and a shear velocity field \(\textbf{v}=-sx\hat{y}\) is imposed in the y–direction (red arrows). Panel \(b\): The velocity field shears the box continuously throughout the simulation, amplifying the magnetic field and changing its direction in the process due to magnetic flux conservation.
rate propagating parallel to the mean magnetic field \(\mathbf{B}\). Similarly, whistler modes are right circularly polarized and have real frequency below the electron cyclotron frequency \(\omega_{c,e}\), with modes of maximum growth rate also propagating parallel to \(\mathbf{B}\). As we will see, this wave activity is associated with the ion and electron trapping processes that mirror modes generate.
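For orientation, the right-handed branch can be located with the standard cold-plasma dispersion relation for parallel-propagating whistlers, \(\omega/\omega_{c,e}=(kd_{e})^{2}/(1+(kd_{e})^{2})\) with \(d_{e}=c/\omega_{p,e}\) (a cold-plasma estimate only; the warm, high-\(\beta\) plasma simulated here shifts and damps these curves):

```python
import numpy as np

# Cold-plasma parallel whistler branch, w/w_ce = (k d_e)^2 / (1 + (k d_e)^2),
# neglecting ion motion (w >> w_ci) and assuming w_ce << w_pe.
kde = np.linspace(0.1, 2.0, 5)             # wavenumber in units of 1/d_e
print(np.round(kde**2 / (1 + kde**2), 3))  # w/w_ce < 1, i.e. sub-cyclotron
```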
## 3 Results
Figures 2 and 3 summarize the evolution of magnetic field fluctuations and particle pressure anisotropy over time.
Figure 2 shows the fluctuations in the magnetic field \(\delta\mathbf{B}\equiv\mathbf{B}-\langle\mathbf{B}\rangle\) (where \(\langle\cdot\rangle\) denotes a volume average over the entire simulation domain) in its three different components at two different times: \(t\cdot s=0.4\) (first row, panels \(a\),\(b\) and \(c\)) and at \(t\cdot s=1.4\) (second row, panels \(d\), \(e\) and \(f\)). The black arrows in panels \(a\)-\(f\) denote the direction of the mean magnetic field \(\langle\mathbf{B}\rangle\) at those particular times. The components of \(\delta\mathbf{B}\) are defined as parallel with respect to the main field \(\langle\mathbf{B}\rangle\) (\(\delta B_{\parallel}\), panels \(b\) and \(e\)), perpendicular to \(\langle\mathbf{B}\rangle\) in the plane of the simulation (\(\delta B_{\perp,xy}\), panels \(a\) and \(d\)) and perpendicular to \(\langle\mathbf{B}\rangle\) in the direction out of the simulation plane (\(\delta B_{z}\), panels \(c\) and \(f\)). Additionally, figure \(2g\) shows the evolution of the energy in each of the three components of \(\delta\mathbf{B}\), normalized by \(B(t)^{2}\); \(\delta B_{\parallel}^{2}\) (blue line), \(\delta B_{\perp,xy}^{2}\) (red line), and \(\delta B_{z}^{2}\) (green line).
Figure \(3a\) shows the evolution of the ion pressure anisotropy \(\Delta P_{i}\equiv P_{\perp,i}-P_{\parallel,i}\) for run b20m8w800, and the dashed gray line shows the approximate instability threshold for the mirror instability (Hasegawa (1969); Hellinger (2007)). We can see that the ion anisotropy surpasses the mirror threshold very early in the simulation, and reaches its maximum value at \(t\cdot s\approx 0.5\) (we will call this stage the anisotropy overshoot hereafter). We will show that this is consistent with the beginning of the secular growth of mirror modes (Kunz et al. (2014), Riquelme et al. (2016)). Figure \(3b\) shows the same for the electron pressure anisotropy, which we will show relaxes by efficient scattering.
### Mirror Instability Evolution
Since mirror modes are highly oblique, their evolution is well represented by the time trace of \(\delta B_{\parallel}^{2}\) shown in fig. \(2g\). We identify both a linear, exponentially growing phase until \(t\cdot s\approx 0.45\), and a subsequent nonlinear, slower growing secular phase, consistent with the different evolutionary phases of the ion and electron pressure anisotropies described above. Besides the break in the mirror mode's evolution at \(t\cdot s\approx 0.45\), a second break in the secular growth occurs around \(t\cdot s=0.6\) followed by a shallower slope of growth. We will show that this break coincides with the excitation of both whistler and IC waves in \(\delta B_{\perp,xy}^{2}\) and \(\delta B_{z}^{2}\), implying that whistler and IC waves, albeit smaller in amplitude, modulate the evolution of mirror modes during nonlinear stages.
#### 3.1.1 Linear, exponentially growing mirror phase
After an early CGL phase of the pressure anisotropy \(\Delta P_{j}\) (\(j=i,e\), see fig. 3), fig. \(2g\) shows the excitation of the mirror instability starting at \(t\cdot s\approx 0.35\), mainly in the parallel component of the magnetic fluctuations, \(\delta B_{\parallel}\) (blue line), consistent with theoretical expectations (Southwood & Kivelson (1993); Pokhotelov et al. (2004)). Figure \(2g\) also shows that \(\delta B_{\parallel}\) grows first and it has the largest amplitude throughout the entire simulation, meaning that the mirror instability is indeed the dominant instability.
Figure \(2b\) (i.e. \(\delta B_{\parallel}\)) shows the linear, exponentially growing phase of mirror modes at \(t\cdot s=0.4\), where small filamentary structures of high local magnetic field amplitude start to emerge and slowly grow, in between wider regions of low local magnetic field amplitude. The obliqueness of the modes is readily apparent, as well as the fact that the mirror-generated magnetic fluctuations lie mainly in the (**k**,**B**) plane (they can be seen in \(\delta B_{\perp,xy}\) too, but not in \(\delta B_{z}\), as expected from linear theory (Pokhotelov et al. (2004))). The oblique nature of mirror modes can also be seen in fig. \(4a\), where we show the power spectrum in space of \(\delta B_{\parallel}\) at \(t\cdot s=0.4\). The solid and dashed lines represent the directions parallel and perpendicular to the mean magnetic field \(\langle\mathbf{B}\rangle\), respectively. We can see that at \(t\cdot s=0.4\), the power is mostly concentrated between wavevectors \(0.44\lesssim kR_{L,i}^{\mathrm{init}}\lesssim 1.35\) and angles of \(52^{\circ}\lesssim\theta_{k}\lesssim 77^{\circ}\), where \(\theta_{k}\equiv\cos^{-1}(\mathbf{k}\cdot\langle\mathbf{B}\rangle/kB)\) is the angle between the mirror modes' wavevector and the mean magnetic field \(\langle\mathbf{B}\rangle\).
It should be emphasized that the ion-cyclotron wave activity only starts at \(t\cdot s=0.6\), and not before. There is no sign of an early excitation of the ion-cyclotron instability competing with the mirror instability for the available free energy in \(\Delta P_{i}\). Instead, at earlier stages, only the mirror instability is excited, consistent with our initial conditions of high-beta (\(\beta_{i}^{\mathrm{init}}=20\)), where the mirror instability is expected to dominate (e.g. Riquelme et al. (2015)).
The absence of ion-cyclotron waves early in the simulation (\(0<t\cdot s<0.6\)) is clearly seen in fig. \(5a\), where we show the power spectrum in time and space of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) at early stages: \(0.3<t\cdot s<0.5\). This particular combination of the two perpendicular components of \(\delta\mathbf{B}\) allows us to disentangle the parallel-propagating waves (with respect to the main magnetic field \(\langle\mathbf{B}\rangle\), e.g. ion-cyclotron and whistlers), and also their left-handed and right-handed circular polarizations (Ley et al. (2019); Tran et al. (2023)). In this case, the left-hand circularly polarized wave activity is shown for \(\omega>0\), whereas right-hand circularly polarized wave activity is shown for \(\omega<0\). We readily see that, apart from the \(\omega\approx 0\) power consistent with mirror modes appearing in \(\delta B_{\perp,xy}\), there is no left-handed polarized wave activity throughout \(0.3<t\cdot s<0.5\), only right-handed polarized waves, which corresponds to an early excitation of the whistler instability, as we will see in section 3.2.
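The polarization decomposition used here is straightforward to implement. The sketch below (with a synthetic circularly polarized wave standing in for simulation output) Fourier transforms \(\delta B_{z}+i\delta B_{\perp,xy}\) in time and in the coordinate along \(\langle\mathbf{B}\rangle\); a circularly polarized mode then appears at a single signed frequency, and the opposite polarization would appear with the sign of \(\omega\) flipped:

```python
import numpy as np

# Polarization-resolved spectrum of dBz + i*dBperp on a (t, x) grid.
nt, nx, w0, k0 = 256, 256, 0.3, 0.5
t = np.linspace(0.0, 200.0, nt)
x = np.linspace(0.0, 100.0, nx)
T, X = np.meshgrid(t, x, indexing='ij')

# Synthetic circularly polarized wave: the two perpendicular components
# are 90 degrees out of phase.
dBperp = np.cos(k0 * X - w0 * T)
dBz = np.sin(k0 * X - w0 * T)

spec = np.abs(np.fft.fftshift(np.fft.fft2(dBz + 1j * dBperp))) ** 2
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, t[1] - t[0]))
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, x[1] - x[0]))
iw, ik = np.unravel_index(spec.argmax(), spec.shape)
print(w[iw], k[ik])  # a single signed-frequency peak, near (0.3, -0.5)
```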
#### 3.1.2 Nonlinear, secular mirror phase
At \(t\cdot s\approx 0.45\), we can clearly see the beginning of the secular growth of the mirror instability, where the modes reach nonlinear amplitudes, and keep growing but at a slower rate. This evolution is consistent with previous works (Kunz et al. (2014); Riquelme et al. (2016)).
Figure 2: **First row:** The different components of the magnetic fluctuations \(\delta\mathbf{B}=\mathbf{B}-\langle\mathbf{B}\rangle\) for run b20m8w800 in the simulation domain at \(t\cdot s=0.4\): \(\delta B_{\perp,xy}\) (panel \(a\)) is the component perpendicular to the main field \(\langle\mathbf{B}\rangle\) in the \(x\)-\(y\) plane of the simulation, \(\delta B_{\parallel}\) (panel \(b\)) is the component parallel to \(\langle\mathbf{B}\rangle\) and \(\delta B_{z}\) (panel \(c\)) is the component perpendicular to \(\langle\mathbf{B}\rangle\) in the direction out of the plane of the simulation. **Second row:** Panels \(d\), \(e\) and \(f\) show the same as panels \(a\), \(b\) and \(c\), but at \(t\cdot s=1.4\). **Third row:** The evolution of the energy in the three components of the magnetic field fluctuations \(\delta\mathbf{B}\) normalized to \(B(t)^{2}\): \(\delta B_{\parallel}^{2}\) (blue line), \(\delta B_{\perp,xy}^{2}\) (red line) and \(\delta B_{z}^{2}\) (green line). The dashed gray lines indicate the times at which the fluctuations in the first and second rows are shown. An animation is available in the online version.
Interestingly, the mirror secular growth is interrupted at \(t\cdot s\approx 0.6\), and the slope of \(\delta B_{\parallel}^{2}\) breaks. This is also approximately where the ion pressure anisotropy experiences its fastest decline (fig. 3). Mirror modes continue to grow, but at a much slower rate. This is consistent with the saturation of energy in the subdominant components \(\delta B_{\perp,xy}^{2}\) and \(\delta B_{z}^{2}\) (solid red and green lines in fig. 2\(g\), respectively), which also present a distinct pattern of oscillations. This activity is clear evidence of a new burst of waves with components mainly perpendicular to \(\langle\textbf{B}\rangle\), and we will see that these waves are consistent with both electron cyclotron waves (whistlers) and ion cyclotron waves excited by electron and ion populations, respectively, that become trapped within mirror modes (see section 3.3).
Figure 2\(e\) shows a late, nonlinear stage of the mirror instability, at \(t\cdot s=1.4\). At this time, the regions of high magnetic field of mirror modes (e.g. red filamentary structures seen in fig. 2\(b\)) have grown significantly and merged with neighboring structures to form wider and sharper regions of high local amplitudes (\(\delta B_{\parallel}/B\sim 0.9\)), whose sizes are comparable to regions of low magnetic field. At this stage, most of the power is concentrated in wavevectors \(0.2\lesssim kR_{L,i}^{\rm init}\lesssim 1.1\), and angles \(57^{\circ}\lesssim\theta_{k}\lesssim 85^{\circ}\) (see fig. 4\(b\)).
After reaching its overshoot, the ion anisotropy starts to decrease towards marginal stability. However, this decrease stops around \(t\cdot s\approx 0.65\) at \(\Delta P_{i}/P_{\parallel,i}\approx 0.18\), well above the approximate mirror threshold (dashed gray line, (Hasegawa (1969); Hellinger (2007))). The anisotropy then reaches a marginal stability level that is above the mirror threshold, similar to some previous works using both hybrid and fully kinetic simulations (Sironi & Narayan (2015); Melville et al. (2016); Ley et al. (2023)).
In order to better characterize the evolution of \(\Delta P_{i}\), we fit a relation \(\Delta P_{i}/P_{\parallel,i}=A_{i}\beta_{\parallel,i}^{\alpha_{i}}\) over \(0.7\leq t\cdot s\leq 2\) (in our simulations, the shear motion continuously amplifies \(B\), and therefore \(\beta_{\parallel,i}\) also evolves). As shown in fig. 3\(a\), our best-fit parameters are \(A_{i}=0.544\pm 0.003\) and \(\alpha_{i}=-0.445\pm 0.003\). The obtained exponent is consistent with the marginal stability threshold given by the ion-cyclotron instability at lower \(\beta_{i}\) (Gary & Lee (1994)). Indeed, the threshold for the IC instability, \(\Delta P_{i}/P_{\parallel,i}=0.53\beta_{\parallel,i}^{-0.4}\), is plotted as a dotted-dashed orange line in fig. 3\(a\) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (Gary & Lee (1994)), and we can clearly see the similarity with our best-fit threshold, even at this higher value of initial \(\beta_{\parallel,i}^{\rm init}\). This observation was also reported in Sironi & Narayan (2015), and we will see that, indeed, we do observe ion-cyclotron waves as part of the saturated phase of the mirror instability that starts at \(t\cdot s=0.6\). The presence of ion and electron cyclotron waves coexisting with mirror modes at late, nonlinear stages of the mirror instability has been reported in previous works (Riquelme et al. (2016); Sironi & Narayan (2015); Ahmadi et al. (2018)). In section 3.3, we argue that a natural explanation for the source of these cyclotron waves is the pressure anisotropy of ions trapped within nonlinear mirror modes.
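The marginal-stability fit quoted above is an ordinary nonlinear least-squares problem. A minimal sketch (the synthetic \(\beta_{\parallel,i}\) series below merely stands in for the measured time series over \(0.7\leq t\cdot s\leq 2\)):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit dP/P_par = A * beta^alpha to (synthetic) anisotropy data.
rng = np.random.default_rng(1)
beta = np.linspace(5.0, 20.0, 40)
dp = 0.544 * beta ** -0.445 * (1 + 0.01 * rng.standard_normal(beta.size))

(A, alpha), cov = curve_fit(lambda b, A, a: A * b ** a, beta, dp, p0=(0.5, -0.5))
print(A, alpha)               # recovers A ~ 0.544, alpha ~ -0.445
print(np.sqrt(np.diag(cov)))  # parameter uncertainties
```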
### First Whistler Burst - \(t\cdot s\approx 0.4\)
Figure 3: Panel \(a\): The evolution of the ion pressure anisotropy \(\Delta P_{i}/P_{\parallel,i}\) for run b20m8w800 is shown as a solid green line. The dashed green line shows the double-adiabatic evolution of \(\Delta P_{i}/P_{\parallel,i}\) (Chew et al. (1956)). The dashed gray line shows the approximate threshold for the mirror instability: \(1/\beta_{\parallel,i}\) (Hasegawa (1969)). The dotted-dashed orange line shows the threshold for the IC instability from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (\(\gamma_{IC}\) is the IC growth rate). The red dashed line shows the best fit to \(\Delta P_{i}/P_{\parallel,i}=A_{i}\beta_{\parallel,i}^{\alpha_{i}}\) from \(t\cdot s=0.7\) to \(t\cdot s=2.0\), with \(A_{i}=0.544\pm 0.003\) and \(\alpha_{i}=-0.445\pm 0.003\). Panel \(b\): The evolution of the electron pressure anisotropy \(\Delta P_{e}/P_{\parallel,e}\) is shown as a solid orange line. The dashed orange line shows the double-adiabatic evolution of \(\Delta P_{e}/P_{\parallel,e}\). The dashed blue line shows the best fit to \(\Delta P_{e}/P_{\parallel,e}=A_{e}\beta_{\parallel,e}^{\alpha_{e}}\) from \(t\cdot s=0.7\) to \(t\cdot s=2.0\), with \(A_{e}=0.036\pm 0.0002\) and \(\alpha_{e}=0.341\pm 0.003\). The dashed gray line shows the linear threshold for the anisotropic whistler instability from Gary & Wang (1996) for growth rate \(\gamma_{W}/\omega_{c,e}=0.01\) (\(\gamma_{W}\) is the whistler growth rate).
Figure 3\(b\) shows the evolution of the electron pressure anisotropy \(\Delta P_{e}\equiv P_{\perp,e}-P_{\parallel,e}\) for run b20m8w800. Initially, the electrons develop their own pressure anisotropy alongside ions and for the same reasons. The anisotropy follows double-adiabatic (CGL) scaling (dashed orange line) until \(t\cdot s\approx 0.4\), when it has already reached a value significantly larger than the theoretical threshold for the growth of whistler modes, marked by grey-dashed lines (Gary & Wang (1996)). Around this time, the whistler instability starts to grow, as seen by the time trace of \(\delta B_{z}^{2}\) in fig. 2\(g\), which is
Figure 4: Panel \(a\): Power spectrum in space of \(\delta B_{\parallel}(k_{x},k_{y})\) at \(t\cdot s=0.4\). The wavenumbers \(k_{x},k_{y}\) are normalized by the initial Larmor radius of the ions, \(R_{L,i}^{\rm init}\). The solid and dashed black lines represent the directions parallel and perpendicular to the main magnetic field at that time, respectively. Panel \(b\): Power spectrum in space of \(\delta B_{\parallel}(k_{x},k_{y})\) at \(t\cdot s=1.4\). Note that the color bar scales in panels \(a\) and \(b\) are different.
Figure 5: Panel \(a\): The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) in the entire simulation domain and between \(0.3<t\cdot s<0.5\). The frequency is normalized by the initial electron cyclotron frequency \(\omega_{c,e}\), and the wavevector is normalized by the plasma frequency \(\omega_{p,e}\) over the speed of light \(c\). The solid black line shows the linear dispersion relation \(\omega_{r}(k)\) for the whistler instability according to our linear dispersion solver, whereas the dashed black line shows its growth rate \(\gamma\). Panel \(b\): The power spectrum in space of \(\delta B_{z}(k_{x},k_{y})\) at \(t\cdot s=0.4\). The wavenumbers \(k_{x},k_{y}\) are normalized to the initial Larmor radius of the electrons, \(R_{L,e}^{\rm init}\). The solid and dashed black lines represent the directions parallel and perpendicular to the main magnetic field at that time.
a rough proxy for whistler waves given the absence of left-handed IC activity at this stage (see fig. 5\(a\)). At \(t\cdot s\approx 0.45\) the whistler modes saturate and enter a regime of quasi-steady amplitude, which lasts until \(t\cdot s\approx 0.53\). During this \(t\cdot s\approx 0.4-0.53\) period, \(\Delta P_{e}\) is rapidly drawn down by frequent scattering, reaching a more slowly decreasing regime between \(t\cdot s\approx 0.53\) and \(0.6\). The drawdown of the electron anisotropy happens at a time when the ion anisotropy is still growing. This lasts until mirror modes reach sufficiently high amplitudes to start trapping the electrons (\(t\cdot s=0.6\)).
The presence of whistler modes at \(t\cdot s=0.4\) can be seen mainly in the perpendicular components of \(\delta\)**B**, namely, \(\delta B_{\perp,xy}\) and \(\delta B_{z}\), figures 2\(a\) and 2\(c\), respectively. They propagate quasi-parallel to the main magnetic field **B** in a fairly homogeneous way inside the simulation domain. This quasi-parallel propagation can also be seen in fig. 5\(b\), where we show the power spectrum in space of \(\delta B_{z}(k_{x},k_{y})\) at \(t\cdot s=0.4\) for run b20m8w800, and the solid and dashed black lines indicate the directions parallel and perpendicular to the main magnetic field \(\langle\textbf{B}\rangle\) at \(t\cdot s=0.4\). The power of \(\delta B_{z}(k_{x},k_{y})\) is concentrated at parallel propagation and wavevectors \(0.6<kR_{L,e}^{\text{init}}<1\).
We show the whistler wave frequencies in the power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) in the interval \(0.3<t\cdot s<0.5\) in fig. 5\(a\). We can see that the power is localized in the region \(\omega<0\), i.e. right-handed circularly polarized waves, consistent with the whistler polarization, and within frequencies \(0.02<\omega/\omega_{c,e}<0.05\). As mentioned above, no IC activity is present during this time period.
We also calculated the theoretical dispersion relation of the anisotropic whistler instability using a linear dispersion solver, assuming an initial bi-Maxwellian distribution of electrons (Tran et al. (2023)) and using the initial parameters and the values of \(T_{\perp,e}\) and \(T_{\parallel,e}\) directly from the simulations. The dispersion relation \(\omega_{r}(k)\) is shown as a solid black line in fig. 5\(a\), whereas the instability growth rate is shown as a dashed black line. We can see that the power in right-hand circularly polarized waves is consistent with the whistler dispersion relation.
This way, the early evolution of the electrons is determined by an early burst of whistler modes associated with the initial growth of the electron pressure anisotropy. We will see that, once electrons start to become trapped in between mirror modes at \(t\cdot s\approx 0.6\), another burst of whistler activity happens, this time associated with the trapping process within mirror modes during their secular and saturated phases.
### Whistler and Ion-cyclotron Excitations at \(t\cdot s\approx 0.6\)
At the end of its secular growth, when mirror modes have reached sufficiently high amplitudes, we simultaneously observe right-hand and left-hand circularly polarized wave activity, which we identify as whistler and ion-cyclotron waves, respectively. We will see below (§3.3) that these whistler and ion-cyclotron waves propagate mainly in regions of locally low magnetic field (magnetic troughs). The source of this wave activity is identified as the pressure-anisotropic populations of ions and electrons, mainly the particles trapped inside the magnetic troughs. The whistler and ion-cyclotron waves then pitch-angle scatter both the trapped and untrapped particles, contributing to the regulation of the global anisotropy.
Figure 6 shows different spectral properties of the late burst of waves excited from \(t\cdot s\approx 0.6\) onwards. Figure 6\(a\) shows the power spectrum in time of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) between \(0.5<t\cdot s<1.1\), so we can see both left-hand (solid blue line) and right-hand (solid orange line) circular polarizations. The power spectrum peaks at low frequencies, consistent with the nature of the dominant mirror modes (mainly appearing in \(\delta B_{\perp,xy}\)). Additionally, we can clearly see a secondary peak at around \(\omega\sim 0.2\omega_{c,i}\), with a spread from \(\omega\sim 0.1\omega_{c,i}\) to \(\omega\sim 0.3\omega_{c,i}\), in both left- and right-hand circular polarizations. This secondary peak is the characteristic signature of the late burst of wave activity. It resembles observations of whistler lion roars in the Earth's magnetosheath (see e.g. figs. 1 and 2 of Giagkiozis et al. (2018) and fig. 3 of Zhang et al. (2021) for right-hand polarized waves).
Figure 6\(b\) shows the spectrogram of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) in frequency and time, spanning \(0.4<t\cdot s<1.3\), with positive frequencies representing left-hand circularly polarized waves and negative frequencies representing right-hand circularly polarized waves. Here we can also see the early burst of whistler waves starting at \(t\cdot s\approx 0.4\) and peaking at \(t\cdot s\approx 0.45\) (see §3.2), followed by the burst of both left-hand and right-hand circularly polarized waves at \(t\cdot s\approx 0.53\), peaking at \(t\cdot s\approx 0.65\). This coincides with the rise in amplitude of \(\delta B_{z}^{2}\) and \(\delta B_{\perp,xy}^{2}\) (see fig. 2\(g\)), and the waves are continuously maintained throughout the simulation at around the same frequencies.
Finally, figure 6\(c\) shows the power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) in time and space, at \(0.5<t\cdot s<1.1\). Frequencies and wavenumbers are normalized by \(\omega_{c,i}\) and \(\omega_{p,i}/c\), respectively. Here we can also see the power at low frequencies consistent with the dominance of mirror modes appearing in \(\delta B_{\perp,xy}\). The burst of left- and right-hand circularly polarized waves can be seen concentrated around frequencies \(\omega\approx 0.2\omega_{c,i}\) and \(\omega\approx-0.15\omega_{c,i}\), respectively. Their range in wavenumbers is \(0.2\lesssim ck_{\parallel}/\omega_{p,i}\lesssim 0.5\). Overall, the power spectra of the left- and right-hand polarized waves are very similar to those of ion-cyclotron and whistler waves, respectively, and we will refer to them as such from now on. In the next section, we will confirm that the particle populations that excite these waves have anisotropic distributions that are IC- and whistler-unstable.
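The polarization decomposition behind these spectra can be reproduced with a short, self-contained Python sketch (an illustration with synthetic data, not the paper's analysis pipeline): Fourier transforming the complex combination \(\delta B_{z}+i\delta B_{\perp,xy}\) separates left- and right-hand circular polarizations into positive and negative frequencies. The two wave frequencies and amplitudes below are arbitrary.

```python
import numpy as np

# Synthetic signal: one left-hand (omega > 0) plus one right-hand (omega < 0)
# circularly polarized wave, standing in for delta B_z + i delta B_perp,xy.
fs = 200.0                             # samples per ion gyroperiod (assumed)
t = np.arange(0.0, 100.0, 1.0 / fs)    # time in ion gyroperiods
w_left, w_right = 0.2, 0.15            # |omega| / omega_ci of each wave
sig = np.exp(2j * np.pi * w_left * t) + 0.5 * np.exp(-2j * np.pi * w_right * t)

power = np.abs(np.fft.fftshift(np.fft.fft(sig))) ** 2
freq = np.fft.fftshift(np.fft.fftfreq(t.size, d=1.0 / fs))

pos, neg = freq > 0, freq < 0
print(f"left-hand peak:  omega/omega_ci = {freq[pos][np.argmax(power[pos])]:+.3f}")
print(f"right-hand peak: omega/omega_ci = {freq[neg][np.argmax(power[neg])]:+.3f}")
```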
The morphology of the IC and whistler waves can also be seen in figures 2\(d\) and 2\(f\). The short-wavelength, wavepacket-like structures are identified with whistler modes, which propagate mainly through regions of low magnetic field strength of mirror modes, as we can see from \(\delta B_{\perp,xy}\) (blue shaded regions in fig. 2\(d\)). The IC modes, on the other hand, are identified as the longer-wavelength, extended modes that can be seen in \(\delta B_{z}\). The IC modes seem to propagate through the entire simulation box, given their ion-scale wavelength, whereas whistler modes clearly propagate within the mirrors' magnetic troughs. This also resembles magnetosheath observations of whistler waves within magnetic troughs (e.g. Kitamura et al. (2020)).
The peak frequencies observed in figure 6 for both ion-cyclotron and whistler waves can be understood in terms of their dispersion relations. At high \(\beta\) and \(kR_{L,e}\sim 1\), and for quasi-parallel propagation, the dispersion relation for whistler waves can be written as (Stix (1992); Drake et al. (2021))
\[\omega_{W}=\omega_{c,e}k_{W}^{2}d_{e}^{2}=\omega_{c,i}k_{W}^{2}d_{i}^{2}, \tag{1}\]
where \(d_{e}=c/\omega_{p,e}\) and \(d_{i}=c/\omega_{p,i}\) are the electron and ion skin depths, respectively. Knowing that \(d_{i}^{2}=R_{L,i}^{2}/\beta_{i}\), we can also write
\[\omega_{W}=\omega_{c,i}k_{W}^{2}R_{L,i}^{2}/\beta_{i}. \tag{2}\]
Similarly, at high \(\beta\) and \(kR_{L,i}\sim 1\), and for quasi-parallel propagation, the ion-cyclotron wave dispersion relation is approximately (Stix (1992))
\[\omega_{\rm IC}=\omega_{c,i}k_{\rm IC}d_{i}, \tag{3}\]
and we can also write
\[\omega_{\rm IC}=\omega_{c,i}k_{\rm IC}R_{L,i}/\sqrt{\beta_{i}}. \tag{4}\]
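As a quick numerical sanity check of the identity in eq. (1) (a check added here for illustration; the test values of \(B\), \(n\), and \(k\) are arbitrary), note that both sides reduce to \(Bck^{2}/(4\pi ne)\), independent of the particle mass:

```python
import numpy as np

# Gaussian-unit check that omega_ce * (k d_e)^2 == omega_ci * (k d_i)^2 (eq. 1).
e, c = 4.803e-10, 2.998e10            # statC, cm/s
m_e, m_i = 9.109e-28, 1.673e-24       # g (real masses; the runs use reduced ratios)
B, n = 1.0e-6, 1.0e-3                 # G and cm^-3, ICM-like test values
k = 1.0e-8                            # cm^-1, arbitrary

w_ce, w_ci = e * B / (m_e * c), e * B / (m_i * c)
d_e = c / np.sqrt(4 * np.pi * n * e**2 / m_e)
d_i = c / np.sqrt(4 * np.pi * n * e**2 / m_i)

print(w_ce * (k * d_e) ** 2)          # both lines print the same number
print(w_ci * (k * d_i) ** 2)
```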
Figure 6: Panel \(a\): The power spectrum of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) as a function of frequency. The frequencies are normalized by the initial ion-cyclotron frequency. The power spectrum of left-hand circularly polarized waves (\(\omega>0\)) is shown as a solid blue line, whereas the power spectrum corresponding to right-hand circularly polarized waves (\(\omega<0\)) is shown as an orange line folded into positive frequencies. Panel \(b\): Spectrogram of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) in frequency and time, at \(0.4<t\cdot s<1.3\). The frequency is normalized by the initial ion-cyclotron frequency. Positive and negative frequencies correspond to left-hand and right-hand circularly polarized waves, respectively. Panel \(c\): The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) at \(0.5<t\cdot s<1.1\). Frequencies are normalized by the initial ion gyrofrequency, and wavenumbers are normalized by the inverse of the initial ion skin depth. Here also, positive and negative frequencies show left-hand and right-hand polarized waves, respectively.
Figure 7: The power spectrum in space of \(\delta B_{\perp,xy}(k_{x},k_{y})\) at \(t\cdot s=0.9\). The wavenumbers \(k_{x},k_{y}\) are normalized by the initial ion Larmor radius \(R_{L,i}^{\text{init}}\). The solid and dashed white lines represent, respectively, the direction parallel and perpendicular to the main magnetic field at that time.
We can estimate \(k_{W}\) and \(k_{\text{IC}}\) by looking at the power spectrum of any of the perpendicular components of the magnetic field fluctuations. Figure 7 shows the power spectrum of \(\delta B_{\perp,xy}(k_{x},k_{y})\) at \(t\cdot s=0.9\), where the solid and dashed white lines denote the direction parallel and perpendicular to the mean magnetic field \(\mathbf{B}\) at that time, respectively. Apart from the power in the perpendicular direction corresponding to the mirror modes, in the power parallel to \(\mathbf{B}\) (i.e. along the solid white line in fig. 7) we can distinguish large wavenumbers centered at \((k_{x}R_{L,i}^{\text{init}},k_{y}R_{L,i}^{\text{init}})\approx(0.75,-1.5)\) (and also at \((-1.5,0.75)\)), corresponding to whistlers, and smaller wavenumbers centered at \((k_{x}R_{L,i}^{\text{init}},k_{y}R_{L,i}^{\text{init}})\approx(0.5,0.7)\), corresponding to ion-cyclotron waves.
The large wavenumber extent in \(k_{x},k_{y}\) observed in fig. 7 gives us an approximate range of wavenumbers \(1.5\lesssim k_{W}R_{L,i}^{\text{init}}\lesssim 3.2\) for whistlers, implying frequencies \(0.1\lesssim\omega_{W}/\omega_{c,i}^{\text{init}}\lesssim 0.5\) (as \(\beta_{i}^{\text{init}}=20\)), consistent with the frequencies observed in the negative half of fig. 6\(c\), corresponding to right-hand polarized waves. Similarly, the small wavenumber extent in \(k_{x},k_{y}\) gives us a range of wavenumbers \(0.4\lesssim k_{\text{IC}}R_{L,i}^{\text{init}}\lesssim 1.1\), implying frequencies \(0.1\lesssim\omega_{\text{IC}}/\omega_{c,i}^{\text{init}}\lesssim 0.25\), also consistent with the frequencies in the positive half of fig. 6\(c\), corresponding to left-hand polarized waves.
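These frequency ranges follow from simply inserting the measured wavenumber ranges into eqs. (2) and (4); a short check with \(\beta_{i}^{\text{init}}=20\) reads:

```python
beta_i = 20.0

k_w = (1.5, 3.2)    # k R_L,i^init range attributed to whistlers (fig. 7)
k_ic = (0.4, 1.1)   # k R_L,i^init range attributed to IC waves (fig. 7)

# Eq. (2): omega_W/omega_ci = (k R)^2 / beta;  Eq. (4): omega_IC/omega_ci = k R / sqrt(beta)
print("whistlers: omega/omega_ci in (%.2f, %.2f)" % tuple(k**2 / beta_i for k in k_w))
print("IC waves:  omega/omega_ci in (%.2f, %.2f)" % tuple(k / beta_i**0.5 for k in k_ic))
```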
### 2D Particle Distributions
The specific time at which the ion- and electron-cyclotron wave activity saturates, which coincides with the end of the mirror instability's secular growth (\(t\cdot s\approx 0.6\)), and the propagation of whistler waves within regions of low magnetic field strength, give a hint towards uncovering the mechanism by which the whistler and IC waves are excited.
As a first step, we explore the evolution of the pressure anisotropy of ions and electrons at the time at which the IC and whistler waves are excited. At this time, mirror modes have achieved high amplitudes and created sharp regions of high and low magnetic field strength, making the plasma spatially inhomogeneous. This implies that, in general, the plasma \(\beta\) of ions and electrons will not be the same at different locations in the simulation domain, making the anisotropy thresholds for the growth of the modes different in different regions. For this reason, a more appropriate approach is to measure the 2D distributions of pressure anisotropy and \(\delta B_{\parallel}/B\) versus \(\beta_{\parallel}\) across the simulation domain.
Figure 8 shows the distribution of ion and electron pressure anisotropy as a function of ion \(\beta_{\parallel,i}\) (panels \(a\), \(b\), \(c\)) and electron \(\beta_{\parallel,e}\) (panels \(g\), \(h\), \(i\)), respectively, and the distribution of \(\delta B_{\parallel}/B\) versus \(\beta_{\parallel,i}\) (panels \(d\), \(e\), \(f\)) and electron \(\beta_{\parallel,e}\) (panels \(j\), \(k\), \(l\)), respectively. These distributions are shown at three different times: beginning of the simulation (\(t\cdot s\approx 0\), left column); end of mirror's secular growth and beginning of ion and electron cyclotron wave activity (\(t\cdot s=0.6\), middle column), and a late stage well into the saturated regime of mirror instability (\(t\cdot s=1.4\), right column). In the top row of fig. 8 (i.e. panels \(a\), \(b\), and \(c\)), the dashed gray line corresponds to the approximate mirror instability threshold \(1/\beta_{\parallel,i}\)(Hasegawa (1969)), the dashed-dotted orange line corresponds to the theoretical IC threshold \(0.53/\beta_{\parallel,i}^{0.4}\) from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\), and the solid black line is the best-fit to the global ion anisotropy derived in section 3.1 (see fig. 3\(a\)). In the third row of fig. 8 (panels \(g\), \(h\), \(i\)), the dotted-dashed black line shows the whistler instability threshold \(0.36/\beta_{\parallel,e}^{0.55}\) from Gary & Wang (1996), for \(\gamma_{W}/\omega_{c,e}=10^{-2}\).
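For reference, the thresholds overlaid in fig. 8 can be tabulated directly; in this sketch the prefactor of our best-fit relation \(\propto\beta_{\parallel}^{-0.45}\) is an assumed placeholder, since only the exponent is quoted in this section.

```python
import numpy as np

beta_par = np.logspace(0.0, 2.0, 5)            # parallel-beta grid

mirror = 1.0 / beta_par                        # Hasegawa (1969), approximate
ic = 0.53 / beta_par**0.40                     # Gary & Lee (1994), gamma/omega_ci = 1e-2
whistler = 0.36 / beta_par**0.55               # Gary & Wang (1996), gamma/omega_ce = 1e-2
best_fit = 0.53 / beta_par**0.45               # exponent from fig. 3a; prefactor assumed

for row in zip(beta_par, mirror, ic, whistler, best_fit):
    print("beta=%7.1f  mirror=%.3f  IC=%.3f  whistler=%.3f  best-fit=%.3f" % row)
```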
Starting with the ions, we can see that, from a stable, isotropic distribution at the very beginning of the simulation (fig. 8\(a\)), the ions become anisotropic enough to surpass both the mirror threshold and the theoretical IC threshold from Gary & Lee (1994), as well as our best-fit instability threshold, as shown in fig. 8\(b\). At this point (\(t\cdot s=0.6\)), we start to observe the excitation of ion-cyclotron waves that seem to interact with the ions and start driving them towards a marginally stable state. This can be seen in fig. 8\(c\), where the distribution becomes bimodal, with one population of ions under both the IC threshold and our best-fit threshold (centered at \(\beta_{\parallel,i}\sim 5\) and \(P_{\perp,i}/P_{\parallel,i}\sim 1.2\)), meaning that they are driven towards marginal stability with respect to the IC threshold. Interestingly, there exists another ion population that is still unstable (centered at \(\beta_{\parallel,i}\sim 18\) and \(P_{\perp,i}/P_{\parallel,i}\sim 1.4\)), so IC waves can continue to be excited even at these late stages. This could explain the sustained amplitude observed in \(\delta B_{z}^{2}\) and \(\delta B_{\perp,xy}^{2}\) in figure 2\(g\). Therefore, we can see that the unstable population has a higher \(\beta_{\parallel,i}\), while the marginally stable population moves to lower \(\beta_{\parallel,i}\).
For a similar value of \(P_{\parallel,i}\), the difference in the values of \(\beta_{\parallel,i}\) between the unstable and marginally stable populations should imply a difference in the local magnetic field strength (recall \(\beta_{\parallel,i}=8\pi P_{\parallel,i}/B^{2}\)). This gives us a hint on the location of the unstable and marginally stable populations in the domain, as mirror modes generate distinct regions of low and high magnetic field strength.
As we can see in figs. 8\(d\), 8\(e\), and 8\(f\), the ions also separate into two populations in terms of \(\delta B_{\parallel}/B\). Starting from zero magnetic field fluctuations at the beginning (\(t\cdot s\approx 0\), fig. 8\(d\)), we see how \(\delta B_{\parallel}/B\) starts to grow at \(t\cdot s=0.6\) (fig. 8\(e\)), until we clearly see the bimodal distribution at \(t\cdot s=1.4\) (fig. 8\(f\)), separating the two ion populations: the high-\(\beta_{\parallel,i}\) population located in regions of \(\delta B_{\parallel}/B<0\) (i.e. low \(B\) strength), and the low-\(\beta_{\parallel,i}\) population located in regions of \(\delta B_{\parallel}/B>0\) (i.e. high \(B\) strength).
We can therefore conclude that, after mirror modes develop and the IC waves are excited (\(t\cdot s\gtrsim 0.6\)), the ions separate into two populations: one with low \(\beta_{\parallel,i}\), located mainly in high-\(B\) strength regions and marginally stable to IC waves; and a second population with high \(\beta_{\parallel,i}\), located in low-\(B\) strength regions and still unstable to IC waves. This suggests that the IC waves are excited by the unstable ion population in regions of low magnetic field strength, and then interact with the ions in such a way that the ions move to regions of high \(B\) strength and low \(\beta_{\parallel,i}\). In sections 3.5 and 3.6 we will see that the ions that contribute most to the anisotropy destabilizing the IC waves are the ones that become trapped within mirror troughs.

Figure 8: Top row: The distribution of ion \(P_{\perp,i}/P_{\parallel,i}\) versus \(\beta_{\parallel,i}\) in the simulation domain at different times: \(t\cdot s=0.01\) (left column), \(t\cdot s=0.6\) (middle column), and \(t\cdot s=1.4\) (right column). The dashed gray line represents the approximate mirror instability threshold \(1/\beta_{\parallel,i}\) (Hasegawa (1969)), the dotted-dashed orange line represents the IC instability threshold from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (\(\gamma_{IC}\) is the IC instability growth rate), and the solid black line represents our best-fit threshold from section 3.1 (see fig. 3\(a\)). Second row: The distribution of \(\delta B_{\parallel}/B\) versus \(\beta_{\parallel,i}\) for the same three times as in the top row. Third row: The distribution of electron \(P_{\perp,e}/P_{\parallel,e}\) versus \(\beta_{\parallel,e}\) in the simulation domain at the same three times as in the top row. The dotted-dashed black line represents the whistler instability threshold from Gary & Wang (1996). Fourth row: The distribution of \(\delta B_{\parallel}/B\) versus electron \(\beta_{\parallel,e}\) for the same three times as in the top row. An animated version of this plot is available in the online version.
In the case of the electrons, we can see a similar evolution. From a stable, isotropic distribution at \(t\cdot s\approx 0\) (fig. 8\(g\)), we can see how part of it becomes whistler-unstable at \(t\cdot s=0.6\) (fig. 8\(h\)), after which the excited whistler waves interact with the electrons, gradually driving part of the distribution towards marginal stability and generating a bimodal distribution similar to that of the ions. At \(t\cdot s=1.4\) (fig. 8\(i\)), we can see that the electron population with low \(\beta_{\parallel,e}\) (centered at \(\beta_{\parallel,e}\sim 5\) and \(P_{\perp,e}/P_{\parallel,e}\sim 1\)) is marginally stable with respect to the whistler threshold, whereas the electron population with high \(\beta_{\parallel,e}\) (centered at \(\beta_{\parallel,e}\sim 18\) and \(P_{\perp,e}/P_{\parallel,e}\sim 1.2\)) is still unstable with respect to the whistler threshold. This also implies that whistler waves can still be excited at late stages in the simulation.
Analogously, the electrons also separate into two populations with respect to \(\delta B_{\parallel}/B\). Similarly to the ions, the population with low \(\beta_{\parallel,e}\) is located in regions of \(\delta B_{\parallel}/B>0\) (high \(B\) strength), whereas the high-\(\beta_{\parallel,e}\) population is located in regions of \(\delta B_{\parallel}/B<0\) (low \(B\) strength). In this sense, we also conclude that, in the case of electrons, the unstable population is located mainly in regions of low \(B\) strength and high \(\beta_{\parallel,e}\), where whistler waves are being excited, and the marginally stable population is located mainly in regions of high \(B\) strength and low \(\beta_{\parallel,e}\). This also suggests that whistler waves interact with electrons so that they move to regions of high \(B\) strength. We will also see in sections 3.5 and 3.6 that the electrons that contribute the most to the pressure anisotropy that destabilizes whistler waves are the ones that become trapped within mirror modes.
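The 2D distributions of fig. 8 amount to histograms of per-cell quantities; a schematic version with mock data (the beta and anisotropy models below are invented for illustration, while in the actual analysis the inputs come from the local pressure tensor and magnetic field of the PIC runs) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100_000

# Mock per-cell plasma state (assumed lognormal beta and a loose anisotropy model).
beta_par = 10 ** rng.normal(1.0, 0.3, n_cells)
anis = 1.0 + rng.lognormal(-1.5, 0.5, n_cells) / beta_par**0.45   # P_perp/P_par

# fig. 8-style 2D histogram of anisotropy versus parallel beta
H, x_edges, y_edges = np.histogram2d(np.log10(beta_par), anis, bins=64)

# Fraction of cells above the Gary & Wang (1996) whistler threshold
unstable = (anis - 1.0) > 0.36 / beta_par**0.55
print(f"whistler-unstable fraction of cells: {unstable.mean():.2%}")
```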
### Physical Mechanism of Secondary IC/Whistler Excitation: Trapped and Passing Particles
In this section, we study the evolution of the ions and electrons that become trapped within mirror modes as part of the mirror instability's interaction with the particles. We characterize the pressure anisotropy and distribution functions of these populations at the moment of trapping, and provide evidence that they are able to destabilize parallel-propagating modes that ultimately allow them to escape the mirrors and regulate the overall anisotropy.
As part of their evolution, and after reaching secular growth, mirror modes start to trap particles of low parallel momentum \(p_{\parallel,j}\) (\(j=i,e\)) in regions of low local magnetic field strength. The trapped particles bounce between these regions and conserve their magnetic moment in the process (Southwood & Kivelson (1993); Kunz et al. (2014)). In order to investigate the relation between this trapping process and the excitation of these late IC and whistler waves, we select and track a population of ions and electrons throughout the evolution of the simulation, and study the trapped and passing (i.e. untrapped) subpopulations separately.
We select and track two populations of ions and two populations of electrons having relatively small and large parallel momentum at a particular time in the simulation. This way, we make sure that we capture particles that eventually become trapped as well as others that remain passing. In our fiducial simulation b20m8w800, the two populations of ions that we track have parallel momentum \(-0.12<p_{\parallel,i}/m_{i}c<0.12\) and \(0.3395<p_{\parallel,i}/m_{i}c<0.3405\) at \(t\cdot s=0.4\). Similarly, the two populations of electrons have \(-0.2<p_{\parallel,e}/m_{e}c<0.2\) and \(0.4599<p_{\parallel,e}/m_{e}c<0.4601\) at \(t\cdot s=0.4\).
In order to study the behavior of the tracked particles when the IC and whistler activity starts, we ask how many particles become trapped and how many become passing during the interval of time at which this activity happens, which we denote by \(\Delta\tau_{LR}\). To answer this, we look at fig. 2\(g\) and define \(\Delta\tau_{LR}\) as the interval of time \(0.52<t\cdot s<0.62\), which covers the exponential growth that \(\delta B_{z}^{2}\) and \(\delta B_{\perp,xy}^{2}\) undergo before saturating. This interval of time also covers the majority of the secular growth of mirror modes (see \(\delta B_{\parallel}^{2}\)).
Having this time interval well defined, we must now define the criterion by which we consider a particle to have become trapped or passing during \(\Delta\tau_{LR}\), and for this we look at the evolution of its parallel momentum. Similarly to Ley et al. (2023), we define a particle as trapped during \(\Delta\tau_{LR}\) if the median of its parallel momentum over \(\Delta\tau_{LR}\) is smaller than one standard deviation over \(\Delta\tau_{LR}\). We then define a particle as passing if the median of its parallel momentum over \(\Delta\tau_{LR}\) is greater than or equal to one standard deviation over \(\Delta\tau_{LR}\). This is a statement of small variation of \(p_{\parallel,j}\) over \(\Delta\tau_{LR}\), which in turn is a proxy for the oscillatory behavior of \(p_{\parallel,j}\) characteristic of a particle bouncing between mirror points. We confirm that this simple criterion gives excellent results in separating trapped from passing particles.

Figure 9: Panel \(a\): Evolution of the parallel momentum of an individual trapped ion (blue line) and passing ion (red line) for our fiducial simulation b20m8w800. Panel \(b\): Evolution of the parallel momentum of a trapped electron (blue line) and passing electron (red line) for run b20m8w800. The dashed vertical gray lines in each panel indicate the time interval \(\Delta\tau_{LR}\).
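A compact implementation of this trapped/passing criterion might look as follows (a sketch with mock trajectories, not the tracking code used for the runs; taking the absolute value of the median is our assumption, so that backward-streaming particles are also classified as passing):

```python
import numpy as np

def classify(p_par):
    """Label particles as trapped/passing over Delta tau_LR.

    p_par has shape (n_particles, n_times): parallel momentum sampled
    inside Delta tau_LR. Trapped: |median| < one standard deviation.
    """
    med = np.abs(np.median(p_par, axis=1))
    std = np.std(p_par, axis=1)
    return np.where(med < std, "trapped", "passing")

# Mock trajectories: a bouncer (p_par oscillates about zero) and a streamer.
t = np.linspace(0.52, 0.62, 200)                    # Delta tau_LR, units of 1/s
bouncer = 0.10 * np.sin(2 * np.pi * 40.0 * t)
streamer = 0.34 + 0.005 * np.random.default_rng(1).normal(size=t.size)

print(classify(np.vstack([bouncer, streamer])))     # ['trapped' 'passing']
```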
Figure 9 shows the evolution of the parallel momentum of a trapped and a passing ion (panel \(a\)) and of a trapped and a passing electron (panel \(b\)), where the dashed vertical gray lines indicate \(\Delta\tau_{LR}\). We can see the oscillation pattern in the evolution of the parallel momentum of the trapped ion during \(\Delta\tau_{LR}\) and until \(t\cdot s\approx 0.7\), when it escapes. The parallel momentum of the passing ion evolves without major changes as the ion streams through the simulation box. This behavior is consistent with previous works using hybrid and fully kinetic simulations (Kunz et al. (2014); Riquelme et al. (2016)).
In figure 9\(b\) we can also see the oscillating pattern of the parallel momentum of the trapped electron, indicating bouncing inside mirror modes, which ends at \(t\cdot s\approx 1.1\), when it escapes. The parallel momentum of the passing electron does not vary significantly during \(\Delta\tau_{LR}\), confirming that it was streaming along field lines at least during that interval.
It is worth noting, however, what happens after \(\Delta\tau_{LR}\). Our criterion for identifying particles as trapped or passing applies only within \(\Delta\tau_{LR}\); after that period the particles continue evolving into the saturated stage of mirror modes, where they can escape, be trapped again, or continue streaming unperturbed. Indeed, by looking at its parallel momentum, we can see that after escaping and streaming for a while, the trapped ion shown in figure 9\(a\) gets trapped again at \(t\cdot s\approx 1.1\), bounces inside a mirror mode, and escapes again at \(t\cdot s\approx 1.4\). Similarly, we can also see that the trapped electron shown in figure 9\(b\) gets trapped again at \(t\cdot s\approx 1.2\) and seems to stay trapped until the end of the simulation. Interestingly, judging from its parallel momentum, the passing electron also gets trapped at around \(t\cdot s\approx 0.7\) and then escapes again at \(t\cdot s\approx 1.2\). Therefore, in a statistical sense, we can consider the particles as trapped or passing only over the particular period of time \(\Delta\tau_{LR}\) that we chose, after which they can continue evolving and turn into passing or trapped particles again, as long as the mirror saturation persists in the simulation.
### Physical Mechanism of Secondary IC/Whistler Excitation: Distribution Functions
In this section, we look at the evolution of the pressure anisotropy and distribution functions of trapped and passing ions and electrons defined according to the criterion described in section SS3.5. We see that during \(\Delta\tau_{LR}\), both trapped ions and trapped electrons contribute most of the pressure anisotropy necessary to destabilize IC and whistler modes. We show that these IC and whistler waves interact in a quasilinear fashion with ions and electrons, respectively, and quickly regulate their pressure anisotropy such that their distributions evolve to a more isotropic state.
Figure 10\(a\) shows the evolution of the pressure anisotropy of trapped and passing ions. We can see that the anisotropy of trapped ions initially follows a double-adiabatic (CGL, dotted blue line) evolution until \(t\cdot s\approx 0.5\) (i.e. just at the start of \(\Delta\tau_{LR}\)), when the mirror modes start to trap them. We can readily see that during \(\Delta\tau_{LR}\), the trapped ions develop a significant anisotropy, peaking at around \(t\cdot s\approx 0.55\). The anisotropy is quickly regulated and converges to the best-fit threshold that we derived in section 3.1 and show in figure 3\(a\). Similarly, the pressure anisotropy of passing ions evolves in a relatively unperturbed fashion following the CGL evolution (dotted red line) through the majority of \(\Delta\tau_{LR}\), until \(t\cdot s\approx 0.6\), when it passes from negative values (consistent with passing ions having preferentially large parallel momentum) to a positive but more isotropic state, consistent with the best-fit threshold from fig. 3\(a\).

Figure 10: Panel \(a\): Evolution of the pressure anisotropy of ions identified as trapped (blue line) and passing (red line). The dashed green line indicates the best-fit threshold to \(\Delta P_{i}/P_{\parallel,i}\) shown in fig. 3\(a\), and the dotted blue and red lines show the corresponding double-adiabatic (CGL) evolution of trapped and passing ions, respectively. Panel \(b\): Evolution of the pressure anisotropy of trapped (blue line) and passing (red line) electrons. The dotted blue and red lines show the corresponding CGL evolution of trapped and passing electrons, respectively.
The behavior of the pressure anisotropy of trapped and passing particles can be understood as follows. Mirror modes interact resonantly with ions and electrons according to the resonance condition \(\omega_{M}-k_{\parallel,M}v_{\parallel}=0\), where \(\omega_{M}\) and \(k_{\parallel,M}\) are the frequency and parallel wavenumber of mirror modes, respectively, and \(v_{\parallel}\) is the parallel velocity of the particle. The very low frequency of mirror modes, \(\omega_{M}\sim 0\), implies that the resonant particles are the ones having very low \(v_{\parallel}\) (\(v_{\parallel}<\gamma_{M}/k_{\parallel,M}\), where \(\gamma_{M}\) is the mirror growth rate; Southwood & Kivelson (1993); Pokhotelov et al. (2002)). These are the particles that become trapped within mirror modes (Kivelson & Southwood (1996)). Consequently, all trapped particles have very low parallel velocity and, as a whole, they should naturally have a pressure anisotropy \(P_{\perp,j}>P_{\parallel,j}\) (\(j=i,e\)). Similarly, all passing particles have large \(v_{\parallel}\), and therefore they have a pressure anisotropy \(P_{\parallel,j}>P_{\perp,j}\). In this sense, fig. 10 is consistent with the trapping argument described in Kivelson & Southwood (1996) (see their fig. 1).
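For a Maxwellian parallel-velocity distribution, the fraction of particles satisfying this resonance condition (and thus candidates for trapping) is simply an error function of \(v_{c}/v_{th}\), with \(v_{c}=\gamma_{M}/k_{\parallel,M}\); the ratios below are illustrative assumptions, not values measured in this section:

```python
from math import erf, sqrt

def resonant_fraction(vc_over_vth):
    """Fraction of a Maxwellian with |v_par| < v_c = gamma_M / k_par,M."""
    return erf(vc_over_vth / sqrt(2.0))

for r in (0.1, 0.3, 0.5):
    print(f"v_c/v_th = {r}: candidate trapped fraction = {resonant_fraction(r):.2f}")
```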
The fact that both trapped and passing ions evolve towards the average level of ion anisotropy shown in fig. 3\(a\) shows that their trapped or passing condition corresponds to a transient state, which relaxes after a time comparable to \(\Delta\tau_{LR}\). Also, notice that the anisotropy of the two populations (and of the whole population, for that matter) is significant enough to drive IC waves unstable (see section 3.3), and therefore this provides evidence for the source of the IC waves that we see. If this is the case, their interaction with the ions is the source of the quick regulation of the anisotropy seen in fig. 10\(a\). Interestingly, under this scenario, the regulation of the pressure anisotropy of passing ions, which happens at the same time as that of the trapped ions, should also be due to the interaction with these IC waves, meaning that the IC waves interact with both populations of trapped and passing ions simultaneously, and therefore regulate the global ion anisotropy. We confirm that this is the case by looking at the evolution of the distribution functions of trapped and passing ions.
In the case of the electrons, we observe a similar evolution in figure 10\(b\). Initially, both trapped and passing electrons detach from their respective CGL evolutions (dotted blue and red lines, respectively) and develop a significant anisotropy \(\Delta P_{e}>0\), which peaks at \(t\cdot s\approx 0.4\). We also see that trapped electrons detach from their CGL evolution much earlier than passing electrons. This evolution then leads to the early burst of whistler waves, which quickly regulates and drives the anisotropies of both trapped and passing electrons towards a more isotropic state (see section 3.2). As expected, the anisotropy of trapped electrons is higher than that of the passing electrons. After this process, and during \(\Delta\tau_{LR}\), the anisotropy of trapped electrons increases again, while that of passing electrons continues to decrease. This way, we see that trapped electrons build up a pressure anisotropy \(\Delta P_{e}>0\) that is also quickly regulated after \(\Delta\tau_{LR}\), converging to an anisotropy level similar to that of the general electron population. The anisotropy \(\Delta P_{e}<0\) of the passing electrons also gets regulated towards a similar level during the same time. This evolution of trapped electrons also suggests that they become anisotropic enough to destabilize whistler waves, and therefore could be the source of the whistler activity observed at \(t\cdot s>0.6\). We provide evidence of this by showing the evolution of the electron distribution functions.
Figure 11 shows the distribution functions of trapped and passing ions and electrons at three different times \(t\cdot s=0.57\), \(t\cdot s=0.61\), and \(t\cdot s=0.75\), spanning \(\Delta\tau_{LR}\) and also part of mirror's saturated stage. In the following we describe the evolution of each population:
The distribution of trapped ions (figs. 11\(a\), 11\(b\), and 11\(c\)) shows a clear loss-cone-like form at \(t\cdot s=0.57\) (with all the ions outside the loss cone), meaning that all trapped ions are effectively trapped in mirror troughs. At this time, the trapped ions have reached their maximum pressure anisotropy according to figure 10\(a\).
Once IC waves are excited, they interact with both trapped and passing ions via pitch-angle scattering in a quasilinear fashion (Kennel and Engelmann (1966)). This diffusion process happens along paths of constant particle's energy in the frame moving with the waves (see e.g. Squire et al. (2022)):
\[v_{\perp,j}^{2}+(v_{\parallel,j}-\omega/k_{\parallel})^{2}=\text{const.} \tag{5}\]
We plot these contours as solid white lines in each panel of figure 11, as \(v_{\perp,j}^{2}+(v_{\parallel,j}-\omega/k_{\parallel})^{2}\approx v_{\perp,j}^{2}+v_{\parallel,j}^{2}=\text{const.}\), since in a high-\(\beta\) scenario the phase velocity of an IC wave introduces only a small correction of order \(v_{A}/v_{th,i}=\sqrt{1/\beta}\). Additionally, the IC waves in our simulations are destabilized in both the parallel and anti-parallel directions to \(\mathbf{B}\). We can see that the relaxation of the distribution function of trapped ions through the quasilinear interaction with IC waves agrees very well with these paths at \(t\cdot s=0.61\) and \(t\cdot s=0.75\).
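A short numerical illustration of scattering along these contours (with an assumed particle and the wave-frame shift estimated as \(v_{A}/v_{th,i}=1/\sqrt{\beta_{i}}\)) is:

```python
import numpy as np

beta_i = 20.0
frame_shift = 1.0 / np.sqrt(beta_i)      # omega/k_par ~ v_A, in units of v_th,i

# An assumed particle, in units of v_th,i
v_par, v_perp = 0.30, 0.10
const = v_perp**2 + (v_par - frame_shift) ** 2     # eq. (5), conserved

# Pitch-angle scattering moves the particle along this circle; pick a new
# v_perp and solve for the corresponding v_par on the same contour.
v_perp_new = 0.12
v_par_new = frame_shift + np.sqrt(const - v_perp_new**2)
print(f"({v_par:.2f}, {v_perp:.2f}) -> ({v_par_new:.2f}, {v_perp_new:.2f})")
```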
Figure 11: The distribution function \(f(v_{\parallel,j},v_{\perp,j})\) of trapped and passing ions and electrons at three different times: \(t\cdot s=0.57\) (first column), \(t\cdot s=0.61\) (second column), and \(t\cdot s=0.75\) (third column). The distribution function \(f_{\text{trapped}}(v_{\parallel,i},v_{\perp,i})\) of the trapped ions is shown in the first row, \(f_{\text{passing}}(v_{\parallel,i},v_{\perp,i})\) of the passing ions in the second row, \(f_{\text{trapped}}(v_{\parallel,e},v_{\perp,e})\) of the trapped electrons in the third row, and \(f_{\text{passing}}(v_{\parallel,e},v_{\perp,e})\) of the passing electrons in the fourth row. In all the plots, the solid white curves denote contours of constant particle energy in the frame moving with the waves: \(v_{\perp,j}^{2}+(v_{\parallel,j}-\omega/k_{\parallel})^{2}\approx v_{\perp,j}^{2}+v_{\parallel,j}^{2}=\text{const.}\) (\(j=i,e\)). An animation is available.

The distribution of passing ions (figs. 11\(d\), 11\(e\), and 11\(f\)) shows, on the one hand, a concentration of ions at low perpendicular velocities and relatively large parallel velocities, fairly symmetric in \(v_{\parallel}\). This is consistent with untrapped ions mainly streaming along the mean magnetic field in both directions. On the other hand, the population with large parallel velocity is also visible at \(v_{\parallel}/c\approx 0.3\) (see section 3.5). Interestingly, the passing ions also interact quasilinearly with the IC waves, and this is particularly evident in their evolution: we can clearly see how the large-parallel-velocity population of passing ions evolves along the contours of constant particle energy with excellent agreement at \(t\cdot s=0.61\) and \(t\cdot s=0.75\). We can understand the evolution of this population by looking at the gyroresonance condition
\[\omega-k_{\parallel}v_{\parallel,i}=\pm\omega_{c,i}. \tag{6}\]
If we look at the peak power at positive frequencies in the power spectrum shown in fig. 6\(c\), we can estimate the frequency and wavenumber at which most of the power of the IC waves resides: \(\omega/\omega_{c,i}^{\text{init}}\approx 0.2\) and \(ck_{\parallel}/\omega_{p,i}^{\text{init}}\approx\pm 0.15\). From eq. (6) we can then estimate the parallel velocity of the ions interacting gyroresonantly with these IC waves:
\[\frac{v_{\parallel,i}}{c}=\frac{\omega/\omega_{c,i}^{\text{init}}\mp 1}{(ck_{ \parallel}/\omega_{p,i}^{\text{init}})(m_{i}c^{2}/k_{B}T_{i}^{\text{init}})^{1 /2}(\beta_{i}^{\text{init}}/2)^{1/2}}, \tag{7}\]
which gives \(v_{\parallel,i}/c\approx 0.36\) and \(v_{\parallel,i}/c\approx-0.24\), values that fall in the range of the large-parallel-velocity population. The quasilinear evolution also occurs for the population with smaller parallel velocity.
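Evaluating eq. (7) at the peak of the IC power reproduces these two values; \(m_{i}c^{2}/k_{B}T_{i}^{\text{init}}\) is a run parameter not quoted in this section, so the value below is an assumption chosen for illustration:

```python
w = 0.2             # omega / omega_ci at the IC power peak (fig. 6c)
k = 0.15            # c k_par / omega_pi at the IC power peak (fig. 6c)
beta_i = 20.0
mc2_over_T = 49.0   # assumed m_i c^2 / (k_B T_i^init)

denom = k * mc2_over_T**0.5 * (beta_i / 2.0) ** 0.5
for sign in (+1.0, -1.0):
    print(f"v_par/c = {(w - sign) / denom:+.2f}")    # ~ -0.24 and +0.36
```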
The population of trapped electrons (figs. 11\(g\), 11\(h\), and 11\(i\)) shows a very similar evolution to that of the trapped ions; the loss-cone-like distribution is also apparent. The evolution of this distribution is likewise consistent with a quasilinear interaction, now between the electrons and whistler waves, driving the distribution towards isotropy along paths of constant particle energy, as can be seen at later times in figure 11.
Finally, the population of passing electrons (figs. 11\(j\), 11\(k\), and 11\(l\)) also shows a very similar evolution to that of the passing ions. The populated loss-cone shape of the distribution is also apparent, and we can see the quasilinear evolution of the distribution function along constant-particle-energy contours at later times.
This way, we have provided evidence for the source of both the IC and whistler waves observed in our simulations. Once ions and electrons get trapped in the low-magnetic-field-strength regions of mirror modes, they become significantly anisotropic, with a loss-cone-like distribution that is able to destabilize parallel-propagating IC and whistler waves, respectively. These waves then interact with both the trapped and passing populations in a quasilinear fashion, driving ions and electrons alike towards a more isotropic state. Consequently, this mechanism can contribute to regulating the global anisotropy of ions and electrons, and can thus be a pathway for particle escape and the consequent saturation of mirror modes (Kunz et al. (2014)).
## 4 Mass-Ratio Dependence
In this section, we compare simulations with different mass ratios, \(m_{i}/m_{e}=8\), \(32\), and \(64\), but with the same initial conditions for the ions, corresponding to runs b20m8w800, b20m32w800, and b20m64w800 in Table 1 (although with somewhat different temperatures). We see that the IC and whistler wave signatures appear in all three simulations, and thus they do not seem to present a strong dependence on mass ratio.
Figure 12 shows the evolution of \(\delta B_{\parallel}^{2}\) (panel \(a\)) and \(\delta B_{z}^{2}\) (panel \(b\)) for the three runs with mass ratios \(m_{i}/m_{e}=8\), \(32\), and \(64\) (runs b20m8w800, b20m32w800, and b20m64w800 in table 1). We can see a very consistent evolution of \(\delta B_{\parallel}^{2}\) in all three runs, meaning that \(m_{i}/m_{e}\) does not play a significant role in the early evolution and saturation of the mirror instability. Similarly, \(\delta B_{z}^{2}\) shows the same features in all three runs, especially during the mirrors' secular growth and saturated stages (\(t\cdot s\approx 0.5\) onwards). The early peak in \(\delta B_{z}^{2}\) at \(t\cdot s\approx 0.4\) corresponding to the early whistler burst is also seen in the three runs, but more prominently in the simulation with \(m_{i}/m_{e}=8\). This is possibly due to an enhancement of this wave activity by the ions, which are able to weakly feel the presence of whistlers, as the mass separation is not very large. This effect disappears as the mass ratio increases, and the early whistlers only affect the electrons. More importantly, for \(t\cdot s>0.5\), all three runs show a very similar evolution of \(\delta B_{z}^{2}\).
Figure 13 shows the evolution of the pressure anisotropy of ions (panel \(a\)) and electrons (panel \(b\)) for the same three runs. In the case of the ions, we can see an overall evolution that is very consistent among the three runs, at both early and late stages. We can see a smaller anisotropy overshoot for the simulation with \(m_{i}/m_{e}=8\) at \(t\cdot s\approx 0.4\), coincident with the enhancement seen in \(\delta B_{z}^{2}\) during the early whistler burst, suggesting that ions can weakly interact with the whistlers at this mass ratio and consequently their anisotropy does not reach the same overshoot as in the rest of the runs. Nevertheless, all three runs display a very similar pressure anisotropy evolution afterwards, which is also well described by the best-fit threshold \(\Delta P_{i}\propto\beta_{i}^{-0.45}\) shown in fig. 3.

Figure 12: Panel \(a\): The energy in the parallel component of the magnetic field fluctuations \(\delta\mathbf{B}\), for three simulations with different mass ratios: \(m_{i}/m_{e}=8\) (run b20m8w800, blue line), \(m_{i}/m_{e}=32\) (run b20m32w800, orange line), and \(m_{i}/m_{e}=64\) (run b20m64w800, green line). Panel \(b\): same as in panel \(a\) but for the perpendicular component of \(\delta\mathbf{B}\) out of the plane of the simulation in the same runs.
In the case of the electron pressure anisotropy \(\Delta P_{e}\), we can also see a similar overall evolution in fig. 13\(b\). The overshoot at \(t\cdot s\approx 0.4\) is larger for decreasing mass ratios, possibly because the whistler amplitude required for efficient scattering decreases as \(m_{i}/m_{e}\) increases, as explained above. This means that, after \(\Delta P_{e}/P_{\parallel,e}\) has surpassed the threshold for efficient growth of the whistler modes, the simulations with larger \(m_{i}/m_{e}\) take shorter times to reach the whistler amplitude necessary to efficiently scatter the electrons, and therefore the overshoot decreases for higher mass ratios. During late stages, we can see a very similar evolution of \(\Delta P_{e}\) in all three runs, which is even more evident for \(m_{i}/m_{e}=32\) and \(m_{i}/m_{e}=64\) (orange and green curves in fig. 13\(b\)), which essentially lie on top of each other.
Finally, figure 14 shows the power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) for the simulation with \(m_{i}/m_{e}=32\) (fig. 14\(a\)) and with \(m_{i}/m_{e}=64\) (fig. 14\(b\)). Here we also see a very similar power distribution at both mass ratios, showing both left-hand and right-hand polarized waves (positive and negative frequencies, respectively). The peak power is also observed at the same frequencies and wavenumbers as in fig. 6 for both polarizations.
This way, we can see that the linear and nonlinear evolution of the mirror instability and the late IC and whistler activity are well captured in our simulations, and they do not strongly depend on the mass ratio.
## 5 Dependence on Initial Plasma \(\beta\)
We tested whether the IC and whistler wave activity is present in simulations with \(\beta_{i}^{\text{init}}=2\) (i.e., total \(\beta^{\text{init}}=4\)) and \(\beta_{i}^{\text{init}}=40\) (i.e., total \(\beta^{\text{init}}=80\)), and compare them with our fiducial simulation at \(\beta_{i}^{\text{init}}=20\). We confirm that the mirror instability can develop in all simulations, and both IC and whistler waves do appear at nonlinear stages.
The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) is shown in figure 15, and we can see that it is similar among the three \(\beta_{i}\) cases. In all three cases the power is concentrated at \(\omega\sim 0\), corresponding to mirror modes. In addition, we also see a concentration of power in right- and left-hand polarized waves, so both IC and whistler waves are also present, although their peak frequency changes. For the \(\beta_{i}^{\text{init}}=2\) case we see that the peak frequency is at \(\omega/\omega_{c,i}^{\text{init}}\approx 0.5\), whereas in the \(\beta_{i}^{\text{init}}=40\) case it shifts to smaller values, \(\omega/\omega_{c,i}^{\text{init}}\approx 0.1\). This shift in peak frequency can also be explained by the IC and whistler dispersion relations, analogous to our discussion in section 3.3.
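At the order-of-magnitude level, this shift follows from evaluating the approximate dispersion relations (eqs. 2 and 4) at a fixed \(kR_{L,i}\sim 1\); keeping the wavenumber fixed across runs is an assumption, since the dominant scales may themselves vary with \(\beta_{i}\):

```python
import numpy as np

kR = 1.0                                    # assumed fixed k * R_L,i
for beta in (2.0, 20.0, 40.0):
    w_ic = kR / np.sqrt(beta)               # eq. (4)
    w_w = kR**2 / beta                      # eq. (2)
    print(f"beta_i = {beta:4.0f}: omega_IC/omega_ci ~ {w_ic:.2f}, "
          f"omega_W/omega_ci ~ {w_w:.3f}")
```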
Figure 16 compares the evolution of \(\delta B_{\parallel}^{2}\) (i.e., mainly the development of the mirror instability) for the three runs with different initial \(\beta_{i}^{\text{init}}\) (the other physical parameters are the same, see table 1). In all three cases we can see an exponential phase followed by the secular and saturated stages characteristic of the mirror instability, which develops earlier for higher \(\beta_{i}^{\text{init}}\), consistent with the smaller anisotropy threshold for the growth of the mirror instability at larger beta. The amplitude of \(\delta B_{\parallel}^{2}\) at the saturated stage is comparable for the \(\beta_{i}^{\text{init}}=20\) and \(\beta_{i}^{\text{init}}=40\) runs, and is smaller for the \(\beta_{i}^{\text{init}}=2\) run, as also seen in previous works (e.g. Riquelme et al. (2015)).
Indeed, when we look at the evolution of \(\delta B_{z}^{2}\), we can see that the \(\beta_{i}^{\text{init}}=20\) and \(\beta_{i}^{\text{init}}=40\) runs evolve similarly: both display an early whistler burst at \(t\cdot s\approx 0.4\), and an IC/whistler excitation stage (\(t\cdot s\approx 0.5\) onwards) at almost the same amplitude. In the case of the \(\beta_{i}^{\text{init}}=2\) run, we can see that the first exponential growth in \(\delta B_{z}^{2}\) at \(t\cdot s\approx 0.6\) is consistent with an IC burst (see e.g. Ley et al. (2019)), after which we see, from \(t\cdot s\approx 0.8\) onwards, the typical oscillation pattern produced by the excitation of the late IC and whistler waves, saturating at an amplitude similar to that of the rest of the runs and displaying a very high-frequency oscillation.

Figure 13: Panel \(a\): Evolution of the ion pressure anisotropy for three simulations with different mass ratios: \(m_{i}/m_{e}=8\) (run b20m8w800, blue line), \(m_{i}/m_{e}=32\) (run b20m32w800, orange line), and \(m_{i}/m_{e}=64\) (run b20m64w800, green line). The dashed red line indicates the best-fit threshold shown in figure 3\(a\), \(\Delta P_{i}/P_{\parallel,i}\propto\beta_{\parallel,i}^{-0.45}\). Panel \(b\): same as in panel \(a\) but for the electron pressure anisotropy in the same runs.

Figure 14: The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) at \(0.5<t\cdot s<0.7\) for \(m_{i}/m_{e}=32\) (run b20m32w800, left panel) and \(m_{i}/m_{e}=64\) (run b20m64w800, right panel). Positive and negative frequencies show the power in left-hand and right-hand polarized waves, respectively.
In figure 17, we compare the evolution of the ion and electron pressure anisotropy plotted as a function of their parallel plasma beta for the three simulations with different initial \(\beta_{i}\). (As the mean magnetic field strength is continuously increasing in all our simulations, the particles' \(\beta_{i}\) decreases over time, and therefore each simulation evolves towards the left in fig. 17.)
In the case of the ions (fig. 17\(a\)), we can see a similar overshoot and subsequent regulation, but the overshoot occurs at a lower anisotropy value for increasing \(\beta_{i}\). This is consistent with the inverse-\(\beta_{i}\) dependence of the mirror instability threshold: mirror modes are excited earlier at higher \(\beta_{i}\), and therefore have relatively more time to regulate the anisotropy before it reaches a higher overshoot. Interestingly, the saturated stage of the ion pressure anisotropy is consistent with the theoretical IC threshold from Gary & Lee (1994), \(\Delta P_{i}/P_{\parallel,i}=0.53\beta_{\parallel,i}^{-0.40}\) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (see fig. 3\(a\)), in all three runs, suggesting a universality in the threshold that \(\Delta P_{i}/P_{\parallel,i}\) follows as a consequence of the excitation of IC waves during the mirrors' saturated stage. (In the case of the \(\beta_{i}^{\text{init}}=40\) run, however, it is less clear whether it follows the above-mentioned threshold at late stages, given the short duration of this run.)
In the case of electrons (fig. 17\(b\)), we can also see that the overshoot is reached at lower values of the pressure anisotropy \(\Delta P_{e}/P_{\parallel,e}\) for increasing initial beta, consistent with an inverse-\(\beta_{i}\) dependence, now of the whistler instability anisotropy threshold. It is interesting to note that after the anisotropy overshoot, and during these late stages, the electron pressure anisotropy tends to be significantly smaller than expected from the whistler instability threshold in the higher initial \(\beta_{i}\) runs (\(\beta_{i}^{\text{init}}=20\) and \(\beta_{i}^{\text{init}}=40\)), despite the pressure anisotropy continuously generated by the amplification of the magnetic field driven by the shear motion in the simulation. Notice, however, that in low-magnetic-field regions the electron pressure anisotropy is larger than the whistler threshold and, therefore, enough to excite whistlers (fig. 8). This shows the key role played by mirror-generated magnetic troughs in creating the conditions to excite whistlers despite the fact that, globally, the pressure anisotropy may not be enough to make these waves unstable. On the other hand, in the \(\beta_{i}^{\text{init}}=2\) run, \(\Delta P_{e}/P_{\parallel,e}\) continues to grow weakly because of the continuous \(B\) amplification, and it does so following a marginal-stability state well described by the whistler instability threshold \(\Delta P_{e}/P_{\parallel,e}\propto\beta_{\parallel,e}^{-0.55}\) (Gary & Wang (1996)), consistent with previous works at lower \(\beta_{\parallel,e}\) (Ahmadi et al. (2018)).

Figure 16: Panel \(a\): Evolution of \(\delta B_{\parallel}^{2}\) for three simulations with different initial ion beta: \(\beta_{i}^{\text{init}}=2\) (solid red line, run b2m8w800), \(\beta_{i}^{\text{init}}=20\) (solid black line, run b20m8w800), and \(\beta_{i}^{\text{init}}=40\) (solid blue line, run b40m8w800). Panel \(b\): Evolution of \(\delta B_{z}^{2}\) for the same three simulations shown in panel \(a\).
The persistence of the late IC and whistler activity at different initial plasma \(\beta_{i}\) suggests that this phenomenon is a natural consequence of the excitation of the mirror instability. In other words, in a weakly collisional plasma with an initial plasma \(\beta_{i}\) sufficiently high to effectively excite the mirror instability, the excitation of IC and whistler waves at its late, saturated stages seems to be ubiquitous.
## 6 Summary and Discussion
In summary, we have performed fully kinetic PIC simulations of a collisionless plasma subject to a continuous amplification of the background magnetic field, to study the nonlinear stages of the mirror instability and the ensuing excitation of secondary ion-cyclotron (IC) and whistler instabilities in conditions where the plasma pressure dominates over the magnetic pressure (high \(\beta\)). After mirror modes reach high amplitudes and are able to trap ions and electrons within regions of low **B**, we observe the excitation of sub-dominant left-hand polarized IC and right-hand polarized whistler waves that persist throughout the rest of the simulation, well into the nonlinear stages of the mirror instability (see section 3.3). The whistler waves in our simulations appear consistent with observations of whistler lion roars in the Earth's magnetosheath.
By tracking ions and electrons through the simulation, we studied the excitation mechanism of both the IC and whistler waves. We characterized the tracked particles as trapped or passing (i.e. untrapped) within mirror modes, and followed the evolution of their distribution functions. We observed that the trapped populations of both ions and electrons become highly anisotropic while trapped inside mirror modes, contributing most of the anisotropy that allows the plasma to become unstable to IC and whistler waves, respectively. The passing ions and electrons, on the other hand, developed distributions concentrated at small perpendicular and large parallel velocities, fairly symmetric with respect to \(v_{\parallel}\), with a clear depletion at small parallel velocities (see section 3.6).
Once the IC and whistler waves are excited, they interact with both the trapped and passing populations of ions and electrons, respectively, via gyroresonant pitch-angle scattering. As a result of this interaction, both trapped ions and electrons reduce their anisotropy and escape from the magnetic troughs of mirror modes, following the prediction of quasilinear theory. The passing ion and electron populations evolve in a similar manner (see fig. 11). Interestingly, this process is observed to regulate the global anisotropy of ions and electrons in the simulation, driving the ion pressure anisotropy towards the IC instability threshold (Gary & Lee (1994)), and the electron pressure anisotropy towards a global level much smaller than expected from the theoretical whistler threshold. Given this low electron pressure anisotropy, the whistler excitation can be explained by the fact that, within mirror-generated magnetic troughs, the pressure anisotropy is locally larger than the whistler threshold (fig. 8\(i\)). Thus, we interpret the whistler-driven regulation of the electron pressure anisotropy as a local phenomenon, mainly produced by trapped electrons within nonlinear mirror structures.
The excitation of the secondary IC and whistler waves is maintained as long as mirror modes are present and growing, and this was also observed in simulations with lower and higher initial plasma \(\beta\). This way, IC and whistler waves could be a concomitant feature of the nonlinear evolution of the mirror instability, and provide an interesting physical connection between ion-scale instabilities and electron-scale physics.

Figure 17: Panel \(a\): Ion anisotropy \(\Delta P_{i}/P_{\parallel,i}\) as a function of the parallel ion beta, \(\beta_{\parallel,i}\) (with respect to the main magnetic field **B**), for three different simulations with different initial ion beta: \(\beta_{i}^{\text{init}}=2\) (solid red line, run b2m8w800), \(\beta_{i}^{\text{init}}=20\) (solid black line, run b20m8w800), and \(\beta_{i}^{\text{init}}=40\) (solid blue line, run b40m8w800). The dotted-dashed orange line shows the IC threshold \(\Delta P_{i}/P_{\parallel,i}=0.53/\beta_{\parallel,i}^{0.4}\) from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\). Panel \(b\): Electron anisotropy \(\Delta P_{e}/P_{\parallel,e}\) as a function of the parallel electron beta, \(\beta_{\parallel,e}\), for the same three simulations shown in panel \(a\). The dashed gray line in this case shows the threshold for the whistler instability, \(\Delta P_{e}/P_{\parallel,e}=0.36\beta_{\parallel,e}^{-0.55}\) for growth rate \(\gamma=0.01\omega_{c,e}\), from Gary & Wang (1996).
In this work, we did not vary the scale-separation ratio \(\omega_{c,i}/s\). In an environment like the ICM, turbulent eddies could drive the plasma locally through shear motions at kinetic scales with a wide range of frequencies \(s\), and we typically expect larger kinetic energy at low frequencies (i.e., higher \(\omega_{c,i}/s\)). For larger values of \(\omega_{c,i}/s\), previous works have shown that mirror modes can develop comparatively earlier in the simulations, therefore having relatively more time to saturate, and reaching similar amplitudes (Kunz et al. (2014); Melville et al. (2016); Riquelme et al. (2016); Ley et al. (2023)). In this sense, we would expect a similar late excitation of IC and whistler waves once mirror modes have reached a saturated stage.
The excitation of IC and whistler waves at saturated stages of the mirror instability modulates its nonlinear evolution, and therefore could affect transport processes in the ICM in which mirror modes come into play.
Particularly important is the pressure anisotropy regulation in the context of collisionless heating and dissipation via magnetic pumping in the ICM (Kunz et al. (2011); Ley et al. (2023)). The marginal-stability level that the ion pressure anisotropy reaches at the saturated stage, \(\Delta P_{i}\propto\beta_{\parallel,i}^{-0.45}\) (see fig. 3\(a\); also correctly pointed out by Sironi & Narayan (2015)), is larger than the usual mirror threshold \(1/\beta_{\parallel,i}\) by a factor \(\sim\beta_{\parallel,i}^{0.55}\), which directly translates into an excess heating of the same order. Indeed, given that \(\beta\) in the ICM is estimated to be \(\beta\sim 10-100\), and that the heating rate is directly proportional to the pressure anisotropy, this could imply a heating rate several times larger than predicted from the mirror threshold, enhancing the efficiency of the mechanism by draining more energy from the turbulent motions that drive the pumping.
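The size of this effect is immediate to estimate: the excess of the \(\beta_{\parallel,i}^{-0.45}\) marginal-stability level over the \(1/\beta_{\parallel,i}\) mirror threshold scales as \(\beta_{\parallel,i}^{0.55}\) (order-unity prefactors are omitted here):

```python
for beta in (10.0, 30.0, 100.0):
    print(f"beta = {beta:5.0f}: heating-rate excess ~ beta^0.55 = {beta**0.55:.1f}x")
```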
The structures of high and low magnetic field that mirror modes produce in the saturated stage seem to be persistent in time, and their energy \(\delta B_{\parallel}^{2}\) does not decrease as long as the amplification of the mean magnetic field \(B\) is maintained (see fig. 2\(g\)). Even when this amplification is halted or reversed, the decay timescales of mirror modes are long compared to the typical ion gyroperiod (Melville et al. (2016); Ley et al. (2023)). This implies that the trapping of ions and electrons also persists, along with the excitation of the secondary IC and whistler waves. This source of whistler waves can have interesting implications for ICM thermal conduction models like whistler-regulated MHD (Drake et al. (2021)), as these waves can dominate the electron scattering in the presence of mirror modes.
This source of whistler waves associated with mirror modes can also influence the suppression of the effective heat conductivity in the plasma even in the absence of heat fluxes (Komarov et al. 2016; Riquelme et al. 2016; Roberg-Clark et al. 2016, 2018), and this can have consequences for larger-scale instabilities such as the magneto-thermal instability (MTI; Balbus 2000; Berlok et al. 2021; Perrone & Latter 2022a,b).
Future work aimed at 3D fully kinetic PIC simulations will be required for a full understanding of the consequences of the mirror instability and secondary IC/whistler excitation in these high-\(\beta\) plasmas.
We thank Aaron Tran for providing the dispersion solver used in this work, and we thank Lorenzo Sironi, Jonathan Squire and Alexander Schekochihin for useful comments and discussion. F.L. acknowledges support from NSF Grant PHY-2010189. M.R. acknowledges support from ANID Fondecyt Regular grant No. 119167. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant No. ACI-1548562. This work used the XSEDE supercomputer Stampede2 at the Texas Advanced Computer Center (TACC) through allocation TG-AST190019 (Towns et al. 2014). This research was performed using the compute resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences. This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02).
Electron cyclotron (whistler) waves are widely observed in plasmas around Earth, and they have also been observed in the solar wind. In the presence of nonlinear mirror modes, bursts usually called "lion roars" occur within the low-magnetic-field regions associated with these modes. In the intracluster medium (ICM) of galaxy clusters, excitation of the mirror instability is expected, but it is not clear whether electron and ion cyclotron waves can exist under conditions in which the magnetic pressure is dominated by the gas pressure (high \(\beta\)). In this work, we perform fully kinetic particle-in-cell (PIC) simulations that drive the plasma through continuous amplification of the mean magnetic field \(\textbf{B}(t)\), to study the nonlinear stage of the mirror instability and the excitation of whistler and ion cyclotron (IC) waves under ICM conditions. The mirror modes
2309.00133 | Distraction-free Embeddings for Robust VQA | The generation of effective latent representations and their subsequent
refinement to incorporate precise information is an essential prerequisite for
Vision-Language Understanding (VLU) tasks such as Video Question Answering
(VQA). However, most existing methods for VLU focus on sparsely sampling or
fine-graining the input information (e.g., sampling a sparse set of frames or
text tokens), or adding external knowledge. We present a novel "DRAX:
Distraction Removal and Attended Cross-Alignment" method to rid our cross-modal
representations of distractors in the latent space. We do not exclusively
confine the perception of any input information from various modalities but
instead use an attention-guided distraction removal method to increase focus on
task-relevant information in latent embeddings. DRAX also ensures semantic
alignment of embeddings during cross-modal fusions. We evaluate our approach on
a challenging benchmark (SUTD-TrafficQA dataset), testing the framework's
abilities for feature and event queries, temporal relation understanding,
forecasting, hypothesis, and causal analysis through extensive experiments. | Atharvan Dogra, Deeksha Varshney, Ashwin Kalyan, Ameet Deshpande, Neeraj Kumar | 2023-08-31T21:02:25 | http://arxiv.org/abs/2309.00133v1 | # Distraction-free Embeddings for Robust VQA
###### Abstract
The generation of effective latent representations and their subsequent refinement to incorporate precise information is an essential prerequisite for Vision-Language Understanding (VLU) tasks such as Video Question Answering (VQA). However, most existing methods for VLU focus on sparsely sampling or fine-graining the input information (e.g., sampling a sparse set of frames or text tokens), or adding external knowledge. We present a novel _"DRAX: Distraction Removal and Attended Cross-Alignment"_ method to rid our cross-modal representations of distractors in the latent space. We do not exclusively confine the perception of any input information from various modalities but instead use an attention-guided distraction removal method to increase focus on task-relevant information in latent embeddings. _DRAX_ also ensures semantic alignment of embeddings during cross-modal fusions. We evaluate our approach on a challenging benchmark (SUTD-TrafficQA dataset), testing the framework's abilities for feature and event queries, temporal relation understanding, forecasting, hypothesis, and causal analysis through extensive experiments.
## Introduction
The process of comprehending an environment relies on receiving specific sensory inputs. As the number and diversity of these inputs increase, our level of cognition also tends to rise. Humans, for example, possess primary sensory inputs such as sight, sound, touch, smell, and taste, which contribute to our understanding of the world. However, as the quantity of these inputs grows, it leads to an _information overload_[12], and it becomes increasingly challenging to discern which information is essential and should be prioritized and which can be disregarded.
In the context of multi-modal systems that combine vision and language, achieving a relational understanding between the two modes of input is crucial for various tasks. Significant advancements have been made in areas like text-to-video retrieval [22, 13, 14, 15, 16], video-text matching [17], text-based video moment retrieval [13, 14, 15, 16], video captioning [23, 15, 17] and video question answering [16, 24, 25].
One of the tasks in Video-Language understanding and the task of our focus, Video Question Answering, has seen developments both with [1, 16, 15] and without [14] attention-based approaches. The fundamental concept behind this task involves generating global feature representations for both images and questions, merging them into a shared space [16, 17] through fusion, and feeding the fused features into a classifier [1] to generate answer probabilities. Given that questions can be diverse and multidirectional [24], video question-answering systems must encode representations for crucial properties within a video, such as temporal relations [18, 19, 12], and relations among different modalities [16]. These systems also need to learn to focus on relevant and informative features and relationships. Cross-modal attention techniques have proven effective in capturing intricate relationships among different modalities [18, 19].
However, previous such methods [14, 15, 16] operating on dense video and text features often suffered from the inclusion of excessive and irrelevant information. Methods like _ClipBERT_[14] and _Eclipse_[24] have tackled this issue by sparse sampling of frames and, in the extreme, selecting one frame at a time over multiple steps. _ClipBERT_ also addresses the domain disconnection caused by extracting offline features, as seen in _ECLIPSE_[24] and _HCRN_[14], and incorporates a cross-attention mechanism for frames and text tokens, enabling an end-to-end learning process that encompasses even the pixel-level details of video frames.
Several works have successfully addressed the need for high-quality datasets. For example, [14] released _TVQA_ leveraging the abundance of visual language data in television shows, covering domains such as crime, medical, and sitcom, and providing dialogue textual captions. Another notable development is the introduction of _DocVQA_[16], emphasizing that reading systems should utilize cues like layout, non-textual elements, and style. This represents a departure from scene text visual question answering [17, 18] and generic visual question answering [19] approaches.
Newer video-language understanding works have focused on bringing in more modalities by processing raw inputs Liu et al. (2021), extracting discrete tokens Fu et al. (2021) from the input video, end-to-end training of image encoders Zellers et al. (2021), and learning video representations through extensive pre-training Li et al. (2023). All discussed methods have had their primary focus on the input modalities, on making systems end-to-end, or on random sampling Lei et al. (2021), while no work until now, to the best of our knowledge, has looked into making the latent embeddings more robust so that they focus on task-specific elements. We propose a simple method in this new direction of removing irrelevant information from the latent representations and making them "distraction-free". To show the efficiency of our proposal in generating effective latent representations, we design a simple framework keeping the offline feature extraction method, and our comparisons are limited to similar works.
The _SUTD TrafficQA_ dataset Xu et al. (2021) (described in Appendix A) provides a comprehensive set of tasks to evaluate the framework's (1) feature and event recognition capabilities, (2) understanding of temporal relations, (3) reasoning about hypothetical situations, (4) forecasting capabilities, and (5) causal analysis.
Our framework is structured to condition the appearance, motion, and linguistic features hierarchically Le et al. (2020), using self and cross attention, including cross-attended vector transformations for multi-modal semantic alignment and a guided distraction masking technique, which acts as a filter before producing cross attended vectors. Guided masking helps to focus on relevant information and ignore distractions, which roughly corresponds to the notion of _attention control_ in psychology James (1890).
Major contributions of this work are as follows:
1. We propose a novel approach "Distraction Removal and Attended Cross Alignment (DRAX)" that identifies and removes distractors from the embeddings and semantically aligns embeddings for fusing multi-modal representations while conditioning modalities in a hierarchical fashion.
2. We incorporate distraction masking in both cross-attention and semantic alignment during fusion which refines the embeddings during all cross-modal interactions.
3. We perform an extensive study on the driving scenes to display the effectiveness of our method in understanding event queries, temporal relation understanding, forecasting, hypothesis and causal analysis through the tasks provided in the dataset SUTD-TrafficQA.
## DRAX Framework
The task for the network \(\mathcal{F}_{\theta}\) here is to select an answer out of the given answer candidates in set \(\mathcal{A}\). In Equation (1), \(q\) and \(\mathcal{V}\) represent the question and visual information, respectively. In this work, we are limiting our application to multi-choice answers with four answer candidates.
\[\widetilde{a}=\underset{a\in\mathcal{A}}{\text{argmax}}\mathcal{F}_{\theta}(a |q,\mathcal{V}) \tag{1}\]
The component structure of our VQA system starts with self-attention encoders Vaswani et al. (2017) followed by a cross-modality encoder Tan and Bansal (2019), which employs a dynamic attention-guided distraction masking mechanism. Lastly, using cross-attended vector-space transformation, we fuse two vector embeddings, either from individual modalities or previously fused vectors, and a new modality input vector (Figure 1). All the above components are applied to \(\mathcal{V}\) video feature vectors (motion and appearance), and the resulting fused representation vectors are added with linguistic context from question feature vectors. To generate probabilities for answer candidates, individual candidate embeddings are fused with representation from the previous step and are fed to the decoder. \(\mathcal{F}_{\theta}\) computes logits, and the answer candidate with the highest probability is selected using the \(argmax\) operation. Our system is structured hierarchically Zhao et al. (2017, 2018); Le et al. (2020) as shown in Figure 1 where each modality is given as input during different stages of the hierarchy to refine feature vectors.
Figure 1: Structured Hierarchy of Input Modalities to the Distraction Removal and Attended Cross Alignment (**DRAX**) blocks. Each solid (or gradient) rectangle represents a set of vectors, until after the output of the Answer Candidates DRAX block, where the set of vectors is reduced to a single vector to fuse with the answer candidates.
In our approach, we partition the video input into \(\mathcal{C}\) clips of equal lengths and extract \(\mathcal{N}\) frames uniformly from the entire video [10], which form the \(\mathcal{M}\) and \(\mathcal{E}\) vector sets, respectively. Below we describe the different modality inputs which are considered independent in our work:
**Appearance Features.** These are represented by the sequence of feature vectors \(\{\epsilon_{i}|\mathcal{E}\in\mathbb{R}^{512}\}_{i=1}^{\mathcal{N}}\), which correspond to the \(\mathcal{N}\) frames. In our implementation, we calculate these features using the CNN-based ResNet-18 model [10].
**Motion Features.** The motion features \(\{m_{i}|\mathcal{M}\in\mathbb{R}^{2048}\}_{i=1}^{\mathcal{C}}\) are a sequence of feature vectors that represent the \(\mathcal{C}\) clips. They capture motion information with the intuition of encoding temporal relations and dynamic context among frames or clips. To extract these motion features, we utilize the ResNeXt-101 [13].
**Linguistic Representation.** The _questions_ and _answer candidates_ are transformed into \(\mathbb{R}^{300}\) space vectors using GloVe word embeddings [12]. These linguistic features serve as a single modality. However, the question \(\mathcal{Q}\in\mathbb{R}^{d_{q}}\) and answer candidates \(\mathcal{A}\in\mathbb{R}^{d_{a}}\) representations are fused separately at different stages in the hierarchical process and operated on individually by the DRAX block. Here \(d_{q}\) and \(d_{a}\) are \(300\).
Other components input into the encoder, "added" to the above-described embeddings or to the fused embeddings \(X_{feat}\) produced in between the hierarchical blocks, are:
\[X_{M}=[x_{cls}|X_{feat}]+x_{pos}\]
**CLS and Positional Encoding.** We extend the input sequences at each step of the input hierarchy by appending CLS tokens \(x_{cls}\)[14] which capture the overall sequence information and facilitate information transfer between embeddings. Furthermore, we add positional encodings \(x_{pos}\) to account for the position-agnostic behavior of the attention mechanism. Understanding the position of elements is crucial for both linguistic and vision comprehension tasks. In contrast to existing literature, we utilize sinusoidal encodings [15] for linguistic embeddings, and we adopt learnable 1D positional encoding for motion and appearance, and the previously fused embedding vectors, inspired by ViT [16] and CrossViT [10].
### Encoders
In our framework, the encoder stack comprises two separate single-input self-attention encoders along with a cross-modality encoder for the cross-attention operation.
**Background: Attention.** The basic idea of attention [1, 12] involves retrieving information from a set of context vectors \(y_{j}\) that are "_relevant_" to a query vector \(x\). This retrieval is accomplished by calculating matching scores \(a_{i}\) between the query vector \(x\) and each context vector \(y_{i}\). These matching scores are then normalized using \(softmax\):
\[a_{i}=\text{score}(x,y_{i});\quad\alpha_{i}=\frac{\exp(a_{i})}{\sum_{k}\exp(a_ {k})} \tag{2}\]
After obtaining the normalized matching scores, referred to as attention weights, we create a weighted sum of the context vectors. This weighted sum (Equation 3) represents the attended information from the set of context vectors \(y_{i}\) with respect to a specific query vector \(x\).
\[Att_{X\to Y}(x,\{y_{i}\})=\sum_{i}\alpha_{i}y_{i} \tag{3}\]
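For concreteness, a minimal NumPy sketch of Equations 2 and 3 (our own illustration using a plain dot-product score; not the authors' implementation):

```python
import numpy as np

def attend(x, Y):
    """Attend a query x of shape (d,) over context vectors Y of shape (n, d).

    The matching score is taken as a dot product, the scores are
    softmax-normalized into weights alpha_i (Eq. 2), and the attended
    vector is the weighted sum of the contexts (Eq. 3).
    """
    scores = Y @ x                           # a_i = score(x, y_i)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # alpha_i
    return weights @ Y                       # sum_i alpha_i * y_i

rng = np.random.default_rng(0)
x, Y = rng.normal(size=4), rng.normal(size=(5, 4))
print(attend(x, Y))                          # a vector in the span of Y's rows
```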
Self-attention is when the _query_ vector \(x\) is in the set \(\{y_{j}\}\) of _context_ vectors. However, we can also have the _query_ and _context_
Figure 2: Architecture of the **DRAX**: Distraction Removal and Attended Cross Alignment Block framework. Distraction Removal (\(DR\)) and Attended Cross Alignment (\(AX\)) functions are shown in the zoomed-in views of Cross-Encoder and Cross-Fusion blocks. NOTE: Vectors displayed in vertical form are not \(X_{M}^{T}\) unless specifically mentioned. DR \(\simeq\) Distraction Masking.
vectors from mutually exclusive sets and can retrieve information from different domains (e.g., vision and language) by bringing them to a common vector space, which is how cross-modal attention is applied.
**Single Input Self Attention Encoders.** The pair of single-input multi-headed self-attention (MSA) encoders in the framework operates on the offline-extracted appearance, motion, and linguistic features (questions, answer candidates). At a particular hierarchical level of the complete pass of the framework, one input is an independent modality input, while the other can be another modality input or the output of a cross-fusion from a previous hierarchical level as in Figure 1. Unlike the standard implementations with SA applied only to language Devlin et al. (2019), vision Tan and Bansal (2019), or any single modality input, our inputs are single-modality as well as previously fused, semantically aligned embeddings (described in the Cross-Aligned Fusion section). Each single-input self-attention encoder is built of a self-attention sublayer followed by a feed-forward sublayer, with a residual connection and layer normalization added after each sublayer, following Vaswani et al. (2017).
**Cross Encoder.** The cross-modality encoder consists of a pair of linear layers \(f_{j}\) and \(g_{j}\) on both ends of the multi-headed cross-attention (\(MCA\)) sublayer, where \(f_{j}\) and \(g_{j}\) act as projection and back-projection functions, mainly for dimension alignment. We apply a _Pre-LayerNorm_ (\(LN\)) to the inputs (\(X^{SA}_{M_{1}}\) and \(X^{SA}_{M_{2}}\)) of the multi-head cross-attention function, finally followed by a residual connection.
\[\begin{split} X^{CA}_{M_{1}},X^{CA}_{M_{2}}=MCA\big{(}[f_{M_{1}}( LN(X^{SA}_{M_{1}})),\\ f_{M_{2}}(LN(X^{SA}_{M_{2}}))]\big{)}\\ X^{k+1}_{M_{j}}=X^{k}_{M_{j}}+g_{j}(X^{CA}_{M_{j}})\end{split} \tag{4}\]
As mentioned in the background, the query and context can come from mutually exclusive sets, and information can be retrieved keeping either set as context and using the other as the query; this flexibility is leveraged in the cross-encoders, as shown by Equations 5-7:
\[\begin{split} K_{M}=X^{SA}_{M}\cdot W_{k}\quad Q_{M}=X^{SA}_{M} \cdot W_{q}\\ V_{M_{j}}=X^{SA}_{M_{j}}\cdot W_{v_{j}}\end{split} \tag{5}\]
\[\begin{split} A_{M_{1}\to M_{2}}=softmax\big{(}\frac{Q_{M_{1}}K^{T}_{M_{2}}}{\sqrt{d/h}}\big{)}\\ A_{M_{2}\to M_{1}}=softmax\big{(}\frac{K_{M_{2}}Q^{T}_{M_{1}}}{\sqrt{d/h}}\big{)}\end{split} \tag{6}\]
\[\begin{split} X^{CA}_{M_{1}}=A_{M_{1}\to M_{2}}\cdot V_{M_{2}} \end{split} \tag{7}\]
\(W_{k}\), \(W_{q}\), \(W_{v_{j}}\) are the parameter matrices and \(d\) is the inner dimension of cross-attention layer. The overall encoder system has \(K\) layers (refer Figure 2) and the output of the \(k\)-th cross-modality encoder will again be given as input to the SA encoder at the \((k+1)\)-th layer.
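To make the exchange concrete, a single-head sketch of Equations 5-7 follows (the sequence lengths, dimension, and random weights are illustrative assumptions of ours, not the paper's settings):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n1, n2, d = 6, 8, 16                       # illustrative sequence lengths / dim
X1, X2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))
Wq, Wk, Wv2 = (rng.normal(size=(d, d)) for _ in range(3))

# Eq. 5: shared query/key projections, a per-modality value projection
Q1, K2, V2 = X1 @ Wq, X2 @ Wk, X2 @ Wv2
# Eq. 6 with a single head (h = 1): relevance of M2 contexts to M1 queries
A_1to2 = softmax(Q1 @ K2.T / np.sqrt(d))
# Eq. 7: attended representation of M1 built from M2's values
X1_ca = A_1to2 @ V2
print(A_1to2.shape, X1_ca.shape)           # (6, 8) (6, 16)
```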
### Distraction Removal
We have termed the process of removing irrelevant information from vectors as _"distraction removal"_. When humans fail to ignore task-irrelevant or task-relevant distractions, it can interfere with their ability to complete tasks effectively (Forster and Lavie 2008). For instance, such distractions can lead to dangerous situations like car accidents (Arthur and Doverspike 1992). The hypothesis is that incorporating distractions into the data, even with a small weight as in attention (Bahdanau et al. 2016), negatively affects the model's training process and prediction accuracy during inference. Intuitively, even a small amount of irrelevant information can deteriorate the model's performance, akin to how distractions can influence human performance.
We first formulate a simple method to identify distractions and then describe the removal mechanism in a multi-headed attention setting. This eventually enhances our model's focus on task-relevant information among the cross-information interactions.
**Distraction Identification.** Taking the dot product of the projected _embedding matrices_ \(Q\) and \(K\) and then normalizing the scores using \(softmax\) yields the attention weights matrix shown in Equation (6). In this matrix, each row represents the "relevance" of the context vectors \(y_{j}\) (columns) for each of the query vectors \(x\) (rows).
\[\rho=max(A_{M_{1}->M_{2}},dim=-1) \tag{8}\]
The highest _relevance_ w.r.t. each query (the highest entry in each row) becomes the representative _relevance_ score \(\rho\) for that query, and a threshold \(\tau\) is set as a percentage of \(\rho\) by multiplying it with the _distraction factor_ \(d_{f}\) (e.g., \(d_{f}=0.3\) gives \(30\%\) of \(\rho\)). All attention weights below \(\tau\) are considered _distractors_.
\[\tau=\rho*d_{f} \tag{9}\]
**Distraction Removal in Multihead Subspaces.** We have adopted the multi-head attention mechanism as described by Vaswani et al. (2017), having its essence in the distraction removal process as well. The core concept behind the
Figure 3: (Upper) Shows representative _relevance score_\(\rho\) generation from the CA matrix. (Lower) \(\rho\) * \(d_{f}\) sets the threshold to determine and mask out distracting weights in \(A_{M_{2}\to M_{1}}\)
multihead attention mechanism involves dividing the input embedding into \(h\) different subspaces, also known as heads. Each head is responsible for learning specific relationships and patterns within its subspace. By calculating joint attention across these different subspaces, the model can capture cross-relations or information from various parts of the embedding. This approach has the advantage of identifying distractor subspaces within the representations rather than considering the entire embedding vector as a distractor. By doing so, the model can identify irrelevant information more precisely in specific parts of the vectors, making the distraction removal process more effective. The multi-head attention mechanism helps localize and handle distractions more efficiently.
The distraction removal is then applied to these joint attention weights, and the context vector heads with attention weights below \(\tau\) are set to \(0\) in the attention weight matrix \(A_{M_{1}\to M_{2}}\) before the weighted averaging process (Equation 3) to generate attended representations; in implementation, this is a modified Equation (7): \(A^{masked}_{M_{1}\to M_{2}}\cdot V_{M_{2}}\) (Figure 3).
\[\mu_{i}=\Big{\{}\big{\{}a^{h}_{i,j}<\tau^{h}_{i}\big{\}}_{j=1}^{len(X_{M_{2}})} |a^{h}_{i}\in A^{h}_{M_{1}->M_{2}}\Big{\}}_{h=1}^{H} \tag{10}\]
\[(a^{h}_{i,j})^{masked}=a^{h}_{i,j}*(1-\mu^{h}_{i,j}) \tag{11}\]
Here \(\mu\) is the boolean mask for the weights in \(A_{M_{1}->M_{2}}\) to be set to \(0\), \(H\) is the total number of heads and \(len(X_{M_{2}})\) gives the number of row vectors in \(X_{M_{2}}\) which equals number of columns in \(A_{M_{1}->M_{2}}\) (i.e., context vectors _relevance scores_).
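A minimal single-head sketch of the masking step in Equations 8-11 (the attention rows and the distraction factor are illustrative; as described next, \(d_{f}\) is raised at each encoder layer):

```python
import numpy as np

def remove_distractions(A, d_f):
    """Zero attention weights below tau = rho * d_f, row by row.

    A   : (n_query, n_context) softmax-normalized attention weights.
    d_f : distraction factor; per Eq. 12 it is raised by delta at each layer.
    """
    rho = A.max(axis=-1, keepdims=True)   # Eq. 8: representative relevance
    tau = rho * d_f                       # Eq. 9: per-query threshold
    mu = A < tau                          # Eq. 10: boolean distractor mask
    return A * (1 - mu)                   # Eq. 11: masked attention weights

A = np.array([[0.50, 0.30, 0.15, 0.05],
              [0.25, 0.25, 0.25, 0.25]])
print(remove_distractions(A, d_f=0.3))
# Row 0: rho = 0.5 and tau = 0.15, so only the 0.05 entry is zeroed
# (0.15 itself survives); row 1: every entry equals rho, so all survive.
```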
**Distraction Factor.** In addition to learning more complex representations and learning to attend to different combinations of attention weights of the input and output sequences, the repetition of the overall encoder system in our framework also refines the distraction removal process. At each of the \(K\) layers of the encoder system, the _distraction factor_ \(d_{f}\) is increased by a value determined by a hyperparameter \(\delta\).
\[d^{k+1}_{f}=d^{k}_{f}+\delta \tag{12}\]
This makes the threshold more strict to distractors, and by the last layer, only the most relevant _context_ vectors with high attention weights would be taken for the weighted averaging to generate an attended vector for a _query_\(x\). Intuitively, this can be seen as enhancing the relevant semantic information in the embeddings.
### Cross-Aligned Fusion
Previously, the generation of fused multi-modal representations has been based on simple concatenation and linear projection of vectors [20], or on parallel concatenation of the tokens with a [CLS] token and applying cross-attention [19, 10], with [10] simply using the CLS as the cross-modality representation.
We apply a semantic and dimensional alignment of vectors as part of DRAX instead of simple concatenation and linear projection or using the CLS token. We also implement the distraction removal operation in the vector-space projection stage, so that only the relevant, non-distractor features are semantically aligned.
**Vector Space Transformation.** We use the original interpretation of attention for aligning the _important_ vectors [1] from a different (_tailing_) vector space \(X_{t}\in\mathbb{R}^{m\times d}\) to build the _context_ for particular _query_ tokens of the _anchor_ vector space \(X_{a}\in\mathbb{R}^{n\times d}\). The \(X_{t}\) vectors undergo a vector space transformation for semantic as well as dimension alignment with the \(X_{a}\) vector space.
\[A^{masked}_{a\to t}=DR\big{(}softmax(\frac{Q_{a}K^{T}_{t}}{\sqrt{d/h}})\big{)} \tag{13}\]
\[X^{align}_{t}=A^{masked}_{a\to t}\cdot X_{t} \tag{14}\]
Attention matrix \(A^{masked}_{a\to t}\) acts as the \(\mathbb{R}^{m\times d}\rightarrow\mathbb{R}^{n\times d}\) space transformation or alignment matrix for \(X_{t}\). Each row in \(A^{masked}_{a\to t}\) contains the constants forming the linear combinations for the new rows of \(X^{align}_{t}\). Note that \(A^{masked}_{a\to t}\) also undergoes _distraction removal_ \(DR\) (as in the Distraction Removal section), which sets some \(\alpha_{i}\) to 0 and hence leaves out some parts (_subspaces_, due to multi-head attention) of the vectors. In the new \(X^{align}_{t}\in\mathbb{R}^{n\times d}\) space, each vector is just a linear combination (\(\alpha_{1}x_{1}+\alpha_{2}x_{2}+...\)) of the original \(X_{t}\) vectors (Equation 3), but due to _distraction removal_, some irrelevant parts of the vectors are lost, which introduces changes in the vector space [20].
**Anchor Vectors.** While fusing two sets of vectors, a decision is involved regarding which vector matrix is to be chosen as the anchor \(X_{a}\), meaning that the dimensions of one
Figure 4: Different parts (subspaces) of the embeddings masked due to multiple heads. Above example: 3-head space (\(H=3\))
Figure 5: Masked (grey boxes in left figure) or \(0\) weighted positions are ignored during the dot product in multi-head space. (Right most) Figure shows a weighted average of only \(2^{nd}\), \(4^{th}\), and \(5^{th}\) embedding vectors. This also shows vector space transformation (in cross fusion) as the semantic alignment process is shown for \(X_{M_{1}}\) vector being transformed to the space of \(X_{M_{2}}\).
of the vector matrices (the _tailing_ \(X_{t}\)) would be changed according to the other (the anchor) to _align_ dimensions for the concatenation and fusion operation. The best resulting (empirical) combination is reported in the main results of the experiments section, while the ablation is shown further below.
The semantically aligned \(X_{t}^{align}\) vector is finally concatenated (along \(dim=-1\)) with \(X_{a}\) and undergoes a feed forward fusion operation:
\[X_{fused}=[X_{a}||X_{t}^{align}]\cdot W_{f}+b \tag{15}\]
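Putting Equations 13-15 together, a single-head sketch of the aligned fusion (random weights for illustration; the \(DR\) step is the masking function from the Distraction Removal section):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def remove_distractions(A, d_f=0.4):       # 0.4 = the fusion masking factor
    rho = A.max(axis=-1, keepdims=True)    # representative relevance (Eq. 8)
    return A * (A >= rho * d_f)            # zero the distractor weights

rng = np.random.default_rng(1)
n, m, d = 5, 7, 8                          # anchor/tailing lengths, hidden dim
Xa, Xt = rng.normal(size=(n, d)), rng.normal(size=(m, d))
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wf, b = rng.normal(size=(2 * d, d)), np.zeros(d)

# Eq. 13: masked alignment matrix from anchor queries to tailing keys
A_at = remove_distractions(softmax((Xa @ Wq) @ (Xt @ Wk).T / np.sqrt(d)))
# Eq. 14: tailing vectors re-expressed in the anchor's space; each new row
# is a (masked) linear combination of the original X_t rows
Xt_align = A_at @ Xt
# Eq. 15: concatenate along the feature dimension and fuse
X_fused = np.concatenate([Xa, Xt_align], axis=-1) @ Wf + b
print(X_fused.shape)                       # (5, 8)
```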
### Answer Decoder
As shown in Figure 1, after the question conditioning, the embeddings are repeated for the number of answer candidates \(|\mathcal{A}|\) (along the batch dimension)\({}^{1}\) so we can fuse the relevant information into the space of individual answer candidates, again using the _DRAX_ block. Until now, CLS-token-removed embeddings were passed between hierarchical blocks after fusion. But here, to get a final representation for every answer candidate \(X_{fused}^{answer}\in\mathbb{R}^{4\times d}\), we take a mean along with the CLS of the final vectors, reducing \(\mathbb{R}^{4\times(m\times d)}\rightarrow\mathbb{R}^{4\times d}\), which is passed through the classifier to get the final label probabilities \(p\in\mathbb{R}^{|\mathcal{A}|}\).
Footnote 1: Implementation insight
\[y=ELU(W_{a}X_{fused}^{answer}+b) \tag{16}\]
\[y^{\prime}=ELU(W_{y}y+b) \tag{17}\]
\[p=softmax(W_{y^{\prime}}y^{\prime}+b) \tag{18}\]
Following [11] we use the hinge loss [13] on pairwise comparisons, \(max(0,1+n-p)\), for incorrect \(n\) and correct \(p\) answers.
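A sketch of the decoder head and the pairwise hinge loss (Equations 16-18; the layer shapes, weights, and the final per-candidate projection \(W_{p}\) are illustrative assumptions of ours):

```python
import numpy as np

def elu(z):
    return np.where(z > 0, z, np.exp(np.minimum(z, 0)) - 1.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, n_ans = 8, 4
X_ans = rng.normal(size=(n_ans, d))        # fused per-candidate vectors
Wa, Wy = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wp = rng.normal(size=(d, 1))               # hypothetical per-candidate logit head

y = elu(X_ans @ Wa)                        # Eq. 16
y2 = elu(y @ Wy)                           # Eq. 17
p = softmax((y2 @ Wp).ravel())             # Eq. 18: one probability per candidate

correct = 1                                # index of the true answer
# pairwise hinge loss max(0, 1 + n - p) over the incorrect candidates n
loss = sum(max(0.0, 1.0 + p[j] - p[correct])
           for j in range(n_ans) if j != correct)
print(p, loss)
```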
## Experiments
We compare with related frameworks that, like ours, operate on offline-extracted feature vectors. We first discuss the main results and then the ablation of components, along with a discussion of the capabilities of the model tested by each task. Reported results are averaged over \(3\) different seeds along with the standard deviation (\(\sigma\)). See Appendix B for visualizations of samples from the dataset.
### Main Results
Table 1 displays our framework outperforming previous similar SOTA approaches that operated on offline-extracted features. We get an accuracy of \(40.4\) (\(\sigma=0.76\)) with gains of \(3.35\) (\(9.04\%\)) and \(3.91\) (\(10.71\%\)) absolute (and relative) scores over _ECLIPSE_[20] and _HCRN_[11], respectively, with our large model having \(K=6\) encoder layers, i.e., DRAX-large. For the ablation study, we report the most significant results with the following parameters: 3 encoder blocks (i.e., DRAX-base), an initial masking factor \(d_{f}\) of \(0.3\) (i.e., 30% of the "representative relevance" score) for cross-attention masking, which increases by the same factor \(\delta=0.3\) (\(30\%\to 60\%\to 90\%\)) at each encoder layer, and a fusion masking factor of \(0.4\), which stays constant as there is only a single fusion block. Encoder layers are reduced to \(3\) due to memory and compute-time constraints.
### Tasks
Table 2 shows task-wise accuracies. We discuss the diverse abilities of the framework tested by the tasks below:
#### Basic Understanding
This task tests a basic-level perception of the model. It covers queries regarding existing features in the scenes (e.g., types of objects, vehicles, and environment situations) and events or event classification (e.g., if an accident happened, type of accident, and actions of the pedestrians). Our method shows comparable performance to the HCRN. This task consists of the largest subset of the whole dataset with very basic queries, and the results show that distraction removal or semantic alignment doesn't play a significant role in solving this subset individually.
#### Attribution
We focus on the model's capabilities for causal analysis in this task (e.g., what are the reasons for this crash). We get a score of \(38.58\), an increase of \(4.89\) absolute points over the baseline, with more robust results: a standard deviation (\(\sigma\)) of \(0.59\) versus \(1.6\) for the baseline.
#### Event Forecasting
Testing the model's ability to predict future events by observing and analyzing a given scene, we produce a score of \(30.94\) with \(\sigma=0.45\), gaining \(1.95\) over the baseline (\(\sigma=1.12\)).
#### Reverse Reasoning
This task makes the model look into the past of the provided segment of scenes and answer queries. Our method gets a score of 35.34 (\(\sigma=1.01\)) with a gain of \(5.69\) points over baseline.
#### Counterfactual Inference
This tests the model's understanding of hypothetical situations in the context of the videos (e.g., would the accident still occur if the driver slows down in time?). So the model has to make inferences
| Models | Accuracy |
| :-- | :--: |
| Q-type (random) | 25.00 |
| QE-LSTM | 25.21 |
| QA-LSTM | 26.65 |
| Avgpooling | 30.45 |
| CNN+LSTM | 30.78 |
| I3D+LSTM | 33.21 |
| VIS+LSTM [13] | 29.91 |
| BERT-VQA [20] | 33.68 |
| TVQA [11] | 35.16 |
| HCRN [11] | 36.49 |
| Eclipse [20] | 37.05 |
| **DRAX**-base (ours) | 39.63 |
| **DRAX**-large (ours) | **40.4** |
| _Human_ | 95.43 |

Table 1: DRAX-base and large comparison with previous methods
based on the imagined situations under given conditions. Our method scores \(43.06\) gaining \(3.06\) points over the baseline.
#### Introspection
This lets the model learn to provide preventive advice for avoiding certain traffic situations (e.g., Could the vehicle steer away to prevent the accident?), which actually tests the model's capabilities to think and provide resolutions. We get a score of \(31.76\), gaining \(6.99\) over baseline with a significant improvement in standard deviation (\(\sigma_{HCRN}=2.72\rightarrow\sigma_{DRAX}=0.67\)).
### Ablation Study on Full Dataset
**Cross-Aligned Fusion.** We ablated this component, replacing it with a simpler concatenation-and-linear-projection fusion operation similar to HCRN. For appearance-motion feature fusion, we average consecutive 16 frames corresponding to each of the 8 clips, concatenate them along the hidden dimension (\(d\)), and take a linear projection through a linear layer. Instead of using the final hidden state of an \(LSTM\) layer for the question tokens as in _HCRN_ and _Eclipse_, we use the CLS token after the self-encoder, repeated and concatenated with the previously fused embedding. This shows a decrease in performance of \(0.88\) on the full dataset and a subsequent performance decrease in the subset tasks.
**Distraction Masking.** Removing distraction masking brought the result down by \(0.7\) points for the full dataset. Not observing a similar decrease in the subtasks, we infer that distraction masking is beneficial for a larger and more generalized dataset.
**Removing Distraction Masking and Cross-Alignment.** A decrease of \(1.57\) points is seen by completely ablating our proposed methods, which clearly displays their significance in producing cross-attended "_distraction-free_" embeddings.
**Anchor Vector Spaces.** Our fusion mechanism includes vector space projection from a tailing to an anchor space. The best results were achieved with {Appearance\(\rightarrow\)Motion\(\leftarrow\)Question\(\rightarrow\)Answer}, where the "\(\rightarrow\)" and "\(\leftarrow\)" notations denote the direction of the space projection. As our framework has a hierarchical structure, {Appearance\(\rightarrow\)Motion\(\leftarrow\)Question} means appearance feature vectors are projected to the motion vector space, and then question feature vectors are projected to the previously fused vectors' space. Experiments on the rest are shown in Table 3.
**Using [CLS] for Decoding.** As in more recent works like _ClipBERT_ (Lei et al. 2021), _BLIP-2_ (Li et al. 2023) and _VIOLET_ (Fu et al. 2021), we experimented with using only the final [CLS] embedding for decoding, which gives a score of \(38.47\) (\(0.55\)), showing that taking a mean of our semantically aligned embeddings instead provides more significant query-related information.
## Conclusion
We presented our novel framework DRAX with the goal of producing "distraction-free" and semantically aligned embeddings from cross-modality interactions. Instead of adding extra modalities, refining the input information (e.g., tokens), or heavy pre-training of the model before applying it to a task, we simply rid our embeddings of distractors in the latent space. As mentioned explicitly above, comparison with larger models like BLIP-2, VIOLET, and ClipBERT is beyond the scope of our study, and we have focused only on related previous works. DRAX demonstrates the existence of distractors in the embeddings and the advantage of removing them. Applying the distraction removal mechanism to other video-language understanding tasks is the clear next step to our work, and we encourage the reader to do it before we do.
| **Method** | **Full** | Basic Understanding | Attribution | Event Forecasting | Reverse Reasoning | Counterfactual Inference | Introspection |
| :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| HCRN | 36.4 (0.6) | 38.0 (0.33) | 33.59 (1.6) | 28.99 (1.12) | 29.65 (0.38) | 40.0 (0.54) | 24.77 (2.72) |
| Ours | **39.63 (0.24)** | 37.48 (0.5) | 38.58 (0.59) | 30.94 (0.45) | 35.34 (1.01) | 43.06 (1.89) | 31.76 (0.67) |
| Ours \(-\) Cross(X)-Aligned Fusion | 38.75 (0.14) | 37.19 (0.27) | 36.9 (0.06) | 29.59 (3.5) | 35.0 (2.78) | 41.98 (1.55) | 29.73 (4.78) |
| Ours \(-\) Distraction(D) Masking | 38.93 (0.33) | 37.4 (0.47) | 39.24 (0.49) | 31.98 (1.81) | 37.45 (2.65) | 41.98 (1.64) | 35.14 (0.1) |
| Ours \(-\) [X-Alignment & D-Masking] | 38.06 (0.2) | 38.04 (1.05) | 38.58 (0.26) | 29.14 (1.26) | 35.45 (0.1) | 42.97 (1.14) | 33.78 (0.1) |

Table 2: Comparison with baseline HCRN and ablation of components for the full dataset and sub-tasks upon DRAX-base. Accuracies are averaged over 3 seeds; “\(()\)” marks the standard deviation and “\(-\)” represents the removal of component(s).
| Projections | Accuracy (\(\sigma\)) |
| :-- | :--: |
| Appearance\(\rightarrow\)Motion\(\rightarrow\)Question\(\rightarrow\)Answer | 38.84 (0.61) |
| Appearance\(\rightarrow\)Motion\(\rightarrow\)Question | 38.47 (0.75) |
| Appearance\(\rightarrow\)Motion\(\leftarrow\)Question\(\leftarrow\)Answer | 38.74 (0.23) |
Table 3: Anchor Space Projection Directions | The generation of effective latent representations and their subsequent refinement to incorporate precise information are essential prerequisites for Vision-Language Understanding (VLU) tasks such as Video Question Answering (VQA). However, many existing VLU methods concentrate on sparsely sampling or fine-graining the input information (e.g., sampling a sparse set of frames or text tokens, or adding external knowledge). We proposed a novel "DRAX: Distraction Removal and Attended Cross-Alignment" method that removes distractors from cross-modal representations. Rather than restricting the input information from the various modalities, we use an attention-guided distractor removal method to emphasize task-relevant information in the latent embeddings. DRAX also guarantees the semantic alignment of embeddings during cross-modal fusion. We evaluate this approach on a challenging benchmark (SUTD-TrafficQA
2309.16989 | Extension realizing affine datum: low-dimensional cohomology | For arbitrary varieties of universal algebras, we develop the theory around
the first and second-cohomology groups characterizing extensions realizing
affine datum. Restricted to varieties with a weak-difference term, extensions
realizing affine datum are exactly extensions with abelian kernels. This
recovers many classic examples of extensions with abelian coefficients since
varieties with a weak-difference term give a far-reaching generalization of
algebras like groups with multiple operators; indeed, any variety of algebras
whose congruences form modular lattices. We introduce a notion of action and
its model relation with a set of equations. In varieties with a difference
term, central extensions are characterized by a property of their actions.
Restricting further to a subclass of varieties with a difference term which
still includes groups with multiple operators, we recover a special case of the
representation of extensions with abelian kernels. | Alexander Wires | 2023-09-29T05:21:23 | http://arxiv.org/abs/2309.16989v2 | # Extensions realizing affine datum : Low-dimensional cohomology
###### Abstract.
For arbitrary varieties of universal algebras, we develop the theory around the first and second-cohomology groups characterizing extensions realizing affine datum. Restricted to varieties with a weak-difference term, extensions realizing affine datum are exactly extensions with abelian kernels. This recovers many classic examples of extensions with abelian coefficients since varieties with a weak-difference term give a far-reaching generalization of algebras like groups or modules expanded by multilinear operations; indeed, any variety of algebras whose congruences form modular lattices. We introduce a notion of action and its model relation with a set of equations. In varieties with a difference term, central extensions are characterized by a property of their actions.
## 1. Introduction
Let \(\operatorname{Eqv}A\) denote the set of equivalence relations on the set \(A\).
**Definition 1.1**.: The algebra \(A\) is an _extension_ of the equivalence relation \(\alpha\in\operatorname{Eqv}A\) by the algebra \(Q\) if there is a surjective homomorphism \(\pi:A\to Q\) such that \(\ker\pi=\alpha\).
Here we will be interested in the following general problem.
**Problem 1.2**.: (The Extension Problem) Given the algebra \(Q\) in the signature \(\tau\) and an equivalence relation \(\alpha\in\operatorname{Eqv}A\), classify the extensions of \(\alpha\) by \(Q\).
As stated in the problem, the task is to understand the different interpretations of \(\tau\) on the set \(A\) such that \(\alpha\) becomes a congruence \(\alpha\in\operatorname{Con}A\) on the algebra \(A\) and \(A/\alpha\approx Q\). Note that we did not assume any additional structure on the equivalence relation \(\alpha\) - this is almost certainly too general to begin with and we will require that \(\alpha\) be presented with additional information of some sort; in particular, a partial structure related to abelianess in a given commutator theory. For universal algebras, a commutator can be a complicated object and there are several available. The term-condition commutator (or TC-commutator) [6, Ch 2.5] and rectangular commutator [6, Ch 5] would be natural choices to develop a cohomology theory since for large classes of varieties their abelian algebras are related to modules and semimodules, respectively. It would be interesting for future study to consider central extensions or extensions with abelian kernels in more general commutators, since it is possible to axiomatize commutator theories even in non-regular categories such as topological spaces and ordered sets [15].
In the present manuscript, we consider the deconstruction/reconstruction of extensions of universal algebras realizing affine datum. Instead of fixing a class of algebras with a particular equational theory, we elect to follow the approach of the Riemann integral: that is, with abelian congruences in varieties with a weak-difference term as a motivating example, we isolate properties which will serve as a definition for a particular type of affine datum and then develop the standard \(1^{\text{st}}\) and \(2^{\text{nd}}\)-cohomology constructions for this abstract definition of datum. Then one would prove that the extensions with abelian kernels in a particular class of algebras under consideration are captured by this notion; analogously, functions with countable jump discontinuities are Riemann integrable, etc. One benefit of this approach is that
the equational theory (or varietal membership) of extensions is included in the \(2^{\mathrm{nd}}\)-cohomology group as a parameter which affords a Galois correspondence between the lattice of subvarieties and the subgroup lattice in cohomology. Formalizing this approach leads us to consider the \(2\)-cocycles parametrizing extensions as a particular type of multisorted structure together with their equations. Future work might explore the general model-theoretic constructions in this situation, but at the moment it suggests to us in the same way that third cohomology for groups describes the realization of outer actions by possible extensions, higher cohomologies of universal algebras should be describing structures interpreting special multisorted expansions of the language of \(2\)-cocycles together with the equations they satisfy.
The genesis for the present work and its sequel Wires [13] is the curious result [3, Prop 7.1] and the following corollary [3, Cor 7.2] which provides a characterization of \(2\)-step nilpotent algebras in congruence modular varieties. This is extended in Theorem 2.7 and Proposition 2.9 below to varieties with a difference term. While not stated explicitly, [3, Prop 7.1] is essentially the decomposition of a central extension in a manner similar to that found in group theory where a factor set is added to the operations of the direct product of the datum groups. The analogy can be developed further where the unital ring, which comes from the derived module structure of abelian algebras in varieties with a difference term, encodes the action terms which correspond in the case of groups to the trivial action, or \(\mathds{Z}\)-module structure, on abelian kernels in central extensions. Some classical results concerning central extensions of groups are generalized in [13] to the broader class of varieties with a difference term. The current manuscript is a generalization to arbitrary varieties of the original motivating observations.
The approach follows the constructions and concrete manipulations of functions in what is sometimes referred to as the Schreier cohomology of group extensions found in Schreier [9] and Dedecker [2]. We define the notion of affine datum and establish the machinery around the \(1^{\mathrm{st}}\) and \(2^{\mathrm{nd}}\)-cohomology groups characterizing extensions in a variety \(\mathcal{V}\) which realize the datum. The development is satisfyingly replete with the standard toolkit of \(2\)-cocycles, \(2\)-coboundaries, actions, derivations and stabilizing automorphisms which reduce to previous definitions upon specialization for many familiar classes of algebras. We provide a summary of the key points of the development:
* The notion of an action compatible with a set of equations (Definition 3.2).
* The notion of \(2\)-cocycles as interpretations of a multisorted signature compatible with a set of equations (Definition 3.14).
* Definition of affine datum (Definition 3.9).
* Reconstruction of an algebra from affine datum and \(2\)-cocycle compatible with a variety \(\mathcal{U}\) (Theorem 3.21).
* For an abelian congruence \(\alpha\in\operatorname{Con}A\) in which \(\mathcal{V}(A)\) has a weak-difference term, decomposition of the algebra into the quotient \(A/\alpha\), a partial structure \(A^{\alpha,\tau}\), and a \(2\)-cocycle \(T\) and homomorphic action \(A/\alpha*A^{\alpha,\tau}\) derived from \(A^{\alpha,\tau}\) both compatible with any parameter variety \(\mathcal{U}\geq\mathcal{V}(A)\) (Theorem 3.19).
* The characterization of semidirect products realizing affine datum (Proposition 3.22).
* Definition of \(2\)-coboundaries associated to affine datum (Definition 3.24). We shall see \(2\)-coboundaries are compatible with any variety containing the datum (Lemma 3.26).
* An equivalence on \(2\)-cocycles (and so extensions) derived from \(2\)-coboundaries which is finer than isomorphism (Theorem 3.31).
* The abelian \(2^{\mathrm{nd}}\)-cohomology groups \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) for affine datum defined from \(2\)-cocycles modulo the \(2\)-coboundaries. The cohomology group accepts as an additional parameter a variety \(\mathcal{U}\) which restricts the class of extensions to membership in \(\mathcal{U}\) (Theorem 3.29).
* The \(2^{\mathrm{nd}}\)-cohomology groups define a Galois connection between the lattice of varieties containing fixed datum and the subgroup lattice of the abelian group generated by \(2\)-cocycles of the datum (Proposition 3.30).
* The automorphisms stabilizing an extension form an abelian group isomorphic to the \(1\)-cocycles (derivations) of the datum (Theorem 3.34).
* In varieties with a difference term, a characterization of central extensions by affine datum with trivial actions (Proposition 2.4 and Proposition 3.37). In such varieties, central extensions of the datum are characterized by the \(2^{\mathrm{nd}}\)-cohomology group \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) restricted to trivial actions (Theorem 3.40).
* In varieties with a weak-difference term, abelian extensions are classified by a subgroup of \(2^{\mathrm{nd}}\)-cohomology which generalizes the classic group-theoretic case characterizing abelian extensions by symmetric \(2\)-cocycles (Corollary 3.45).
In the sequel manuscript [13], we consider central extensions in varieties with a difference term. Such varieties include groups, quasigroups, relative Rota-Baxter groups and braces but also many familiar examples of abelian groups expanded by multilinear operations such as rings, Liebniz algebras, diassociative algebras, Rota-Baxter algebras and conformal algebras. In this very general setting, we establish a low-dimensional Hochschild-Serre exact sequence for central extensions and prove theorems of Schur-type relating relative commutators of free presentations and \(2^{\mathrm{nd}}\)-cohomology of regular datum, covers and perfect algebras. This generalizes analogous results for many previously established cases, including groups, which can then be derived by specialization to those particular varieties. Highlights include:
* A low-dimensional Hochschild-Serre exact sequence (or inflation/restriction sequence) for a central extension with an additional affine datum in varieties with a difference term.
* Characterizing injectivity and surjectivity of the transgression map and its relation to lifting homomorphisms through central extensions of regular datum.
* Generalization of Schur's formula relating commutators of presentations with second-cohomology to any variety with a difference term which has an idempotent element (see Schur [10] and Karpilovski [4]). By specialization this recovers the classical result for groups and more recent work on algebras of Loday-type.
* Discussion of covers and the relationship between cohomology and perfect algebras (see Milnor [8]).
Consult Bergman [1] for fundamentals of universal algebras. For the basic definition and general properties of the term-condition commutator, Kearnes and Kiss [6, Ch 2.5] is useful, and for the theory of the commutator in congruence modular varieties consult Freese and McKenzie [3] and McKenzie and Snow [7]. The properties of the term-condition commutator in varieties with a difference term are developed in Kearnes [5]. Any special definitions are introduced in the text as needed.
## 2. Varieties with a difference term
Fix a variety \(\mathcal{V}\) in the signature \(\tau\), an algebra \(A\in\mathcal{V}\) and \(\alpha,\beta\in\operatorname{Con}A\). A _section_ of a surjective map \(\pi:A\to Q\) is a map \(l:Q\to A\) such that \(\pi\circ l=\operatorname{id}_{Q}\). An \(\alpha\)-_trace_ is a map \(r:A\to A\) such that \(r=l\circ\pi\) for a section \(l\) of the canonical map \(\pi:A\to A/\alpha\); equivalently, \((r(x),x)\in\alpha\) and \(|r(x/\alpha)|=1\) for all \(x\in A\).
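For orientation, a minimal illustration in the familiar case of groups (our example, not part of the development): if \(A\) is a group, \(N\trianglelefteq A\) and \(\pi:A\to Q=A/N\) is the quotient map, then \(\alpha=\ker\pi\) relates elements of a common coset, a section \(l:Q\to A\) is a choice of coset representatives \(l(xN)\in xN\), and the corresponding \(\alpha\)-trace \(r=l\circ\pi\) sends every element of a coset to its chosen representative:

\[r(x)=l(xN),\qquad(r(x),x)\in\alpha,\qquad|r(x/\alpha)|=1.\]

Let us recall the definition and some properties of the congruence \(\Delta_{\alpha\beta}\).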
* \(M(\alpha,\beta)\) is the subalgebra of \(A^{4}\) generated by \(\left\{\begin{bmatrix}x&x\\ y&y\end{bmatrix},\begin{bmatrix}u&v\\ u&v\end{bmatrix}:(x,y)\in\alpha,(u,v)\in\beta\right\}\).
* \(A(\alpha)=\{(x,y)\in A\times A:(x,y)\in\alpha\}\) is the congruence \(\alpha\) as a subalgebra of \(A\times A\).
* \(\Delta_{\alpha\beta}=\operatorname{Cg}^{A(\alpha)}\left(\left\{\left\langle \begin{bmatrix}u\\ u\end{bmatrix},\begin{bmatrix}v\\ v\end{bmatrix}\right\rangle:(u,v)\in\beta\right\}\right)=\operatorname{Tr}M( \alpha,\beta)\in\operatorname{Con}A(\alpha)\) where \(\operatorname{Tr}\) denotes the transitive closure of a binary relation.
* The diagonal homomorphism is the map \(\delta:A\to A(\alpha)/\Delta_{\alpha\alpha}\) given by \(\delta(u)=\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\).
* We have the following inclusions \[\Delta_{\alpha\beta}\vee\eta_{0}=p_{0}^{-1}(\beta)\qquad\text{and}\qquad\Delta_{ \alpha\beta}\vee\eta_{1}=p_{1}^{-1}(\beta)\] for the projections \(p_{i}:A(\alpha)\to A\) with \(\eta_{i}=\ker p_{i}\).
In the case that \(\mathcal{V}\) is a congruence modular variety, \(\Delta_{\alpha\beta}\) has stronger properties.
* We have the following description of the commutator ([3, Thm 4.9]): \[(x,y)\in[\alpha,\beta]\qquad\text{iff}\qquad(\exists u\in A)\ \ \begin{bmatrix}u\\ u\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ y\end{bmatrix}\qquad\text{iff}\qquad(\exists v\in A)\ \ \begin{bmatrix}v\\ x\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}v\\ y\end{bmatrix}.\]
* If \(\alpha\leq\beta\) and \([\alpha,\beta]=0\), then we have the following description of the congruence ([3, Thm 5.5, Prop 5.7]; a group-case illustration is given after this list): \[\begin{bmatrix}x\\ y\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}u\\ v\end{bmatrix}\qquad\text{iff}\qquad x\,\alpha\,y\,\beta\,u\text{ and }v=m(y,x,u).\]
* We have the following inclusions ([7, Lem 8.6]): \[\Delta_{\alpha\beta}\wedge\eta_{0}\leq p_{1}^{-1}([\alpha,\beta])\qquad\qquad \Delta_{\alpha\beta}\wedge\eta_{1}\leq p_{0}^{-1}([\alpha,\beta])\qquad\qquad[ p_{0}^{-1}(\beta),p_{1}^{-1}(\beta)]\leq\Delta_{\alpha\beta}.\]
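To ground the preceding description in a familiar case (our illustration, not part of the source development): in a group \(A\), the term \(m(x,y,z)=xy^{-1}z\) is Mal'cev, so groups form a congruence permutable (hence modular) variety, and for \(\alpha\leq\beta\) with \([\alpha,\beta]=0\) the description above specializes to

\[\begin{bmatrix}x\\ y\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}u\\ v\end{bmatrix}\qquad\text{iff}\qquad x\,\alpha\,y\,\beta\,u\ \text{ and }\ v=yx^{-1}u.\]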
If \(\alpha\) is an abelian congruence, then we can easily observe some analogues in varieties with a difference term or weak-difference term. Given an equivalence relation \(\alpha\) on a set \(A\), there is an equivalence relation \(\hat{\alpha}\) on the set \(A(\alpha)\subseteq A\times A\) defined by \(\begin{bmatrix}x\\ y\end{bmatrix}\,\hat{\alpha}\,\begin{bmatrix}u\\ v\end{bmatrix}\Leftrightarrow x\;\alpha\;y\;\alpha\;u\;\alpha\;v\).
**Lemma 2.1**.: Let \(\mathcal{V}\) be a variety, \(A\in\mathcal{V}\) and \(\alpha,\beta\in\operatorname{Con}A\).
1. If \(\mathcal{V}\) has a difference term, then \[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\quad\Longrightarrow\quad b\;[\alpha,\beta]\;d.\]
2. If \(\mathcal{V}\) has a weak-difference term and \(\alpha\) is abelian, then \[(\exists a\in A)\ \begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\quad\Longleftrightarrow\quad b\;[\beta,\alpha]\;d.\]
3. If \(\mathcal{V}\) has a weak-difference term and \(\alpha\) is abelian, then \(\Delta_{\alpha\alpha}=\Delta_{\alpha\gamma}\wedge\hat{\alpha}\) for any \(\alpha\leq\gamma\leq(0:\alpha)\).
4. If \(\mathcal{V}\) has a difference term and \(\alpha\) is abelian, the following are equivalent: 1. \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ d\end{bmatrix}\); 2. \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\quad\text{and}\quad d\;[\alpha,\beta]\;m(b,a,c)\); 3. \(c\;\beta\;a\;\alpha\;b\quad\text{and}\quad d\;[\alpha,\beta]\;m(b,a,c)\).
Proof.: (1) Note \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\) implies \((b,d)\in\alpha\wedge\beta\), and so \(d\;[\alpha,\beta]\;m(d,b,b)\) since \(m\) is a difference term. Applying the difference term to the sequence
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\,\ \begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}b\\ b\end{bmatrix}\ \ \Longrightarrow\ \ \begin{bmatrix}b\\ b\end{bmatrix}=\begin{bmatrix}m(a,a,b)\\ m(b,b,b)\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(a,a,b)\\ m(d,b,b)\end{bmatrix}=\begin{bmatrix}b\\ m(d,b,b)\end{bmatrix}.\]
Since \(\Delta_{\alpha\beta}=\operatorname{Tr}M(\alpha,\beta)\), we see that \((b,m(d,b,b))\in[\beta,\alpha]=[\alpha,\beta]\). Then \(b\;[\alpha,\beta]\;m(d,b,b)\;[\alpha,\beta]\;d\).
(2) For necessity, this is the same calculation as part (1) except \(m\) is the weak-difference term. We use \(m(a,a,b)=b\) because \((a,b)\in\alpha\) is abelian, and \((b,d)\in\alpha\wedge\beta\leq\alpha\) implies \(d=m(d,b,b)\), as well. For sufficiency, use the recursive generation of the TC-commutator by \(M(\alpha,\beta)\) matrices starting from the equality relation. The thing to note is that the elements in the matrices in \(M(\alpha,\beta)\) which witness any
inclusion \((a,b)\in[\alpha,\beta]\) will all be contained in a single \(\alpha\)-class on which the weak-difference term behaves as a Mal'cev term.
(3) Note \(\Delta_{\alpha\alpha}\leq\Delta_{\alpha\gamma}\wedge\hat{\alpha}\). Conversely, suppose \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\gamma}\wedge\hat{\alpha}\begin{bmatrix}c\\ d\end{bmatrix}\). Then the coordinates are all contained in a single \(\alpha\)-class. Apply the weak-difference term to the generators
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}a\\ a\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}c\\ c\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}=\begin{bmatrix}m(a,a,a)\\ m(b,a,a)\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}m(a,a,c)\\ m(b,a,c)\end{bmatrix}=\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\]
using that \(\alpha\) is abelian. Then \(\Delta_{\alpha\alpha}\leq\Delta_{\alpha\gamma}\) implies \(\begin{bmatrix}c\\ d\end{bmatrix}\Delta_{\alpha\gamma}\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\gamma}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\), and part (2) yields \((d,m(b,a,c))\in[\gamma,\alpha]=0\).
(4) Assuming (a), we have \((a,b)\in\alpha\) and \((a,c)\in\beta\). Apply the difference term to the sequence
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ a\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ c\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}=\begin{bmatrix}m(a,a,a)\\ m(b,a,a)\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(a,a,c)\\ m(b,a,c)\end{bmatrix}=\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}.\]
Then applying part (1) to \(\begin{bmatrix}c\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\) produces \((d,m(b,a,c))\in[\alpha,\beta]\). From (b), (c) follows immediately.
Now assume (c). By (2) above, there is \(x\in A\) such that \(\begin{bmatrix}x\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ m(b,a,c)\end{bmatrix}\). We also have \(m(b,a,c)\ \alpha\)\(m(a,a,c)=c\) and \(d\ \alpha\wedge\beta\ m(b,a,c)\ \beta\ m(b,a,a)=b\). The condition \(c\ \beta\ a\ \alpha\ b\) produces from the generators
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ a\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ c\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}.\]
We then apply the difference term to the sequence
\[\begin{bmatrix}x\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ d\end{bmatrix}\,\ \begin{bmatrix}x\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ m(b,a,c)\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ m(b,a,c)\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ d\end{bmatrix}.\]
We then use these last two relations and apply the difference term to derive
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ m(b,a,c)\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ d\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ d\end{bmatrix}.\]
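For orientation, consider groups with the Mal'cev operation \(m(x,y,z)=xy^{-1}z\) (the setting made precise in Lemma 2.10 below), and recall that in groups the commutator of congruences corresponds to the commutator of the associated normal subgroups. Part (1) with \(\alpha=\alpha_{K}\) and \(\beta=\alpha_{H}\) then says: if \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}a\\ d\end{bmatrix}\), then \(bd^{-1}\in[K,H]\).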
**Lemma 2.2**.: Let \(\mathcal{V}\) be a variety, \(A\in\mathcal{V}\) and \(\alpha,\beta,\sigma\in\operatorname{Con}A\). Let \(r:A\to A\) be a \(\sigma\)-trace.
1. If \(\mathcal{V}\) has a difference term, then
(a) \(A(\alpha)/\Delta_{\alpha 1}\) is abelian;
(b) the set map \(\psi:A/[\alpha,\beta]\longrightarrow A(\alpha)/\Delta_{\alpha\beta}\,\times\,A/\sigma\) defined by \[\psi(x/[\alpha,\beta])=(\left\langle r(x),x\right\rangle/\Delta_{\alpha\beta}, x/\sigma)\] is injective.
2. If \(\mathcal{V}\) is congruence modular, then
(a) for all \((a,b)\in\alpha\) and \(u\in A\), \(\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha 1}=\begin{bmatrix}a\\ d(a,b,b)\end{bmatrix}/\Delta_{\alpha 1}\) where \(d\) is the difference term for \(\mathcal{V}\);
(b) \(A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1]1}\approx A(\alpha)/\Delta_{\alpha 1}\).
Proof.: (1a) Since \(\mathcal{V}\) has a difference term, the TC-commutator is join-additive in both coordinates [5]; therefore,
\[[1_{A(\alpha)},1_{A(\alpha)}]=[p_{0}^{-1}(1_{A}),p_{1}^{-1}(1_{A})] =[\eta_{1}\vee\Delta_{\alpha 1},\eta_{0}\vee\Delta_{\alpha 1}]\] \[=[\eta_{1},\eta_{0}]\vee[\eta_{1},\Delta_{\alpha 1}]\vee[\Delta_{ \alpha 1},\eta_{0}]\vee[\Delta_{\alpha 1},\Delta_{\alpha 1}]\] \[=[\eta_{1},\Delta_{\alpha 1}]\vee[\Delta_{\alpha 1},\eta_{0}]\vee[ \Delta_{\alpha 1},\Delta_{\alpha 1}]\] \[\leq\Delta_{\alpha 1}.\]
(1b) Since \(r:A\to A\) is a \(\sigma\)-trace, we have \(r(x)\ \sigma\ x\) and \(r(x)=r(y)\) iff \((x,y)\in\sigma\). Suppose \(\psi(x/[\alpha,\beta])=\psi(y/[\alpha,\beta])\). Then \(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\beta}=\begin{bmatrix}r(y)\\ y\end{bmatrix}/\Delta_{\alpha\beta}\) and \(x/\sigma=y/\sigma\). So we have \((x,y)\in\sigma\) and \(\begin{bmatrix}r(x)\\ x\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}r(y)\\ y\end{bmatrix}\). Then \(r(x)=r(y)\) implies \((x,y)\in[\alpha,\beta]\) by Lemma 2.1(1), and so \(x/[\alpha,\beta]=y/[\alpha,\beta]\).
(2a) For \((a,b)\in\alpha\), \(a\ [\alpha,\alpha]\ d(a,b,b)\) since \(d\) is a difference term. Since \([\alpha,\alpha]\leq[\alpha,1]\), by the remarks before Lemma 2.1 there is \(v\in A\) such that \(\begin{bmatrix}a\\ d(a,b,b)\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}v\\ v\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}u\\ u\end{bmatrix}\), using the generators of \(\Delta_{\alpha 1}\) for the second step.
(2b) Define \(\phi:A(\alpha)\to A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1]1}\) by \(\phi\left(\begin{bmatrix}a\\ b\end{bmatrix}\right):=\begin{bmatrix}a/[\alpha,1]\\ b/[\alpha,1]\end{bmatrix}/\Delta_{\alpha/[\alpha,1]1}\). It is easy to see that \(\phi\) is a surjective homomorphism. We now calculate the kernel. We have \(\Delta_{\alpha 1}\subseteq\ker\phi\) since \(\phi\) identifies the generators of \(\Delta_{\alpha 1}\). Now assume \(\left(\begin{bmatrix}a\\ b\end{bmatrix},\begin{bmatrix}c\\ e\end{bmatrix}\right)\in\ker\phi\). Then \((a,b),(c,e)\in\alpha\) and
\[\begin{bmatrix}a/[\alpha,1]\\ b/[\alpha,1]\end{bmatrix}\Delta_{\alpha/[\alpha,1]1}\begin{bmatrix}c/[\alpha,1 ]\\ e/[\alpha,1]\end{bmatrix}.\]
Since \(\alpha/[\alpha,1]\) is central in \(A/[\alpha,1]\), we have \(d(b,a,c)/[\alpha,1]=e/[\alpha,1]\) and so \((e,d(b,a,c))\in[\alpha,1]\). Then by congruence modularity, \(\begin{bmatrix}u\\ u\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}e\\ d(b,a,c)\end{bmatrix}\) for some \(u\in A\). Define \(x+_{0}y:=d(x,0,y)\) where \(0:=\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha 1}\) is the \(\Delta_{\alpha 1}\)-class containing the diagonal. Since \(A(\alpha)/\Delta_{\alpha 1}\) is abelian by (1a), \(x+_{0}y\) is the operation of an abelian group in which \(0=\begin{bmatrix}e\\ d(b,a,c)\end{bmatrix}/\Delta_{\alpha 1}\) is the identity.
Now observe
\[\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1}=\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}a\\ a\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1} =\begin{bmatrix}d(a,a,e)\\ d(b,a,c)\end{bmatrix}/\Delta_{\alpha 1}\] \[=\begin{bmatrix}e\\ d(b,a,c)\end{bmatrix}/\Delta_{\alpha 1}=0\]
implies \(\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1}=-\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}\). Then
\[\begin{bmatrix}c\\ e\end{bmatrix}/\Delta_{\alpha 1}-\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}=\begin{bmatrix}c\\ e\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1} =\begin{bmatrix}d(c,c,e)\\ d(e,c,c)\end{bmatrix}/\Delta_{\alpha 1}\] \[=\begin{bmatrix}e\\ d(e,c,c)\end{bmatrix}/\Delta_{\alpha 1}=0.\]
This yields \(\ker\phi\subseteq\Delta_{\alpha 1}\); hence \(\ker\phi=\Delta_{\alpha 1}\) and \(\phi\) induces the desired isomorphism.
**Remark 2.3**.: There is freedom in the choice of \(\sigma\) in Lemma 2.2(1b). Injectivity is immediate if we choose \(\sigma=0_{A}\) since the second coordinates will always be distinct. If we take \(\sigma=1_{A}\), then \(\psi\) embeds \(A/[\alpha,\beta]\) into \(A(\alpha)/\Delta_{\alpha\beta}\). The question becomes for which choices of \(\sigma\) do we have that \(\psi\) is surjective and a homomorphism?
Suppose we have two algebras \(B\) and \(Q\) in the same signature \(\tau\) and a binary operation on \(B\) denoted by \(x+y\). Suppose further that for every operation symbol \(f\in\tau\) we have an operation \(T_{f}:Q^{\operatorname{ar}f}\to B\) we shall call the transfer of \(f\). We define a new algebra \(B\otimes^{T}Q\) over the universe of the direct product \(B\times Q\) where each operation symbol \(f\in\tau\) is interpreted by the rule
\[F_{f}\left((b_{1},q_{1}),\ldots,(b_{n},q_{n})\right):=\left(f(b_{1},\ldots,b_{n })+T_{f}(q_{1},\ldots,q_{n}),f(q_{1},\ldots,q_{n})\right).\]
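As a quick sanity check, consider groups (this special case is made precise in Lemma 2.10 below): with \(B=K\) an abelian group written additively, \(x+y\) its addition, and the transfer of the multiplication symbol a function \(f:Q\times Q\to K\), the rule for the binary operation reads
\[F_{\times}\left((a,x),(b,y)\right)=\left(a+b+f(x,y),\,xy\right),\]
which is the familiar multiplication of a central extension of \(K\) by \(Q\) determined by a 2-cocycle \(f\).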
In order to prove Theorem 2.7, we will first establish a special case in the next proposition which extends [3, Prop 7.1] by considering an abelian congruence instead of the center. We observe the proof is almost the same.
**Proposition 2.4**.: Let \(\mathcal{V}\) be a variety with a difference term and \(A\in\mathcal{V}\). If \(\alpha\in\operatorname{Con}A\) is abelian, then
\[A/[\alpha,1_{A}]\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
Proof.: We argue by two claims. That \(A(\alpha)/\Delta_{\alpha 1}\) is abelian is Lemma 2.2(1a). Fix an \(\alpha\)-trace \(r:A\to A\). Define the map \(\psi:A/[\alpha,1]\to A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha\) by
\[\psi(x/[\alpha,1]):=\left(\genfrac{[}{]}{0.0pt}{}{r(x)}{x}\big{/}\Delta_{ \alpha 1}\,\ x/\alpha\right)\]
**Claim**: \(\psi\) is bijective.
Proof.: Injectivity of \(\psi\) is Lemma 2.2(1b) where we take \(\beta=1_{A}\) and \(\sigma=\alpha\). To show surjectivity, take \(\left(\genfrac{[}{]}{0.0pt}{}{a}{b}\big{/}\Delta_{\alpha 1}\,\ x/\alpha \right)\in A(\alpha)/\Delta_{\alpha 1}\times A/\alpha\). Let \(d\) be a difference term for \(\mathcal{V}\). Then applying the difference term to the sequence
\[\genfrac{[}{]}{0.0pt}{}{a}{b}\Delta_{\alpha 1}\genfrac{[}{]}{0.0pt}{}{a}{b}\,\ \genfrac{[}{]}{0.0pt}{}{a}{a}\Delta_{\alpha 1} \genfrac{[}{]}{0.0pt}{}{a}{a}\,\ \genfrac{[}{]}{0.0pt}{}{a}{a}\Delta_{\alpha 1} \genfrac{[}{]}{0.0pt}{}{r(x)}{r(x)}\]
produces
\[\genfrac{[}{]}{0.0pt}{}{a}{b}=\genfrac{[}{]}{0.0pt}{}{d(a,a,a)}{d(b,a,a)}\Delta _{\alpha 1}\genfrac{[}{]}{0.0pt}{}{d(a,a,r(x))}{d(b,a,r(x))}=\genfrac{[}{]}{0.0pt}{}{r( x)}{d(b,a,r(x))}\,. \tag{1}\]
Then \((a,b)\in\alpha\) implies \(r(a)=r(b)\) and so \(d(b,a,r(x))\)\(\alpha\)\(d(r(b),r(a),r(x))=r(x)\); thus, \(r(d(b,a,r(x)))=r(r(x))=r(x)\) and so \(d(b,a,r(x))/\alpha=x/\alpha\). Altogether we have
\[\psi\left(d(b,a,r(x))/[\alpha,1]\right)=\left(\genfrac{[}{]}{0.0pt}{}{r(x)}{d( b,a,r(x))}\,/\Delta_{\alpha 1}\,\ d(b,a,r(x))/\alpha\right)=\left(\genfrac{[}{]}{0.0pt}{}{a}{b}\big{/}\Delta_{ \alpha 1}\,\ x/\alpha\right).\]
We now define the transfer functions. For any basic operation \(f\) with \(\operatorname{ar}f=n\), define \(T_{f}:(A/\alpha)^{n}\to A(\alpha)/\Delta_{\alpha 1}\) by
\[T_{f}(x_{1}/\alpha,\ldots,x_{n}/\alpha):=\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(r(x_{1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha 1}.\]
For the binary operation on \(A(\alpha)/\Delta_{\alpha 1}\), we take \(x+_{0}y:=d(x,0,y)\) where \(0:=\genfrac{[}{]}{0.0pt}{}{u}{u}\big{/}\Delta_{\alpha 1}\) is the \(\Delta_{\alpha 1}\)-class containing the diagonal. Since \(A(\alpha)/\Delta_{\alpha 1}\) is abelian by Lemma 2.2(1a), \(x+_{0}y\) is the operation of an abelian group in which \(0\) is the identity. Since \(\alpha\) is an abelian congruence, the difference term evaluates as a Mal'cev operation on \(\alpha\)-classes.
**Claim**: \(\psi\) is a homomorphism.
Proof.: Take \(\bar{x}=(x_{1},\ldots,x_{n})\in A\) and write \(r(\bar{x})=(r(x_{1}),\ldots,r(x_{n}))\). We calculate
\[F_{f}(\psi(\bar{x})) =\left(\begin{bmatrix}f(r(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}r(f(\bar{x}))\\ f(r(\bar{x}))\end{bmatrix}/\Delta_{\alpha 1}\,\ f(\bar{x})/\alpha\right)\] \[=\left(\begin{bmatrix}f(r(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}f(r(\bar{x}))\\ f(r(\bar{x}))\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}r(f(\bar{x}))\\ f(r(\bar{x}))\end{bmatrix}/\Delta_{\alpha 1}\,\ f(\bar{x})/\alpha\right)\] \[=\left(\begin{bmatrix}d\left(f(r(\bar{x})),f(r(\bar{x})),r(f(\bar {x}))\right)\\ d\left(f(\bar{x}),f(r(\bar{x})),f(r(\bar{x}))\right)\end{bmatrix}/\Delta_{\alpha 1}\,\ f(\bar{x})/\alpha\right)\] \[=\left(\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha 1}\,\ f(\bar{x})/\alpha\right)\] \[=\psi(f(\bar{x}))\]
because \(f(r(\bar{x}))\ \alpha\ f(\bar{x})\ \Rightarrow\ d\left(f(\bar{x}),f(r(\bar{x})),f(r(\bar{x})) \right)=f(\bar{x})\).
The proposition is established.
**Remark 2.5**.: The proof of Proposition 2.4 can be adapted to show the following: Let \(\mathcal{V}\) be a variety with a difference term and \(A\in\mathcal{V}\). If \(\alpha,\beta\in\operatorname{Con}A\) such that \(\alpha\leq\beta\) with \(\alpha\) abelian, then
\[A/[\alpha,\beta]\approx\operatorname{Sg}\left(\left\{\left(\begin{bmatrix}a \\ b\end{bmatrix}/\Delta_{\alpha\beta},c/\alpha\right):(a,c)\in\beta\right\} \right)\leq A(\alpha)/\Delta_{\alpha\beta}\otimes^{T}A/\alpha.\]
We can recover [3, Prop 7.1] from the next corollary by taking \(\alpha\) to be the center and the variety to be congruence modular.
**Corollary 2.6**.: (see [3, Prop 7.1]) Let \(\mathcal{V}\) be a variety with a difference term, \(A\in\mathcal{V}\) and \(\alpha\in\operatorname{Con}A\) central. Then
\[A\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
Projection onto the second factor \(p_{2}:A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha\to A/\alpha\) is a surjective homomorphism. If \(\psi\) effects the isomorphism in Corollary 2.6, then \(\ker p_{2}=\psi(\alpha)\).
**Theorem 2.7**.: Let \(\mathcal{V}\) be a congruence modular variety, \(A\in\mathcal{V}\) and \(\alpha\in\operatorname{Con}A\). Then
\[A/[\alpha,1_{A}]\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
Proof.: Apply Corollary 2.6 to the algebra \(A/[\alpha,1]\) and the central congruence \(\alpha/[\alpha,1]\) to conclude
\[A/[\alpha,1]\approx A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1] 1}\otimes^{T}(A/[\alpha,1])/(\alpha/[\alpha,1]).\]
For the second factor, the \(2^{\text{nd}}\)-isomorphism theorem yields \((A/[\alpha,1])/(\alpha/[\alpha,1])\approx A/\alpha\). For the first factor, Lemma 2.2(2b) gives the isomorphism \(A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1]1}\approx A(\alpha)/ \Delta_{\alpha 1}\).
To finish, we make the following observation. Suppose \(\psi:B\to C\) is an isomorphism and \(B\otimes^{T}Q\) is defined using the binary polynomial \(x+y:=t(x,y,b_{1},\ldots,b_{k})\) for some term \(t(x_{1},\ldots,x_{k+2})\). Then \(B\otimes^{T}Q\approx C\otimes^{T^{\prime}}Q\) where \(C\otimes^{T^{\prime}}Q\) is defined using the binary polynomial \(x\oplus y:=t(x,y,\psi(b_{1}),\ldots,\psi(b_{k}))\) and transfer \(T^{\prime}_{f}:=\psi\circ T_{f}:Q^{\operatorname{ar}f}\to C\) for each fundamental operation \(f\).
The following proposition is an extension of [3, Cor 7.2] to nilpotent algebras. We need to understand the evaluation of a term \(t\) in the algebra \(B\otimes^{T}Q\). Since the Mal'cev term is compatible with the operations
of the abelian algebra \(B\), by recursive evaluation along the composition tree of the term \(t\), we see that the interpretation of \(t\) in \(B\otimes^{T}Q\) is given by
\[F_{t} \left(\langle\vec{a},\vec{x}\rangle\right)\] \[=\left\langle t^{B}(\vec{a})+\sum f^{B}\left(T_{g_{1}}\big{(}h_{11 }^{Q}(\vec{y}_{11}),\ldots,h_{1n_{1}}^{Q}(\vec{y}_{1n_{1}})\big{)},\ldots,T_{g_ {m}}\big{(}h_{m1}^{Q}(\vec{y}_{m1}),\ldots,h_{mn_{m}}^{Q}(\vec{y}_{mn_{m}}) \big{)}\right),t^{Q}(\vec{x})\right\rangle\] \[=\left\langle t^{B}(\vec{a})+s(\vec{x}),t^{Q}(\vec{x})\right\rangle\]
where the sum is taken over all \(f,g_{i},h_{ij}\) such that \(f\) and \(h_{ij}\) are subterms of \(t\), \(g_{i}\in\tau\) are operation symbols or variables and from the composition tree of \(t\) we have \(t=f\left(g_{1}(h_{11},\ldots,h_{1n_{1}}),\ldots,g_{m}(h_{m1},\ldots,h_{mn_{m }})\right)\). The coordinates of the tuples \(\vec{y}_{ij}\) all belong to \(\vec{x}\). In the last line, we have written \(s(\vec{x})\) for the above sum and we note that it is an operation that depends only on the tuples \(\vec{x}\in Q^{\operatorname{ar}t}\). This suffices for the calculation in Lemma 2.8.
Define \([\alpha]_{1}:=[\alpha,\alpha]\) and recursively, \([\alpha]_{n+1}:=[\alpha,[\alpha]_{n}]\) for any \(\alpha\in\operatorname{Con}A\), \(A\in\mathcal{V}\). A congruence \(\alpha\) is \(n\)-step nilpotent if \([\alpha]_{n}=0\).
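For orientation, in groups this recovers classical nilpotence: writing \(\gamma_{1}(G)=G\) and \(\gamma_{k+1}(G)=[G,\gamma_{k}(G)]\) for the lower central series, one checks \([1_{G}]_{n}=\alpha_{\gamma_{n+1}(G)}\), so \(1_{G}\) is \(n\)-step nilpotent exactly when \(G\) is nilpotent of class at most \(n\).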
**Lemma 2.8**.: Let \(B\) and \(Q\) be algebras in the same signature \(\tau\) and suppose \(B\) is abelian with a Mal'cev term \(m(x,y,z)\). If the binary term \(x+y:=m(x,0,y)\) for some \(0\in B\) is used to define \(\otimes^{T}\), then \([1,\ker q]=0\) where \(q:B\otimes^{T}Q\to Q\) is projection.
Proof.: Since \(B\) is abelian, \(m(x,0,y)\) defines an abelian group operation. We verify the centrality condition. Fix a term \(t\) with interpretation \(F_{t}(\bar{x},\bar{y})\) and tuples \(\bar{a}=(\bar{a}^{1},\bar{a}^{2}),\bar{b}=(\bar{b}^{1},\bar{b}^{2}),(\bar{c}^ {1},\bar{c}^{2})=\bar{c}\ \ker q\ \bar{d}=(\bar{d}^{1},\bar{d}^{2})\) such that \(F_{t}(\bar{a},\bar{c})=F_{t}(\bar{a},\bar{d})\). This means
\[\left\langle t^{B}(\bar{a}^{1},\bar{c}^{1})+s(\bar{a}^{2},\bar{c}^{2}),t^{Q}( \bar{a}^{2},\bar{c}^{2})\right\rangle=\left\langle t^{B}(\bar{a}^{1},\bar{d}^ {1})+s(\bar{a}^{2},\bar{d}^{2}),t^{Q}(\bar{a}^{2},\bar{d}^{2})\right\rangle.\]
Since \(\bar{c}\ker q\ \bar{d}\), we have \(\bar{c}^{2}=\bar{d}^{2}\). This yields the equalities
\[s(\bar{a}^{2},\bar{c}^{2})=s(\bar{a}^{2},\bar{d}^{2})\qquad s(\bar{b}^{2},\bar {c}^{2})=s(\bar{b}^{2},\bar{d}^{2})\qquad t^{Q}(\bar{b}^{2},\bar{c}^{2})=t^{Q }(\bar{b}^{2},\bar{d}^{2}).\]
Then from \(t^{B}(\bar{a}^{1},\bar{c}^{1})+s(\bar{a}^{2},\bar{c}^{2})=t^{B}(\bar{a}^{1}, \bar{d}^{1})+s(\bar{a}^{2},\bar{d}^{2})\) we conclude \(t^{B}(\bar{a}^{1},\bar{c}^{1})=t^{B}(\bar{a}^{1},\bar{d}^{1})\). Since \(B\) is abelian, we then have \(t^{B}(\bar{b}^{1},\bar{c}^{1})=t^{B}(\bar{b}^{1},\bar{d}^{1})\). Together with the above equalities we conclude \(F_{t}(\bar{b},\bar{c})=F_{t}(\bar{b},\bar{d})\).
**Proposition 2.9**.: (see [3, Cor 7.2]) Let \(\mathcal{V}\) be a variety with a difference term. An algebra \(A\in\mathcal{V}\) is \(n\)-step nilpotent if and only if it can be represented as a right-associated product
\[A\approx Q_{n}\otimes^{T_{n-1}}Q_{n-1}\otimes^{T_{n-2}}\cdots\otimes^{T_{1}}Q _{1}\]
for abelian algebras \(Q_{1},\ldots,Q_{n}\in\mathcal{V}\).
Proof.: Assume \(A\) is \(n\)-step nilpotent. Then \(0=[1]_{n}=[1,[1]_{n-1}]<[1,[1]_{n-2}]\). Set \(Q_{1}=A/[1,1]\) and for \(2\leq k\leq n\), define \(Q_{k}=A([1]_{k-1})/\Delta_{[1]_{k-1}1}\), which are abelian by Lemma 2.2(1a). By Theorem 2.7, we have
\[A/[1]_{k}\approx Q_{k}\otimes^{T}A/[1]_{k-1}\]
for \(2\leq k\leq n\). The associated product now follows since \(A/[1,1]\) is abelian.
Now, suppose we have
\[A\approx Q_{n}\otimes^{T_{n-1}}Q_{n-1}\otimes^{T_{n-2}}\cdots\otimes^{T_{1}}Q _{1}\]
a right-associated product for abelian algebras \(Q_{1},\ldots,Q_{n}\in\mathcal{V}\). Set \(B_{k}=Q_{k}\otimes^{T_{k-1}}\cdots\otimes^{T_{1}}Q_{1}\) for the right-associated product. Note we have surjective homomorphisms \(q_{k+1}:B_{k+1}\to B_{k}\) given by right-projections. Since \(A\approx B_{n}\in\mathcal{V}\), this implies each \(B_{k}\in\mathcal{V}\) for \(1\leq k\leq n\). The argument is by induction on \(k\). Assume \(B_{k}\) is
\(k\)-step nilpotent and let \(\alpha=\ker q_{k+1}\). By recursive application of the homomorphism property in varieties with a difference term,
\[\left(\left[1_{B_{k+1}}\right]_{k}\vee\alpha\right)/\alpha =\left[\left(1_{B_{k+1}}\vee\alpha\right)/\alpha,\left(\left[1_{B_ {k+1}}\right]_{k-1}\vee\alpha\right)/\alpha\right]\] \[=\left[1_{B_{k}},\left[\left(1_{B_{k+1}}\vee\alpha\right)/\alpha, \left(\left[1_{B_{k+1}}\right]_{k-2}\vee\alpha\right)/\alpha\right]\right]\] \[=\left[1_{B_{k}},\left[1_{B_{k}},\left[\left(1_{B_{k+1}}\vee\alpha \right)/\alpha,\left(\left[1_{B_{k+1}}\right]_{k-3}\vee\alpha\right)/\alpha \right]\right]\right]\] \[\vdots\] \[=[1_{B_{k}}]_{k}=0;\]
thus, \(\alpha\geq[1]_{k}\). The hypothesis of Lemma 2.8 is satisfied by Lemma 2.2(1a), and so \([1]_{k+1}=[1,[1]_{k}]\leq[1,\alpha]=0\); therefore, \(B_{k+1}\) is \((k+1)\)-step nilpotent. The argument is concluded since \(A\approx B_{n}\).
It is instructive to consider the previous development in the case of groups; in particular, the \(\otimes^{T}\) construction recovers the classical reconstruction of central extensions, with the transfer corresponding to the addition of a 2-cocycle. For a normal subgroup \(K\triangleleft G\), let \(\alpha_{K}=\{(x,y)\in G^{2}:xy^{-1}\in K\}\) denote the corresponding congruence. Recall, given a homomorphism \(\phi:Q\to\operatorname{Aut}K\) and a group 2-cocycle \(f:Q\times Q\to K\), the group \(K\rtimes_{\phi,f}Q\) is defined over the set \(K\times Q\) with operation
\[(a,x)\cdot(b,y)=(a\cdot\phi(x)(b)\cdot f(x,y),x\cdot y).\]
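A minimal concrete instance: take \(K=Q=\mathbb{Z}/2\) written additively, \(\phi\) trivial, and \(f(x,y)=1\) exactly when \(x=y=1\). Then in \(K\rtimes_{\phi,f}Q\) we have \((0,1)\cdot(0,1)=(1,0)\), so \((0,1)\) has order \(4\) and \(K\rtimes_{\phi,f}Q\approx\mathbb{Z}/4\), while the zero cocycle instead yields \(\mathbb{Z}/2\times\mathbb{Z}/2\).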
**Lemma 2.10**.: Let \(G\) be a group and \(K,H\triangleleft G\) with \(K\leq H\).
1. \(G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{H}}\approx K/[K,H]\rtimes_{\phi}G/H\) for a homomorphism \(\phi:G/H\to\operatorname{Aut}K/[K,H]\).
2. \(G(\alpha_{K})/\Delta_{\alpha_{K}1}\approx K/[K,G]\).
3. \(G/[K,G]\approx K/[K,G]\ \otimes^{T}G/K\) for some transfer \(T\).
4. For a central extension \(\pi\colon G\to Q\) with \(K=\ker\pi\), the transfers \(T_{\sigma}:Q^{\operatorname{ar}(\sigma)}\to G(\alpha_{K})/\Delta_{\alpha_{K}1}\) and their images under the isomorphism from (2) are given by
* \(T_{\times}(x,y)=\begin{bmatrix}l(xy)\\ l(x)l(y)\end{bmatrix}/\Delta_{\alpha_{K}1}\longmapsto l(x)l(y)l(xy)^{-1}\)
* \(T_{-1}(x)=\begin{bmatrix}l(x^{-1})\\ l(x)^{-1}\end{bmatrix}/\Delta_{\alpha_{K}1}\longmapsto l(x)^{-1}l(x^{-1})^{-1}\)
* \(T_{1}=\begin{bmatrix}0\\ 0\end{bmatrix}/\Delta_{\alpha_{K}1}\longmapsto 0\)
; therefore, \(G\approx K\otimes^{T}Q\approx K\rtimes_{0,f}Q\) for the 2-cocycle \(f(x,y)=l(x)l(y)l(xy)^{-1}\).
Proof.: (1) Let \(l:G/H\to G\) be a lifting associated to \(\alpha_{H}\). Define \(\phi:G/H\to\operatorname{Aut}K/[K,H]\) by \(\phi(xH)(y[K,H]):=l(xH)yl(xH)^{-1}[K,H]=y^{l(xH)}[K,H]\). Define \(\psi:K\rtimes_{\phi}G/H\to G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{H}}\) by \(\psi(k,q):=\begin{bmatrix}l(q)\\ kl(q)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\). From the generators of \(\Delta_{\alpha_{K}\alpha_{H}}\), we see that \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}c\\ d\end{bmatrix}\) implies \((a,b),(c,d)\in\alpha_{K}\) and \((a,c),(b,d)\in\alpha_{H}\). Note \(\psi(k_{1},q_{1})=\psi(k_{2},q_{2})\Rightarrow\begin{bmatrix}l(q_{1})\\ k_{1}l(q_{1})\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}l(q_{2}) \\ k_{2}l(q_{2})\end{bmatrix}\Rightarrow(l(q_{1}),l(q_{2}))\in\alpha_{H} \Rightarrow q_{1}=q_{2}\) since \(l\) is a lifting for \(\pi:G\to G/H\). Then we must have \((k_{1},k_{2})\in[K,H]\); thus, \(\ker\psi\leq\alpha_{[K,H]}\times 0_{G/H}\). Conversely, for \((k_{1},k_{2})\in\alpha_{[K,H]}\) and \(q\in G/H\), we have \(\begin{bmatrix}l(q)\\ k_{1}l(q)\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}l(q)\\ k_{2}l(q)\end{bmatrix}\) which shows \(\alpha_{[K,H]}\times 0_{G/H}\leq\ker\psi\).
For surjectivity, take \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\in G(\alpha_{K})/\Delta_{\alpha_{K} \alpha_{H}}\). Then \(a\) and \(b\) are in the same coset of \(K\leq H\) and so there is \(q\in G/H\), \(h_{1},h_{2}\in H\) such that \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}=\begin{bmatrix}h_{1}l(q)\\ h_{2}l(q)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\). Since \((a,b)\in\alpha_{K}\), we
have \(h_{2}h_{1}^{-1}\in K\) and so can write \(h_{2}=kh_{1}\) for some \(k\in K\). Note \((l(q),h_{1}^{-1}l(q))\in\alpha_{H}\) which implies \(\begin{bmatrix}l(q)\\ l(q)\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}h_{1}^{-1}l(q)\\ h_{1}^{-1}l(q)\end{bmatrix}\) as a generator of the congruence. Then
\[\begin{bmatrix}a\\ b\end{bmatrix}=\begin{bmatrix}h_{1}l(q)\\ h_{2}l(q)\end{bmatrix}=\begin{bmatrix}h_{1}\\ h_{2}\end{bmatrix}\cdot\begin{bmatrix}l(q)\\ l(q)\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}h_{1}\\ h_{2}\end{bmatrix}\cdot\begin{bmatrix}h_{1}^{-1}l(q)\\ h_{1}^{-1}l(q)\end{bmatrix}=\begin{bmatrix}l(q)\\ h_{2}h_{1}^{-1}l(q)\end{bmatrix}=\begin{bmatrix}l(q)\\ kl(q)\end{bmatrix}.\]
Then \(\psi((k,q))=\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\).
We check \(\psi\) is a homomorphism:
\[\psi\left(k_{1},q_{1}\right)\cdot\psi(k_{2},q_{2}) =\begin{bmatrix}l(q_{1})\\ k_{1}l(q_{1})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\cdot\begin{bmatrix}l(q _{2})\\ k_{2}l(q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\begin{bmatrix}l(q_{1})l(q_{2})\\ k_{1}l(q_{1})k_{2}l(q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\begin{bmatrix}f(q_{1},q_{2})l(q_{1}q_{2})\\ k_{1}k_{2}^{l(q_{1})}f(q_{1},q_{2})l(q_{1}q_{2})\end{bmatrix}/\Delta_{\alpha_ {K}\alpha_{H}}\] \[=\begin{bmatrix}l(q_{1}q_{2})\\ k_{1}k_{2}^{l(q_{1})}l(q_{1}q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\psi\left(k_{1}k_{2}^{l(q_{1})},q_{1}q_{2}\right)=\psi\left((k_{ 1},q_{1})\cdot(k_{2},q_{2})\right).\]
(2) This follows from (1) since \(G(\alpha_{K})/\Delta_{\alpha_{K}1}\approx K/[K,G]\rtimes_{\phi}G/G\approx K/[K,G]\).
(3) This is Theorem 2.7 combined with the isomorphism from (2).
(4) Use (1) and (2) and the fact that \(K\) is central.
**Remark 2.11**.: From Lemma 2.10(1), for \(K\) abelian we see that \(G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{K}}\) is isomorphic to the semidirect product induced by the extension \(\pi:G\to G/K\). The inverse to the map \(\psi\) is given by \(\sigma:\begin{bmatrix}x\\ y\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto(yx^{-1},\pi(x))\).
At this point, it is possible to use Corollary 2.6 and the group example in Lemma 2.10 to define analogues of the machinery for abelian \(2^{\mathrm{nd}}\)-cohomology groups in varieties with a difference term. We will observe how to derive the central extension theory as a specialization to those varieties from the more general extension theory of affine datum developed in the next section.
## 3. Deconstruction/reconstruction with affine datum
We start by defining the notion of action. For \(0<n\in\mathds{N}\), write \([n]=\{1,\ldots,n\}\) for the initial segment of positive integers and \([n]^{*}=\{s\subseteq\{1,\ldots,n\}:0<|s|<n\}\) for the non-empty proper subsets of \([n]\). Given sets \(Q\) and \(A\) and \(s\in[n]^{*}\), we define the set of coordinates \([Q,A]^{s}=\{\vec{x}\in(Q\cup A)^{n}:\vec{x}(i)\in A\text{ for }i\in s,\ \vec{x}(i)\in Q\text{ for }i\notin s\}\).
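For example, with \(n=2\) we have \([2]^{*}=\{\{1\},\{2\}\}\), and the mixed coordinate sets are \([Q,A]^{\{1\}}=A\times Q\) and \([Q,A]^{\{2\}}=Q\times A\); these are exactly the tuples on which the pairings of the next definition are declared.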
**Definition 3.1**.: Let \(Q,A\) be sets and \(f\) an n-ary operation on \(Q\) with \(n\geq 2\). A _pairing_ of \(Q\) on \(A\) with respect to \(f\) is a choice of subsets \(\sigma(f)\subseteq[n]^{*}\) and a sequence of functions \(a(f,s):[Q,A]^{s}\to A\) with \(s\in\sigma(f)\).
Let \(Q\) be an algebra in the signature \(\tau\). For a set \(A\), an _action_\(Q*A\) is a sequence of pairings \(\{a(f,s):f\in\tau,\operatorname{ar}f\geq 2,s\in\sigma(f)\subseteq[\operatorname{ar }f]^{*}\}\).
In the following definition, we demonstrate how an action can be associated to a variety (or equational theory). Given a fixed action \(Q*A\), for any term \(t\) in the signature \(\tau\), there are no operations on \(A\) available with which to define an interpretation of \(t\) as an operation \(t^{A}:A^{\operatorname{ar}t}\to A\); however, via the pairings \(a(f,s)\) associated to each \(f\in\tau\) with \(\operatorname{ar}f\geq 2\), it is sometimes possible to define tuples \(\bar{c}\in(Q\cup A)^{\operatorname{ar}t}\) on which to define a value \(t(\bar{c})\in Q\cup A\) using compositions of the available operations on \(Q\) and pairings. If we consider the composition tree of the term with variables on the leaves, the idea is to choose values in \(Q\cup A\) for the variables so that, when propagated through the nodes of the tree, they yield a meaningful evaluation using either the function symbols \(f\in\tau\) for the algebra \(Q\) or the pairing functions \(a(f,s)\).
**Definition 3.2**.: Let \(Q\) be an algebra in the signature \(\tau\) and \(A\) a set. Let \(Q*A\) be an action. For each term \(f\), we will define a subset of sequences \(\bar{a}\in C_{f}\subseteq(Q\cup A)^{\operatorname{ar}f}\) _compatible_ with \(f\) and an operation \(f^{*}:C_{f}\to Q\cup A\). If \(f=p_{i}\) is the \(n\)-ary \(i\)-th projection, then \(C_{f}:=(Q\cup A)^{n}\) and \(f^{*}:=p_{i}\). If \(f\in\tau\) with \(\operatorname{ar}f\leq 1\), then define \(C_{f}:=Q^{\operatorname{ar}f}\) and \(f^{*}(\bar{a}):=f^{Q}(\bar{a})\). If \(f\in\tau\) with \(\operatorname{ar}f\geq 2\), then define \(C_{f}:=Q^{\operatorname{ar}f}\cup\bigcup_{s\in\sigma(f)}[Q,A]^{s}\) and
\[f^{*}(\bar{a}):=\begin{cases}f^{Q}(\bar{a})&,\bar{a}\in Q^{\operatorname{ar}f} \\ a(f,s)(\bar{a})&,\bar{a}\in[Q,A]^{s}\end{cases}\]
for \(\bar{a}\in C_{f}\). Let \(f(x_{1},\ldots,x_{m})=h(g_{1}(x_{11},\ldots,x_{1n_{1}}),\ldots,g_{m}(x_{m1}, \ldots,x_{mn_{m}}))\) where \(h\in\tau\), \(g_{i}\) are terms or projections and \(C_{g_{i}}\) are the compatible sequences for \(g_{i}\). Then take \(\bar{a}\in C_{f}\) provided \(\bar{a}_{i}=(a_{i1},\ldots,a_{in_{i}})\in C_{g_{i}}\) for \(i=1,\ldots,m\) and \((g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\in C_{h}\). Then for \(\bar{a}\in C_{f}\), define \(f^{*}(\bar{a}):=h^{Q}(g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\) in the case \((g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\in Q^{m}\) and \(f^{*}(\bar{a}):=a(h,s)(g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\) in the case \((g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\in[Q,A]^{s}\) for some \(s\in\sigma(h)\).
Let \(\operatorname{var}t\) be the set of variables of the term \(t\). For an equation \(f(\vec{x})=g(\vec{y})\), a pair \((\vec{a},\vec{b})\in C_{f}\times C_{g}\) is _appropriate_ if there is an assignment \(\epsilon:\operatorname{var}f\cup\operatorname{var}g\to Q\cup A\) such that \((\epsilon(\vec{x}),\epsilon(\vec{y}))=(\vec{a},\vec{b})\) and \(f^{*}(\vec{a})\in A\Leftrightarrow g^{*}(\vec{b})\in A\).
Let \(\Sigma\) be a set of equations in the signature \(\tau\). An action \(Q*A\) is _compatible_ with \(\Sigma\) if for any equation \(f(\vec{x})\approx g(\vec{y})\in\Sigma\) and appropriate pair \((\vec{a},\vec{b})\in C_{f}\times C_{g}\) we have \(f^{*}(\vec{a})=g^{*}(\vec{b})\). We write \(Q*A\vDash_{*}\Sigma\) if the action \(Q*A\) is compatible with \(\Sigma\).
If \(\mathcal{V}\) is a variety in the same signature \(\tau\), then an action \(Q*A\) is _compatible_ with \(\mathcal{V}\) if \(Q*A\vDash_{*}\operatorname{Id}\mathcal{V}\).
We emphasize that for an action \(Q*A\) with an algebra \(Q\) in the signature \(\tau\), the compatible sequences for a term are determined according to the composition tree in the fundamental operations given by \(\tau\). Note from Definition 3.2, \(Q^{\operatorname{ar}f}\subseteq C_{f}\) for a term \(f\) and so an action \(Q*A\) compatible with \(\mathcal{V}\) implies \(Q\in\mathcal{V}\). An action \(Q*A\) is _full_ if \(\sigma(f)=[\operatorname{ar}f]^{*}\) for each \(f\in\tau\) with \(\operatorname{ar}f\geq 2\).
**Example 3.3**.: Let \(G\in\mathcal{G}\), the variety of groups, and let \(X\) be a set. Asserting that an action \(G*X\) is compatible with \(\mathcal{G}\) recovers the classic definition of a group action. For the binary group operation \(f\), we write different notation for the actions by \(a(f,1)(x,g)=x\circ g\) and \(a(f,2)(g,x)=g\cdot x\). For the terms \(f(x,f(y,z))\) and \(f(f(x,y),z)\), the compatible sequences can be seen to be
\[C_{f(f(x,y),z)}=C_{f(x,f(y,z))}=\{(g_{1},g_{2},g_{3}),(g_{1},g_{2},x),(x,g_{1},g_ {2}),(g_{1},x,g_{2}):g_{1},g_{2},g_{3}\in G,x\in X\}.\]
Then enforcing compatibility with associativity we calculate
\[g_{1}\cdot(g_{2}\cdot x)=a(f,2)(g_{1},a(f,2)(g_{2},x)) =f(x,f(y,z))^{*}(g_{1},g_{2},x)\] \[=f(f(x,y),z)^{*}(g_{1},g_{2},x)=a(f,2)(f^{G}(g_{1},g_{2}),x)=(g_{1} g_{2})\cdot x.\]
Similarly, compatibility of a full action with the associative equation of \(\mathcal{G}\) yields the following consequences:
1. \((g_{1}g_{2})g_{3}=g_{1}(g_{2}g_{3})\) (associativity in the group \(G\in\mathcal{G}\))
2. \(g_{1}\cdot(g_{2}\cdot x)=(g_{1}g_{2})\cdot x\) (left-action of \(G\) on \(X\))
3. \((x\circ g_{1})\circ g_{2}=x\circ(g_{1}g_{2})\) (right-action of \(G\) on \(X\))
4. \((g_{1}\cdot x)\circ g_{2}=g_{1}\cdot(x\circ g_{2})\)
By choosing an action with \(\sigma(f)=\{\{2\}\}\) we would recover only consequences (1) and (2) above, while an action with \(\sigma(f)=\{\{1\}\}\) would recover only (1) and (3) above.
**Example 3.4**.: It is possible for an action to be vacuously compatible with a set of equations in the following sense. Take \(\tau=\{f\}\) a single binary symbol and \(\Sigma=\{f(x,y)=f(y,x)\}\). For an action \(Q*A\) such that \(\sigma(f)=\{\{2\}\}\), we have \(C_{f(x,y)}\cap C_{f(y,x)}=Q^{2}\). If \(Q\vDash f(x,y)=f(y,x)\), then the action is compatible with \(\Sigma\) independently of how \(a(f,2):Q\times A\to A\) is defined.
In the case of an action \(Q*A\) on a partial-algebra \(A\), there is a modification in the definition of a compatible sequence to reflect the fact that we have partial access to operations on the universe \(A\). For \(f\in\tau\), the corresponding partial-operation on \(A\) may only be defined on a subset \(\operatorname{dom}f\subseteq A^{\operatorname{ar}f}\); consequently, we define \(C_{f}:=\operatorname{dom}f\cup Q^{\operatorname{ar}f}\cup\bigcup_{s\in\sigma( f)}[Q,A]^{s}\) and
\[f^{*}(\bar{a}):=\begin{cases}f^{Q}(\bar{a})&,\bar{a}\in Q^{\operatorname{ar} f}\\ a(f,s)(\bar{a})&,\bar{a}\in[Q,A]^{s}\\ f^{A}(\bar{a})&,\bar{a}\in\operatorname{dom}f\end{cases}\]
for \(\bar{a}\in C_{f}\). The rest of Definition 3.2 follows in a similar manner with appropriate allowances made for the partial-operations over \(A\).
The notion of action is explicitly concerned with the operations of the signature which are binary or of higher arity. The nullary and unary operations will be encoded into a new structure comprising datum which will play the role of the normal subgroup associated to the kernel of the extension. The information in the new structure will have to compensate for the fact that congruences in general are not determined by a privileged class as is the case of normal subgroups and kernels of group homomorphisms.
**Definition 3.5**.: Fix signature \(\tau\) and ternary term symbol \(m\). Define \(A^{\alpha,\tau}=\left\langle A,\alpha,\{f^{\Delta}:f\in\tau\}\right\rangle\) where
* \(A=\left\langle A,m\right\rangle\) is an algebra in the single operation symbol \(m\);
* \(\alpha\in\operatorname{Con}A\);
* \(\{f^{\Delta}:f\in\tau\}\) is a sequence of operations \(f^{\Delta}:A(\alpha)/\Delta_{\alpha\alpha}\times\delta(A)^{\operatorname{ar} f-1}\to A(\alpha)/\Delta_{\alpha\alpha}\) where \(\delta:A\to A(\alpha)/\Delta_{\alpha\alpha}\) is the diagonal map.
In order to make the treatment of the operations in reconstructed extensions more uniform, the partial operations \(f^{\Delta}\) in the above definition are taken over the full signature \(\tau\); however, we shall see that we are really concerned with only the nullary and unary operations of \(\tau\). For higher arity operation symbols \(f\in\tau\), we shall declare \(f^{\Delta}\) to agree with action terms \(a(f,1)\) and for unary and nullary operation symbols we shall take \(f^{\Delta}\) to be independent of all but the first coordinate. This will be made formal in the definition of datum (Definition 3.9).
For a surjective map \(\pi:A\to Q\) with \(\alpha=\ker\pi\), there is a surjective map \(\rho:A(\alpha)\to Q\) with \(\hat{\alpha}=\ker\rho\). Note that a lifting \(l:Q\to A\) for \(\pi\) has the property that \((x,l(q))\in\alpha\Leftrightarrow\rho\left(\begin{bmatrix}l(q)\\ x\end{bmatrix}\right)=q\). This differs from the notion in group theory in that \(l\) is no longer taken as a right-inverse for \(\rho\) as a set map, but rather \(\rho\circ(\delta\circ l)=\operatorname{id}\). The \(\alpha\)-trace \(r:A\to A\) associated to the lifting is defined by \(r=l\circ\pi\).
In analogy with the group case, an important distinction arises when an action \(Q*K\) operates by automorphisms; that is, there is an induced homomorphism \(\gamma:Q\to\operatorname{Aut}K\) such that \(q*k=\gamma(q)(k)\) for all \(q\in Q,k\in K\). This is what Definition 3.6 addresses.
**Definition 3.6**.: Fix \(A^{\alpha,\tau}\), a ternary term \(m\), an algebra \(Q\) in the signature \(\tau\) and surjective map \(\rho:A(\alpha)\to Q\) with \(\hat{\alpha}=\ker\rho\). For any \(u\in A\), define the polynomial operation \(x+_{u}y:=m\left(x,\delta(u),y\right)\). An action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) is _homomorphic_ with respect to \(m\) if the following holds:
* for all \(f\in\tau\) with \(\operatorname{ar}f\geq 2\) and \(s\in\sigma(f)\), all \(\vec{u}\in A^{s}\) and all \(\vec{a},\vec{b}\in\left(A(\alpha)/\Delta_{\alpha\alpha}\right)^{s}\) such that \(a_{i}\ \hat{\alpha}/\Delta_{\alpha\alpha}\ \delta(u_{i})\ \hat{\alpha}/\Delta_{\alpha\alpha}\ b_{i}\) for \(i\in s\),
* if we let \(\vec{x},\vec{y},\vec{z}\in[Q,A(\alpha)/\Delta_{\alpha\alpha}]^{s}\) such that \(x_{i}=y_{i}=z_{i}\in Q\) for \(i\not\in s\) and \(x_{i}=a_{i}\), \(y_{i}=b_{i}\) and \(z_{i}=a_{i}+_{u_{i}}b_{i}\) for \(i\in s\),
then we have
\[a(f,s)(\vec{z})=a(f,s)(\vec{x})+_{v}a(f,s)(\vec{y})\]
where \(v=l(f^{A/\alpha}(w_{1},\ldots,w_{n}))\) with \(w_{i}=q_{i}\) for \(i\notin s\) and \(w_{i}=u_{i}/\alpha\) for \(i\in s\), for any lifting \(l:Q\to A\) associated to \(\rho\).
An action \(Q*A\) is _unary_ if \(\sigma(f)\subseteq[\operatorname{ar}f]^{*}\) consists of exactly the singleton subsets. In the case of unary actions, the homomorphic property is more directly written as
\[a(f,i)(q_{1},\ldots,a+_{u}b,\ldots,q_{n})=a(f,i)(q_{1},\ldots,a,\ldots,q_{n})+ _{v}a(f,i)(q_{1},\ldots,b,\ldots,q_{n})\]
for \(\operatorname{ar}f=n\) and \(v=l(f^{A/\alpha}(q_{1},\ldots,u/\alpha,\ldots,q_{n}))\).
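In the group setting of Lemma 2.10, for instance, where the pairing \(a(\cdot,2)(x,k)\) acts through conjugation \(x*k=l(x)kl(x)^{-1}\) on \(K/[K,H]\), the displayed identity amounts to \(x*(k_{1}k_{2})=(x*k_{1})(x*k_{2})\); that is, each \(\phi(x)\) is an automorphism of \(K/[K,H]\), as in Lemma 2.10(1).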
**Definition 3.7**.: The structure \(A^{\alpha,\tau}=\left\langle A,\alpha,\{f^{\Delta}:f\in\tau\}\right\rangle\) is _homomorphic_ with respect to \(m\) if for each \(f\in\tau\) and \(a\)\(\hat{\alpha}/\Delta_{\alpha\alpha}\)\(\delta(u)\)\(\hat{\alpha}/\Delta_{\alpha\alpha}\)\(b\), \(\vec{x}\in A^{\operatorname{ar}f-1}\) we have
\[f^{\Delta}\left(a+_{u}b,\delta(\vec{x})\right)=f^{\Delta}\left(a,\delta(\vec{ x})\right)+_{v}f^{\Delta}\left(b,\delta(\vec{x})\right)\]
where \(v=l(f^{A/\alpha}(u/\alpha,\vec{x}/\alpha))\) for any lifting \(l:A/\alpha\to A\) associated to \(\rho:A(\alpha)\to Q\).
**Remark 3.8**.: Note Definitions 3.6 and 3.7 do not depend on the lifting. If \(l,l^{\prime}:Q\to A\) are liftings associated to \(\rho\), then \((l(q),l^{\prime}(q))\in\alpha\) for all \(q\in Q\) which implies \(\delta(l(q))=\delta(l^{\prime}(q))\) since \(\left(\begin{bmatrix}l(q)\\ l(q)\end{bmatrix},\begin{bmatrix}l^{\prime}(q)\\ l^{\prime}(q)\end{bmatrix}\right)\) is a generator of \(\Delta_{\alpha\alpha}\).
When referring to homomorphic actions, the operation \(m\) will consistently be chosen to be the operation given by \(A^{\alpha,\tau}\). A ternary operation \(m:A^{3}\to A\) is a _ternary abelian group operation_ if it is a Mal'cev operation which is _self-commuting_; that is,
\[A \vDash m(x,y,y)=x\wedge m(y,y,x)=x\] \[A \vDash m(m(x_{1},x_{2},x_{3}),m(y_{1},y_{2},y_{3}),m(z_{1},z_{2}, z_{3}))=m(m(x_{1},y_{1},z_{1}),m(x_{2},y_{2},z_{2}),m(x_{3},y_{3},z_{3})).\]
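As a quick check, in any abelian group the operation \(m(x,y,z)=x-y+z\) is a ternary abelian group operation: it is evidently Mal'cev, and both sides of the self-commutation identity expand to the same alternating sum,
\[\left(x_{1}-x_{2}+x_{3}\right)-\left(y_{1}-y_{2}+y_{3}\right)+\left(z_{1}-z_{2}+z_{3}\right)=\left(x_{1}-y_{1}+z_{1}\right)-\left(x_{2}-y_{2}+z_{2}\right)+\left(x_{3}-y_{3}+z_{3}\right).\]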
A ternary abelian group operation \(m\) may also be referred to as an _affine operation_ since for any choice of \(a\in A\), the definitions \(x+y:=m(x,a,y)\) and \(-x:=m(a,x,a)\) yield an abelian group \(\left\langle A,+,-,a\right\rangle\) in which \(m(x,y,z)=x-y+z\)[1, Thm 7.34]. We now arrive at the definition of datum and what it means for an extension to realize the datum.
**Definition 3.9**.: Fix a signature \(\tau\). A triple \((Q,A^{\alpha,\tau},*)\) is _datum_ if the following hold:
1. \(A^{\alpha,\tau}\) is homomorphic;
2. \(Q\) is an algebra in the signature \(\tau\);
3. The ternary operation symbol \(m\) referenced in \(A^{\alpha,\tau}\) also has an interpretation in \(Q\) which is idempotent and there is a surjective homomorphism \(\rho:A(\alpha)\to\left\langle Q,m\right\rangle\) such that \(\hat{\alpha}=\ker\rho\);
4. The action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) is homomorphic with the following property: for any \(f\in\tau\) with \(n=\operatorname{ar}f\geq 2\) and \(\rho\left(\begin{bmatrix}y_{i}\\ x_{i}\end{bmatrix}\right)=q_{i}\) for \(i=1,\ldots,n\), we have \[f^{\Delta}\left(\begin{bmatrix}y_{1}\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n}) \right)\,\hat{\alpha}/\Delta_{\alpha\alpha}\;a(f,s)\left(\vec{z}\right)\] for all \(s\in\sigma(f)\) and \(\vec{z}\in[Q,A(\alpha)/\Delta_{\alpha\alpha}]^{s}\) such that \(z_{i}=q_{i}\) for \(i\not\in s\) and \(z_{i}=\begin{bmatrix}y_{i}\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\) for \(i\in s\).
We say \((Q,A^{\alpha,\tau},*)\) is _affine datum_ if in addition
5. \(\alpha\in\operatorname{Con}\left\langle A,m\right\rangle\) is an abelian congruence and \(m\) is a ternary abelian group operation when restricted to each \(\alpha\)-class;
6. the action is unary and \[f^{\Delta}\left(\begin{bmatrix}y_{1}\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n}) \right)=a(f,1)\left(\begin{bmatrix}y_{1}\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},q_{2},\ldots,q_{n}\right)\] for any \(f\in\tau\) with \(n=\operatorname{ar}f\geq 2\), where \(q_{i}=\rho(\delta(x_{i}))\).
**Remark 3.10**.: The notion of datum requires comment. In the case of groups, from the datum \((\mathbf{K},Q,\phi)\) one can always construct two extensions of \(K\) by \(Q\): the direct product \(p_{2}:K\times Q\to Q\) with \(\alpha_{K}=\ker p_{2}\) and the semidirect product \(\pi:K\rtimes_{\phi}Q\to Q\) with \(\alpha_{K}=\ker\pi\). In this way, we can treat the direct product and semidirect product as already explicitly provided by the datum; as a consequence, the datum provides a universe on which to define the algebras for possible extensions. The partial structure \(A^{\alpha,\tau}\) includes the underlying set \(A\) from which we can calculate the set \(A(\alpha)/\Delta_{\alpha\alpha}\) which will serve as the universe of the reconstructed extensions.
It is not required that there is an interpretation of the signature \(\tau\) on the set \(A\); we require only that there is a ternary operation \(m\) defined on \(A\) so that \(A(\alpha)/\Delta_{\alpha\alpha}\) is calculated explicitly from the congruence \(\alpha\) of the algebra \(\langle A,m\rangle\). The membership in the congruence \(\Delta_{\alpha\alpha}\) is then determined by \(m\) alone. The idea is to preserve this particular aspect from the example where \(\alpha\) is an abelian congruence of an algebra \(A\in\mathcal{V}\) in a variety with a weak-difference term; in this case, the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) can be reconstructed from the resulting datum by the rule \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}c\\ d\end{bmatrix}\Leftrightarrow d=m(b,a,c)\) since \(\Delta_{\alpha\alpha}\leq\hat{\alpha}\). This is data which resides in \(A^{\alpha,\tau}\).
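Concretely, in the group case \(m(x,y,z)=xy^{-1}z\) and the rule reads \(d=ba^{-1}c\), i.e. two columns are \(\Delta_{\alpha\alpha}\)-related exactly when \(ba^{-1}=dc^{-1}\); this is the computation behind the inverse map \(\begin{bmatrix}x\\ y\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto(yx^{-1},\pi(x))\) of Remark 2.11.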
**Definition 3.11**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. An algebra \(B\)_realizes_ the datum if there is an extension \(\pi:B\to Q\) with \(\beta=\ker\pi\), a bijection \(i:B(\beta)/\Delta_{\beta\beta}\to A(\alpha)/\Delta_{\alpha\alpha}\) and a lifting \(l:Q\to B\) such that for all \(f\in\tau\):
* if \(n=\operatorname{ar}f\geq 2\), \(q_{1},\ldots,q_{n}\in Q\) and \(x_{1},\ldots,x_{n}\in B(\beta)/\Delta_{\beta\beta}\), then \[a(f,i)\left(q_{1},\ldots,i\circ x_{i},\ldots,q_{n}\right)=i\circ f^{B(\beta)/ \Delta_{\beta\beta}}\left(\delta\circ l(q_{1}),\ldots,x_{i},\ldots,\delta \circ l(q_{n})\right);\]
* if \(\operatorname{ar}f\leq 1\) and \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), then \(f^{\Delta}(i(x))=i\circ f^{B(\beta)/\Delta_{\beta\beta}}(x)\).
For special choices of the operations in \(T\), the following algebras will serve as our reconstruction of extensions which realize affine datum. This will be shown in Theorem 3.21.
**Definition 3.12**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). Let \(T=\{T_{f}:f\in\tau\}\) be a sequence of functions such that \(T_{f}:(Q)^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\). Fix a lifting \(l:Q\to A\) associated to \(\rho:A(\alpha)\to Q\). Define an algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) on the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) of the datum with operations
\[F_{f}\left(\begin{bmatrix}a_{1}\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}a_{n}\\ b_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right):=f^{\Delta}\left(\begin{bmatrix} a_{1}\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(b_{2}),\ldots,\delta(b_{n})\right)\] \[+_{u}\ \sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}a_{i}\\ b_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\] \[+_{u}\ \ T_{f}(q_{1},\ldots,q_{n}) (f\in\tau),\]
a left-associated composition, where \(u=l(f^{Q}(q_{1},\ldots,q_{n}))\) and \(\rho\left(\begin{bmatrix}a_{i}\\ b_{i}\end{bmatrix}\right)=q_{i}\).
Definition 3.2, Definition 3.13 and Definition 3.14 will constitute our scheme which, given membership \(Q\in\mathcal{U}\) in a variety, will guarantee the extension \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\):
\[\text{identities in }\mathcal{U} \Rightarrow\ \text{expressions in }\{T_{f}:f\in\tau\}\text{ and action terms }Q*A(\alpha)/\Delta_{\alpha\alpha}\] \[\Rightarrow\ \text{identities satisfied by }A_{T}(Q,A^{\alpha,\tau},*).\]
Given a term \(t(\bar{x})\) in the signature \(\tau\), we will be interested in its interpretation in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\). By using the homomorphism property of the action terms and compatibility of the ternary operation \(m\) over the blocks of \(\hat{\alpha}/\Delta_{\alpha\alpha}\), we can distribute the sum in Definition 3.12 at every point in the composition tree of \(t^{A_{T}(Q,A^{\alpha,\tau},*)}\). The end result is a large sum in mixed action terms and functions in \(T\) indexed by operation symbols of \(\tau\). The next definition will describe a way to separate the terms in the sum which use some operations from \(T\).
**Definition 3.13**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\) and \(T=\{T_{f}:f\in\tau\}\) be a sequence of functions \(T_{f}:(Q)^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\). Let \(m\) be the ternary operation on \(A\) included in the datum and \(x+_{u}y=m(x,\delta(u),y)\) the binary operation defined on \(A(\alpha)/\Delta_{\alpha\alpha}\) for \(u\in A\).
For each term \(t\) in the signature \(\tau\), we inductively define along the composition tree of the term a set \(E_{t}\) of operations of the form \(\nu:Q^{\operatorname{ar}t}\to A(\alpha)/\Delta_{\alpha\alpha}\). Then we define an operation \(t^{\partial,T}:Q^{\operatorname{ar}t}\to A(\alpha)/\Delta_{\alpha\alpha}\) given by
\[t^{\partial,T}(\vec{q})=\sum_{\nu\in E_{t}}\nu(\vec{q}) \tag{2}\]
where the sum is the left-associated composition over \(+_{u}\) where \(u=l\left(t^{Q}(\vec{q})\right)\) for any lifting \(l:Q\to A\) associated to \(\rho:A(\alpha)\to Q\). By Remark 3.8, the definition does not depend on the choice of the lifting \(l\).
If \(t=f\in\tau\), define \(E_{t}=\{T_{f}\}\). Suppose \(t(\vec{y})=f(\sigma(\vec{x}))\) is derived from a fundamental operation \(f\in\tau\) by identifying or permuting variables according to the surjective map \(\sigma:\operatorname{var}f\to\operatorname{var}t\). Define \(E_{t}=\{T_{f}\circ\sigma\}\) where \(T_{f}\circ\sigma:Q^{\operatorname{ar}t}\to A(\alpha)/\Delta_{\alpha\alpha}\) denotes the corresponding operation evaluated according to \(\sigma\); that is, for each evaluation \(\epsilon:\operatorname{var}t\to Q\) there is an evaluation \(\epsilon^{\prime}:\operatorname{var}f\to Q\) such that \(\epsilon^{\prime}(\vec{x})=\epsilon\circ\sigma(\vec{x})\) and \((T_{f}\circ\sigma)(\epsilon(\vec{y}))=T_{f}(\epsilon^{\prime}(\vec{x}))\).
Suppose \(t(y_{1},\ldots,y_{n})=f(g_{1}(\bar{z}_{1}),\ldots,g_{m}(\bar{z}_{m}))\) where \(f\in\tau\) with \(\operatorname{ar}f=m\) and each \(g_{i}(\bar{z}_{i})\) is a term or variable. The set \(E_{t}\) will consist of three different types of operations. For the first type, we take
\[\nu(\vec{q}):=T_{f}\left(g_{1}^{Q}(\bar{q}_{1}),\ldots,g_{m}^{Q}(\bar{q}_{m})\right)\]
where \(\bar{q}_{i}\in Q^{\operatorname{ar}g_{i}}\) is the restriction of \(\vec{q}\) to the variables of \(g_{i}\). For the second type, it may be that \(g_{1}\) is not a variable. We take operations of the form
\[\nu(\vec{q}):=f^{\Delta}\left(\mu(\bar{q}_{1}),\delta\circ l(g_{2}^{Q}(\bar{q }_{2})),\ldots,\delta\circ l(g_{m}^{Q}(\bar{q}_{m}))\right).\]
where \(\mu\in E_{g_{1}}\) and \(\vec{q}_{i}\) is the restriction of \(\vec{q}\) to the variables of \(g_{i}\). For the third type, for any \(2\leq k\leq m\) such that \(g_{k}\) is not a variable, we take operations of the form
\[\nu(\vec{q}):=a(f,k)\left(g_{1}^{Q}(\bar{q}_{1}),\ldots,g_{k-1}^{Q}(\bar{q}_{k -1}),\mu(\bar{q}_{k}),g_{k+1}^{Q}(\bar{q}_{k+1}),\ldots,g_{m}^{Q}(\bar{q}_{m}) \right)\]
where \(\mu\in E_{g_{k}}\) and \(\bar{q}_{i}\) is the restriction of \(\vec{q}\) to the variables of \(g_{i}\).
**Definition 3.14**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\Sigma\) a set of equations in the same signature \(\tau\). A sequence of operations \(T=\{T_{f}:f\in\tau\}\) where \(T_{f}:Q^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\) is a _2-cocycle compatible with \(\Sigma\)_ if
1. for all \(f\in\tau\), \(T_{f}(q_{1},\ldots,q_{\operatorname{ar}f})\ \hat{\alpha}/\Delta_{\alpha\alpha}\)\(f^{\Delta}\left(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha},\delta\circ l(q_{2}),\ldots,\delta \circ l(q_{\operatorname{ar}f})\right)\) where \(\rho\left(\begin{bmatrix}a\\ b\end{bmatrix}\right)=q_{1}\);
2. for all \(t(\vec{x})=g(\vec{y})\in\Sigma\) and evaluations \(\epsilon:\operatorname{var}t\cup\operatorname{var}g\to Q\) we have \[t^{\partial,T}(\epsilon(\vec{x}))=g^{\partial,T}(\epsilon(\vec{y})).\]
It follows immediately from the definition that if \(T\) is a 2-cocycle compatible with \(\operatorname{Id}\mathcal{V}\), then \(T\) is a 2-cocycle compatible with \(\operatorname{Id}\mathcal{U}\) for any variety \(\mathcal{U}\geq\mathcal{V}\).
**Example 3.15**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\) and \(l:Q\to A\) a lifting associated to \(\rho:A(\alpha)\to Q\). For a single binary operation symbol \(f\), enforcing compatibility with associativity yields
\[f^{\Delta}\left(T_{f}(q_{1},q_{2}),\delta\circ l(q_{3})\right)+_{u}T_{f}(f^{Q}( q_{1},q_{2}),q_{3})=a(f,2)\left(q_{1},T_{f}(q_{2},q_{3})\right)+_{v}T_{f}(q_{1},f^{Q}( q_{2},q_{3})) \tag{3}\]
where \(u=l\left(f(f(q_{1},q_{2}),q_{3})\right)\) and \(v=l\left(f(q_{1},f(q_{2},q_{3}))\right)\).
In the case of an abelian normal subgroup \(K\triangleleft G\) with \(Q=G/K\), this specializes to the classic 2-cocycle identity in the following manner. By Lemma 2.10, there is the isomorphism \(\psi:G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{K}}\ni\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto\left\langle ab^{-1}, \pi(b)\right\rangle\in K\rtimes_{\phi}Q\). If we fix a lifting \(l:Q\to G\) for the canonical surjection \(\pi:G\to Q\), then this means each \(\Delta_{\alpha_{K}\alpha_{K}}\)-class is uniquely represented as \(\begin{bmatrix}l(x)\\ kl(x)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\) for some \(k\in K\), \(x\in Q\). For the binary operation we set \(T(x,y):=\begin{bmatrix}l(xy)\\ l(x)l(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\) and define action terms by
\[a(\cdot,1)\left(\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}},x\right) :=\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\cdot\delta\circ l(x)= \begin{bmatrix}l(y)l(x)\\ kl(y)l(x)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto\left\langle k,yx\right\rangle\] \[a(\cdot,2)\left(x,\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\right) :=\delta\circ l(x)\cdot\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}=\begin{bmatrix}l(x)l(y)\\ l(x)kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto\left\langle x*k,xy\right\rangle\]
where \(x*k=l(x)kl(x)^{-1}\) is the conjugation action. Recall, the group 2-cocycle is defined by \(l(x)l(y)=f(x,y)l(xy)\).
Then using \(v=l(xyz)=u\) and \(m(x,y,z)=xy^{-1}z\) we apply \(\psi\) to calculate
\[T(x,y)\cdot\delta(l(z))+_{u}T(xy,z) =\left(\begin{bmatrix}l(xy)\\ l(x)l(y)\end{bmatrix}\begin{bmatrix}l(z)\\ l(z)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}+_{u}\begin{bmatrix}l(xyz) \\ l(xy)l(z)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\] \[=m\left(\begin{bmatrix}l(xy)l(z)\\ l(x)l(y)l(z)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xyz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xy)l(z)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}\] \[=m\left(\begin{bmatrix}l(xy)l(z)\\ f(x,y)l(xy)l(z)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xyz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ f(xy,z)l(xyz)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}\] \[\longmapsto m\left(\left\langle f(x,y),xyz\right\rangle,\left\langle 0,xyz\right\rangle,\left\langle f(xy,z),xyz\right\rangle\right)\] \[=\left\langle f(x,y),xyz\right\rangle\cdot\left\langle 0,xyz \right\rangle^{-1}\cdot\left\langle f(xy,z),xyz\right\rangle\] \[=\left\langle f(x,y)+f(xy,z),xyz\right\rangle\]
and
\[a(\cdot,2)(x,T(y,z))+_{v}T(x,yz) =\begin{bmatrix}l(x)l(yz)\\ l(x)l(y)l(z)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}+_{v}\begin{bmatrix}l(xyz) \\ l(x)l(yz)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\] \[=m\left(\begin{bmatrix}l(x)l(yz)\\ l(x)f(y,z)l(yz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xyz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ f(x,yz)l(xyz)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}\] \[\longmapsto m\left(\left\langle x*f(y,z),xyz\right\rangle, \left\langle 0,xyz\right\rangle,\left\langle f(x,yz),xyz\right\rangle\right)\] \[=\left\langle x*f(y,z),xyz\right\rangle\cdot\left\langle 0,xyz \right\rangle^{-1}\cdot\left\langle f(x,yz),xyz\right\rangle\] \[=\left\langle x*f(y,z)+f(x,yz),xyz\right\rangle.\]
The equality in Eq.(3) and the above calculations yield
\[f(x,y)+f(xy,z)=x*f(y,z)+f(x,yz)\]
which is the group-theoretic 2-cocycle identity.
We would like to consider the interpretation of a term in an algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) defined from affine datum. Inductively along the composition tree of the term, we can use the homomorphism property of the action and datum operations to distribute across the operations in Definition 3.12 for each fundamental symbol of the signature. While a term interpreted in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) with domain \(A(\alpha)/\Delta_{\alpha\alpha}\) may have identified variables in different operation symbols in its composition tree, the interpretations of the operation symbols in Definition 3.12 depend on all the coordinates and have domains over \(Q\cup\delta(A)\cup A(\alpha)/\Delta_{\alpha\alpha}\). So while a repeated variable \(x\) in the term has a fixed evaluation \(x\mapsto\left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\), the interpretation of the term is expanded into sums of operations in which \(x\) has evaluations among the values \(\left\{\rho\left(\left[\begin{matrix}a\\ b\end{matrix}\right]\right),\delta\circ l\circ\rho\left(\left[\begin{matrix} a\\ b\end{matrix}\right]\right),\left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\right\}\) depending on the coordinates in which the variable \(x\) appears.
The next definitions relate the two different domains. The first step is, given a term, to produce a corresponding term with the same composition tree but with no repeated variables.
**Definition 3.16**.: Let \(f(\vec{x})\) be a term in the signature \(\tau\). Let \(f^{\sigma}\) be a term in the signature \(\tau\) and variables \(X_{\omega}=\{x_{0},x_{1},\ldots\}\) which has the same composition tree as \(f\) except the leaves have no repeated variables; in addition, reading from left-to-right the variables of \(f^{\sigma}\) are an initial segment of \(X_{\omega}\). There is a surjective map \(\sigma:\operatorname{var}f^{\sigma}\to\operatorname{var}f\) such that \(f^{\sigma}(\sigma(\operatorname{var}f^{\sigma}))=f(\vec{x})\).
Note for any evaluation \(\epsilon:\operatorname{var}f\to A\), there is a corresponding evaluation \(\epsilon^{\sigma}:\operatorname{var}f^{\sigma}\to A\) such that \(\epsilon\circ\sigma=\epsilon^{\sigma}\).
Fix \(\rho:A(\alpha)\to Q\) associated with affine datum \((Q,A^{\alpha,\tau},*)\) and a lifting \(l:Q\to A\). Let \(t(\vec{x})\) be a term in the same signature with \(n=|\operatorname{var}t^{\sigma}|\) and \(\epsilon:\operatorname{var}t\to A(\alpha)/\Delta_{\alpha\alpha}\) an evaluation. An evaluation \(\mu:\operatorname{var}t^{\sigma}\to Q\cup\delta(A)\cup A(\alpha)/\Delta_{ \alpha\alpha}\) is _consistent_ with \(\epsilon(\vec{x})\) if \(\mu(x_{i})\in\{\rho\circ\epsilon^{\sigma}(x_{i}),\delta\circ l\circ\rho\circ \epsilon^{\sigma}(x_{i}),\epsilon^{\sigma}(x_{i})\}\) for each \(x_{i}\in\operatorname{var}t^{\sigma}\). Define \(L(t,\epsilon(\vec{x}))=\{\mu(\operatorname{var}t^{\sigma})\in C_{t^{\sigma}}:\mu\) is consistent with \(\epsilon(\vec{x})\}\). This is the set of evaluations which will allow us to describe equations in semidirect products realizing affine datum.
**Definition 3.17**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\Sigma\) a set of identities in the same signature. The action in the datum is _weakly compatible_ with \(\Sigma\) if for all \(f=g\in\Sigma\) and all evaluations \(\epsilon\),
\[\sum_{\mu\in L(f,\epsilon(\vec{x}))}(f^{\sigma})^{*}(\mu(\operatorname{var}f^ {\sigma}))=\sum_{\mu\in L(g,\epsilon(\vec{x}))}(g^{\sigma})^{*}(\mu( \operatorname{var}g^{\sigma})).\]
The action is _weakly compatible_ with a variety \(\mathcal{V}\) if it is weakly compatible with \(\operatorname{Id}\mathcal{V}\).
**Lemma 3.18**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\) and \(t(\vec{x})\) a term. For any evaluation \(\epsilon(\vec{x})=\left(\left[\begin{matrix}a_{1}\\ b_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}a_{n} \\ b_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right)\), the interpretation of the term \(t\) in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) is represented by
\[F_{t}\left(\left[\begin{matrix}a_{1}\\ b_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}a_{n} \\ b_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right)=\sum_{\mu\in L(t,\epsilon( \vec{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma}))\ +_{u}\ t^{\partial,T}\left(\rho\circ\epsilon(\vec{x})\right). \tag{4}\]
where \(u=l\left(t^{Q}(\rho\circ\epsilon(\vec{x}))\right)\) for any lifting \(l\) associated to the datum.
Proof.: By induction on the composition tree of \(t\). This is precisely what Definition 3.2, Definition 3.13 and Definition 3.16 accomplish.
The next theorem guarantees abelian congruences in varieties with a weak-difference term are sources of affine datum by decomposing an extension with an abelian kernel into appropriate datum. It will be convenient to work in terms of \(\alpha\)-traces rather than liftings associated to datum.
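For instance, for a surjective group homomorphism \(\pi:E\to Q\) with a chosen transversal \(l:Q\to E\), the associated \(\alpha\)-trace is the map \(r=l\circ\pi\) sending each element to the chosen representative of its coset.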
**Theorem 3.19**.: Let \(\mathcal{V}\) be a variety with a weak-difference term in the signature \(\tau\). Let \(A\in\mathcal{V}\) and surjective \(\pi:A\to Q\) with \(\alpha=\ker\pi\in\operatorname{Con}A\) abelian. Then there exist a homomorphic \(A^{\alpha,\tau}\), a \(2\)-cocycle \(T=\{T_{f}:f\in\tau\}\) compatible with \(\operatorname{Id}\mathcal{V}\), and a homomorphic action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) compatible with \(\operatorname{Id}\mathcal{V}\) constituting affine datum such that \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\).
Proof.: We have \(Q\approx A/\alpha\). Fix a lifting \(l:Q\to A\) and associated \(\alpha\)-trace \(r:A\to A\) so that \(r=l\circ\pi\) and \(\pi\circ l=\operatorname{id}\). We also see that \(\rho:A(\alpha)\to Q\) defined by \(\rho:\begin{bmatrix}a\\ b\end{bmatrix}\mapsto\pi(a)\) is a surjective homomorphism such that \(\hat{\alpha}=\ker\rho\).
Define \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) by \(\phi(x)=\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\). Let \(m\) be a weak-difference term of \(\mathcal{V}\). Since \(\alpha\) is abelian, \(m\) is affine on \(\alpha\)-blocks. Note \(\Delta_{\alpha\alpha}\leq\hat{\alpha}\). By Lemma 2.1(2), we see that \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}c\\ d\end{bmatrix}\Leftrightarrow d=m(b,a,c)\); therefore, the universe of the algebra \(A(\alpha)/\Delta_{\alpha\alpha}\) is uniquely reconstructed from \(m\). It also follows that \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}a\\ d\end{bmatrix}\Rightarrow b=d\). These facts guarantee that \(\phi\) is bijective.
For each \(f\in\tau\), define \(T_{f}:(Q)^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\) by
\[T_{f}(y_{1},\ldots,y_{n}):=\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(r(x_{1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha} \tag{5}\]
for any choice of \(x_{i}\in A\) such that \(\pi(x_{i})=y_{i}\in Q\). Since \(r(x)=r(y)\) if and only if \((x,y)\in\alpha\), \(T_{f}\) is well-defined; in this way, we can also treat \(T\) as a function with domain \(A\) which depends only on the \(\alpha\)-classes. For each \(f\in\tau\) and \(1\leq i\leq n=\operatorname{ar}f\), define the action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) according to the rule
\[a(f,i)(q_{1},\ldots,q_{i-1},x,q_{i+1},\ldots,q_{n}):=f\left(\delta\circ l(q_{ 1}),\ldots,\delta\circ l(q_{i-1}),x,\delta\circ l(q_{i+1}),\ldots,\delta \circ l(q_{n})\right). \tag{6}\]
for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), \(q_{1},\ldots,q_{n}\in Q\) and define
\[f^{\Delta}(x,\delta(a_{2}),\ldots,\delta(a_{n})):=f(x,\delta(a_{2}),\ldots,\delta(a_{n})) \tag{7}\]
for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), \(a_{2},\ldots,a_{n}\in A\). It follows that the action and \(A^{\alpha,\tau}\) are homomorphic since \(\alpha\) is abelian and \(m\) is Mal'cev on \(\alpha\)-blocks. The definitions also show that (AD2) from Definition 3.9 is satisfied.
The fact that \(\phi\) is a homomorphism is a result of the following expansion: if we first set \(u_{i}=f(r(x_{1}),\ldots,r(x_{i}),x_{i+1},\ldots,x_{n})\) we have
\[\begin{split}\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(x_{1},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}&=\begin{bmatrix}m\big{(}f(r(x_{1}),\ldots,r(x_{n})),f(r(x_{1}),\ldots,r(x_{n})),r(f(x_{1},\ldots,x_{n}))\big{)}\\ m\big{(}f(x_{1},\ldots,x_{n}),f(r(x_{1}),\ldots,r(x_{n})),f(r(x_{1}),\ldots,r(x_{n}))\big{)}\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(r(x_{1}),\ldots,r(x_{n}))\\ f(x_{1},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u_{n}}\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(r(x_{1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}m\big{(}f(r(x_{1}),x_{2},\ldots,x_{n}),f(r(x_{1}),x_{2},\ldots,x_{n}),f(r(x_{1}),\ldots,r(x_{n}))\big{)}\\ m\big{(}f(x_{1},\ldots,x_{n}),f(r(x_{1}),x_{2},\ldots,x_{n}),f(r(x_{1}),x_{2},\ldots,x_{n})\big{)}\end{bmatrix}/\Delta_{\alpha\alpha}\\ &\quad+_{u_{n}}T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\\ &=\begin{bmatrix}f(r(x_{1}),x_{2},\ldots,x_{n})\\ f(x_{1},x_{2},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u_{n}}\begin{bmatrix}f(r(x_{1}),r(x_{2}),\ldots,r(x_{n}))\\ f(r(x_{1}),x_{2},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}\\ &\quad+_{u_{n}}T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\\ &=f\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n})\right)\\ &\quad+_{u_{1}}\;a(f,2)\left(\pi(x_{1}),\begin{bmatrix}r(x_{2})\\ x_{2}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{3}),\ldots,\delta(x_{n})\right)\\ &\quad+_{u_{2}}\begin{bmatrix}f(r(x_{1}),r(x_{2}),r(x_{3}),\ldots,r(x_{n}))\\ f(r(x_{1}),r(x_{2}),x_{3},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u_{n}}\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\\ &\;\;\vdots\\ &=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n})\right)\\ &\quad+_{u_{n}}\;\sum_{i=2}^{n}a(f,i)\left(\pi(x_{1}),\ldots,\pi(x_{i-1}),\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{i+1}),\ldots,\delta(x_{n})\right)\\ &\quad+_{u_{n}}\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\\ &=F_{f}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{split}\]
since \(\delta(u_{i})=\delta(u_{j})=\delta(l(f(\pi(x_{1}),\ldots,\pi(x_{n}))))\) for \(i\neq j\).
We now show both the action and 2-cocycle \(T\) are compatible with \(\operatorname{Id}\mathcal{V}\). Note that in the expansion for \(f\in\tau\) previously calculated,
\[F_{f}\left(\left[\begin{matrix}r(x_{1})\\ x_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}r(x_{n})\\ x_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right) =\left[\begin{matrix}r(f(x_{1},\ldots,x_{n}))\\ f(x_{1},\ldots,x_{n})\end{matrix}\right]/\Delta_{\alpha\alpha} \tag{8}\] \[=\left[\begin{matrix}f(r(x_{1}),\ldots,r(x_{n}))\\ f(x_{1},\ldots,x_{n})\end{matrix}\right]/\Delta_{\alpha\alpha}+_{u}\;\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n})) \tag{9}\] \[=f\left(\left[\begin{matrix}r(x_{1})\\ x_{1}\end{matrix}\right],\ldots,\left[\begin{matrix}r(x_{n})\\ x_{n}\end{matrix}\right]\right)/\Delta_{\alpha\alpha}+_{u}\;\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n})) \tag{10}\]
In the last line, Eq.(10), the \(\Delta_{\alpha\alpha}\)-class of the first term is expanded using the action; this reflects the fact that in the algebra \(A(\alpha)/\Delta_{\alpha\alpha}\) the action represents a "twisting" of the product structure.
Take \(t(\bar{x})=g(\bar{y})\in\operatorname{Id}\mathcal{V}\). Fix an assignment \(\epsilon:\operatorname{var}\;t\cup\operatorname{var}\;g\to A\). By the isomorphism \(\phi\) we have \(t^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{x}))=g^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{y}))\) since \(A\in\mathcal{V}\). We can use Eq.(8), Eq.(10) and the homomorphism property of the action to recursively expand the interpretation of the term \(t\) in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) in order to write
\[t^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{x})) =t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\phi\circ\epsilon(\bar{x})\right)\;+_{u}\;\;t^{\partial,T}(\pi\circ\epsilon(\bar{x})) \tag{11}\] \[=t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\left[\begin{matrix}r(a_{1})\\ a_{1}\end{matrix}\right],\ldots,\left[\begin{matrix}r(a_{n})\\ a_{n}\end{matrix}\right]\right)/\Delta_{\alpha\alpha}\;+_{u}\;\;t^{\partial,T}(\pi\circ\epsilon(\vec{x})) \tag{12}\]
where \(u=l(t^{Q}(\pi\circ\epsilon(\vec{x})))\) and \(\left(\left[\begin{matrix}r(a_{1})\\ a_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}r(a_ {n})\\ a_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right)=\phi\circ\epsilon(\vec{x})\). The second term in Eq.(12) incorporates all the appearances of the transfers \(T_{f}\). By comparison with Lemma 3.18, we see that
\[t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\left[\begin{matrix}r(a_{1})\\ a_{1}\end{matrix}\right],\ldots,\left[\begin{matrix}r(a_{n})\\ a_{n}\end{matrix}\right]\right)/\Delta_{\alpha\alpha}=\sum_{\mu\in L(t, \epsilon(\vec{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma})).\]
A similar calculation produces
\[g^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{y})) =g^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\left[\begin{matrix}r(b_{1})\\ b_{1}\end{matrix}\right],\ldots,\left[\begin{matrix}r(b_{m})\\ b_{m}\end{matrix}\right]\right)/\Delta_{\alpha\alpha}\;+_{v}\;\;g^{\partial,T}(\pi\circ\epsilon(\bar{y}))\] \[=\sum_{\nu\in L(g,\epsilon(\vec{y}))}(g^{\sigma})^{*}(\nu(\operatorname{var}g^{\sigma}))\;+_{v}\;g^{\partial,T}(\pi\circ\epsilon(\bar{y})).\]
where \(v=l(g^{Q}(\pi\circ\epsilon(\bar{y})))\) and \(\left(\begin{bmatrix}r(b_{1})\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r(b_{m})\\ b_{m}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\phi\circ\epsilon(\vec{y})\).
Since \(A(\alpha)\in\mathcal{V}\), we conclude that
\[\sum_{\mu\in L(t,\epsilon(\vec{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma}))=t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}r(a_{1})\\ a_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(a_{n})\\ a_{n}\end{bmatrix}\right)/\Delta_{\alpha\alpha} \tag{13}\] \[=g^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}r(b_{1})\\ b_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(b_{m})\\ b_{m}\end{bmatrix}\right)/\Delta_{\alpha\alpha}=\sum_{\nu\in L(g,\epsilon(\vec{y}))}(g^{\sigma})^{*}(\nu(\operatorname{var}g^{\sigma})). \tag{14}\]
This shows the action is compatible with \(\operatorname{Id}\mathcal{V}\). Since \(Q\in\mathcal{V}\), we have \(t^{Q}(\pi\circ\epsilon(\bar{x}))=g^{Q}(\pi\circ\epsilon(\bar{y}))\) which implies \(u=v\). Using this we then have
\[t^{\partial,T}(\pi\circ\epsilon(\vec{x})) =t^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\vec{x}))\;-_{u}\;t^{A(\alpha)/\Delta_{\alpha\alpha}}\,(\phi\circ\epsilon(\vec{x}))\] \[=g^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\vec{y}))\;-_{v}\;g^{A(\alpha)/\Delta_{\alpha\alpha}}\,(\phi\circ\epsilon(\vec{y}))=g^{\partial,T}(\pi\circ\epsilon(\vec{y}))\]
which shows that \(T\) is a \(2\)-cocycle compatible with \(\operatorname{Id}\mathcal{V}\) by Definition 3.14.
**Remark 3.20**.: If \(l\) is a lifting associated to \(\rho:A(\alpha)\to Q\) from affine datum, then it is useful to observe
\[\delta(l(f^{Q}(\bar{x})))=f^{A(\alpha)/\Delta_{\alpha\alpha}}(\delta(\bar{x}))\]
which follows from the property \((a,l(q))\in\alpha\Longleftrightarrow\rho\left(\begin{bmatrix}a\\ l(q)\end{bmatrix}\right)=q\).
The next theorem is complementary to Theorem 3.19 in that it starts with affine datum and membership \(Q\in\mathcal{U}\) for a variety with compatible equational theory and reconstructs an extension in \(\mathcal{U}\) which determines the given datum.
**Theorem 3.21**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). Assume the action is weakly compatible with \(\operatorname{Id}\mathcal{U}\) and \(T=\{T_{f}:f\in\tau\}\) is a \(2\)-cocycle compatible with \(\operatorname{Id}\mathcal{U}\). Then there is an extension \(\pi:A_{T}(Q,A^{\alpha,\tau},*)\to Q\) which realizes the datum with \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\).
Proof.: Since the action is weakly compatible with \(\operatorname{Id}\mathcal{U}\), we have \(Q\in\mathcal{U}\). Define \(\pi:A_{T}(Q,A^{\alpha,\tau},*)\to Q\) by \(\pi\left(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right):=\rho\left(\begin{bmatrix}a\\ b\end{bmatrix}\right)\). By Definition 3.14.(C1) and Definition 3.9.(D3)-(D4) we see that \(\pi\) is a surjective homomorphism and \(\ker\pi=\hat{\alpha}/\Delta_{\alpha\alpha}\). Fix a lifting \(l:Q\to A\) for \(\rho\) and attendant \(\alpha\)-trace \(r:A\to A\).
We show \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\). Take \(t(\bar{x})=g(\bar{y})\in\operatorname{Id}\ \mathcal{U}\). Let \(\epsilon:\operatorname{var}t\cup\operatorname{var}g\to A(\alpha)/\Delta_{\alpha\alpha}\) be an assignment. If we set \(u=l(t^{Q}(\pi\circ\epsilon(\bar{x})))\) and \(v=l(g^{Q}(\pi\circ\epsilon(\bar{y})))\), then \(Q\in\mathcal{U}\) implies \(u=v\). Then because the action and \(T\) are both separately \(\mathcal{U}\)-compatible, by Lemma 3.18 we have
\[t^{A_{T}(Q,A^{\alpha,\tau},*)}(\epsilon(\bar{x})) =\sum_{\mu\in L(t,\epsilon(\bar{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma}))\ \,+_{u}\ t^{\partial,T}\left(\pi\circ\epsilon(\bar{x})\right)\] \[=\sum_{\nu\in L(g,\epsilon(\bar{y}))}(g^{\sigma})^{*}(\nu(\operatorname{var}g^{\sigma}))\ \,+_{u}\ g^{\partial,T}\left(\pi\circ\epsilon(\bar{y})\right)=g^{A_{T}(Q,A^{\alpha,\tau},*)}(\epsilon(\bar{y})).\]
This shows the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) satisfies \(\operatorname{Id}\ \mathcal{U}\) and so \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\).
We will put an isomorphic algebraic structure on the set \(A\). Define the bijection \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) by \(\phi(a):=\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) and a new algebra \(\check{A}=\left\langle A,\{\check{f}:f\in\tau\}\right\rangle\) with operations \(\check{f}(a_{1},\ldots,a_{n}):=\phi^{-1}\left(F_{f}\left(\phi(a_{1}),\ldots, \phi(a_{n})\right)\right)\) for \(f\in\tau\). It is immediate that \(\check{A}\approx A_{T}(Q,A^{\alpha,\tau},*)\) and \(\ker(\pi\circ\phi)=\alpha\). We
now show \(A_{T}(Q,A^{\alpha,\tau},*)\) realizes the datum. In order to verify Definition 3.11, we take \(1\leq i\leq n=\operatorname{ar}f\) and evaluate
\[\check{f}\left(\delta(r(x_{1})),\ldots,\delta(r(x_{i-1})),\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(r(x_{i+1})),\ldots,\delta(r(x_{n}))\right) \tag{15}\] \[=\begin{bmatrix}\check{f}(r(x_{1}),\ldots,r(x_{i-1}),r(x_{i}),r(x_{i+1}),\ldots,r(x_{n}))\\ \check{f}(r(x_{1}),\ldots,r(x_{i-1}),x_{i},r(x_{i+1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha} \tag{16}\] \[=\begin{bmatrix}\phi^{-1}\left(F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i-1})),\phi(r(x_{i})),\phi(r(x_{i+1})),\ldots,\phi(r(x_{n}))\right)\right)\\ \phi^{-1}\left(F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i-1})),\phi(x_{i}),\phi(r(x_{i+1})),\ldots,\phi(r(x_{n}))\right)\right)\end{bmatrix}/\Delta_{\alpha\alpha}. \tag{17}\]
We will evaluate the operations \(F_{f}\) on the above tuples. Let us write \(u=l(f^{Q}(q_{1},\ldots,q_{n}))\) where \(q_{i}=\pi\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) and note \(q_{i}=\pi\left(\delta(r(x_{i}))\right)=\pi\left(\delta(x_{i})\right)\). First, for any \(1\leq j<n=\operatorname{ar}f\), by the homomorphism property of the action we always have
\[a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots,q_ {n}\right)=a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots,q_{n}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+_{u}\,\,\,a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots, q_{n}\right)\]
which implies \(a(f,j)(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots,q_{n})=\delta(u)\); similarly, \(f^{\Delta}\left(\delta(x_{1}),\delta(x_{2}),\ldots,\delta(x_{n})\right)=\delta(u)\). Second, note by Definition 3.9.(D3) it follows that we can write
\[a(f,j)\left(q_{1},\ldots,q_{j-1},\begin{bmatrix}r(x_{j})\\ x_{j}\end{bmatrix}/\Delta_{\alpha\alpha},q_{j+1},\ldots,q_{n}\right)=\begin{bmatrix} u\\ a_{j}\end{bmatrix}/\Delta_{\alpha\alpha}\]
for some \(a_{j}\in A\); similarly, \(T_{f}(q_{1},\ldots,q_{n})=\begin{bmatrix}u\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(b\in A\). Then if we recall \(r\circ r=r\) for an \(\alpha\)-trace, we have
\[F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i-1})),\phi(x_{i}), \phi(r(x_{i+1})),\ldots,\phi(r(x_{n}))\right)\] \[\quad=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ r(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\delta(r(x_{2})),\ldots,\delta(r(x _{i})),\delta(x_{i+1}),\ldots,\delta(x_{n})\right)\] \[\qquad\qquad\qquad+_{u}\,\,\sum_{j=2}^{i-1}a(f,j)\left(q_{1}, \ldots,q_{j-1},\delta(r(x_{j})),q_{j+1},\ldots,q_{i},\ldots,q_{n}\right)\] \[\qquad\qquad\qquad+_{u}\,\,\,a(f,i)\left(q_{1},\ldots,q_{i-1}, \begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\] \[\qquad\qquad\qquad+_{u}\sum_{j=i+1}^{n}a(f,j)\left(q_{1},\ldots,q _{j-1},\delta(r(x_{j})),q_{j+1},\ldots,q_{n}\right)\,\,+_{u}\,\,\,T_{f}(q_{1},\ldots,q_{n})\] \[\qquad\qquad=a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x _{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\,\,+_{u} \,\,\,T_{f}(q_{1},\ldots,q_{n})\] \[\qquad\qquad=\begin{bmatrix}u\\ a_{i}\,\,\,\,\hat{\uparrow}_{u}\,\,\,b\end{bmatrix}/\Delta_{\alpha\alpha}\]
where we have written \(x\ \hat{+}_{u}\ y=m(x,u,y)\) for the induced sum on \(A\). Note \(\hat{\alpha}/\Delta_{\alpha\alpha}\) abelian implies the sum \(a_{i+2}\ \hat{+}_{u}\ \cdots\ \hat{+}_{u}\ a_{n}\ \hat{+}_{u}\ b\) is unique up to association. In the same manner we have,
\[F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i})),\phi(x_{i+1}),\phi(x_{i+2}), \ldots,\phi(x_{n})\right)=T_{f}(q_{1},\ldots,q_{n})=\begin{bmatrix}u\\ b\end{bmatrix}/\Delta_{\alpha\alpha}.\]
Putting the above together with Eq.(17) we see that
\[\begin{split}&\check{f}\left(\delta(r(x_{1})),\ldots,\delta(r(x_{i-1})),\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(r(x_{i+1})),\ldots,\delta(r(x_{n}))\right)\\ &=\begin{bmatrix}\phi^{-1}\left(\begin{bmatrix}u\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\\ \phi^{-1}\left(\begin{bmatrix}u\\ a_{i}\ \hat{+}_{u}\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}b\\ a_{i}\ \hat{+}_{u}\ b\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}b\\ m(a_{i},u,b)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}=a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\end{split}\]
which shows \(\check{A}\approx A_{T}(Q,A^{\alpha,\tau},*)\) realizes the datum.
We say the variety \(\mathcal{U}\) _contains_ the datum \((Q,A^{\alpha,\tau},*)\) if the action \(*\) is weakly compatible with \(\operatorname{Id}\mathcal{U}\). The following is a characterization of the internal semidirect product by retractions for algebras which realize affine datum; in particular, this holds for any algebra with an abelian congruence in a variety with a weak-difference term. Note a retraction \(r:A\to A\) is always a \(\ker r\)-trace.
**Proposition 3.22**.: Let \(A\) be an algebra which realizes affine datum \((Q,A^{\alpha,\tau},*)\). Let \(\pi:A\to A/\alpha\) be the canonical homomorphism for \(\alpha\). The following are equivalent:
1. \(A\approx A(\alpha)/\Delta_{\alpha\alpha}\);
2. there is a homomorphism \(l:A/\alpha\to A\) such that \(\pi\circ l=\operatorname{id}_{A/\alpha}\);
3. there is a retraction \(r:A\to A\) with \(\ker r=\alpha\).
Proof.: \((2)\Leftrightarrow(3)\): Suppose \(l:A/\alpha\to A\) is a homomorphism such that \(\pi\circ l=\operatorname{id}_{A/\alpha}\). Define \(r=l\circ\pi\). Then \(r\) is a homomorphism and \(r^{2}=l\circ\pi\circ l\circ\pi=l\circ\operatorname{id}_{A/\alpha}\circ\pi=l \circ\pi=r\); thus, \(r\) is a retraction.
If we assume \(r:A\to A\) is a retraction, then \(r^{2}(x)=r(x)\) implies \((x,r(x))\in\ker r=\alpha\); thus, \(r\) is an \(\alpha\)-trace. Define \(l:A/\alpha\to A\) by \(l(q)=r(x)\) for any \(x\in A\) such that \(\pi(x)=q\). Since \(r\) is an \(\alpha\)-trace, \(l\) is well-defined. Take \(q_{i}\in A/\alpha\) and \(x_{i}\in A\) such that \(\pi(x_{i})=q_{i}\) for \(i=1,\ldots,n=\operatorname{ar}f\). Then \(\pi\circ r(f(x_{1},\ldots,x_{n}))=f(\pi\circ r(x_{1}),\ldots,\pi\circ r(x_{n} ))=f(\pi(x_{1}),\ldots,\pi(x_{n}))=f(q_{1},\ldots,q_{n})\). By definition we have \(l(f(q_{1},\ldots,q_{n}))=r(f(x_{1},\ldots,x_{n}))=f(r(x_{1}),\ldots,r(x_{n} ))=f(l(q_{1}),\ldots,l(q_{n}))\) which shows \(l\) is a homomorphism.
\((3)\Rightarrow(1)\): Assume \(r:A\to A\) is a retraction. Notice \(r\) is also an \(\alpha\)-trace. By Theorem 3.21, we have \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\) defined using an \(\alpha\)-trace \(r\) and associated lifting \(l\). By the construction, we have \(T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))=\begin{bmatrix}l(f(\pi(x_{1}),\ldots,\pi(x_{n})))\\ f(l\circ\pi(x_{1}),\ldots,l\circ\pi(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}=\delta(f(r(x_{1}),\ldots,r(x_{n})))\) since \(l\) is a homomorphism. Then taking \(u=l(f(\pi(x_{1}),\dots,\pi(x_{n})))=f(r(x_{1}),\dots,r(x_{n}))\) we have in \(A_{T}(Q,A^{\alpha,\tau},*)\)
\[\begin{split}F_{f}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\dots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right) &=\begin{bmatrix}r(f(x_{1},\dots,x_{n}))\\ f(x_{1},\dots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(r(x_{1}),\dots,r(x_{n}))\\ f(x_{1},\dots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}r(f(x_{1},\dots,x_{n}))\\ f(r(x_{1}),\dots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(r(x_{1}),\dots,r(x_{n}))\\ f(x_{1},\dots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}T_{f}(\pi(x_{1}),\dots,\pi(x_{n}))\\ &=f\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix},\dots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}\right)/\Delta_{\alpha\alpha}\end{split}\]
which shows \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\approx A(\alpha)/\Delta_{\alpha\alpha}\).
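In the familiar case of groups, Proposition 3.22 recovers the standard characterization of split extensions: an extension of groups with abelian kernel \(A\) admits a retraction with kernel \(A\) exactly when it admits a section \(l:Q\to E\) which is a homomorphism, in which case \(E\) is the semidirect product \(A\rtimes Q\).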
The constructions required a fixed lifting, but up to isomorphism they do not depend on the particular choice. Making different choices for the liftings leads to an equivalence on extensions realizing datum which is defined by a combinatorial condition on 2-cocycles.
**Proposition 3.23**.: Suppose \(\pi:A\to Q\) with \(\alpha=\ker\pi\) is an extension realizing affine datum. If \(T\) is a 2-cocycle defined by the lifting \(l\) and \(T^{\prime}\) is a 2-cocycle defined by the lifting \(l^{\prime}\), then there exists a map \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that
\[T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x}) =f^{\Delta}(h(x_{1}),\delta(l(x_{2})),\dots,\delta(l(x_{n})))-_{u} h(f^{Q}(x_{1},\dots,x_{n}))\] \[+_{u}\sum_{i=2}^{n}a(f,i)\,(x_{1},\dots,x_{i-1},h(x_{i}),x_{i+1},\dots,x_{n}) (f\in\tau)\]
where \(u=l(f^{Q}(x_{1},\dots,x_{n}))\).
Proof.: The action is defined as in Eq.(6) from Theorem 3.19. Define \(h(x):=\begin{bmatrix}l(x)\\ l^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}\). We first show
\[T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x})=f^{A(\alpha)/\Delta_{\alpha\alpha}} \left(h(\bar{x})\right)-_{u}h(f^{Q}(\bar{x})) (f\in\tau). \tag{18}\]
If \(l,l^{\prime}:Q\to A\) are liftings associated with \(\pi\) such that \(r=l\circ\pi\) and \(r^{\prime}=l^{\prime}\circ\pi\), then recall the 2-cocycles are defined by \(T^{\prime}_{f}(\bar{x})=\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\) and \(T_{f}(\bar{x})=\begin{bmatrix}l(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\). If we set \(v=f(l(x_{1}),\dots,l(x_{n}))\), then note \(\begin{bmatrix}u\\ u\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}v\\ v\end{bmatrix}\). Then we can expand
\[\begin{split}T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x})&=\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}-_{u}\begin{bmatrix}l(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=m\left(\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix},\begin{bmatrix}f(l(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix},\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}\right)/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{v}\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{v}m\left(\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix},\begin{bmatrix}f(l(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix},\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}\right)/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}l(\bar{x})\\ l^{\prime}(\bar{x})\end{bmatrix}\right)/\Delta_{\alpha\alpha}-_{u}\begin{bmatrix}l(f(\bar{x}))\\ l^{\prime}(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}=f^{A(\alpha)/\Delta_{\alpha\alpha}}(h(\bar{x}))-_{u}h(f^{Q}(\bar{x})).\end{split}\]
In a similar manner, if we set \(u_{i}=f(l^{\prime}(x_{1}),\ldots,l^{\prime}(x_{i}),l(x_{i+1}),\ldots,l(x_{n}))\), then using realization we can expand
\[\begin{split}T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x})&=f(h(\bar{x}))-_{u}h(f(\bar{x}))\\ &=f\left(\begin{bmatrix}l(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix},\ldots,\begin{bmatrix}l(x_{n})\\ l^{\prime}(x_{n})\end{bmatrix}\right)/\Delta_{\alpha\alpha}-_{u}h(f(\bar{x}))\\ &=m\left(\begin{bmatrix}f(l(x_{1}),l(x_{2}),\ldots,l(x_{n}))\\ f(l^{\prime}(x_{1}),l(x_{2}),\ldots,l(x_{n}))\end{bmatrix},\begin{bmatrix}u_{1}\\ u_{1}\end{bmatrix},\begin{bmatrix}f(l^{\prime}(x_{1}),l(x_{2}),\ldots,l(x_{n}))\\ f(l^{\prime}(x_{1}),l^{\prime}(x_{2}),\ldots,l^{\prime}(x_{n}))\end{bmatrix}\right)/\Delta_{\alpha\alpha}\\ &\quad-_{u}h(f(\bar{x}))\\ &=f^{\Delta}\left(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n}))\right)+_{u_{1}}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{2}),\ldots,h(x_{n})\right)-_{u}h(f(\bar{x}))\\ &=f^{\Delta}\left(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n}))\right)+_{u_{1}}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{2}),\delta(l(x_{3})),\ldots,\delta(l(x_{n}))\right)\\ &\quad+_{u_{2}}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}l^{\prime}(x_{2})\\ l^{\prime}(x_{2})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{3}),\ldots,h(x_{n})\right)-_{u}h(f(\bar{x}))\\ &=f^{\Delta}\left(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n}))\right)+_{u}a(f,2)(x_{1},h(x_{2}),x_{3},\ldots,x_{n})\\ &\quad+_{u}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}l^{\prime}(x_{2})\\ l^{\prime}(x_{2})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{3}),\ldots,h(x_{n})\right)-_{u}h(f(\bar{x}))\\ &\;\;\vdots\\ &=f^{\Delta}(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n})))-_{u}h(f(\bar{x}))\\ &\quad+_{u}\ \sum_{i=2}^{n}a(f,i)\left(x_{1},\ldots,x_{i-1},h(x_{i}),x_{i+1},\ldots,x_{n}\right)\end{split}\]
since each \(\delta(u_{i})=\delta(u)\) and \(\delta(l(x))=\delta(l^{\prime}(x))\).
**Definition 3.24**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). A sequence of operations \(G=\{G_{f}:f\in\tau\}\) where \(G_{f}:Q^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\) is a _2-coboundary_ of the datum if there is a function \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that for any lifting \(l:Q\to A\) associated to the datum
(B1) \(h(x)\ \hat{\alpha}/\Delta_{\alpha\alpha}\ \delta\circ l(x)\);

(B2) for each \(f\in\tau\), \[G_{f}(x_{1},\ldots,x_{n}) =f^{\Delta}(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n})))-_{u} h(f^{Q}(x_{1},\ldots,x_{n}))\] \[+_{u}\sum_{i=2}^{n}a(f,i)(x_{1},\ldots,x_{i-1},h(x_{i}),x_{i+1}, \ldots,x_{n})\]
where \(u=l(f(x_{1},\ldots,x_{n}))\).
The function \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) referenced in the above definition is said to witness the 2-coboundary.
**Definition 3.25**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. The set of 2-cocycles compatible with \(\mathcal{U}\) is denoted by \(Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\). The set of 2-coboundaries of the datum is denoted by \(B^{2}(Q,A^{\alpha,\tau},*)\).
Notice the notation for the class of 2-coboundaries omits a subscript for the variety \(\mathcal{U}\). We shall see in the next lemma that 2-coboundaries are compatible with any variety containing the datum.
**Lemma 3.26**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. For any lifting \(l:Q\to A\) associated to the datum, the operations
\[(T_{f}+T^{\prime}_{f})(\bar{x}) :=T_{f}(\bar{x})+_{l(f(\bar{x}))}T^{\prime}_{f}(\bar{x}) (f\in\tau,\bar{x}\in Q^{\mathrm{ar}\,f})\] \[(h+h^{\prime})(x) :=h(x)+_{l(x)}h^{\prime}(x) (x\in Q)\]
makes \(Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) into an abelian group with subgroup \(B^{2}(Q,A^{\alpha,\tau},*)\leq Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\).
Proof.: From previous remarks, note the definition of the operation does not depend on the lifting. That the operation defines an abelian group follows from noting that the ternary operation \(m\) of the datum is affine on the congruence blocks of \(\hat{\alpha}/\Delta_{\alpha\alpha}\). That the sum of 2-cocycles compatible with \(\operatorname{Id}\,\mathcal{U}\) is again a 2-cocycle compatible with \(\operatorname{Id}\,\mathcal{U}\) follows from the homomorphism property of the action, which distributes the action terms over the witnesses of the respective 2-cocycles. These facts also show properties (B1) and (B2) are preserved by the sum, and so 2-coboundaries form an abelian group.
What requires a little explanation is why a 2-coboundary is a 2-cocycle compatible with \(\mathcal{U}\). Let \(G=\{G_{f}:f\in\tau\}\) be a 2-coboundary. For any choice of 2-cocycle \(\{T_{f}:f\in\tau\}\) compatible with \(\mathcal{U}\), Theorem 3.21 provides an algebra \(\mathcal{U}\ni A_{T}(Q,A^{\alpha,\tau},*)\stackrel{{\pi}}{{ \rightarrow}}Q\) on the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) which realizes the datum. Note there is at least one such 2-cocycle which is provided by the datum. The condition \((B2)\) now takes the form
\[G_{f}(x_{1},\dots,x_{n})=F^{A_{T}(Q,A^{\alpha,\tau},*)}_{f}(h(x_{1}),\dots,h(x _{n}))\,-_{u}\,\,T_{f}(x_{1},\dots,x_{n})\,-_{u}\,\,h(f^{Q}(x_{1},\dots,x_{n})) \tag{19}\]
where \(u=l(f(x_{1},\dots,x_{n}))\). We have preserved the superscripts here to indicate the realization. Consider a term \(t(\bar{x})\) and assignment \(\epsilon:\mathrm{var}\,\,t\to A(\alpha)/\Delta_{\alpha\alpha}\). By induction on the composition tree of a term \(t(\bar{x})\) with Eq.(19) as the base case, we can evaluate
\[t^{\partial,G}(\pi\circ\epsilon(\bar{x}))=t^{A_{T}(Q,A^{\alpha,\tau},*)}(h \circ\pi\circ\epsilon(\bar{x}))\,-_{v}\,\,t^{\partial,T}(\pi\circ\epsilon( \bar{x}))\,-_{v}\,\,h(t^{Q}(\pi\circ\epsilon(\bar{x}))) \tag{20}\]
where \(v=l(t^{Q}(\pi\circ\epsilon(\bar{x})))\). Then using Eq.(20), it follows from the fact that \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\), \(T\) is 2-cocycle compatible with \(\mathcal{U}\) and \(Q\in\mathcal{U}\), that \(G\) is also a 2-cocycle compatible with \(\mathcal{U}\).
**Definition 3.27**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. The _second cohomology group relative to \(\mathcal{U}\)_ is the quotient group
\[H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*):=Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau },*)/B^{2}(Q,A^{\alpha,\tau},*).\]
**Definition 3.28**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. Two extensions \(A\) and \(A^{\prime}\) realizing the datum are _equivalent_ if there is a 2-cocycle \(T\) associated to \(A\) and 2-cocycle \(T^{\prime}\) associated to \(A^{\prime}\) such that \(T^{\prime}-T\in B^{2}(Q,A^{\alpha,\tau},*)\).
If \(A\) realizes the datum, we write \([A]\) for the equivalence class determined by Definition 3.28. The _trivial 2-cocycle_ for datum \((Q,A^{\alpha,\tau},*)\) has operations \(T_{f}(x_{1},\dots,x_{\operatorname{ar}f})=\delta\circ l(f^{Q}(x_{1},\dots,x_{\operatorname{ar}f}))\) for \(f\in\tau\) and any lifting \(l:Q\to A\) associated to the datum. When the context is clear, we denote the trivial 2-cocycle by \(T=0\).
**Theorem 3.29**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. The set of equivalence classes of extensions in \(\mathcal{U}\) realizing the datum with the operation \([A_{T}(Q,A^{\alpha,\tau},*)]+[A_{T^{\prime}}(Q,A^{\alpha,\tau},*)]:=[A_{T+T^{ \prime}}(Q,A^{\alpha,\tau},*)]\) is an abelian group isomorphic with \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\). The zero of the abelian group is the class of the semidirect product \([A_{0}(Q,A^{\alpha,\tau},*)]\).
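When the datum arises from a group \(Q\) acting on an abelian group \(A\) and \(\mathcal{U}\) is the variety of groups, Theorem 3.29 specializes to the classical statement that \(H^{2}(Q,A)\) classifies the equivalence classes of extensions of \(Q\) by \(A\) realizing the action, with the class of the semidirect product \(A\rtimes Q\) as the zero element.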
For affine datum \((Q,A^{\alpha,\tau},*)\), let \(\mathcal{L}(Q,A^{\alpha,\tau},*)\) be the set of varieties in the signature \(\tau\) which contain the datum ordered by inclusion. It is a complete sublattice of the lattice of varieties in the signature \(\tau\). Let \(Z^{2}(Q,A^{\alpha,\tau},*)\) denote the abelian group generated by 2-cocycles compatible with some \(\mathcal{U}\in\mathcal{L}(Q,A^{\alpha,\tau},*)\).
Since \(B^{2}(Q,A^{\alpha,\tau},*)\leq Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) for all \(\mathcal{U}\in\mathcal{L}(Q,A^{\alpha,\tau},*)\) by Lemma 3.26, we can define the second cohomology group of the datum
\[H^{2}(Q,A^{\alpha,\tau},*):=Z^{2}(Q,A^{\alpha,\tau},*)/B^{2}(Q,A^{\alpha,\tau},*).\]
For any algebra \(A\), \(\operatorname{Sub}A\) denotes the lattice of subalgebras ordered by inclusion.
**Proposition 3.30**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). The map
\[\Psi:\mathcal{L}(Q,A^{\alpha,\tau},*)\to\operatorname{Sub}H^{2}(Q,A^{\alpha, \tau},*)\]
defined by \(\Psi(\mathcal{U}):=H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) is a meet-homomorphism which is an upper adjoint for a Galois connection \((-,\Psi)\) between \(\operatorname{Sub}H^{2}(Q,A^{\alpha,\tau},*)\) and \(\mathcal{L}(Q,A^{\alpha,\tau},*)\). The subset of varieties generated by their algebras realizing the datum is join-complete.
Proof.: It is easy to see that \(\Psi\) is monotone which yields the immediate inclusions

\[\Psi(\mathcal{U}_{1}\wedge\mathcal{U}_{2})\leq\Psi(\mathcal{U}_{1})\wedge\Psi(\mathcal{U}_{2})\qquad\text{and}\qquad\Psi(\mathcal{U}_{1})\vee\Psi(\mathcal{U}_{2})\leq\Psi(\mathcal{U}_{1}\vee\mathcal{U}_{2}).\]

For the reverse inclusion for the meet operation, we use the description of the operation in terms of the class operators \(\mathcal{U}_{1}\wedge\mathcal{U}_{2}=\operatorname{Mod}\left(\operatorname{Id}\mathcal{U}_{1}\cup\operatorname{Id}\mathcal{U}_{2}\right)\). Any \([T]\in\Psi(\mathcal{U}_{1})\wedge\Psi(\mathcal{U}_{2})\) is both \(\mathcal{U}_{1}\)-compatible and \(\mathcal{U}_{2}\)-compatible; according to Theorem 3.21, the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}_{1}\wedge\mathcal{U}_{2}\). Then by Theorem 3.19 we have \([T]\in H^{2}_{\mathcal{U}_{1}\wedge\mathcal{U}_{2}}(Q,A^{\alpha,\tau},*)=\Psi(\mathcal{U}_{1}\wedge\mathcal{U}_{2})\). We conclude \(\Psi\) is a meet-homomorphism.

The lower adjoint to \(\Psi\) is the monotone map \(\theta:\operatorname{Sub}H^{2}(Q,A^{\alpha,\tau},*)\longrightarrow\mathcal{L}(Q,A^{\alpha,\tau},*)\) defined by \(\theta(E):=\bigvee_{[T]\in E}\mathcal{V}(A_{T})\). It is not too hard to see that \(\theta\circ\Psi(\mathcal{U})\leq\mathcal{U}\) and \(\Psi\circ\theta(E)\geq E\). It follows that \(\Psi\circ\theta\) is a closure operator and the closed sets are precisely the cohomology groups corresponding to varieties; that is, those of the form \(\Psi(\mathcal{U})\).
The combinatorial equivalence on extensions defined on their associated \(2\)-cocycles can also be given by special isomorphisms between the extensions.
**Theorem 3.31**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). Let \(\pi:A\to Q\) and \(\pi^{\prime}:A^{\prime}\to Q\) be extensions realizing the datum. Then \(A\) and \(A^{\prime}\) are equivalent if and only if there is an isomorphism \(\gamma:A\to A^{\prime}\) such that
1. \(\pi^{\prime}\circ\gamma=\pi\), and
2. \(\gamma=m(\gamma\circ r,r,\operatorname{id})\) for all \(\alpha\)-traces \(r:A\to A\).
Proof.: First, assume \(A\) and \(A^{\prime}\) are equivalent extensions realizing the datum. We may take \(\mathcal{U}=\mathcal{V}(A,A^{\prime})\) and note \(Q\in\mathcal{U}\) and \(\mathcal{U}\) contains the datum since it contains algebras which realize the datum. By Theorem 3.21, we have isomorphisms \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) and \(\phi^{\prime}:A^{\prime}\to A_{T^{\prime}}(Q,A^{\alpha,\tau},*)\) by the explicit construction detailed in that theorem. By Definition 3.28, there is a \(2\)-coboundary defined by \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that
\[\begin{split}T_{f}(\bar{x})-T_{f}^{\prime}(\bar{x})&=f^{\Delta}\left(h(x_{1}),\delta(\bar{l}(x_{2})),\ldots,\delta(\bar{l}(x_{n}))\right)-_{u}h(f^{Q}(x_{1},\ldots,x_{n}))\\ &\quad+_{u}\sum_{i=2}^{n}a(f,i)(x_{1},\ldots,x_{i-1},h(x_{i}),x_{i+1},\ldots,x_{n})\qquad(f\in\tau,n=\operatorname{ar}f)\end{split}\]
where \(u=\bar{l}(f^{Q}(x_{1},\ldots,x_{n}))\) and \(\bar{l}:Q\to A\) is a lifting for the datum. Let \(r:A\to A\) and \(r^{\prime}:A^{\prime}\to A^{\prime}\) be \(\alpha\)-traces used to define the extensions \(A\) and \(A^{\prime}\), respectively. Let \(l:Q\to A\) and \(l^{\prime}:Q\to A^{\prime}\) be the associated liftings for \(r\) and \(r^{\prime}\), respectively.
Note every element in \(A_{T}(Q,A^{\alpha,\tau},*)\) has a unique representation of the form \(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\) with \(x\in A\). Define \(\bar{\gamma}:A_{T}(Q,A^{\alpha,\tau},*)\to A_{T^{\prime}}(Q,A^{\alpha,\tau},*)\) by \(\bar{\gamma}:\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\longmapsto\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\) and
\(\gamma=\phi^{\prime-1}\circ\bar{\gamma}\circ\phi:A\to A^{\prime}\). Since \(\phi(\ker\pi)=\phi^{\prime}(\ker\pi^{\prime})=\hat{\alpha}/\Delta_{\alpha\alpha}\), we see that
\[\pi^{\prime}\circ\phi^{\prime-1}\circ\bar{\gamma}\left(\begin{bmatrix}r(x) \\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right) =\pi^{\prime}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}h\circ\rho\left(\begin{bmatrix}r (x)\\ x\end{bmatrix}\right)\right)\] \[=m\left(\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right),\rho\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}\right),\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\right)=\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)=\pi(x)\]
by idempotence in \(Q\). This verifies \(\pi=\pi^{\prime}\circ\gamma\) and surjectivity follows readily. We check \(\bar{\gamma}\) is a homomorphism. For simplicity, set \(q_{i}=\rho\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}\right)\). Calculating for \(f\in\tau\) and \(\bar{x}\in A^{\operatorname{ar}f}\),
\[\begin{split}\bar{\gamma}\left(F_{f}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)&=\bar{\gamma}\left(\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)\\ &=\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(f(\bar{x}))}h\circ\rho\left(\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}\right)\\ &=\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(f(\bar{x}))}h(f^{Q}(q_{1},\ldots,q_{n}))\\ &=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n})\right)\\ &\quad+_{r(f(\bar{x}))}\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\\ &\quad+_{r(f(\bar{x}))}T_{f}(\bar{q})+_{r(f(\bar{x}))}h(f^{Q}(q_{1},\ldots,q_{n}))\\ &=F_{f}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{r(f(\bar{x}))}h(f^{Q}(q_{1},\ldots,q_{n})).\end{split}\]
Since \(\left(h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right),\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in\hat{\alpha}/\Delta_{\alpha\alpha}\), we can write each
\[h\circ\rho\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}\right)=\begin{bmatrix}r(x_{i})\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r^{\prime}(x_{i})\\ m(a_{i},r(x_{i}),r^{\prime}(x_{i}))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix} r^{\prime}(x_{i})\\ w_{i}\end{bmatrix}/\Delta_{\alpha\alpha} \tag{21}\]
for some \(a_{i}\in A\) and \(w_{i}=m(a_{i},r(x_{i}),r^{\prime}(x_{i}))\). Then \(\bar{\gamma}\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r^{\prime}(x_{i})\\ z_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\) where \(z_{i}=m(x_{i},r(x_{i}),w_{i})\). Then
\[F_{f}\left(\bar{\gamma}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right) =F_{f}\left(\begin{bmatrix}r^{\prime}(x_{1})\\ z_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r^{\prime}(x_{ n})\\ z_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] \[=f^{\Delta}\left(\begin{bmatrix}r^{\prime}(x_{1})\\ z_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(z_{2}),\ldots,\delta(z_{n}) \right)+_{r(f(\bar{z}))}\] \[\quad\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix} r^{\prime}(x_{i})\\ z_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\] \[\quad+_{r(f(\bar{z}))}T^{\prime}_{f}(q_{1},\ldots,q_{n})\] \[=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(z_{2}),\ldots,\delta(z_{n}) \right)+_{r(f(\bar{x}))}f^{\Delta}\left(h(q_{1}),\delta(z_{2}),\ldots,\delta(z _{n})\right)\] \[\quad+_{r(f(\bar{z}))}\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i- 1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)+_{r(f( \bar{x}))}\]
\[\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},h(q_{i}),q_{i+1}, \ldots,q_{n}\right)+_{r(f(\bar{z}))}T_{f}^{\prime}(q_{1},\ldots,q_{n})\] \[=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(z_{2}),\ldots,\delta(z_{n}) \right)+_{r(f(\bar{z}))}\] \[\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x _{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)+_{r(f(\bar {x}))}\] \[T_{f}(q_{1},\ldots,q_{n})+_{u}h(f^{Q}(q_{1},\ldots,q_{n}))\] \[=F_{f}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{r(f(\bar{x}))}h(f^{Q}(q_{1 },\ldots,q_{n})).\]
Since \(\delta(z_{i})=\delta(x_{i})\) and \(\delta(r(f(\bar{x})))=\delta(r(f(\bar{z})))=\delta(u)\), we conclude that \(\bar{\gamma}\), and thus \(\gamma\), is a homomorphism.
Now assume \(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(y)\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in\ker\bar{\gamma}\). Note we have \(h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)=\begin{bmatrix}r(x)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) and \(h\circ\rho\left(\begin{bmatrix}r(y)\\ y\end{bmatrix}\right)=\begin{bmatrix}r(y)\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(a,b\in A\). Then
\[\begin{bmatrix}r(x)\\ m(x,r(x),a)\end{bmatrix}/\Delta_{\alpha\alpha}=\bar{\gamma}\left( \begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\bar{\gamma}\left(\begin{bmatrix} r(y)\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r(y)\\ m(y,r(y),b)\end{bmatrix}/\Delta_{\alpha\alpha}. \tag{22}\]
Then \(\pi=\pi^{\prime}\circ\gamma\) yields \(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(y)\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in\ker\pi=\hat{\alpha}/\Delta_{\alpha\alpha}\) which implies \(r(x)=r(y)\), and so \(a=b\) since the 2-coboundary \(h\) only depends on \(Q\). Then Eq.(22) implies \(m(x,r(x),a)=m(y,r(y),b)=m(y,r(x),a)\). Since \(m\) is affine on \(\alpha\)-blocks, we must have \(x=y\); thus, \(\bar{\gamma}\) is injective.
For the second condition on the isomorphism \(\gamma\), note \(\rho\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}\right)=\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\) implies \(h\circ\rho\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}\right)=h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)=\begin{bmatrix}r(x)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(a\in A\) with \((a,r(x))\in\alpha\). Then
\[\begin{split}\begin{bmatrix}r(x)\\ \gamma(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}&=\begin{bmatrix}r(x)\\ \phi^{\prime-1}\left(\begin{bmatrix}r(x)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}r(x)\\ m(a,r(x),r^{\prime}(x))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}x\\ m(m(a,r(x),r^{\prime}(x)),r(x),x)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}x\\ m(m(x,r(x),a),r(x),r^{\prime}(x))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}x\\ \phi^{\prime-1}\left(\begin{bmatrix}r(x)\\ m(x,r(x),a)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}x\\ \phi^{\prime-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\right)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}x\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha}.\end{split}\]
Conversely, assume there is a homomorphism \(\gamma:A\to A^{\prime}\) which satisfies the conditions. The condition \(\pi=\pi^{\prime}\circ\gamma\) implies \(\gamma\circ l:Q\to A^{\prime}\) is a lifting for \(\pi^{\prime}:A^{\prime}\to Q\). Condition (2) implies that for any \(f\in\tau\) and \(\bar{x}\in A^{\operatorname{ar}f}\), if we apply the operation
\[\begin{bmatrix}f(\bar{x})\\ \gamma(f(\bar{x}))\end{bmatrix}=\begin{bmatrix}f(\bar{x})\\ f(\gamma(\bar{x}))\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}f(l(\bar{x} ))\\ f(\gamma(l(\bar{x})))\end{bmatrix}=\begin{bmatrix}f(l(\bar{x}))\\ \gamma(f(l(\bar{x})))\end{bmatrix}\]
and then also by substitution
\[\begin{bmatrix}f(\bar{x})\\ \gamma(f(\bar{x}))\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}l(f(\bar{x} ))\\ \gamma(l(f(\bar{x})))\end{bmatrix}.\]
From the above we conclude
\[\begin{bmatrix}\gamma\circ l(f(\bar{x}))\\ l\circ f(\bar{x})\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}\gamma(f(l( \bar{x})))\\ f(l(\bar{x}))\end{bmatrix},\]
and since \(\Delta_{\alpha\alpha}\leq\hat{\alpha}\) we have
\[\begin{bmatrix}\gamma\circ l(f(\bar{x}))\\ f(\gamma\circ l(\bar{x}))\end{bmatrix}=\begin{bmatrix}\gamma\circ l(f(\bar{x }))\\ \gamma(f(l(\bar{x})))\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}l\circ f( \bar{x})\\ f(l(\bar{x}))\end{bmatrix}. \tag{23}\]
Now define \(T_{f}^{\gamma}(\bar{x}):=\begin{bmatrix}\gamma\circ l(f(\bar{x}))\\ f(\gamma\circ l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\) for \(f\in\tau\). It follows that \(T^{\gamma}\) is a \(2\)-cocycle for \(A^{\prime}\). By a similar argument as in Proposition 3.23, there is \(h\) such that
\[T_{f}^{\prime}(\bar{x})-T_{f}^{\gamma}(\bar{x})=f(h(\bar{x}))-h(f(\bar{x})) \qquad\qquad\qquad(f\in\tau)\]
which can be expanded out to represent a \(2\)-coboundary. The second condition guarantees that \(T^{\gamma}=T\) and so \(A\) and \(A^{\prime}\) are equivalent.
**Definition 3.32**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. A function \(d:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) is a \(1\)-_cocycle_ if for any \(f\in\tau\) with \(\operatorname{ar}f=n\),
\[d(f^{Q}(x_{1},\ldots,x_{n}))=f^{\Delta}(d(x_{1}),\delta(l(x_{2})),\ldots, \delta(l(x_{n})))+_{u}\sum_{i=2}^{n}a(f,i)(x_{1},\ldots,x_{i-1},d(x_{i}),x_{i+1 },\ldots,x_{n})\]
where \(u=l(f^{Q}(x_{1},\ldots,x_{n}))\) and \(l:Q\to A\) is any lifting associated to the datum. The set of \(1\)-cocycles is denoted by \(Z^{1}(Q,A^{\alpha,\tau},*)\).
We will also refer to \(1\)-cocycles as _derivations_ of the datum.
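In the group case a derivation is precisely a crossed homomorphism: a map \(d:Q\to A\) satisfying \(d(xy)=d(x)+x*d(y)\); in particular, when the action is trivial the derivations are exactly the homomorphisms from \(Q\) into the abelian group \(A\).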
**Lemma 3.33**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. For any lifting \(l:Q\to A\) associated to the datum, the operation \((d+d^{\prime})(x):=d(x)+_{l(x)}d^{\prime}(x)\) makes \(Z^{1}(Q,A^{\alpha,\tau},*)\) into an abelian group.
Proof.: See the first paragraph of the proof of Lemma 3.26.
Let \((Q,A^{\alpha,\tau},*)\) be affine datum. If \(\pi:A\to Q\) is an extension realizing the datum, let \(\operatorname{Stab}(\pi:A\to Q)\) denote the set of _stabilizing automorphisms_; that is, the automorphisms \(\gamma\in\operatorname{Aut}A\) which satisfy conditions (1) and (2) in Theorem 3.31. The following is our characterization of the derivations of datum by stabilizing automorphisms.
**Theorem 3.34**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. Let \(\pi:A\to Q\) be an extension realizing the datum. Then \(Z^{1}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A\to Q)\).
Proof.: Let \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) witness the isomorphism from Theorem 3.21. Take \(\gamma\in\operatorname{Stab}(\pi:A\to Q)\) and an \(\alpha\)-trace \(r\). Since \(\pi\circ\gamma=\pi\), we have \((x,\gamma(x))\in\alpha=\ker\pi\) which implies \(r(\gamma(x))=r(x)\). Then by condition (2) in Theorem 3.31 we can write
\[\phi\circ\gamma(x)=\begin{bmatrix}r(\gamma(x))\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r(x)\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha} =\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}\begin{bmatrix}x\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}\begin{bmatrix}r(x)\\ \gamma(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\phi(x)+_{r(x)}d_{\gamma}(\pi(x))\]
where we have defined \(d_{\gamma}(x):=\begin{bmatrix}l(x)\\ \gamma(l(x))\end{bmatrix}/\Delta_{\alpha\alpha}:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) for the lifting associated to \(r\). Note \(r\) and \(\gamma\circ r\) are both \(\alpha\)-traces. Then the argument in Proposition 3.23 shows for all \(f\in\tau\) and \(\bar{x}\in Q^{\operatorname{ar}f}\),
\[\begin{split}\delta(u)=T_{f}(\bar{x})-_{u}T_{f}(\bar{x})&=f^{\Delta}(d_{\gamma}(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n})))-_{u}d_{\gamma}(f(x_{1},\ldots,x_{n}))\\ &\quad+_{u}\sum_{i=2}^{n}a(f,i)\left(x_{1},\ldots,x_{i-1},d_{\gamma}(x_{i}),x_{i+1},\ldots,x_{n}\right)\end{split}\]
where \(u=l(f(\bar{x}))\); therefore, \(d_{\gamma}\) is a derivation. We should note \(\phi\circ\gamma(r(x))=d_{\gamma}(\pi(x))\).
We show \(\operatorname{Stab}(\pi:A\to Q)\) is closed under composition. Take \(\gamma,\gamma^{\prime}\in\operatorname{Stab}(\pi:A\to Q)\); clearly, condition (1) in Theorem 3.31 holds. We first calculate
\[\phi\circ(\gamma^{\prime}\circ\gamma)(x)=\phi(\gamma(x))+_{r(\gamma(x))}d_{ \gamma^{\prime}}(\pi(\gamma(x)))=\phi(x)+_{r(x)}d_{\gamma}(\pi(x))+_{r(x)}d_{ \gamma^{\prime}}(\pi(x)). \tag{24}\]
Since \(d_{\gamma}\) is a derivation, we have \(\pi\circ d_{\gamma}=\operatorname{id}\) which implies we can write \(d_{\gamma}(x)=\begin{bmatrix}l(x)\\ \beta(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) for some function \(\beta:Q\to A\); similarly, \(d_{\gamma^{\prime}}(x)=\begin{bmatrix}l(x)\\ \beta^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) for some function \(\beta^{\prime}:Q\to A\). Then using Eq. (24), we have
\[\begin{split}\begin{bmatrix}x\\ (\gamma^{\prime}\circ\gamma)(x)\end{bmatrix}/\Delta_{\alpha\alpha}&=\begin{bmatrix}x\\ \phi^{-1}\left(\phi(x)+_{r(x)}d_{\gamma}(\pi(x))+_{r(x)}d_{\gamma^{\prime}}(\pi(x))\right)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}x\\ \phi^{-1}\left(\begin{bmatrix}m(r(x),r(x),m(r(x),r(x),r(x)))\\ m(x,r(x),m(\beta^{\prime}(\pi(x)),r(x),\beta(\pi(x))))\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}x\\ m(x,r(x),m(\beta^{\prime}(\pi(x)),r(x),\beta(\pi(x))))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=m\left(\begin{bmatrix}x\\ x\end{bmatrix},\begin{bmatrix}r(x)\\ r(x)\end{bmatrix},m\left(\begin{bmatrix}r(x)\\ \beta^{\prime}(\pi(x))\end{bmatrix},\begin{bmatrix}r(x)\\ r(x)\end{bmatrix},\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}\right)\right)/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}r(x)\\ \beta^{\prime}(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=d_{\gamma^{\prime}}(\pi(x))+_{r(x)}d_{\gamma}(\pi(x))\\ &=\phi\circ(\gamma^{\prime}\circ\gamma)(r(x))=\begin{bmatrix}r(x)\\ (\gamma^{\prime}\circ\gamma)(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}.\end{split}\]
since \(r(\gamma^{\prime}(\gamma(r(x))))=r(\gamma(r(x)))=r(r(x))=r(x)\). This shows \(\gamma^{\prime}\circ\gamma\) satisfies condition (2) and so \(\gamma^{\prime}\circ\gamma\in\operatorname{Stab}(\pi:A\to Q)\).
Define \(\Psi:\operatorname{Stab}(\pi:A\to Q)\to Z^{1}(Q,A^{\alpha,\tau},*)\) by \(\Psi(\gamma):=d_{\gamma}\) where \(\phi\circ\gamma(r(x))=d_{\gamma}(\pi(x))\). It is easy to see that \(\Psi\) is injective. To show surjectivity, take a derivation \(d\) and define \(\gamma_{d}\) by \(\phi\circ\gamma_{d}(x):=\phi(x)+_{r(x)}d(\pi(x))\). The proof of Theorem 3.31 shows \(\gamma_{d}\) is a stabilizing automorphism. We show it
satisfies condition (2) above. Since \(d\) is a derivation, we can write \(d(x)=\begin{bmatrix}l(x)\\ \beta(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) for some function \(\beta:Q\to A\). We calculate
\[\begin{bmatrix}x\\ \gamma_{d}(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}x\\ \phi^{-1}\left(\phi(x)+_{r(x)}d(\pi(x))\right)\end{bmatrix}/\Delta_{\alpha\alpha }=\begin{bmatrix}x\\ \phi^{-1}\left(\begin{bmatrix}m\big{(}r(x),r(x),r(x)\big{)}\\ m\big{(}x,r(x),\beta(\pi(x))\big{)}\end{bmatrix}/\Delta_{\alpha\alpha}\right) \end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ m(x,r(x),\beta(\pi(x)))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=m\left(\begin{bmatrix}x\\ x\end{bmatrix},\begin{bmatrix}r(x)\\ r(x)\end{bmatrix},\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}\right)/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r(x)\\ \gamma_{d}(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}\]
This implies \(m(\gamma_{d}(r(x)),r(x),x)=\gamma_{d}(x)\) and so \(\gamma_{d}\in\operatorname{Stab}(\pi:A\to Q)\). We see that \(\Psi(\gamma_{d})=d\) since \(\phi\circ\gamma_{d}(r(x))=\phi(r(x))+_{r(x)}d(\pi(r(x)))=d(\pi(x))\).
To finish the theorem, we show \(\Psi\) is a homomorphism. This follows from Eq. (24) since
\[\phi\circ(\gamma^{\prime}\circ\gamma)(r(x))=d_{\gamma^{\prime}}(\pi(x))+_{r( x)}d_{\gamma}(\pi(x))=(d_{\gamma^{\prime}}+d_{\gamma})(\pi(x))\]
implies \(\Psi(\gamma^{\prime}\circ\gamma)=d_{\gamma^{\prime}}+d_{\gamma}\).
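For a split extension of groups \(A\rtimes Q\to Q\), Theorem 3.34 recovers the classical observation that stabilizing automorphisms correspond to crossed homomorphisms \(Q\to A\): each derivation \(d\) yields the stabilizing automorphism \((a,q)\mapsto(a+d(q),q)\), and this assignment is an isomorphism of abelian groups.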
Suppose \(\pi:A\to Q\) with \(\alpha=\ker\pi\) is an extension realizing affine datum. By Theorem 3.19 we have the isomorphism \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) given by \(a\longmapsto\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) for any \(\alpha\)-trace \(r:A\to A\). For the semidirect product we have the homomorphisms
\[\rho:A(\alpha)/\Delta_{\alpha\alpha}\to Q, \left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\longmapsto\pi(a)\] \[\kappa:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{ \alpha 1}, \left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\longmapsto\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}\]
where \(\kappa\) is the canonical homomorphism. We then take the solution in the product diagram
\[\sigma:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha 1}\times Q, \left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\longmapsto\left\langle\begin{bmatrix}a \\ b\end{matrix}\right\rangle/\Delta_{\alpha 1},\pi(a)\right\rangle.\]
It is easy to see that the same definition also induces a homomorphism
\[\sigma:A_{T}(Q,A^{\alpha,\tau},*)\to A(\alpha)/\Delta_{\alpha 1}\otimes^{ \kappa\circ T}Q\]
with the following properties:
1. \(\ker\kappa\circ\ker\rho=1\) and \(\ker\sigma=\ker\kappa\wedge\ker\rho\) where \(\ker\kappa=\Delta_{\alpha 1}/\Delta_{\alpha\alpha}\), \(\ker\rho=\hat{\alpha}/\Delta_{\alpha\alpha}\);
2. if \(\mathcal{V}(A)\) has a weak-difference term and \(\alpha\) is left-central, then \(0=\phi([\alpha,1])=\ker\sigma\) and \(\sigma\) is a subdirect embedding;
3. if \(\mathcal{V}(A)\) has a difference term and \(\alpha\) is central, then \(\sigma\) is surjective.
In this way Proposition 2.4 follows from Theorem 3.19 if we assume \(\mathcal{V}\) has a difference term. In the following, we recover what appears to be a folklore result.
**Lemma 3.35**.: Let \(\mathcal{V}\) be a variety with a difference term and \(A\in\mathcal{V}\). If \(\alpha\in\operatorname{Con}A\) is central, then
\[A(\alpha)/\Delta_{\alpha\alpha}\approx A(\alpha)/\Delta_{\alpha 1}\times A/\alpha.\]
Proof.: By Corollary 2.6 and Theorem 3.19, there is a \(2\)-cocycle \(T\) and action \(*\) such that \(A_{T}(Q,A^{\alpha,\tau},*)\approx A\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{\kappa\circ T}A/\alpha\). If we take the trivial \(2\)-cocycle \(T=0\), then \(A(\alpha)/\Delta_{\alpha\alpha}\approx A_{0}(Q,A^{\alpha,\tau},*)\approx A(\alpha)/\Delta_{\alpha 1}\times A/\alpha\).
**Definition 3.36**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. The action is _trivial_ if the following holds: for any \(f\in\tau\) with \(n=\operatorname{ar}f>1\) and \(I\subseteq\{1,\ldots,n\}\), for any \(\bar{c},\bar{c}^{\prime}\in A^{n}\) such that \(c_{i}=c_{i}^{\prime}\) for \(i\in I\) and \(\rho(\delta(c_{i}))=q_{i}\), \(\rho(\delta(c_{i}^{\prime}))=q_{i}^{\prime}\), and for any \(\alpha\)-trace \(r:A\to A\) with associated lifting \(l\), we have

\[\sum_{i\in I}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)=\sum_{i\in I}a(f,i)\left(q_{1}^{\prime},\ldots,q_{i-1}^{\prime},\begin{bmatrix}r(c_{i}^{\prime})\\ c_{i}^{\prime}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1}^{\prime},\ldots,q_{n}^{\prime}\right)+_{u^{\prime}}\delta(u)\]

where \(u=l\left(f^{Q}(q_{1},\ldots,q_{n})\right)\) and \(u^{\prime}=l\left(f^{Q}(q_{1}^{\prime},\ldots,q_{n}^{\prime})\right)\).
For an algebra \(A\), a congruence \(\alpha\in\operatorname{Con}A\) is _right-central_ if \([1,\alpha]=0\) and _left-central_ if \([\alpha,1]=0\).
**Proposition 3.37**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum contained in some variety with a difference term. The action is trivial if and only if every extension in a variety with a difference term realizing the datum is a central extension.
Proof.: Assume every extension in a variety with a difference term realizing the datum is a central extension. By assumption, there is at least one such extension. Take any such extension \(\pi:A\to Q\) with \(A\in\mathcal{V}\), a variety with a difference term. By Theorem 3.21, we have \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\) for some compatible \(2\)-cocycle. Fix an \(\alpha\)-trace \(r:A\to A\) with associated lifting \(l\). Take \(f\in\tau\) with \(n=\operatorname{ar}f>1\) and choose \(I\subseteq\{1,\ldots,n\}\). Choose \(\bar{c},\bar{c}^{\prime}\in A^{n}\) such that \(c_{i}=c_{i}^{\prime}\) for \(i\in I\). Let \(\pi(c_{i})=q_{i}\) and \(\pi(c_{i}^{\prime})=q_{i}^{\prime}\) for \(i=1,\ldots,n\). Then by realization we have for \(i\in I\)
\[a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right) =\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\] \[a(f,i)\left(q_{1}^{\prime},\ldots,q_{i-1}^{\prime},\begin{bmatrix} r(c_{i}^{\prime})\\ c_{i}^{\prime}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1}^{\prime},\ldots,q_{n }^{\prime}\right) =\begin{bmatrix}u^{\prime}\\ b_{i}\end{bmatrix}/\Delta_{\alpha\alpha}.\]
where \(u=f(r(c_{1}),\ldots,r(c_{n}))\), \(u^{\prime}=f(r(c_{1}^{\prime}),\ldots,r(c_{n}^{\prime}))\) and \(a_{i}=f(r(c_{1}),\ldots,r(c_{i}),c_{i},r(c_{i+1}),\ldots,r(c_{n}))\), \(b_{i}=f(r(c_{1}^{\prime}),\ldots,r(c_{i}^{\prime}),c_{i}^{\prime},r(c_{i+1}^{ \prime}),\ldots,r(c_{n}^{\prime}))\). Note the diagonal is a single \(\Delta_{\alpha 1}\)-class; that is, \(\begin{bmatrix}x\\ x\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}y\\ y\end{bmatrix}\) for all \(x,y\in A\). Denote the \(\Delta_{\alpha 1}\)-class of the diagonal by \(\hat{\delta}\). Then by passing to the quotient to \(A(\alpha)/\Delta_{\alpha 1}\) we have
\[\begin{bmatrix}u\\ \sum_{i\in I}^{u}a_{i}\end{bmatrix}/\Delta_{\alpha 1} =\sum_{i\in I}^{u}\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\sum_{i\in I}^{u}f\left(\delta(c_{1}),\ldots,\delta(c_{i-1}), \begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(c_{i+1}),\ldots,\delta(c_{n}) \right)/(\Delta_{\alpha 1}/\Delta_{\alpha\alpha})\] \[=\sum_{i\in I}^{u}f\left(\hat{\delta},\ldots,\hat{\delta}, \begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix},\hat{\delta},\ldots,\hat{\delta}\right)/\Delta_{\alpha 1}\] \[=\sum_{i\in I}^{u}f\left(\delta(c_{1}^{\prime}),\ldots,\delta(c_{i-1 }^{\prime}),\begin{bmatrix}r(c_{i}^{\prime})\\ c_{i}^{\prime}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(c_{i+1}^{\prime}), \ldots,\delta(c_{n}^{\prime})\right)/(\Delta_{\alpha 1}/\Delta_{\alpha\alpha})\] \[=\sum_{i\in I}^{u^{\prime}}\begin{bmatrix}u^{\prime}\\ b_{i}\end{bmatrix}/\Delta_{\alpha 1}\] \[=\begin{bmatrix}u^{\prime}\\ \sum_{i\in I}^{u^{\prime}}b_{i}\end{bmatrix}/\Delta_{\alpha 1}\]
Since \(\alpha\) is central, by Lemma 2.1(1) we conclude \(\sum_{i\in I}^{u}a_{i}=\left(\sum_{i\in I}^{u^{\prime}}b_{i}\right)+_{u^{\prime} }u\). Write \(I=\{i_{1},\ldots,i_{k}\}\). Using the difference term identity \(m(u^{\prime},u^{\prime},u)=u\) we see that
\[\sum_{i\in I}^{u}\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}u\\ a_{i_{1}}+_{u}\cdots+_{u}a_{i_{k}}\end{bmatrix}/\Delta_{\alpha\alpha} =\begin{bmatrix}u^{\prime}+_{u^{\prime}}u\\ (b_{i_{1}}+_{u^{\prime}}\cdots+_{u^{\prime}}b_{i_{k}})+_{u^{\prime}}u\end{bmatrix}/ \Delta_{\alpha\alpha}\] \[=\begin{bmatrix}u^{\prime}\\ (b_{i_{1}}+_{u^{\prime}}\cdots+_{u^{\prime}}b_{i_{k}})\end{bmatrix}/\Delta_{ \alpha\alpha}+_{u^{\prime}}\delta(u)\] \[=\sum_{i\in I}^{u^{\prime}}\begin{bmatrix}u^{\prime}\\ b_{i}\end{bmatrix}/\Delta_{\alpha\alpha}+_{u^{\prime}}\delta(u)\]
By Definition 3.36, this is precisely the triviality condition for the action.

Now assume the action is trivial. Consider an extension \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\) realizing the datum in a variety with a difference term. Fix an \(\alpha\)-trace \(r\) and associated lifting \(l\). We directly verify the term condition for centrality in any extension realizing the datum. Take \(f\in\tau\) with \(1\leq k<n=\operatorname{ar}f\) and \(\bar{a},\bar{b}\in A^{k}\), \(\bar{c},\bar{d}\in A^{n-k}\) such that \((c_{i},d_{i})\in\alpha\) for \(i=1,\ldots,n-k\). Assume
\[F_{f}\left(\begin{bmatrix}r(\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=F_{f}\left(\begin{bmatrix}r (\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right). \tag{25}\]
By realization, we can combine the terms in the definition of the operation \(F_{f}\) to rewrite Eq.(25) as
\[f\left(\begin{bmatrix}r(\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{c})\right)+_{u}f\left( \delta(r(\bar{a})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}T_{f}(\pi(\bar{a}),\pi( \bar{c}))\] \[=f\left(\begin{bmatrix}r(\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{d})\right)+_{u}f\left( \delta(r(\bar{a})),\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}T_{f}(\pi(\bar{a}),\pi( \bar{d}))\]
Then \((c_{i},d_{i})\in\alpha\Rightarrow\delta(c_{i})=\delta(d_{i})\) and \(\pi(c_{i})=\pi(d_{i})\). This implies that in the above expression the first terms on the left-side and right-side and the \(2\)-cocycle terms are equal. After canceling we conclude
\[f\left(\delta(r(\bar{a})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=f\left(\delta(r(\bar{a})), \begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right). \tag{26}\]
We can write \(u=l(f(\pi(\bar{a}),\pi(\bar{c})))=l(f(\pi(\bar{a}),\pi(\bar{d})))\) and \(u^{\prime}=l(f(\pi(\bar{b}),\pi(\bar{c})))=l(f(\pi(\bar{b}),\pi(\bar{d})))\). Then we can use realization to re-expand Eq.(26) into action terms and apply triviality to yield
\[f\left(\delta(r(\bar{b})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] \[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{b}),\pi(c_{1}),\ldots,\pi( c_{i-1}),\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(c_{i+1}),\ldots,\pi(c_{n-k})\right)\] \[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{a}),\pi(c_{1}),\ldots,\pi( c_{i-1}),\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(c_{i+1}),\ldots,\pi(c_{n-k}) \right)+_{u}\delta(u^{\prime})\] \[=f\left(\delta(r(\bar{a})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}\delta(u^{\prime})\] \[=f\left(\delta(r(\bar{a})),\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}\delta(u^{\prime})\] \[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{a}),\pi(d_{1}),\ldots,\pi( d_{i-1}),\begin{bmatrix}r(d_{i})\\ d_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(d_{i+1}),\ldots,\pi(d_{n-k}) \right)+_{u}\delta(u^{\prime})\]
\[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{b}),\pi(d_{1}),\ldots,\pi(d_{i-1}), \begin{bmatrix}r(d_{i})\\ d_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(d_{i+1}),\ldots,\pi(d_{n-k})\right)\] \[=f\left(\delta(r(\bar{b})),\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right).\]
Again, \(\delta(c_{i})=\delta(d_{i})\) and \(\pi(c_{i})=\pi(d_{i})\) imply
\[f\left(\begin{bmatrix}r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{c})\right)=f\left( \begin{bmatrix}r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{d})\right)\qquad\text{ and}\qquad T_{f}(\pi(\bar{b}),\pi(\bar{c}))=T_{f}(\pi(\bar{b}),\pi(\bar{d})).\]
Putting the last three systems of equations together we conclude that
\[F_{f}\left(\begin{bmatrix}r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=F_{f}\left(\begin{bmatrix} r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right).\]
which shows \([1,\alpha]=0\) in \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\).
**Remark 3.38**.: The proof of necessity in Proposition 3.37 does not require the hypothesis of a Mal'cev condition in the variety generated by any extension of the datum; that is, if the action is trivial, then the kernel in every extension realizing the datum is right-central.
**Remark 3.39**.: Suppose \((Q,A^{\alpha,\tau},*)\) is affine datum with trivial action realized by \(A\). Let \(\kappa:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha 1}\) be the canonical homomorphism. Then
\[\kappa\circ a(f,i)(y_{1},\ldots,y_{i-1},x,y_{i+1},\ldots,y_{n})\]
depends only on the i-th coordinate.
In the case of affine datum with trivial action, the second-cohomology \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) is an abelian group of equivalence classes of right-central extensions which realize the datum; however, to guarantee that we recover every central extension realizing the datum we must rely on Proposition 3.37 together with Theorem 3.29 in the case of varieties with a difference term.
**Theorem 3.40**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action and \(\mathcal{U}\) a variety with a difference term containing the datum. The abelian group \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) is in bijective correspondence with the set of equivalence classes of central extensions in \(\mathcal{U}\) realizing the datum.
We offer a tentative definition for a \(1^{\text{st}}\)-cohomology group associated to affine datum. The abelian group of derivations is isomorphic to the group of stabilizing automorphisms of an extension. In the case of groups, the principal derivations correspond to the stabilizing automorphisms of the semidirect product which act by conjugation. The approach to principal derivation for affine datum follows in this line and finds application in Wires [14] for a low-dimensional Hochschild-Serre sequence associated to a general extension with an additional affine action.
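To make the group-theoretic specialization concrete, the following minimal Python sketch (our own illustration; the choice of \(Q=\mathbb{Z}/2\) acting on \(A=\mathbb{Z}/3\) by inversion, and all variable names, are ours and not from the text) enumerates the derivations \(d:Q\to A\), the principal derivations given by conjugation in the semidirect product, and the resulting \(1^{\text{st}}\)-cohomology.

```python
from itertools import product

# Classical specialization: Q = Z/2 acts on A = Z/3 by inversion.
Q, A = [0, 1], [0, 1, 2]

def act(g, a):
    """Action of g in Q on a in A: the nontrivial element inverts."""
    return a % 3 if g == 0 else (-a) % 3

def is_derivation(d):
    """Crossed-homomorphism condition d(g + h) = d(g) + g * d(h)."""
    return all(d[(g + h) % 2] == (d[g] + act(g, d[h])) % 3 for g in Q for h in Q)

Z1 = [dict(zip(Q, v)) for v in product(A, repeat=2)
      if is_derivation(dict(zip(Q, v)))]

# Principal derivations d_a(g) = g*a - a (conjugation by a in the semidirect product).
PDer = {tuple(sorted((g, (act(g, a) - a) % 3) for g in Q)) for a in A}

print(len(Z1), len(PDer))                 # 3 3
print("|H^1| =", len(Z1) // len(PDer))    # 1: every derivation is principal
```

Here \(H^{1}=0\), reflecting the fact that for this datum every stabilizing automorphism of the semidirect product is given by conjugation.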
For any algebra \(A\) and \(\alpha\in\operatorname{Con}A\), unary functions \(f,h:A\to A\) are \(\alpha\)-twins if there is a term \(t(x,\bar{y})\) and \(\bar{c}\;\alpha\;\bar{d}\) such that \(f(x)=t(x,\bar{c})\) and \(h(x)=t(x,\bar{d})\). The set of \(\alpha\)-twins of the identity is denoted by \(\operatorname{Tw}_{\alpha}A\); that is, \(\gamma\in\operatorname{Tw}_{\alpha}A\) if there is a term \(t(x,\bar{y})\) and \(\bar{c}\;\alpha\;\bar{d}\) such that \(\gamma(x)=t(x,\bar{c})\) and \(x=t(x,\bar{d})\). We can restrict to a subset
\[\operatorname{Tw}_{\alpha,F}A=\{\gamma\in\operatorname{Tw}_{\alpha}A:(\exists x \in A),\gamma(x)=x\}\]
of those \(\alpha\)-twins of the identity which have a fixed-point. In general, \(\operatorname{Tw}_{\alpha}A\) is closed under composition and \(\operatorname{Tw}A\) is closed under conjugation by automorphisms of \(A\), but \(\operatorname{Tw}_{\alpha,F}A\) is neither. Given affine datum \((Q,A^{\alpha,\tau},*)\) we consider the set of _principal stabilizing automorphisms_
\[\operatorname{PStab}(Q,A^{\alpha,\tau},*)=\operatorname{Tw}_{\bar{\alpha}/ \Delta_{\alpha\alpha},F}A(\alpha)/\Delta_{\alpha\alpha}\cap\operatorname{Stab} (\pi:A(\alpha)/\Delta_{\alpha\alpha}\to Q).\]
**Definition 3.41**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. A map \(d:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) is a _principal derivation_ if \(d(\pi(x))+_{l(x)}x\in\operatorname{PStab}(Q,A^{\alpha,\tau},*)\) for any lifting \(l\) of \(\pi:A(\alpha)/\Delta_{\alpha\alpha}\to Q\).
If we simplify notation and write the extension of the datum \(A\approx A(\alpha)/\Delta_{\alpha\alpha}\stackrel{{\pi}}{{\to}}Q\) isomorphic to the semidirect product witnessed by \(\phi\) from Theorem 3.21, then a principal derivation \(d\) takes the form
\[\phi(\gamma(x))=d(\pi(x))+_{r(x)}\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\]
for some \(\gamma\in\operatorname{PStab}(Q,A^{\alpha,\tau},*)\) and any \(\alpha\)-trace \(r:A\to A\); the principal derivations are just those derivations which correspond, under the isomorphism of Theorem 3.34, to principal stabilizing automorphisms. Denote by \(\operatorname{PDer}(Q,A^{\alpha,\tau},*)\) the subgroup of \(Z^{1}(Q,A^{\alpha,\tau},*)\) generated by the principal derivations.
**Definition 3.42**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. The \(1^{\operatorname{st}}\)-cohomology of the datum is defined as the quotient group
\[H^{1}(Q,A^{\alpha,\tau},*):=Z^{1}(Q,A^{\alpha,\tau},*)/\operatorname{PDer}(Q, A^{\alpha,\tau},*).\]
**Lemma 3.43**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action. Then \(H^{1}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A(\alpha)/\Delta_{ \alpha\alpha}\to Q)\).
Proof.: By Remark 3.38, the kernel in the semidirect product realizing the datum is right-central. For \(\gamma\in\operatorname{PStab}(Q,A^{\alpha,\tau},*)\), we have \(\gamma(x)=t(x,\bar{c})\), \(x=t(x,\bar{d})\) for some term with \(\bar{c}\;\alpha\;\bar{d}\) and there exists \(a\in A\) such that \(a=\gamma(a)\). The matrix
\[\begin{bmatrix}a&\gamma(x)\\ a&x\end{bmatrix}=\begin{bmatrix}\gamma(a)&\gamma(x)\\ a&x\end{bmatrix}=\begin{bmatrix}t(a,\bar{c})&t(x,\bar{c})\\ t(a,\bar{d})&t(x,\bar{d})\end{bmatrix}\in M(\alpha,1)\]
implies \((\gamma(x),x)\in[1,\alpha]=0\); thus, the subgroup generated by the principal stabilizing automorphisms is trivial. Then Theorem 3.34 yields \(H^{1}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A(\alpha)/\Delta_{\alpha\alpha}\to Q)\).
According to Proposition 3.37, the hypothesis of the previous lemma holds for central extensions in varieties with a difference term.
**Definition 3.44**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action and \(Q\) an abelian algebra. For a variety \(\mathcal{V}\) in the same signature as the datum, let \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\) denote the set of equivalence classes of \(\mathcal{V}\)-compatible \(2\)-cocycles which represent extensions realizing the datum which are abelian algebras.
The definition is well-defined since equivalence of extensions of datum is finer than isomorphism, and isomorphism preserves the abelian property of an algebra. The following is a far-reaching generalization of the classical observation from group theory where abelian extensions are characterized by symmetric \(2\)-cocycles; in the case of more general varieties with a weak-difference term, the abelian extensions are characterized by the \(2\)-cocycle identities (see Definition 3.14(C2)) corresponding to the axiomatization of the abelian subvariety.
**Corollary 3.45**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action and \(Q\) an abelian algebra. Let \(\mathcal{V}\) be a variety with a weak-difference term in the same signature as the datum. Then either \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\) is empty or \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\leq H^{2}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\).
Proof.: Since \(\mathcal{V}\) has a weak-difference term, the abelian algebras of \(\mathcal{V}\) form a subvariety \(\mathcal{A}\). If \(\mathcal{A}\) does not contain the datum, then \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\) is empty; otherwise, \(\mathcal{A}\in\mathcal{L}(Q,A^{\alpha,\tau},*)\) and the embedding follows from Proposition 3.30 after noting \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)=H^{2}_{\mathcal{A}}(Q,A ^{\alpha,\tau},*)\).
**Acknowledgments 3.46**.: The research in this manuscript was supported by NSF China Grant #12071374. | For an arbitrary variety of universal algebras, we develop a theory characterizing extensions in terms of first and second cohomology groups. These are the extensions realizing affine datum and, in varieties with a weak-difference term, they coincide with the extensions with abelian kernel. Since varieties with a weak-difference term constitute a broad generalization of group-like algebras with multiple operators, including those with modular congruence lattices, this recovers many classical examples of extensions with abelian coefficients. We introduce a notion of action and relate its model-theoretic properties to sets of equations. In varieties with a difference term, central extensions are characterized by a property of their actions. If we restrict further to a subclass of varieties with a difference term |
2309.10320 | Inverse Formulae for $q$-analogues of Bipartite Distance Matrix | We consider two distinct $q$-analogues of the bipartite distance matrix,
namely the $q$-bipartite distance matrix and the exponential distance matrix.
We provide formulae of the inverse for these matrices, which extend the
existing results for the bipartite distance matrix. These investigations lead
us to introduce a $q$-analogue version of the bipartite Laplacian matrix. | Rakesh Jana | 2023-09-19T05:07:39 | http://arxiv.org/abs/2309.10320v1 | # Inverse Formulae for \(q\)-analogues of Bipartite Distance Matrix
###### Abstract
We consider two distinct \(q\)-analogues of the bipartite distance matrix, namely the \(q\)-bipartite distance matrix and the exponential distance matrix. We provide formulae of the inverse for these matrices, which extend the existing results for the bipartite distance matrix. These investigations lead us to introduce a \(q\)-analogue version of the bipartite Laplacian matrix.
keywords: Tree, Distance Matrix, Bipartite Distance Matrix

MSC[2020]: 05C50, 15A15, 15A18
## 1 Introduction
The distance matrix finds numerous applications across various fields, including chemistry, molecular biology, and telecommunications. In 1971, Graham and Pollack [11] presented a remarkable result that the determinant of the distance matrix of a tree depends solely on the number of vertices in the tree. Specifically, for a tree \(T\) with \(n\) vertices, the determinant of its distance matrix is given by \(\det D(T)=(-1)^{n-1}2^{n-2}(n-1)\). This groundbreaking discovery sparked significant interest among researchers. In the same paper [11], Graham and Pollack established a connection between the loop addressing problem in data communication systems and the count of negative eigenvalues of the distance matrix. Subsequently, in 1977, Graham, Hoffman, and Hosoya [9] demonstrated that the distance matrix \(D(G)\) of a graph \(G\) depends solely on its constituent blocks, regardless of how these blocks are connected. In 1978, Graham and Lovasz [10] provided a formula to compute the inverse of the distance matrix for trees. Bapat [1] and Bapat et al. [3] extended many results related to the distance matrix of trees to weighted trees. In 2006, Bapat, Lal, and Pati [4] introduced two types of \(q\)-analogue versions of the distance matrix: the \(q\)-distance matrix and the exponential distance matrix. These \(q\)-analogue versions generated considerable interest and were explored by various researchers (see, for example, [4; 17; 5; 13]). Let us revisit the definitions of the \(q\)-distance matrix and exponential distance matrix for a graph \(G\).
Let \(q\) be an indeterminate. For a positive integer \(k\), we define \(\{\!\!\{k\}\!\!\}:=1+q+q^{2}+\ldots+q^{k-1}\), and we set \(\{\!\!\{0\}\!\!\}:=0\). Let \(G\) be a connected graph with vertex set \(\{v_{1},\ldots,v_{n}\}\). The \(q\)_-distance matrix_ \(D_{q}(G)\) of \(G\) is the \(n\times n\) matrix whose \((i,j)\)th entry is \(\{\!\!\{\mathtt{dist}(v_{i},v_{j})\}\!\!\}\), and the _exponential distance matrix_ of \(G\) is the \(n\times n\) matrix whose \((i,j)\)th entry is \(q^{\mathtt{dist}(v_{i},v_{j})}\). Clearly, for \(q=1\), the \(q\)-distance matrix coincides with the usual distance matrix \(D(G)\).
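For instance, the following short sympy sketch (our own illustration, not from [4]; the choice of the path \(P_{3}\) and all variable names are ours) builds \(D_{q}\) and the exponential distance matrix directly from these definitions.

```python
import sympy as sp

q = sp.symbols('q')
bracket = lambda k: sum(q**i for i in range(k))  # {{k}} = 1 + q + ... + q^(k-1); {{0}} = 0

# Pairwise distances in the path P_3: v1 - v2 - v3.
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]

Dq = sp.Matrix(3, 3, lambda i, j: bracket(dist[i][j]))  # q-distance matrix
E  = sp.Matrix(3, 3, lambda i, j: q**dist[i][j])        # exponential distance matrix

print(Dq)  # Matrix([[0, 1, q + 1], [1, 0, 1], [q + 1, 1, 0]])
print(E)   # Matrix([[1, q, q**2], [q, 1, q], [q**2, q, 1]])
```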
For a bipartite graph, it is often convenient to work with smaller matrices that store adjacency information efficiently (see [7]). Consider a labelled, connected, bipartite graph \(G\) with the vertex bipartition \((L=\{l_{1},\ldots,l_{m}\},R=\{r_{1},\ldots,r_{n}\})\). The _bipartite adjacency matrix_\(\mathbb{A}(G)\) of \(G\) is an \(m\times n\) matrix, with rows indexed by \(l_{1},\ldots,l_{m}\) and columns indexed by \(r_{1},\ldots,r_{n}\). The \((i,j)\)th entry of \(\mathbb{A}(G)\) is \(1\) if \(l_{i}\sim r_{j}\), and \(0\) otherwise. The _bipartite distance matrix_\(\mathbb{B}(G)\) of \(G\) is an \(m\times n\) matrix, with rows indexed by \(l_{1},\ldots,l_{m}\) and columns indexed by \(r_{1},\ldots,r_{n}\). The \((i,j)\)th entry of \(\mathbb{B}(G)\) is \(\mathtt{dist}(l_{i},r_{j})\).
Significant research has been conducted on the bipartite adjacency matrix of bipartite graphs with unique perfect matchings (see [8, 15, 14, 16, 19] and references therein). The bipartite distance matrix for nonsingular trees, i.e., trees with perfect matchings, was studied in [2]. It was shown that, unlike the determinant of the distance matrix of a tree, which is independent of the tree's structure, the determinant of the bipartite distance matrix of a nonsingular tree depends on its structure. A combinatorial expression for the determinant of the bipartite distance matrix of a nonsingular tree was presented in [2]. Remarkably, the bipartite distance matrix of a nonsingular tree is always invertible, and its inverse is provided in [6]. During the investigation of the inverse of the bipartite distance matrix, the authors in [6] uncovered a nontrivial generalization of the usual Laplacian matrix of a tree, which they have termed the _bipartite Laplacian matrix_. (We provide the definition of the bipartite Laplacian matrix in Section 2.) Unlike the usual Laplacian matrix, the bipartite Laplacian matrix is not necessarily symmetric, but it possesses many elementary properties akin to the usual Laplacian matrix; see Theorem 2.
In [12], Jana studied two distinct \(q\)-analogue versions of the bipartite distance matrix, namely, the \(q\)-bipartite distance matrix and the exponential bipartite distance matrix. Let us revisit the definitions of these matrices for a bipartite graph \(G\).
Let \(G\) be a connected, labelled, bipartite graph with the vertex bipartition \((L=\{l_{1},\ldots,l_{m}\},R=\{r_{1},\ldots,r_{n}\})\). The \(q\)_-bipartite distance matrix_\(\mathfrak{B}(G)\) (respectively, _exponential bipartite distance matrix_\(\mathbb{E}(G)\)) of \(G\) is an \(m\times n\) matrix, with rows indexed by \(l_{1},\ldots,l_{m}\) and columns indexed by \(r_{1},\ldots,r_{n}\). The \((i,j)\)th entry of \(\mathfrak{B}(G)\) (respectively, \(\mathbb{E}(G)\)) is \(\{\!\!\{\mathtt{dist}_{G}(l_{i},r_{j})\}\!\!\}\) (respectively, \(q^{\mathtt{dist}_{G}(l_{i},r_{j})}\)).
If \(q=1\) then \(\mathfrak{B}(G)=\mathbb{B}(G)\) and \(\mathbb{E}(G)=\mathds{1}\mathds{1}^{t}\). Therefore, the \(q\)-bipartite distance matrix is a generalization of the bipartite distance matrix.
Let \(T\) be a nonsingular tree on \(2p\) vertices. It is shown in [12] that \(\det\mathbb{E}(T)=q^{p}(1-q^{2})^{p-1}\) and \(\det\mathfrak{B}(T)\) is divisible by \(q^{p-1}(1+q)^{p-1}\). Define \(\mathtt{bd}_{\mathfrak{q}}(T)\), the \(q\)-bipartite distance index of \(T\), as \(\mathtt{bd}_{\mathfrak{q}}(T):=\frac{\det\mathfrak{B}(T)}{q^{p-1}(1+q)^{p-1}}.\) Therefore, \(\det\mathfrak{B}(T)=q^{p-1}(1+q)^{p-1}\mathtt{bd}_{\mathfrak{q}}(T)\). The \(q\)-bipartite distance index of \(T\) follows an inclusion-exclusion type principle, enabling the recursive calculation of the \(q\)-bipartite distance index of any nonsingular tree. For more details, please refer to [12, Theorem 20]. Additionally, a standalone combinatorial formula for the \(q\)-bipartite distance index of a tree \(T\) is provided in [12, Theorem 23], by using the \(f\)-alternating sum technique (see [2, Section 5]).
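Both determinant facts are easy to verify symbolically on the smallest nonsingular tree, the path \(P_{4}=l_{1}-r_{1}-l_{2}-r_{2}\) with matching edges \([l_{1},r_{1}]\) and \([l_{2},r_{2}]\) (so \(p=2\)). The sketch below is our own verification code, not part of [12].

```python
import sympy as sp

q = sp.symbols('q')
bracket = lambda k: sum(q**i for i in range(k))  # {{k}}

# P_4 = l1 - r1 - l2 - r2; distances from l's (rows) to r's (columns).
dist = [[1, 3],
        [1, 1]]
p = 2

B = sp.Matrix(p, p, lambda i, j: bracket(dist[i][j]))  # q-bipartite distance matrix
E = sp.Matrix(p, p, lambda i, j: q**dist[i][j])        # exponential bipartite distance matrix

assert sp.expand(E.det() - q**p * (1 - q**2)**(p - 1)) == 0
bd_q = sp.cancel(B.det() / (q**(p - 1) * (1 + q)**(p - 1)))
print(bd_q)  # -1, the q-bipartite distance index of P_4
```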
It is evident that for a nonsingular tree \(T\), the exponential bipartite distance matrix of \(T\) is invertible when \(q\neq 0,\pm 1\). Moreover, when \(q\neq 0,-1\) and \(\mathtt{bd}_{\mathfrak{q}}(T)\neq 0\), the \(q\)-bipartite distance matrix of \(T\) is also invertible. In this article, our main focus is to provide the inverse formulas for both the exponential bipartite distance matrix and the \(q\)-bipartite distance matrix of a nonsingular tree whenever they exist.
We organize the article as follows. In Section 2, we discuss how to obtain the inverse of the bipartite
distance matrix of a nonsingular tree using the bipartite Laplacian matrix. In Section 3, we define the \(q\)-analogous version of the bipartite Laplacian matrix and explore its behaviour when extending a nonsingular tree by attaching a new \(P_{2}\) at some vertex of the old tree. In Section 4, we study the inverse of the exponential bipartite distance matrix of a nonsingular tree. In Section 5, we introduce \(q\)-analogous versions of a few more vectors and finally provide the inverse formula for the \(q\)-bipartite distance matrix of a nonsingular tree.
## 2 Inverse of the bipartite distance matrix
Recently, in [6], it was shown that the inverse of the bipartite distance matrix of a nonsingular tree \(T\) is a rank one update of a Laplacian matrix, which is referred to as the bipartite Laplacian matrix. Remarkably, the usual Laplacian matrix of any tree can be seen as a special case of the bipartite Laplacian matrix. Before providing the definition of the bipartite Laplacian matrix for a nonsingular tree, let's revisit some definitions.
Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). An alternating path in \(T\) is called an _even alternating path_ if it contains even number of matching edges and an alternating path in \(T\) is called an _odd alternating path_ if it contains odd number of matching edges.
**Definition**.: Let \(T\) be a nonsingular tree on \(2p\) vertices and \((L,R)\) be a standard vertex bipartition of \(T\). The _bipartite Laplacian matrix_ of \(T\), denoted by \(\mathfrak{L}(T)\) or simply by \(\mathfrak{L}\), is the \(p\times p\) matrix whose rows are indexed by \(r_{1},\ldots,r_{p}\) and the columns are indexed by \(l_{1},\ldots,l_{p}\). The \((i,j)\)th entry of \(\mathfrak{L}(T)\), denoted by \(\mathfrak{L}_{ij}\), is defined as
\[\mathfrak{L}_{ij}=\left\{\begin{array}{rl}d(r_{i})d(l_{i})-1&\text{if $i=j$;}\\ d(r_{i})d(l_{j})&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an odd alternating path;}\\ -d(r_{i})d(l_{j})&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an even alternating path;}\\ -1&\text{if $i\neq j$ and $r_{i}\sim l_{j}$;}\\ 0&\text{otherwise.}\end{array}\right.\]
Here we want to emphasize that in the definition of \(\mathbb{B}(T),\mathfrak{B}(T)\), and \(\mathbb{E}(T)\) we indexed rows by \(l_{1},\ldots,l_{p}\) and columns by \(r_{1},\ldots,r_{p}\). But in the definition of bipartite Laplacian matrix we indexed rows by \(r_{1},\ldots,r_{p}\) and columns by \(l_{1},\ldots,l_{p}\).
In the following result we highlight some similarities between the bipartite Laplacian matrix and the usual Laplacian matrix.
**Theorem 2**.: _[_6_, Theorem 9]_ _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(\mathfrak{L}\) is the bipartite Laplacian matrix of \(T\). Then the following assertions hold._
1. _The row and the column sums of_ \(\mathfrak{L}\) _are zero. (A similar property also holds for the usual Laplacian matrix of any graph.)_
2. _The cofactors of any two elements of_ \(\mathfrak{L}\) _are equal to one. (In the case of the usual Laplacian matrix of a graph, the cofactors of any two elements are equal to the number of spanning trees.)_
3. _The rank of_ \(\mathfrak{L}\) _is_ \(p-1\)_. (A similar result holds for the usual Laplacian matrix of a connected graph.)_
4. _The algebraic multiplicity of_ \(0\) _as an eigenvalue of_ \(\mathfrak{L}\) _is one. (This property is also true for the usual Laplacian matrix of a connected graph.)_
* _If_ \(\mathbf{u}\) _is an eigenvector of_ \(\mathfrak{L}\) _corresponding to an eigenvalue_ \(\lambda\neq 0\) _then_ \(\mathds{1}^{t}\mathbf{u}=0\)_. (A similar property holds for the usual Laplacian matrix of any graph.)_
* _If_ \(T=F\circ K_{1}\) _for some tree_ \(F\) _on_ \(p\) _vertices, then_ \(\mathcal{L}(F)=\mathfrak{L}(T)\)_, where_ \(\mathcal{L}\) _is the usual Laplacian matrix of_ \(F\)_. (This means that the usual Laplacian matrix of a tree_ \(F\) _can be seen as a bipartite Laplacian matrix of another tree_ \(T\)_.)_
* _The matrix_ \(\mathfrak{L}\) _is a symmetric matrix if and only if_ \(T\) _is a corona tree, that is, if_ \(T=F\circ K_{1}\) _for some tree_ \(F\)_. (In contrast, the usual Laplacian matrix is always symmetric.)_
It's worth noting that in their work [6], Bapat, Jana, and Pati put forward a conjecture regarding the bipartite Laplacian matrix of a nonsingular tree. They proposed that this matrix is diagonalizable, and all of its eigenvalues are nonnegative real numbers. This conjecture, coupled with the properties outlined in Theorem 2, highlights the potential significance and relevance of the bipartite Laplacian matrix in the realm of nonsingular trees.
Similar to the inverse of the usual distance matrix of a tree, which can be viewed as a rank one update of its Laplacian matrix as shown in [3], the bipartite distance matrix of a nonsingular tree follows a similar pattern. It can also be seen as a rank one update of its bipartite Laplacian matrix. Before stating the result let us define some useful terminologies.
Let \(v\) be a vertex in a nonsingular tree \(T\). By \(\mathcal{A}^{+}_{T,v}\) we denote the set of all even alternating paths in \(T\) that start at \(v\). Similarly, we denote the set of all odd alternating paths in \(T\) that start at \(v\) by \(\mathcal{A}^{-}_{T,v}\). By \(\mathtt{diff}_{T}(v)\), we mean the quantity \(\mathtt{diff}_{T}(v):=|\mathcal{A}^{+}_{T,v}|-|\mathcal{A}^{-}_{T,v}|\). Let us define the vector \(\mathbf{\tau}\) by \(\mathbf{\tau}(v):=1-d(v)(1+\mathtt{diff}(v))\) for each \(v\in T\).
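For example, take the path \(P_{4}=l_{1}-r_{1}-l_{2}-r_{2}\) with matching edges \([l_{1},r_{1}]\) and \([l_{2},r_{2}]\), under the convention (used, e.g., in the proof of Lemma 4 below) that an alternating path begins and ends with matching edges. From \(l_{1}\) the alternating paths are \([l_{1},r_{1}]\) (odd) and \([l_{1},r_{1},l_{2},r_{2}]\) (even), so \(\mathtt{diff}(l_{1})=0\); from \(r_{1}\) the only alternating path is \([r_{1},l_{1}]\), so \(\mathtt{diff}(r_{1})=-1\); similarly \(\mathtt{diff}(l_{2})=-1\) and \(\mathtt{diff}(r_{2})=0\). Consequently \(\boldsymbol{\tau}(l_{1})=\boldsymbol{\tau}(r_{2})=0\) and \(\boldsymbol{\tau}(r_{1})=\boldsymbol{\tau}(l_{2})=1\).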
**Theorem 3**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\mathbb{B}(T)\) and \(\mathfrak{L}(T)\) be the bipartite distance matrix and the bipartite Laplacian matrix of \(T\), respectively. Let \(\mathbf{\tau}_{r}(T)\) and \(\mathbf{\tau}_{l}(T)\) be the restrictions of the vector \(\mathbf{\tau}(T)\) on \(R\) and \(L\), respectively. Then_
\[\mathbb{B}(T)^{-1}=-\frac{1}{2}\mathfrak{L}(T)+\frac{1}{\mathtt{bd}(T)}\mathbf{ \tau}_{r}(T)\mathbf{\tau}_{l}^{t}(T).\]
In this article, we mostly employ the proof strategy used in [6], which involves using information from the existing tree to establish the inductive case for a tree formed by attaching a \(P_{2}\) to a vertex in the old tree. Below we provide the definition of attaching a \(P_{2}\) to a vertex.
**Definition** (Attaching a \(P_{2}\) to a vertex).: (a) Consider a tree \(T\) and a vertex \(v\). Let \(\widehat{T}\) denote the tree resulting from \(T\) by adding two new vertices, \(u\) and \(w\), along with the edges \([v,u]\) and \([u,w]\). We refer to this operation as _attaching a new \(P_{2}\) at the vertex \(v\)_.
(b) For a nonsingular tree \(T\) with a vertex set of size \(2p\) and a standard vertex bipartition \((L,R)\), let \(\widehat{T}\) be the tree formed by attaching a new \(P_{2}\) to some vertex \(v\in T\). To compute its \(q\)-bipartite distance matrix, exponential distance matrix and the bipartite Laplacian matrix, we label the new vertices according to the following procedure:
i) if \(v\in L\), then we put \(u=r_{p+1}\), \(w=l_{p+1}\), and ii) if \(v\in R\), then we put \(u=l_{p+1}\), \(w=r_{p+1}\).
## 3 \(q\)-bipartite Laplacian Matrix
As our primary focus is to provide the inverse of the \(q\)-bipartite distance matrix, we begin by introducing the \(q\)-analogous version of the bipartite Laplacian matrix, which we call the \(q\)_-bipartite Laplacian matrix_. For a positive integer \(k\), we define \(k_{q}\) as \(1+(k-1)q^{2}\).
**Definition**.: Let \(T\) be a nonsingular tree on \(2p\) vertices and \((L,R)\) be a standard vertex bipartition of \(T\). The \(q\)_-bipartite Laplacian matrix_ of \(T\), denoted by \(\mathfrak{C}(T)\) or simply by \(\mathfrak{C}\), is the \(p\times p\) matrix whose rows are indexed by \(r_{1},\ldots,r_{p}\) and the columns are indexed by \(l_{1},\ldots,l_{p}\). The \((i,j)\)th entry of \(\mathfrak{C}(T)\), denoted by \(\mathfrak{C}_{ij}\), is defined as
\[\mathfrak{C}_{ij}=\left\{\begin{array}{rl}d(r_{i})_{q}d(l_{i})_{q}-q^{2}& \text{if $i=j$;}\\ d(r_{i})_{q}d(l_{j})_{q}&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an odd alternating path;}\\ -d(r_{i})_{q}d(l_{j})_{q}&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an even alternating path;}\\ -q^{2}&\text{if $i\neq j$ and $r_{i}\sim l_{j}$;}\\ 0&\text{otherwise.}\end{array}\right.\]
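For example, for the path \(P_{4}=l_{1}-r_{1}-l_{2}-r_{2}\) with matching edges \([l_{1},r_{1}]\) and \([l_{2},r_{2}]\), we have \(d(r_{1})_{q}=d(l_{2})_{q}=1+q^{2}\) and \(d(l_{1})_{q}=d(r_{2})_{q}=1\); moreover \(r_{1}\sim l_{2}\) and the \(r_{2}\)-\(l_{1}\) path is an even alternating path. Hence

\[\mathfrak{C}(P_{4})=\begin{bmatrix}(1+q^{2})\cdot 1-q^{2}&-q^{2}\\ -1\cdot 1&1\cdot(1+q^{2})-q^{2}\end{bmatrix}=\begin{bmatrix}1&-q^{2}\\ -1&1\end{bmatrix},\]

which at \(q=1\) recovers the bipartite Laplacian matrix \(\mathfrak{L}(P_{4})\).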
Clearly, for a nonsingular tree \(T\), if we put \(q=1\), then \(\mathfrak{C}=\mathfrak{L}\), the bipartite Laplacian matrix of \(T\). Therefore, the \(q\)-bipartite Laplacian matrix is a generalization of the usual bipartite Laplacian matrix.
Similar to the signed degree vector defined in [6], we now introduce a \(q\)-analogue of the signed degree vector, which relates the structure of the \(q\)-bipartite Laplacian matrix of the new tree with that of the old one.
**Definition**.: Let \(T\) be a nonsingular tree on \(2p\) vertices with the standard vertex bipartition \((L,R)\) and \(v\) be a vertex. Then the \(q\)_-signed degree vector_ \(\boldsymbol{\mu}_{v}\) at \(v\) is defined in the following way.

1. If \(v\in L\), then for \(i=1,\ldots,p\), we define 1. \(\boldsymbol{\mu}_{v}(i)=(d_{T}(r_{i}))_{q}\) if the \(v\)-\(r_{i}\) path is an odd alternating path, 2. \(\boldsymbol{\mu}_{v}(i)=-(d_{T}(r_{i}))_{q}\) if the \(v\)-\(r_{i}\) path is an even alternating path, and 3. \(\boldsymbol{\mu}_{v}(i)=0\) if the \(v\)-\(r_{i}\) path is not an alternating path.
2. Similarly, if \(v\in R\), then for \(i=1,\ldots,p\), we define 1. \(\boldsymbol{\mu}_{v}(i)=(d_{T}(l_{i}))_{q}\) if the \(v\)-\(l_{i}\) path is an odd alternating path, 2. \(\boldsymbol{\mu}_{v}(i)=-(d_{T}(l_{i}))_{q}\) if the \(v\)-\(l_{i}\) path is an even alternating path, and 3. \(\boldsymbol{\mu}_{v}(i)=0\) if the \(v\)-\(l_{i}\) path is not an alternating path.

Clearly, if \(q=1\), then \(\boldsymbol{\mu}_{v}\) reduces to the signed degree vector of [6] at \(v\), for each \(v\in T\).
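For instance, for \(P_{4}=l_{1}-r_{1}-l_{2}-r_{2}\) as above and \(v=l_{1}\): the \(l_{1}\)-\(r_{1}\) path is an odd alternating path and the \(l_{1}\)-\(r_{2}\) path is an even alternating path, so \(\boldsymbol{\mu}_{l_{1}}=\big{(}(d_{T}(r_{1}))_{q},-(d_{T}(r_{2}))_{q}\big{)}^{t}=(1+q^{2},-1)^{t}\). Its entries sum to \(q^{2}\), in accordance with Lemma 5 below, since \(\mathtt{diff}(l_{1})=0\).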
In the following result, we discuss how the structure of \(\mathfrak{C}\) changes after attaching a \(P_{2}\) to a vertex.
**Lemma 4**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\widehat{T}\) be the tree obtained from \(T\) by attaching a new \(P_{2}\) at \(v\). Let \(\boldsymbol{\mu}_{v}\) be the \(q\)-signed degree vector at \(v\) of \(T\)._

1. _If_ \(v=l_{k}\) _for some_ \(k\)_, then_ \(\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}_{v}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\)_._
2. _If_ \(v=r_{k}\) _for some_ \(k\)_, then_ \(\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}\boldsymbol{e}_{k}\boldsymbol{\mu}_{v}^{t}&-q^{2}\boldsymbol{e}_{k}\\ -\boldsymbol{\mu}_{v}^{t}&1\end{bmatrix}\)_._
Proof.: We only provide the proof of item (a), as the proof of item (b) can be dealt with similarly. Without loss of any generality, let us assume that \(\widehat{T}\) is obtained from \(T\) by adding a new path \([l_{k},r_{p+1},l_{p+1}]\) for some \(1\leq k\leq p\). Clearly \(\widehat{L}=L\cup\{l_{p+1}\}\) and \(\widehat{R}=R\cup\{r_{p+1}\}\) is a standard vertex bipartition of \(\widehat{T}\). Let \(\mathfrak{C}\) and \(\widehat{\mathfrak{C}}\) be the \(q\)-bipartite Laplacian matrices of \(T\) and \(\widehat{T}\), respectively. Since \([r_{p+1},l_{p+1}]\) is the only alternating path that starts at \(r_{p+1}\) and \(d_{\widehat{T}}(r_{p+1})=2\) with \([r_{p+1},l_{k}]\) not a matching edge, it follows that all entries of the \((p+1)\)th row of \(\widehat{\mathfrak{C}}\) are zero except \(\widehat{\mathfrak{C}}(p+1,p+1)=d_{\widehat{T}}(r_{p+1})_{q}d_{\widehat{T}}(l_{p+1})_{q}-q^{2}=1\) and \(\widehat{\mathfrak{C}}(p+1,k)=-q^{2}\). Hence \(\widehat{\mathfrak{C}}(p+1,:)=\begin{bmatrix}-q^{2}e_{k}^{t}&1\end{bmatrix}\).
Let us take \(i=1,\ldots,p\). Then \(r_{i}\nsim l_{p+1}\). Note that the \(l_{k}\)-\(r_{i}\) path is an odd alternating path if and only if the \(l_{p+1}\)-\(r_{i}\) path is an even alternating path. Similarly, the \(l_{k}\)-\(r_{i}\) path is an even alternating path if and only if the \(l_{p+1}\)-\(r_{i}\) path is an odd alternating path. Since \(d_{\widehat{T}}(l_{p+1})=1\), it follows that \(\widehat{\mathfrak{C}}(\{1,\ldots,p\},p+1)=-\boldsymbol{\mu}_{l_{k}}\), where \(\boldsymbol{\mu}_{l_{k}}\) is the \(q\)-signed degree vector of \(T\) at \(l_{k}\).
Since \(d_{T}(u)=d_{\widehat{T}}(u)\) for each \(u\in T\) other than \(l_{k}\), it follows that \(\widehat{\mathfrak{C}}(i,j)=\mathfrak{C}(i,j)\) for each \(i=1,\ldots,p\) and \(j=1,\ldots,k-1,k+1,\ldots,p\).
Finally, notice that \(d_{\widehat{T}}(l_{k})=d_{T}(l_{k})+1\). Therefore, for \(i=1,\ldots,p\), we have
\[\widehat{\mathfrak{C}}(i,k)=\left\{\begin{array}{rl}d_{T}(r_{i})_{q}(d_{T}( l_{k})_{q}+q^{2})-q^{2}&\text{if $i=k$;}\\ d_{T}(r_{i})_{q}(d_{T}(l_{k})_{q}+q^{2})&\text{if $i\neq k$ and the $r_{i}$-$l_{k}$ path is an odd alternating path;}\\ -d_{T}(r_{i})_{q}(d_{T}(l_{k})_{q}+q^{2})&\text{if $i\neq k$ and the $r_{i}$-$l_{k}$ path is an even alternating path;}\\ -q^{2}&\text{if $i\neq k$ and $r_{i}\sim l_{k}$;}\\ 0&\text{otherwise.}\end{array}\right.\]
Therefore, \(\widehat{\mathfrak{C}}(\{1,\ldots,p\},k)=\mathfrak{C}(\{1,\ldots,p\},k)+q^{2}\boldsymbol{\mu}_{l_{k}}\). This completes the proof.
In the next result we extend [6, Lemma 7], which states that the sum of all entries in a signed degree vector is always one.
**Lemma 5**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard bipartition \((L,R)\). Let \(u\) be any vertex in \(T\) and \(\boldsymbol{\mu}_{u}\) be the \(q\)-signed degree vector at \(u\). Then \(\mathds{1}^{t}\boldsymbol{\mu}_{u}=(\mathtt{diff}(u)+1)q^{2}-\mathtt{diff}(u)\)._
Proof.: We proceed by induction on \(p\geq 1\). For \(p=1\) the result is trivial. Assume the result to be true for nonsingular trees with fewer than \(2p\) vertices. Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard bipartition \((L,R)\). Let \(u\in R\). (The case of \(u\in L\) can be dealt with similarly.) Let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector of \(u\) in \(T\).
Suppose \([v_{0},v_{1},\ldots,v_{k}]\) is a longest path in \(T\). As \(p>1\), we have \(k\geq 3\) and so we may assume that \(v_{0},v_{1}\neq u\). As \(T\) is nonsingular and this is a longest path, we have \(d(v_{0})=1\) and \(d(v_{1})=2\). Without loss of any generality, let us assume \(v_{0},v_{1}\in\{l_{p},r_{p}\}\). Let \(\widehat{T}=T-\{v_{0},v_{1}\}\) be the tree obtained from \(T\) by removing the vertices \(v_{0}\) and \(v_{1}\). Clearly, \(u\in\widehat{T}\). Let \(\widehat{\boldsymbol{\mu}}\) be the \(q\)-signed degree vector of \(u\) in \(\widehat{T}\). Note that \(\widehat{\boldsymbol{\mu}}\) is a vector of size \(p-1\). Clearly, \(d_{T}(v)=d_{\widehat{T}}(v)\) for each \(v\in\widehat{T}-v_{2}\) and \(d_{T}(v_{2})=d_{\widehat{T}}(v_{2})+1\). It follows that \(\boldsymbol{\mu}(i)=\widehat{\boldsymbol{\mu}}(i)\) for each \(l_{i}\in L\setminus\{v_{2}\}\).
If either \(v_{2}\in R\) or the \(u\)-\(v_{2}\) path is not an alternating path, then \(\boldsymbol{\mu}=\begin{bmatrix}\widehat{\boldsymbol{\mu}}^{t}&0\end{bmatrix}^{t}\) and the result follows by induction. Now we assume that \(v_{2}\in L\) and the \(u\)-\(v_{2}\) path is an alternating path. Then \(v_{2}\sim r_{p}\) and \(d_{T}(l_{p})=1\). Let \(v_{2}=l_{k}\) for some \(1\leq k<p\). Note that the \(u\)-\(l_{p}\) path is also an alternating path and so we have \(\widehat{\boldsymbol{\mu}}(k)=(-1)^{t}d_{\widehat{T}}(v_{2})_{q}\) for some \(t\) and \(\boldsymbol{\mu}(p)=(-1)^{t+1}\). Since \(\boldsymbol{\mu}(k)=(-1)^{t}d_{T}(v_{2})_{q}=\widehat{\boldsymbol{\mu}}(k)+(-1)^{t}q^{2}\), it follows that
\[\boldsymbol{\mu}=\begin{bmatrix}\widehat{\boldsymbol{\mu}}^{t}&(-1)^{t+1} \end{bmatrix}^{t}+(-1)^{t}\begin{bmatrix}q^{2}e_{k}&0\end{bmatrix}^{t}.\]
Clearly, \(\mathtt{diff}_{T}(u)=\mathtt{diff}_{\widehat{T}}(u)+(-1)^{t}\). Hence, the result follows by applying induction.
**Theorem 6**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices and \(\mathfrak{C}\) be its \(q\)-bipartite Laplacian matrix. Then \(\det\mathfrak{C}=1-q^{2}\)._
_Proof._ We use induction on \(p\). The result is clearly true for \(p=1\) and \(2\). Let us assume the result to be true for nonsingular trees on \(2p\) vertices. Let \(\widehat{T}\) be a nonsingular tree on \(2p+2\) vertices. Then \(\widehat{T}\) can be viewed as obtained from some nonsingular tree \(T\) with \(2p\) vertices by attaching a new \(P_{2}\) at a vertex \(v\). Without loss of generality, let's assume that \(v=l_{k}\) for some \(1\leq k\leq p\). Let \(\boldsymbol{\mu}_{v}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\). By Lemma 4, we have
\[\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}_{v}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\]
By using Schur's formula for the determinant, we get
\[\det\mathfrak{C}(\widehat{T})=\det\left(\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}-q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}\right)=\det\mathfrak{C}(T).\]
Hence, the result follows by applying the induction hypothesis.
In the following remark we discuss how the \(q\)-bipartite Laplacian matrix of a nonsingular tree can be obtained from some of its nonsingular subtrees. This plays a crucial role in proving our subsequent results.
**Remark 7**.: Consider the tree \(T\) with a matching edge \([l_{k_{1}},r_{k_{1}}]\), see Figure 1. Let the degree of \(l_{k_{1}}\) be \(s\), \(s\geq 1\). Let \(r_{k_{1}+1},r_{k_{2}+1},\ldots,r_{k_{s-1}+1}\) be some distinct vertices other than \(r_{k_{1}}\) that are adjacent to \(l_{k_{1}}\). Note that when we delete the edges \([l_{k_{1}},r_{k_{1}+1}]\), \(\ldots,[l_{k_{1}},r_{k_{s-1}+1}]\), we obtain \(s\) many smaller nonsingular trees, say \(T_{1},\ldots,T_{s}\). Assume that the vertex set of \(T_{1}\) is \(\{l_{1},\ldots,l_{k_{1}},r_{1},\ldots,r_{k_{1}}\}\), the vertex set of \(T_{2}\) is \(\{l_{k_{1}+1},\ldots,l_{k_{2}},r_{k_{1}+1},\ldots,r_{k_{2}}\}\), and so on up to the vertex set of \(T_{s}\) is \(\{l_{k_{s-1}+1},\ldots,l_{k_{s}},r_{k_{s-1}+1},\ldots,r_{k_{s}}\}\). Let us put an arrow on the edge \([l_{k_{1}},r_{k_{1}}]\) from \(r_{k_{1}}\) to \(l_{k_{1}}\). This arrow indicates that, from a vertex \(r_{i}\) in \(T_{2}\), we do not have an alternating path to a vertex in \(T_{1}\). Similarly, from a vertex \(r_{i}\) in \(T_{3}\), we do not have an alternating path to a vertex in \(T_{1},T_{2},T_{4},T_{5},\ldots,T_{s}\). Similar statements are true for vertices \(r_{i}\) in \(T_{4},\ldots,T_{s}\). Also, from a vertex \(l_{i}\) in \(T_{1}\), we only have alternating paths to vertices in \(T_{1}\) but not to a vertex in \(T_{2},\ldots,T_{s}\). Let us take \(F_{1}\) be the tree \(T_{1}\). For \(i=2,\ldots,s\), let \(F_{i}\) be the subtree of \(T\) obtained by taking \(F_{i-1}\) and \(T_{i}\) and by inserting the edge \([l_{k_{1}},r_{k_{i-1}+1}]\). Clearly \(F_{s}\) is the original tree \(T\).
a) Let \(\boldsymbol{\mu}_{l_{k_{1}}}\) be the \(q\)-signed degree vector at \(l_{k_{1}}\) of \(T_{1}\) and let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector at \(l_{k_{1}}\) of \(T\). Then \(\boldsymbol{\mu}=\begin{bmatrix}\boldsymbol{\mu}_{l_{k_{1}}}^{t}&\boldsymbol{0}\end{bmatrix}^{t}\).
Figure 1: Understanding \(\mathfrak{C}(T)\).
b) Let \(\mathfrak{C}(F_{i})\) be the \(q\)-bipartite Laplacian matrix of \(F_{i}\) for \(i=1,\ldots,s\) and \(\mathfrak{C}(T_{i})\) be the \(q\)-bipartite Laplacian matrix of \(T_{i}\) for \(i=1,\ldots,s\). Clearly, \(\mathfrak{C}(T_{1})=\mathfrak{C}(F_{1})\). Let \(\boldsymbol{\mu}_{r_{k_{i}+1}}\) be the \(q\)-signed degree vector at \(r_{k_{i}+1}\) of \(T_{i+1}\), for \(i=1,\ldots,s-1\). By \(\boldsymbol{E}^{ij}\) we denote the matrix of an appropriate size with \(1\) at position \((i,j)\) and zero elsewhere. Then, for \(i=2,\ldots,s\), we have
\[\mathfrak{C}(F_{i})=\left[\begin{array}{c|c|c|c}\mathfrak{C}(T_{1})+(i-1)q^{2}\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{e}_{k_{1}}^{t}&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{i-1}+1}}^{t}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\mathfrak{C}(T_{2})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&\boldsymbol{0}\\ \hline\vdots&\boldsymbol{0}&\ddots&\boldsymbol{0}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\boldsymbol{0}&\cdots&\mathfrak{C}(T_{i})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{i-1}+1}}^{t}\end{array}\right]\]
In particular,
\[\mathfrak{C}(T)=\left[\begin{array}{c|c|c|c}\mathfrak{C}(T_{1})+(s-1)q^{2}\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{e}_{k_{1}}^{t}&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{s-1}+1}}^{t}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\mathfrak{C}(T_{2})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&\boldsymbol{0}\\ \hline\vdots&\boldsymbol{0}&\ddots&\boldsymbol{0}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\boldsymbol{0}&\cdots&\mathfrak{C}(T_{s})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{s-1}+1}}^{t}\end{array}\right]\]
## 4 Exponential bipartite distance matrix
Let us recall the following result, which states that the determinant of the exponential bipartite distance matrix of a nonsingular tree is independent of the tree structure.
**Theorem 8**.: _[_12_]_ _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Then \(\det\mathbb{E}(T)=q^{p}(1-q^{2})^{p-1}\)._
By Theorem 8, we observe that \(\mathbb{E}(T)\) is invertible whenever \(q\neq 0,\pm 1\). In the following result, we present the inverse of \(\mathbb{E}(T)\) under the condition that \(q\neq 0,\pm 1\).
**Theorem 9**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(q\neq 0,\pm 1\). Then_
\[\mathbb{E}(T)^{-1}=\frac{1}{q(1-q^{2})}\,\mathfrak{C}(T).\]
_Proof._ We proceed by induction on \(p\). The base case can be verified easily. Assume the result holds for \(p\). Let \(\widehat{T}\) be a nonsingular tree with \(2p+2\) vertices. We can obtain \(\widehat{T}\) from some nonsingular tree \(T\) with \(2p\) vertices by attaching a new \(P_{2}\) at a vertex \(v\). Without loss of generality, let's assume that \(v=l_{k}\) for some \(1\leq k\leq p\). Let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\).
By item (a) of Lemma 4, we have
\[\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\]
Let \(\boldsymbol{x}\) be a vector of size \(p\) such that \(\boldsymbol{x}(i)=q^{\mathtt{dist}(l_{i},r_{p+1})}\) for each \(i=1,\cdots,p\). Then the exponential bipartite distance matrix of \(\widehat{T}\) can be written as
\[\mathbb{E}(\widehat{T})=\begin{bmatrix}\mathbb{E}(T)&\boldsymbol{x}\\ q^{2}\boldsymbol{e}_{k}^{t}\mathbb{E}(T)&q\end{bmatrix}.\]
Now note that
\[\mathfrak{C}(\widehat{T})\mathbb{E}(\widehat{T}) =\begin{bmatrix}\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\begin{bmatrix}\mathbb{E}(T)&\boldsymbol{x}\\ q^{2}\boldsymbol{e}_{k}^{t}\mathbb{E}(T)&q\end{bmatrix}\] \[=\begin{bmatrix}\mathfrak{C}(T)\mathbb{E}(T)&\mathfrak{C}(T)\boldsymbol{x}+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}\boldsymbol{x}-q\boldsymbol{\mu}\\ \boldsymbol{0}&-q^{2}\boldsymbol{e}_{k}^{t}\boldsymbol{x}+q\end{bmatrix}\] \[=\begin{bmatrix}q(1-q^{2})I&\mathfrak{C}(T)\boldsymbol{x}+q(q^{2}-1)\boldsymbol{\mu}\\ \boldsymbol{0}&-q^{2}\boldsymbol{e}_{k}^{t}\boldsymbol{x}+q\end{bmatrix}. \tag{1}\]
The last equality follows from the induction hypothesis and by using \(\boldsymbol{e}_{k}^{t}\boldsymbol{x}=\boldsymbol{x}(k)=q^{\mathtt{dist}(l_{k},r_{p+1})}=q\). Therefore, to complete the proof, we only need to show that \(\mathfrak{C}(T)\boldsymbol{x}=q(1-q^{2})\boldsymbol{\mu}\).
Let the degree of \(l_{k}\) in \(T\) be \(s\) (\(s\geq 1\)). Let \(T_{1}\) be the component containing \(l_{k}\) in the forest obtained from \(T\) by deleting all edges incident to \(l_{k}\) other than \([l_{k},r_{k}]\). If \(s=1\), then \(T_{1}\) is the same as \(T\). Without loss of any generality, let us assume that \(T_{1}\) has the vertex set \(\{l_{1},r_{1},\ldots,l_{k-1},r_{k-1},l_{k},r_{k}\}\). Let \(\widehat{\boldsymbol{\mu}}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T_{1}\). By Remark 7, \(\boldsymbol{\mu}=\begin{bmatrix}\widehat{\boldsymbol{\mu}}^{t}&\boldsymbol{0}\end{bmatrix}^{t}\). Further note that
\[\boldsymbol{x}=\mathbb{E}(T)\boldsymbol{e}_{k}+(q^{2}-1)\begin{bmatrix}q^{\mathtt{dist}(l_{1},r_{k})}&\cdots&q^{\mathtt{dist}(l_{k-1},r_{k})}&0&0&\cdots&0\end{bmatrix}^{t}. \tag{2}\]
Let \(\boldsymbol{z}\) be a vector of size \((p-k)\) defined as follows. For \(i=1,\ldots,(p-k)\), \(\boldsymbol{z}(i)=-1\) if \(r_{k+i}\) is adjacent to \(l_{k}\) and \(\boldsymbol{z}(i)=0\) otherwise.
Hence, by Remark 7, we have
\[\mathfrak{C}(T)\begin{bmatrix}\mathbb{E}(T_{1})\boldsymbol{e}_{k}-q\boldsymbol{e}_{k}\\ \boldsymbol{0}\end{bmatrix} =\begin{bmatrix}\mathfrak{C}(T_{1})+(s-1)q^{2}\widehat{\boldsymbol{\mu}}\boldsymbol{e}_{k}^{t}&*\\ q^{2}\boldsymbol{z}\boldsymbol{e}_{k}^{t}&*\end{bmatrix}\begin{bmatrix}\mathbb{E}(T_{1})\boldsymbol{e}_{k}-q\boldsymbol{e}_{k}\\ \boldsymbol{0}\end{bmatrix}\] \[=\begin{bmatrix}\mathfrak{C}(T_{1})\mathbb{E}(T_{1})\boldsymbol{e}_{k}+(s-1)q^{2}\widehat{\boldsymbol{\mu}}\,\boldsymbol{e}_{k}^{t}\mathbb{E}(T_{1})\boldsymbol{e}_{k}\\ q^{3}\boldsymbol{z}\end{bmatrix}-\begin{bmatrix}q\,\mathfrak{C}(T_{1})\boldsymbol{e}_{k}+(s-1)q^{3}\widehat{\boldsymbol{\mu}}\\ q^{3}\boldsymbol{z}\end{bmatrix}\] \[=\begin{bmatrix}q(1-q^{2})\boldsymbol{e}_{k}-q(\widehat{\boldsymbol{\mu}}-q^{2}\boldsymbol{e}_{k})\\ \boldsymbol{0}\end{bmatrix}\] \[=q(\boldsymbol{e}_{k}-\boldsymbol{\mu}). \tag{3}\]
The second last equality holds by using the induction hypothesis and the fact that \(\mathfrak{C}(T_{1})\boldsymbol{e}_{k}=\widehat{\boldsymbol{\mu}}-q^{2}\boldsymbol{e}_{k}\). The last equality holds by using Remark 7. Hence, by (2) and (3), it follows that
\[\mathfrak{C}(T)\boldsymbol{x}=q(1-q^{2})\boldsymbol{e}_{k}-q(1-q^{2})(\boldsymbol{e}_{k}-\boldsymbol{\mu})=q(1-q^{2})\boldsymbol{\mu}.\]
This completes the proof.
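As a sanity check (our own code, not from the paper), the following sympy sketch verifies Theorem 9, together with Theorem 6, on the path \(P_{4}\), using the matrices \(\mathbb{E}(P_{4})\) and \(\mathfrak{C}(P_{4})\) computed earlier.

```python
import sympy as sp

q = sp.symbols('q')

# P_4 = l1 - r1 - l2 - r2 with matching edges [l1,r1], [l2,r2]; p = 2.
E = sp.Matrix([[q, q**3],    # exponential bipartite distance matrix (rows l's, columns r's)
               [q, q]])
C = sp.Matrix([[1, -q**2],   # q-bipartite Laplacian matrix (rows r's, columns l's)
               [-1, 1]])

assert sp.expand(C.det() - (1 - q**2)) == 0                          # Theorem 6
assert sp.expand(C * E - q * (1 - q**2) * sp.eye(2)) == sp.zeros(2, 2)
# Hence E**(-1) == C / (q*(1 - q**2)) whenever q != 0, +1, -1        # Theorem 9
```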
## 5 \(q\)-bipartite distance matrix
In this section, we first recall some terminology from [12] and then use it to present a formula for the inverse of the \(q\)-bipartite distance matrix of a nonsingular tree.
Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). The vector \(\boldsymbol{\tau}_{q}(T)\), or simply \(\boldsymbol{\tau}_{q}\), of size \(2p\) is defined by \(\boldsymbol{\tau}_{q}(v):=\big{(}1-d_{T}(v)\big{)}\big{(}1+\mathtt{diff}_{T}(v)\big{)}q^{2}-\mathtt{diff}_{T}(v)\) for each \(v\) in \(T\). The entries of \(\boldsymbol{\tau}_{q}\) are ordered according to \(l_{1},\ldots,l_{p},r_{1},\ldots,r_{p}\). Clearly, for \(q=1\), \(\boldsymbol{\tau}_{q}\) is the vector \(\boldsymbol{\tau}\), as defined in [2].
By \(\boldsymbol{\tau}_{q,r}(T)\), or simply \(\boldsymbol{\tau}_{q,r}\), we mean the restriction of \(\boldsymbol{\tau}_{q}(T)\) on \(R\). Similarly, by \(\boldsymbol{\tau}_{q,l}(T)\), or simply \(\boldsymbol{\tau}_{q,l}\), we mean the restriction of \(\boldsymbol{\tau}_{q}(T)\) on \(L\).
The next result relates the \(\boldsymbol{\tau}_{q,r}\) vector of the new tree with that of the old one when attaching a new \(P_{2}\) at a vertex. The first item was proved in [12, Lemma 13] and the proof of the second item is routine.
**Lemma 10**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\widehat{T}\) be the tree obtained from \(T\) by attaching a new \(P_{2}\) at \(v\)._
_(a) If \(v=r_{k}\) for some \(k\), then \(\boldsymbol{\tau}_{q,r}(\widehat{T})=\begin{bmatrix}\boldsymbol{\tau}_{q,r}(T)\\ 0\end{bmatrix}-(1+\mathtt{diff}_{T}(v))\begin{bmatrix}\boldsymbol{e}_{k}\\ -1\end{bmatrix}\)._
_(b) If \(v=l_{k}\) for some \(k\), then \(\boldsymbol{\tau}_{q,r}(\widehat{T})=\begin{bmatrix}\boldsymbol{\tau}_{q,r}(T)\\ 1\end{bmatrix}-\begin{bmatrix}\boldsymbol{\mu}_{v}(T)\\ 0\end{bmatrix}\), where \(\boldsymbol{\mu}_{v}(T)\) is the \(q\)-signed degree vector at \(v\) of \(T\)._
In the following result, we present details regarding the row sums and column sums of the \(q\)-bipartite Laplacian matrix.
**Theorem 11**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(\mathfrak{L}(T)\) is the \(q\)-bipartite Laplacian matrix of \(T\). Then we have_
\[\mathds{1}^{t}\mathfrak{L}(T)=(1-q^{2})\big{(}{}^{q}\boldsymbol{\tau}_{l}(T)\big{)}^{t}\qquad\text{and}\qquad\mathfrak{L}(T)\mathds{1}=(1-q^{2})\,{}^{q}\boldsymbol{\tau}_{r}(T).\]
_Proof._ We use induction on \(p\). Let \(\widehat{T}\) be a nonsingular tree on \(2p+2\) vertices. Suppose \(\widehat{T}\) is obtained from some nonsingular tree \(T\) on \(2p\) vertices by attaching a new \(P_{2}\) at some vertex \(v\). Notice that either \(v\in L\) or \(v\in R\). (We shall prove the case \(v\in L\), as the other case can be dealt with similarly.)
Suppose \(v=l_{k}\) and \(l_{k}\sim r_{p+1}\). Clearly, \(l_{p+1},r_{p+1}\notin T\). Let \(\mathfrak{H}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\). By Lemma 4, we get
\[\mathfrak{L}(\widehat{T})=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\mathfrak{H}\boldsymbol{e}_{k}^{t}&-\mathfrak{H}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}.\]
Suppose \((\boldsymbol{z}_{l}(\widehat{T}))_{i}=(1-q^{2})\,{}^{q}\boldsymbol{\tau}_{l}(\widehat{T})(l_{i})\) and \((\boldsymbol{z}_{r}(\widehat{T}))_{i}=(1-q^{2})\,{}^{q}\boldsymbol{\tau}_{r}(\widehat{T})(r_{i})\) for \(1\leq i\leq p+1\). Clearly, \((\boldsymbol{z}_{r}(\widehat{T}))_{p+1}=1-q^{2}\) and \((\boldsymbol{z}_{l}(\widehat{T}))_{p+1}=\mathtt{diff}_{\widehat{T}}(l_{p+1})(q^{2}-1)\). By Lemma 5, \(1-\mathds{1}^{t}\mathfrak{H}=(1+\mathtt{diff}_{T}(l_{k}))-(1+\mathtt{diff}_{T}(l_{k}))q^{2}=\mathtt{diff}_{\widehat{T}}(l_{p+1})(q^{2}-1)\), as \(\mathtt{diff}_{\widehat{T}}(l_{p+1})=-(1+\mathtt{diff}_{T}(l_{k}))\). Therefore, it follows that
\[(\mathds{1}^{t}\mathfrak{L}(\widehat{T}))_{p+1}=1-\mathds{1}^{t}\mathfrak{H}=(\boldsymbol{z}_{l}(\widehat{T}))_{p+1}\quad\text{and}\quad(\mathfrak{L}(\widehat{T})\mathds{1})_{p+1}=1-q^{2}=(\boldsymbol{z}_{r}(\widehat{T}))_{p+1}.\]
For each \(1\leq i\leq p\), we have \(d_{\widehat{T}}(r_{i})=d_{T}(r_{i})\) and \(\mathtt{diff}_{\widehat{T}}(r_{i})=\mathtt{diff}_{T}(r_{i})+\boldsymbol{x}(r_{i})\), where \(\boldsymbol{x}\) is the vector such that \(\boldsymbol{x}(r_{i})=0\) if the \(r_{i}\)-\(l_{p+1}\) path is not an alternating path, \(\boldsymbol{x}(r_{i})=1\) if it is an even alternating path, and \(\boldsymbol{x}(r_{i})=-1\) if it is an odd alternating path. Therefore, by applying the induction hypothesis, we get
\[\begin{aligned} \left(\mathfrak{L}(T)\mathds{1}+(q^{2}-1)\mathfrak{H}\right)_{i}&=(\mathtt{diff}_{T}(r_{i})+1)(d_{T}(r_{i})-1)q^{4}+\left[\mathtt{diff}_{T}(r_{i})(2-d_{T}(r_{i}))+(1-d_{T}(r_{i}))\right]q^{2}\\ &\quad-\mathtt{diff}_{T}(r_{i})+(q^{2}-1)(1+(d_{T}(r_{i})-1)q^{2})\boldsymbol{x}(r_{i})\\ &=(1-q^{2})\,{}^{q}\boldsymbol{\tau}_{r}(\widehat{T})(r_{i}).\end{aligned}\]
Further, for \(1\leq i\leq p\), note that \(\mathtt{diff}_{\widehat{T}}(l_{i})=\mathtt{diff}_{T}(l_{i})\), \(d_{\widehat{T}}(l_{i})=d_{T}(l_{i})\) for \(i\neq k\), and \(d_{\widehat{T}}(l_{k})=d_{T}(l_{k})+1\). Then \((\boldsymbol{z}_{l}(\widehat{T}))_{i}=(\boldsymbol{z}_{l}(T))_{i}\) for each \(1\leq i\leq p\), \(i\neq k\). Therefore, by the induction hypothesis, it follows that \((\mathds{1}^{t}\mathfrak{L}(\widehat{T}))_{i}=(\boldsymbol{z}_{l}(\widehat{T}))_{i}\) for each \(i\neq k\). For \(i=k\), by the induction hypothesis and Lemma 5, we have
\[\begin{aligned} \left(\mathds{1}^{t}\mathfrak{L}(\widehat{T})\right)_{k}&=(\boldsymbol{z}_{l}(T))_{k}+q^{2}(\mathds{1}^{t}\mathfrak{H}-1)\\ &=(\boldsymbol{z}_{l}(T))_{k}+q^{2}(q^{2}-1)(1+\mathtt{diff}_{T}(l_{k}))=(1-q^{2})\,{}^{q}\boldsymbol{\tau}_{l}(\widehat{T})(l_{k}).\end{aligned}\]
This completes the proof.
**Theorem 12** ([12]).: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Then the following assertions hold._
_(a) \(\mathfrak{B}(T)\,{}^{q}\boldsymbol{\tau}_{r}(T)=\mathsf{bd}_{q}(T)\mathds{1}\) and \({}^{q}\boldsymbol{\tau}_{l}(T)^{t}\,\mathfrak{B}(T)=\mathsf{bd}_{q}(T)\mathds{1}^{t}\)._
_(b) Let \(v\) be a vertex and let \(\widehat{T}\) be the tree obtained from \(T\) by attaching a new \(P_{2}\) at \(v\). Then \(\mathsf{bd}_{q}(\widehat{T})=\mathsf{bd}_{q}(T)+(1+q)(1+\mathtt{diff}_{T}(v))\)._
In the next result we discuss a relationship between the \(q\)-bipartite distance matrix and the \(q\)-bipartite Laplacian matrix of a nonsingular tree.
**Lemma 13**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\mathfrak{B}(T)\) and \(\mathfrak{L}(T)\) be the bipartite distance matrix and the bipartite Laplacian matrix of \(T\), respectively. Let \({}^{q}\boldsymbol{\tau}_{r}(T)\) be the restriction of \({}^{q}\boldsymbol{\tau}(T)\) to \(R\). Then_
\[-\mathfrak{L}(T)\mathfrak{B}(T)+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}=q(1+q)I.\]
Proof.: We proceed by induction on \(p\). The base case can be verified easily. Assume the result holds for \(p\), and let \(\widehat{T}\) be a nonsingular tree with \(2p+2\) vertices. We can obtain \(\widehat{T}\) from some nonsingular tree \(T\) with \(2p\) vertices by attaching a new \(P_{2}\) at a vertex \(v\). Without loss of generality, let us assume that \(v=l_{k}\) for some \(1\leq k\leq p\). Let \(\mathfrak{H}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\).
By item (a) of Lemma 4, we have
\[\mathfrak{L}(\widehat{T})=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\mathfrak{H}\boldsymbol{e}_{k}^{t}&-\mathfrak{H}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}.\]
Let \(\mathbf{x}\) be a vector of size \(p\) such that \(\mathbf{x}(i)=\{\,\mathtt{dist}(l_{i},r_{p+1})\}\) for each \(i=1,\cdots,p\). Then the \(q\)-bipartite distance matrix of \(\widehat{T}\) can be written as
\[\mathfrak{B}(\widehat{T})=\begin{bmatrix}\mathfrak{B}(T)&\mathbf{x}\\ (1+q)\mathds{1}^{t}+q^{2}\mathbf{e}_{k}^{t}\mathfrak{B}(T)&1\end{bmatrix}.\]
Now note that
\[\begin{aligned} \mathfrak{L}(\widehat{T})\mathfrak{B}(\widehat{T})&=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\mathfrak{H}\boldsymbol{e}_{k}^{t}&-\mathfrak{H}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\begin{bmatrix}\mathfrak{B}(T)&\boldsymbol{x}\\ (1+q)\mathds{1}^{t}+q^{2}\boldsymbol{e}_{k}^{t}\mathfrak{B}(T)&1\end{bmatrix}\\ &=\begin{bmatrix}\mathfrak{L}(T)\mathfrak{B}(T)-(1+q)\mathfrak{H}\mathds{1}^{t}&\mathfrak{L}(T)\boldsymbol{x}+(q^{2}-1)\mathfrak{H}\\ (1+q)\mathds{1}^{t}&1-q^{2}\end{bmatrix}\\ &=\begin{bmatrix}(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}-q(1+q)I-(1+q)\mathfrak{H}\mathds{1}^{t}&\mathfrak{L}(T)\boldsymbol{x}+(q^{2}-1)\mathfrak{H}\\ (1+q)\mathds{1}^{t}&1-q^{2}\end{bmatrix} \end{aligned} \tag{4}\]
The last equality follows from the induction hypothesis. By part (b) of Lemma 10, we get
\[\begin{aligned} -\mathfrak{L}(\widehat{T})\mathfrak{B}(\widehat{T})+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(\widehat{T})\mathds{1}^{t}&=-\mathfrak{L}(\widehat{T})\mathfrak{B}(\widehat{T})+(1+q)\begin{bmatrix}({}^{q}\boldsymbol{\tau}_{r}(T)-\mathfrak{H})\mathds{1}^{t}\\ \mathds{1}^{t}\end{bmatrix}\\ &=\begin{bmatrix}q(1+q)I&-\mathfrak{L}(T)\boldsymbol{x}+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\mathfrak{H}\\ 0&q(1+q)\end{bmatrix}\end{aligned}\]
Therefore, it only remains to show that \(\mathfrak{L}(T)\boldsymbol{x}=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\mathfrak{H}\).
Let the degree of \(l_{k}\) in \(T\) be \(s\) (\(s\geq 1\)). Let \(T_{1}\) be the tree obtained from \(T\) by removing all vertices adjacent to \(l_{k}\) except the vertex \(r_{k}\). If \(s=1\), then \(T_{1}\) is the same as \(T\). Without loss of generality, let us assume that \(T_{1}\) has the vertex set \(\{l_{1},r_{1},\ldots,l_{k-1},r_{k-1},l_{k},r_{k}\}\). Further, note that
\[\boldsymbol{x}=\mathfrak{B}(T)\boldsymbol{e}_{k}+(1+q)\begin{bmatrix}q^{\mathtt{dist}(l_{1},r_{k})}&\cdots&q^{\mathtt{dist}(l_{k-1},r_{k})}&0&0&\cdots&0\end{bmatrix}^{t}. \tag{5}\]
By (3), we already have
\[\mathfrak{L}(T)\begin{bmatrix}\mathbb{E}(T_{1})\boldsymbol{e}_{k}-q\boldsymbol{e}_{k}\\ \boldsymbol{0}\end{bmatrix}=q(\boldsymbol{e}_{k}-\mathfrak{H}). \tag{6}\]
By the induction hypothesis, \(\mathfrak{L}(T)\mathfrak{B}(T)=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}-q(1+q)I\). Hence, by (5) and (6), it follows that
\[\mathfrak{L}(T)\boldsymbol{x}=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\boldsymbol{e}_{k}+q(1+q)(\boldsymbol{e}_{k}-\mathfrak{H})=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\mathfrak{H}.\]
This completes the proof.
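The block manipulations in (4) are purely formal, so they can be checked mechanically. The following minimal SymPy sketch (an illustration, not part of the argument) verifies the top-left block of (4), with arbitrary placeholder matrices standing in for \(\mathfrak{L}(T)\), \(\mathfrak{B}(T)\) and \(\mathfrak{H}\); only the cancellation of the \(q^{2}\mathfrak{H}\boldsymbol{e}_{k}^{t}\mathfrak{B}(T)\) terms is being tested.

```python
# Sanity check of the top-left block in (4); L, B, H are arbitrary stand-ins
# for the bipartite Laplacian, the bipartite distance matrix and the q-signed
# degree vector -- the identity is formal, so random integer data suffice.
import sympy as sp

q = sp.Symbol('q')
p, k = 3, 2
L = sp.randMatrix(p, p, seed=1)
B = sp.randMatrix(p, p, seed=2)
H = sp.randMatrix(p, 1, seed=3)
ek = sp.eye(p)[:, k - 1]        # k-th standard basis vector
one = sp.ones(p, 1)             # all-ones vector

top_left = (L + q**2*H*ek.T)*B - H*((1 + q)*one.T + q**2*ek.T*B)
target = L*B - (1 + q)*H*one.T
assert (top_left - target).expand() == sp.zeros(p, p)
```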
We are now in a position to supply a formula for the inverse of the \(q\)-bipartite distance matrix of a nonsingular tree \(T\).
**Theorem 14**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(q\neq 0,-1\) and \(\mathsf{bd}_{q}(T)\neq 0\). Then_
\[\mathfrak{B}(T)^{-1}=-\frac{1}{q(1+q)}\mathfrak{L}(T)+\frac{1}{q\,\mathsf{bd}_{q}(T)}\,{}^{q}\boldsymbol{\tau}_{r}(T)\,{}^{q}\boldsymbol{\tau}_{l}(T)^{t}.\]
_Proof._ By using Theorem 12 and Lemma 13 we have
\[\left(-\frac{1}{q(1+q)}\mathfrak{L}(T)+\frac{1}{q\,\mathsf{bd}_{q}(T)}\,{}^{q}\boldsymbol{\tau}_{r}(T)\,{}^{q}\boldsymbol{\tau}_{l}(T)^{t}\right)\mathfrak{B}(T)=\frac{1}{q(1+q)}\left(-\mathfrak{L}(T)\mathfrak{B}(T)+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}\right)=I.\]
This completes the proof.
We consider two different $q$-analogues of the bipartite distance matrix: the $q$-bipartite distance matrix and the exponential distance matrix. We provide formulas for the inverses of these matrices, which extend existing results on the bipartite distance matrix. These investigations lead us to introduce the $q$-analogue of the bipartite Laplacian matrix. |
2309.04432 | Nonlinear Stability of Static Néel Walls in Ferromagnetic Thin Films | In this paper, the nonlinear (orbital) stability of static 180° Néel walls in ferromagnetic films, under the reduced wave-type dynamics for the in-plane magnetization proposed by Capella, Melcher and Otto [CMO07], is established. It is proved that the spectrum of the linearized operator around the static Néel wall lies in the stable complex half plane with non-positive real part. This information is used to show that small perturbations of the static Néel wall converge to a translated orbit belonging to the manifold generated by the static wall. | A. Capella, C. Melcher, L. Morales, R. G. Plaza | 2023-09-08T16:53:59 | http://arxiv.org/abs/2309.04432v2 |

# Nonlinear stability of static Neel walls in ferromagnetic thin films
###### Abstract.
In this paper, the nonlinear (orbital) stability of static \(\pi\)-shifted Neel walls in ferromagnetic films, under the reduced wave-type dynamics for the in-plane magnetization proposed by Capella, Melcher and Otto [1], is established. It is proved that the spectrum of the linearized operator around the static Neel wall lies in the stable complex half plane with non-positive real part. This information is used to show that small perturbations of the static Neel wall converge to a translated orbit belonging to the manifold generated by the static wall.
###### Contents
* 1 Introduction
* 2 Preliminaries and main result
* 3 The linearized operator around the static Neel wall's phase
* 4 Perturbation equations and spectral stability
* 5 Semigroup generation and decay
* 6 Nonlinear (orbital) stability
## 1. Introduction
In order to study the motion of magnetization vectors in ferromagnetic materials, in 1935 Landau and Lifshitz [1] introduced a model system of equations, later reformulated and re-derived by Gilbert [14, 15], which constitutes the fundamental and best accepted mathematical model that describes the magnetization in ferromagnets. Since ferromagnetic thin films exhibit a wide range of applications to the design and manufacturing of magnetic storage devices, the Landau-Lifshitz-Gilbert (LLG) model has attracted a great deal of attention from physicists and mathematicians alike in the last decades. A great variety of patterns of magnetization vectors appear in ferromagnetic films. For instance, narrow transition regions between opposite magnetization domains are called _domain walls_. Some of the most common wall types in such materials are called Neel walls, separating two opposite magnetization regions by an in-plane rotation, oriented along an axis; Bloch walls, for which the magnetization rotates about the normal of the domain wall, pointing
along the domain wall plane in a 3D system; or Walker walls, which are formed under the presence of an external magnetic field (see, e.g., Hubert and Schafer [10] for further information).
One of the main objectives of recent mathematical studies is to understand the behavior of these dynamical coherent structures developed by the magnetization of a ferromagnet. The stability under small perturbations of these microstructures is important, not only to validate the mathematical model but also to enhance the numerical simulations performed by physicists and engineers to optimize and design new ferromagnetic materials (see, e.g., [1]). To the best of our knowledge, the literature on the dynamical stability theory for magnetic domain walls is scarce. The stability of one-dimensional Bloch walls has been addressed by Krukowski [11], using a spectral (linearized) calculation of energies of ground states, and by Carbou and Labbe [12], under the one-dimensional nanowire approximation of Sanchez [13]. Takasao [14] improved this last result for Walker walls, also in one dimension and in the presence of an external magnetic field. Carbou [1] proved the stability of a Walker wall in the three-dimensional model using the energy method and under a simplifying assumption that gets rid of the non-local part of the operator. Most of these works employ energy methods to conclude stability, that is, the analyses are based on performing _a priori_ energy estimates on the equations of magnetization evolution and relying on their intrinsic structure.
This paper is devoted to studying the dynamical stability of static Neel walls. Our departure point is the one-dimensional thin film reduction of the micromagnetic energy proposed by Capella, Melcher, and Otto [15] (outlined previously in [15] for numerical purposes), which establishes an effective system for the in-plane magnetization by taking the thin film layer limit. The resulting system underlies a wave-type dynamics for the Neel wall's phase. The authors prove the existence and uniqueness of a static Neel wall's phase profile in the absence of external fields, as well as the emergence of traveling wave solutions near the static profile under the influence of a small constant external forcing. The authors also outline the stability of these structures under small one-dimensional perturbations. The present analysis constitutes a follow-up of such formulation and a full study of the nonlinear stability of the static Neel wall under small, one-dimensional perturbations of the phase itself. As far as we know, this problem has not been studied before in the literature.
One of the main technical difficulties pertains to the non-locality of the dynamical equation, even at a linear level. In contrast to previous studies, we adopt a spectral approach to the problem. Motivated by the ideas in [15], in which the linearized operator around the static phase is defined and previously studied, we profit from this information and perform a full spectral stability analysis of this operator, that includes a proof of its relative compactness with respect to an asymptotic operator. In contrast with standard techniques, which are usually applied to local differential operators with bounded coefficients and which are based on truncating such coefficients with their asymptotic limits (see, e.g., [16], Section 3.1), in this work and by necessity (because we are studying a non-local operator) we develop a novel procedure that focuses on describing totally bounded sets in terms of \(L^{2}\)-equicontinuity and uniform decay in Fourier space (see Theorem 3.23 below). This relative compactness plays a crucial role in the location of the essential spectrum of a block operator matrix that encodes the linearization of the nonlinear
wave equation for perturbations of the static wall. It is proved that both the essential and the point spectrum are stable, that is, they belong to the stable half-plane of complex numbers with negative real part, except for the origin, which is associated with translations of the Neel wall (see Theorem 4.11). An important feature is the presence of a _spectral gap_, that is, a positive distance from the eigenvalue zero to the rest of the spectrum. This allows us to establish the exponential decay of the solutions to the spectral problem when projected outside the one-dimensional vector space generated by translations of the static profile. Upon application of the well-known Gearhart-Pruss theorem [10, 11] and after the establishment of uniform resolvent estimates, we conclude that the semigroup generated by the linear block matrix operator is exponentially decaying in the appropriate subspace. This information is then used to prove nonlinear stability. For that purpose, we apply an abstract result, due originally to Sattinger [14] and adapted to a Hilbert space setting by Lattanzio _et al._ [13], that establishes nonlinear stability from spectral stability by controlling the growth of nonlinear terms and profiting from the fact that the manifold generated by the wave is one-dimensional (the group of translations). We regard our contributions as new not only in the context of ferromagnetic wall stability analysis, but also as methodological in nature: we advocate for spectral and nonlinear analysis as a feasible and effective method in the study of this type of problem. The unpublished note by Huber [15] deserves mention as the only work (as far as we know) that performs a rigorous spectral analysis of the linearized operator around a Neel wall for a layer of small (but positive) thickness, \(\epsilon>0\). Huber does not prove the spectral stability of this structure but employs the spectral information to obtain time-periodic solutions in a vicinity of it. (We note that in layers with positive thickness, the linearized operators are sectorial, in contrast with the present case of a thin-film limit.)
### Plan of the paper
This paper is structured as follows. Section 2 contains a brief description of the thin-film dynamical model in [1], recalls some of the main properties of the static Neel wall's phase, and states the main result of this paper. Section 3 is devoted to the full, rigorous study of the linearized (scalar) operator around the static Neel wall defined in [1]. In particular, it is shown that it is relatively compact to an asymptotic operator, a feature that plays a key role in the stability analysis. Section 4 establishes the spectral stability of the Neel wall's phase. The spectral problem is posed in terms of a block operator matrix and the stability of both its essential and point spectra is established. Section 5 is devoted to generating the associated semigroup and to showing the exponential decay of solutions to the linearized equations outside a one-dimensional space related to translations of the profile. The final Section 6 contains the proof of Theorem 2.3.
### Notations
Along this manuscript, we denote the spaces \(L^{2}(\mathbb{R},\mathbb{C})\), \(H^{1}(\mathbb{R},\mathbb{C})\) and \(H^{2}(\mathbb{R},\mathbb{C})\) of complex-valued functions by \(L^{2}\), \(H^{1}\) and \(H^{2}\), while their real-valued versions are denoted by \(L^{2}(\mathbb{R})\), \(H^{1}(\mathbb{R})\) and \(H^{2}(\mathbb{R})\), respectively. The set of unit vectors in \(\mathbb{R}^{n}\) is denoted by \(\mathbb{S}^{n-1}\). The operators \(\hat{\cdot}:L^{2}\to L^{2}\) and \(\cdot^{\vee}:L^{2}\to L^{2}\) stand for the Fourier transform and its inverse, respectively. Also, \(\xi\) represents the variable in the frequency domain. In the same fashion, the half-Laplacian is defined by the relation \((-\Delta)^{1/2}u=(|\xi|\hat{u})^{\vee}\), and \(\|u\|_{\dot{H}^{1/2}}\) denotes the fractional \(H^{1/2}\)-norm of the function \(u\in L^{2}\), given by \(\|u\|_{\dot{H}^{1/2}}:=\big{\|}|\xi|^{1/2}\hat{u}\big{\|}_{L^{2}}\).
Finally, for two linear operators, say \(\mathcal{A}\) and \(\mathcal{T}\), the commutator \([\mathcal{A},\mathcal{T}]\) is given by the difference \(\mathcal{A}\mathcal{T}-\mathcal{T}\mathcal{A}\).
## 2. Preliminaries and main result
### The micromagnetic model
The Landau and Lifshitz continuum theory of ferromagnetic materials [10] is based on a magnetization field \(\mathbf{m}:\tilde{\Omega}\to\mathbb{S}^{2}\), which represents the local average magnetic moment, and a variational principle in terms of the _micromagnetic energy_. In the absence of an external field, the micromagnetic energy is given by
\[\mathbb{E}(\mathbf{m})=\frac{1}{2}\Big{(}d^{2}\int_{\tilde{\Omega}}|\nabla\mathbf{m}|^{2}\,dx+\int_{\mathbb{R}^{3}}|\nabla U|^{2}\,dx+Q\int_{\tilde{\Omega}}\Phi(\mathbf{m})\,dx\Big{)},\]
where \(d>0\) is the exchange length and \(\nabla U\) is the _stray field_ defined uniquely via the distribution equation \(\Delta U=\mathrm{div}\,(\mathbf{m}\mathbf{1}_{\tilde{\Omega}})\) (\(\mathbf{1}_{A}\) denotes the indicator function of the set \(A\)). The stray-field energy favors vanishing distributional divergence, namely, \(\nabla\cdot\mathbf{m}=0\) in \(\tilde{\Omega}\) and \(\mathbf{m}\cdot n=0\) on \(\partial\tilde{\Omega}\), where \(n\) is the outward normal to the boundary. The last integral models crystalline anisotropies via a penalty energy, for which \(\Phi\) acts as a penalty function, and it usually has the form of an even polynomial in \(\mathbf{m}\in\mathbb{S}^{2}\). The parameter \(Q>0\) measures the relative strength of anisotropy penalization against stray-field interaction.
The combination of the stray-field energy (which is a non-local term) and the non-convex saturation constraint \(|\mathbf{m}|=1\) gives rise to pattern formation among magnetic domains where the magnetization is almost constant. Thin transition layers separating the magnetic domains are known as domain walls and may form complex patterns [14].
### Neel wall in soft magnetic thin films
A thin film is an infinitely extended magnetic material \(\tilde{\Omega}=\mathbb{R}^{2}\times(0,\delta)\) where \(\delta\ll d\). In this regime, it is safe to assume that the magnetization is independent of the \(x_{3}\) variable. By assuming further that the magnetization is \(\ell\)-periodic in the \(\mathbf{e}_{2}\) direction, namely,
\[\mathbf{m}(x_{1},x_{2}+\ell)=\mathbf{m}(x_{1},x_{2})\quad\text{for any }x=(x_{1},x_{2})\in \mathbb{R}^{2},\]
and that the material has a uniaxial anisotropy in the \(e_{2}\) direction, with \(\Phi(\mathbf{m})=1-m_{2}^{2}\), we consider transition layers connecting antipodal states on the easy axis
\[\mathbf{m}:\mathbb{R}^{2}\to\mathbb{S}^{2}\quad\text{ with }\mathbf{m}(\pm \infty,x_{2})=(0,\pm 1,0)\quad\text{for any }x_{2}\in\mathbb{R}.\]
In this case, the stray field energy is approximated at leading order by
\[E_{stray}(\mathbf{m})=\frac{1}{2}\int_{0}^{\ell}\int_{\mathbb{R}}\left(\frac{ \delta}{2}\left||\nabla|^{\frac{1}{2}}\mathcal{H}(m)\right|^{2}+m_{3}^{2} \right)dx\]
where \(\mathbf{m}=(m,m_{3})\) with \(m=(m_{1},m_{2})\) and, formally, \(\mathcal{H}(m)=\nabla\Delta^{-1}\mathrm{div}\;m\) (see [1] for further details). Thus, the micromagnetic energy becomes
\[\mathbb{E}(\mathbf{m})=\frac{1}{2}\int_{0}^{\ell}\int_{\mathbb{R}}\left(d^{2}|\nabla m|^{2}+\frac{\delta}{2}\left||\nabla|^{\frac{1}{2}}\mathcal{H}(m)\right|^{2}+Q(1-m_{2}^{2})+m_{3}^{2}\right)dx.\]
Neel walls are one-dimensional transition layers observed in soft ferromagnetic thin films, that is, magnetic materials with relatively weak anisotropic energy. Here, we consider a parameter regime of soft thin films so that the anisotropy and relative thickness are balanced, more precisely
\[Q\ll 1,\quad\kappa=d/\delta\ll 1\quad\text{while }\mathcal{Q}=4\kappa^{2}Q. \tag{2.1}\]
Therefore, it is feasible to introduce the small parameter \(\varepsilon=\sqrt{Q}\). By rescaling the length \(x\) by \(w=\delta/(2Q)\), and the energy by \(\delta/2\), the micromagnetic energy becomes
\[E_{\varepsilon}(\mathbf{m})=\frac{1}{2}\int_{0}^{L}\int_{\mathbb{R}}\bigg{(}\mathcal{Q}|\nabla m|^{2}+\Big{|}|\nabla|^{\frac{1}{2}}\mathcal{H}(m)\Big{|}^{2}+(1-m_{2}^{2})+\Big{(}\frac{m_{3}}{\varepsilon}\Big{)}^{2}\bigg{)}\,dx, \tag{2.2}\]
where \(L=\ell/w\) and we assumed \(\varepsilon\ll\mathcal{Q}\ll 1\). Assuming further that \(m=m(x_{1})\), the field \(\mathcal{H}(m)=m_{1}\mathbf{e}_{1}\) is independent of \(x_{2}\), and the reduced variational principle for the one-dimensional wall transition is
\[\begin{split} E_{\varepsilon}(\mathbf{m})=\frac{1}{2}\int_{\mathbb{R}}&\Big{(}\mathcal{Q}|\mathbf{m}^{\prime}|^{2}+||\nabla|^{\frac{1}{2}}m_{1}|^{2}+(1-m_{2}^{2})+\Big{(}\frac{m_{3}}{\varepsilon}\Big{)}^{2}\,\Big{)}dx\to\min,\\ &\mathbf{m}:\mathbb{R}\to\mathbb{S}^{2}\qquad\text{with}\ \,\,\mathbf{m}(\pm\infty)=(0,\pm 1,0),\end{split} \tag{2.3}\]
where \(\mathbf{m}^{\prime}=\frac{d\mathbf{m}}{dx_{1}}\). In [1] it is shown that for \(\varepsilon_{k}\to 0\) there exists a sequence of minimizers \(\mathbf{m}_{\varepsilon_{k}}\) of (2.3) with a subsequence that locally converges to \(\mathbf{m}=(m,0)\) and satisfies
\[\begin{split} E_{0}(m)=\frac{1}{2}\int_{\mathbb{R}}& \Big{(}\mathcal{Q}|m^{\prime}|^{2}+||\nabla|^{\frac{1}{2}}m_{1}|^{2}+m_{1}^{2 }\Big{)}dx\to\min,\\ & m:\mathbb{R}\to\mathbb{S}^{2}\qquad\text{with}\ \,\,m(\pm\infty)=(0,\pm 1).\end{split} \tag{2.4}\]
Since \(|m^{\prime}|^{2}=(m_{1}^{\prime})^{2}/(1-m_{1}^{2})\) is a strictly convex functional of \(m_{1}\), the variational principle (2.4) has a minimizer for any \(\mathcal{Q}>1\). The minimizer is called a Neel wall. We refer to the energy in (2.4) as the Neel wall energy, and for convenience, we write it as
\[E_{0}(m)=\frac{1}{2}\left(\mathcal{Q}\|m^{\prime}\|_{L^{2}}^{2}+\|m_{1}\|_{ \dot{H}^{1/2}}^{2}+\|m_{1}\|_{L^{2}}^{2}\right).\]
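Indeed, differentiating the constraint \(m_{1}^{2}+m_{2}^{2}=1\) gives \(m_{1}m_{1}^{\prime}+m_{2}m_{2}^{\prime}=0\), whence

\[|m^{\prime}|^{2}=(m_{1}^{\prime})^{2}+(m_{2}^{\prime})^{2}=(m_{1}^{\prime})^{2}\Big{(}1+\frac{m_{1}^{2}}{m_{2}^{2}}\Big{)}=\frac{(m_{1}^{\prime})^{2}}{1-m_{1}^{2}},\]

which is the expression behind the convexity claim above.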
Since translation is an \(L^{2}\)-isometry, the expression of \(E_{0}(m)\) is invariant under translations in the spatial coordinate \(x\). As a consequence, minimizers of (2.4) are unique only up to translation.
For our analysis, we introduce the phase \(\theta:\mathbb{R}\to\mathbb{R}\) so that \(m=(\cos\theta,\sin\theta)\) and the Neel wall energy becomes
\[\begin{split}&\mathcal{E}(\theta)=\frac{1}{2}\big{(}\mathcal{Q}\| \theta^{\prime}\|_{L^{2}}^{2}+\|\cos\theta\|_{\dot{H}^{1/2}}^{2}+\|\cos\theta \|_{L^{2}}^{2}\big{)}\ \to\ \min\\ &\theta:\mathbb{R}\to(-\pi/2,\pi/2),\qquad\text{with}\ \,\,\theta(\pm\infty)=\pm\pi/2.\end{split} \tag{2.5}\]
Since we are interested in the dynamic properties of Neel walls, we refer to minimizers of (2.5) as the _static_ Neel wall phase. From now on, we assume \(\mathcal{Q}=1\) and, abusing notation slightly, we write \(\partial_{x}\theta=\theta^{\prime}\) and \(\partial_{x}^{2}\theta=\theta^{\prime\prime}\) to avoid confusion.
The following proposition summarizes the basic properties of the static Neel wall phase.
**Proposition 2.1** (properties of the static Neel wall [1, 1]).: _There exists a static Neel wall solution with phase \(\overline{\theta}=\overline{\theta}(x)\), \(\overline{\theta}:\mathbb{R}\to(-\pi/2,\pi/2)\), satisfying the following:_
* (a) \(\overline{\theta}\) _is a strict minimizer of the variational problem (2.5), with center at the origin,_ \(\overline{\theta}(0)=0\)_, and monotone increasing,_ \(\partial_{x}\overline{\theta}>0\ \,\,\forall x\in\mathbb{R}\)_._
* (b) \(\overline{\theta}\) _is a smooth solution to_ \[\partial_{x}^{2}\theta+\sin\theta(1+(-\Delta)^{1/2})\cos\theta=0,\] (2.6) _which is the Euler-Lagrange equation for the variational problem (2.5)._
* (c) \(\partial_{x}\overline{\theta}\in H^{2}\)_._
* (d) _For every centered variation_ \(u\in H^{1}\) _with_ \(u(0)=0\) _there holds_ \[\operatorname{Hess}\,\mathcal{E}(\overline{\theta})\langle u,u\rangle_{L^{2}}\geq\|u\,\partial_{x}\overline{\theta}\|_{L^{2}}^{2}+\operatorname{Re}\,b[u\sin\overline{\theta},u\sin\overline{\theta}],\] (2.7) _where the bilinear form_ \(b[\cdot,\cdot]:H^{1}\times H^{1}\to\mathbb{C}\)_, defined as_ \[b[f,g]=\int_{\mathbb{R}}(1+|\xi|)\hat{f}(\xi)\hat{g}(\xi)^{*}\,d\xi,\qquad f,g\in H^{1},\] (2.8) _is equivalent to the standard inner product in_ \(H^{1/2}\)_._
Proof.: Property (a) results from combining Lemma 1 in [10] with the main results of [13] (Propositions 1 and 2). The proof of the smoothness of the Neel wall can be found in [13] (Proposition 2). Since \(\overline{\theta}\) is a minimizer, it satisfies equation (2.6) (see Lemma 1 in [10]). This shows (b). Moreover, it is proved in [10] (Theorem 1 and Lemma 1) that \(\partial_{x}\overline{\theta}\), \(\partial_{x}^{2}\overline{\theta}\in L^{2}(\mathbb{R})\). As pointed out by the authors, from the Euler-Lagrange equation (2.6) the regularity arguments of Lemma 1 can be bootstrapped to show that \(\partial_{x}^{3}\overline{\theta}\in L^{2}(\mathbb{R})\). This shows (c). Finally, property (d) is the content of Lemma 3 in [10].
**Corollary 2.2**.: _There exists a uniform constant \(C>0\) such that_
\[\|\partial_{x}\overline{\theta}\|_{\infty},\|\partial_{x}^{2}\overline{\theta }\|_{\infty}\leq C.\]
Proof.: Follows immediately from the fact that \(\partial_{x}\overline{\theta}\in H^{2}\) (see Proposition 2.1 (c)) and Sobolev's inequality: \(\|u\|_{\infty}^{2}\leq 2\|u\|_{L^{2}}\|\partial_{x}u\|_{L^{2}}\) for all \(u\in H^{2}\).
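For completeness, this form of Sobolev's inequality follows from the fundamental theorem of calculus and the Cauchy-Schwarz inequality: for real-valued \(u\),

\[u(x)^{2}=2\int_{-\infty}^{x}u\,\partial_{y}u\,dy\leq 2\|u\|_{L^{2}}\|\partial_{x}u\|_{L^{2}},\qquad x\in\mathbb{R},\]

and taking the supremum over \(x\) yields the bound.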
### LLG dynamics
The time evolution of the magnetization distribution on a ferromagnetic body \(\widetilde{\Omega}\subset\mathbb{R}^{3}\) is governed by the Landau-Lifshitz-Gilbert (LLG) equation [14, 15, 16]:
\[\mathbf{m}_{t}+\alpha\mathbf{m}\times\mathbf{m}_{t}-\gamma\mathbf{m}\times \mathbf{H}_{\mathrm{eff}}=0, \tag{2.9}\]
where \(\mathbf{m}:\widetilde{\Omega}\times(0,\infty)\to\mathbb{S}^{2}\subset\mathbb{R}^{3}\) is the magnetization field, \(\alpha>0\) is a non-dimensional damping coefficient (Gilbert factor), and \(\gamma>0\) is the (constant) absolute value of the gyromagnetic ratio with dimensions of frequency (see, e.g., [16]). The effective field, \(\mathbf{H}_{\mathrm{eff}}=\mathbf{h}-\nabla\mathbb{E}(\mathbf{m})\), is the sum of the applied field \(\mathbf{h}\) and the negative functional gradient of the micromagnetic energy \(\mathbb{E}(\mathbf{m})\). If we consider a single magnetic spin \(\mathbf{m}=\mathbf{m}(t)\) under a constant magnetic field \(\mathbf{h}\) and neglect damping, the magnetization \(\mathbf{m}\) will precess about the applied field \(\mathbf{h}\) with a frequency given by \(\omega=\gamma|\mathbf{h}|\). When the damping is turned on, the vector \(\mathbf{m}\) will spiral down around \(\mathbf{h}\) until \(\mathbf{m}\) and \(\mathbf{h}\) become parallel. The typical relaxation time is \(1/(\alpha\omega)\).
In bulk materials, there exists a one-dimensional optimal path connecting antipodal magnetization states known as the Bloch wall. Bloch walls are such that \(m_{1}=0\) and the transition is perpendicular to the transition axis. In this case, the magnetization \(\mathbf{m}\) is divergence-free and the stray field energy vanishes. Under this condition, there exist explicit dynamic solutions in the bulk, where under an applied field \(\mathbf{h}=H\mathbf{e}_{2}\) the magnetization rotates to develop an \(m_{1}\) component. This component implies a rotation of the other magnetization components, advancing the domain wall [11, 13].
### LLG wave-type dynamic limit in thin films
Thin films are incompatible with gyrotropic wall motion due to the incompatibility constraint of the in-plane magnetization imposed by the stray field. In this configuration, the competition between energy and dynamic forces becomes singular in the thin-film limit. In [1], a suitable effective limit is considered under the appropriate regime, in which the oscillatory features of the LLG dynamics are preserved in the limit. It turns out that the effective dynamics depend on the asymptotic regime as \(\alpha\) and the relative thickness \(\delta/d\) tend to zero.
For the precise scaling and regime in [1] let \(\varepsilon=\sqrt{Q}\) and consider (2.1) when \(\varepsilon\ll\mathcal{Q}\) while \(\mathcal{Q}=(2\varepsilon d/\delta)^{2}\lesssim 1\) is small but bounded from below. That is, \(\varepsilon\sim\delta/d\) can be regarded as the relative thickness. Under these assumptions we rescale space and time by
\[x\mapsto wx\quad\text{where }w=\delta/(2\varepsilon^{2}),\qquad t\mapsto t/(\gamma\varepsilon).\]
In this scaling, the mean effective field \(\mathbf{H}_{\text{eff}}\) becomes
\[\mathbf{H}_{\text{eff}}=-\varepsilon^{2}\nabla E_{\varepsilon}(\mathbf{m}) \quad\text{ where }E_{\varepsilon}=(2/\delta)\mathbb{E}(\mathbf{m}) \tag{2.10}\]
where \(E_{\varepsilon}(\mathbf{m})\) is given by (2.2). Therefore, the LLG equation (2.9) becomes,
\[\mathbf{m}_{t}+\alpha\mathbf{m}\times\mathbf{m}_{t}+\varepsilon\mathbf{m}\times\nabla E_{\varepsilon}(\mathbf{m})=0. \tag{2.11}\]
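To see this, substitute (2.10) into (2.9) and rescale time: writing \(\widetilde{\mathbf{m}}(x,t)=\mathbf{m}(x,t/(\gamma\varepsilon))\), one has \(\partial_{t}\mathbf{m}=\gamma\varepsilon\,\partial_{t}\widetilde{\mathbf{m}}\), so that (2.9) becomes

\[\gamma\varepsilon\,\widetilde{\mathbf{m}}_{t}+\alpha\gamma\varepsilon\,\widetilde{\mathbf{m}}\times\widetilde{\mathbf{m}}_{t}+\gamma\varepsilon^{2}\,\widetilde{\mathbf{m}}\times\nabla E_{\varepsilon}(\widetilde{\mathbf{m}})=0;\]

dividing by \(\gamma\varepsilon\) and dropping the tilde yields (2.11).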
To derive the effective equation for the in-plane magnetization, it is necessary to write \(E_{\varepsilon}(\mathbf{m})\) in terms of \(m=(m_{1},m_{2})\) and \(m_{3}\), that is,
\[E_{\varepsilon}(\mathbf{m})=E_{0}(m)+\frac{1}{2}\int_{0}^{L}\int_{\mathbb{R}} \Big{(}\mathcal{Q}|\nabla m_{3}|^{2}+\Big{(}\frac{m_{3}}{\varepsilon}\Big{)}^ {2}\,\Big{)}dx\]
where
\[E_{0}(m)=\frac{1}{2}\int_{0}^{L}\int_{\mathbb{R}}\Big{(}\mathcal{Q}|\nabla m|^ {2}+||\nabla|^{\frac{1}{2}}\mathcal{H}(m)|^{2}+(1-m_{2}^{2})\Big{)}dx. \tag{2.12}\]
Notice that for one-dimensional transition layers the energy \(E_{0}\) coincides with the reduced Neel wall energy (2.4).
In [1] it is shown that as
\[\varepsilon\to 0\quad\text{while}\quad\alpha(\varepsilon)/\varepsilon\to\nu \tag{2.13}\]
for some positive \(\nu\), while keeping \(\mathcal{Q}=1\) for every \(\varepsilon>0\), there exists a sequence of solutions \(\mathbf{m}_{\varepsilon}\) of (2.11), \(L\)-periodic in the \(x_{2}\) direction, such that the in-plane magnetization \(m_{\varepsilon}\) converges weakly (in the appropriate spaces) to \(m\in\mathbb{S}^{1}\), a weak solution of
\[[\partial_{t}^{2}m+\nu\partial_{t}m+\nabla E_{0}(m)]\perp T_{m}\mathbb{S}^{1}. \tag{2.14}\]
Because \(E_{0}(m)\) coincides with the Neel wall energy, it is clear that under the appropriate boundary conditions at infinity (e.g. (2.4)) the static Neel wall profile \(\bar{m}=(\cos\bar{\theta},\sin\bar{\theta})\) is a static solution of (2.14).
### Main result
The static Neel wall solution and the wave-type dynamic equation (2.14) are the starting point of the present work. We state our main result in terms of the magnetic phase \(\theta:\mathbb{R}\times(0,\infty)\to\mathbb{R}\). As a function of \(\theta(x,t)\), equation (2.14) with the boundary conditions given by (2.5) becomes
\[\left\{\begin{array}{l}\partial_{t}^{2}\theta+\nu\partial_{t}\theta+\nabla \mathcal{E}(\theta)=0,\\ \theta(-\infty,t)=-\pi/2,\quad\theta(\infty,t)=\pi/2,\\ \theta(x,0)=\theta_{0}(x),\quad\partial_{t}\theta(x,0)=v_{0}(x)\end{array}\right. \tag{2.15}\]
where \((\theta_{0},v_{0})\) are some initial conditions and the energy \(\mathcal{E}(\theta)\) is as in (2.5). After these definitions we are ready to state our main result.
**Theorem 2.3** (orbital stability of the static Neel wall).: _Let \(\mathcal{J}\subset H^{1}(\mathbb{R})\times L^{2}(\mathbb{R})\) be the set of initial conditions such that the Cauchy problem (2.15) has a global solution. Then there exists \(\varepsilon>0\) sufficiently small such that if the pair \((\theta_{0},v_{0})\in\mathcal{J}\) satisfies_
\[\|\theta_{0}-\overline{\theta}\|_{H^{1}}+\|v_{0}\|_{L^{2}}<\varepsilon,\]
_then the solution to (2.15) with initial condition \((\theta(x,0),\partial_{t}\theta(x,0))=(\theta_{0},v_{0})\) satisfies for any \(t>0\),_
\[\|\theta(\cdot,t)-\overline{\theta}(\cdot+\delta)\|_{H^{1}}\leq C\exp(- \omega t),\]
_for some shift \(\delta\in\mathbb{R}\) and constants \(C,\omega>0\) that may depend on \((\theta_{0},v_{0})\) and \(\varepsilon\)._
**Remark 2.4**.: It is to be noticed that we are not proving the global existence of the solution for a given small initial perturbation. Theorem 2.3 states that any small initial perturbation of the static Neel profile, if it exists globally, must decay to a translation of it. This type of behavior is also called _orbital_ stability (or stability _in shape_), as initial perturbations decay to an element of the orbit or manifold generated by the static wave which, in this case, is the one-dimensional manifold of translations. The existence of global solutions can be studied using standard semigroup techniques and with the help of the decay estimates performed in this work; we do not pursue such an analysis here. Instead, we focus on the stability problem alone.
## 3. The linearized operator around the static Neel wall's phase
In this section, we study the linearized operator around the static Neel wall's phase. We examine its main properties and locate its resolvent set and spectrum. Notably, we prove that it is a relatively compact perturbation of an asymptotic operator, a property that will play a key role later on. We start by recalling some standard definitions from spectral theory, which can be found in the classical literature on the subject [13, 14, 15].
**Definition 3.1**.: Let \(\mathcal{T}\) and \(\mathcal{S}\) be two linear operators from \(X\) to \(Y\), Banach spaces. It is said that \(\mathcal{S}\) is relatively bounded with respect to \(\mathcal{T}\) (or simply \(\mathcal{T}\)-bounded) if \(D(\mathcal{T})\subset D(\mathcal{S})\) and
\[\|\mathcal{S}u\|\leq a\|u\|+b\|\mathcal{T}u\|,\qquad\forall\,u\in D(\mathcal{ T}),\]
where \(a,b\) are nonnegative constants. The greatest lower bound \(b_{0}\) of all possible constants \(b\) is called the relative bound of \(\mathcal{S}\) with respect to \(\mathcal{T}\) (or simply the \(\mathcal{T}\)-bound of \(\mathcal{S}\)).
**Definition 3.2**.: Let \(\mathcal{L}\in\mathscr{C}(X,Y)\) be a linear, closed operator from \(X\) to \(Y\), Banach spaces. The resolvent \(\rho(\mathcal{L})\), the point spectrum \(\sigma_{\rm pt}(\mathcal{L})\) and the essential spectrum \(\sigma_{\rm ess}(\mathcal{L})\) are defined as:
\[\rho(\mathcal{L}) :=\{\lambda\in\mathbb{C}\,:\,\mathcal{L}-\lambda\,\,\,\text{is injective and onto, and }(\mathcal{L}-\lambda)^{-1}\,\text{is bounded}\,\},\] \[\sigma_{\rm pt}(\mathcal{L}) :=\{\lambda\in\mathbb{C}\,:\,\mathcal{L}-\lambda\,\,\,\text{is Fredholm with index zero and has a non-trivial kernel}\},\] \[\sigma_{\rm ess}(\mathcal{L}) :=\{\lambda\in\mathbb{C}\,:\,\mathcal{L}-\lambda\,\,\,\text{is either not Fredholm or has index different from zero}\}.\]
The spectrum of \(\mathcal{L}\) is the set \(\sigma(\mathcal{L}):=\sigma_{\rm ess}(\mathcal{L})\cup\sigma_{\rm pt}( \mathcal{L})\). If \(\lambda\in\sigma_{\rm pt}(\mathcal{L})\) we refer to it as an _eigenvalue_.
**Remark 3.3**.: Several definitions of essential spectrum are in use. This definition is due to Weyl [20] (see also [13, 14]), and has the advantage that the remaining spectrum, namely \(\sigma_{\mathrm{pt}}(\mathcal{L})\), is a discrete set of isolated eigenvalues. It is also to be observed that \(\rho(\mathcal{L})=\mathbb{C}\backslash\sigma(\mathcal{L})\) because the operator \(\mathcal{L}-\lambda\) is closed (cf. Kato [13], p. 167).
**Definition 3.4**.: Let \(X\) be a Banach space and assume that \(\mathcal{T}\) and \(\mathcal{T}_{0}\) are two closed and linear operators on \(X\). The operator \(\mathcal{T}\) is a relatively compact perturbation of \(\mathcal{T}_{0}\) if \((\mathcal{T}_{0}-\mathcal{T})(\lambda\mathrm{I}-\mathcal{T}_{0})^{-1}:X\to X\) is compact for some \(\lambda\in\rho\left(\mathcal{T}_{0}\right)\).
### Basic properties
It can be shown (cf. Capella _et al._[1]) that the linearization of the mapping \(\nabla\mathcal{E}(\theta)\) around the static Neel wall's phase \(\overline{\theta}=\overline{\theta}(x)\) is given by
\[\begin{cases}\mathcal{L}:L^{2}\to L^{2},\\ D(\mathcal{L})=H^{2},\\ \mathcal{L}u:=-\partial_{x}^{2}u+\mathcal{S}u-c_{\theta}u,\qquad u\in D( \mathcal{L}),\end{cases} \tag{3.1}\]
where the nonlocal operator \(\mathcal{S}\) is defined as
\[\begin{cases}\mathcal{S}:L^{2}\to L^{2},\\ D(\mathcal{S})=H^{1},\\ \mathcal{S}u:=\sin\overline{\theta}(1+(-\Delta)^{1/2})(u\sin\overline{\theta}),\quad u\in D(\mathcal{S}),\end{cases} \tag{3.2}\]
and \(c_{\theta}=c_{\theta}(x)\), \(x\in\mathbb{R}\), is the coefficient defined as
\[c_{\theta}:=\cos\overline{\theta}(1+(-\Delta)^{1/2})\cos\overline{\theta}. \tag{3.3}\]
It can be easily shown that \(c_{\theta}=c_{\theta}(x)\) is a real, uniformly bounded coefficient belonging to \(H^{2}\) (see [1]). Notice that the non-local operator, \(1+(-\Delta)^{1/2}:L^{2}\to L^{2}\), is defined through
\[(1+(-\Delta)^{1/2})u:=((1+|\xi|)\widehat{u}(\xi))^{\vee},\]
for any \(u\) in its natural domain, \(D(\mathcal{S})=H^{1}\), and since \(D(-\partial_{x}^{2})=H^{2}\subset H^{1}\), the natural domain of definition of \(\mathcal{L}\) is \(D(\mathcal{L})=H^{2}\). Therefore, we regard \(\mathcal{L}\) as a densely defined operator in \(L^{2}\) with domain \(D(\mathcal{L})=H^{2}\). For notational convenience we denote
\[s_{\theta}(x):=\sin\overline{\theta}(x),\qquad x\in\mathbb{R},\]
which is also real, smooth and bounded for all \(x\in\mathbb{R}\).
The next lemma shows a relation between the Hilbert transform and the half-Laplacian which will be used later on. We present it without proof, inasmuch as the latter can be found in many references (see, e.g., [1, 13, 14]).
**Lemma 3.5**.: _Let \(\mathcal{H}:L^{2}\to L^{2}\) be the Hilbert transform given by_
\[u\mapsto\mathrm{P.V.}\,\frac{1}{\pi}\int_{\mathbb{R}}\frac{u(s)}{x-s}\,ds.\]
_Then, \(\mathcal{H}\) is an isometry on \(L^{2}\). Moreover, if \(u\in H^{1}\) then_
\[(-\Delta)^{1/2}u=\mathcal{H}(\partial_{x}u)=\partial_{x}\mathcal{H}u.\]
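As a quick illustration, the identity \((-\Delta)^{1/2}u=\mathcal{H}(\partial_{x}u)\) can also be checked numerically, since both sides are Fourier multipliers. The following minimal sketch works on a periodic truncation of the line (an assumption made only for this illustration) and uses the convention \(\widehat{\mathcal{H}u}(\xi)=-i\,\mathrm{sgn}(\xi)\widehat{u}(\xi)\) with a rapidly decaying test function:

```python
# Minimal numerical sketch of Lemma 3.5: the symbol of H(du/dx) is
# (-i sgn(xi)) * (i xi) = |xi|, which is exactly the half-Laplacian symbol.
import numpy as np

N, box = 2048, 40.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
xi = 2*np.pi*np.fft.fftfreq(N, d=box/N)        # frequency variable

u_hat = np.fft.fft(np.exp(-x**2))              # Gaussian test function
half_laplacian_u = np.fft.ifft(np.abs(xi)*u_hat).real
hilbert_of_dxu = np.fft.ifft(-1j*np.sign(xi)*(1j*xi)*u_hat).real

print(np.max(np.abs(half_laplacian_u - hilbert_of_dxu)))  # ~ 0 (rounding error)
```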
From the definition of the coefficients \(s_{\theta}\) and \(c_{\theta}\) we immediately have the following properties.
**Corollary 3.6**.: _There exists a uniform constant \(C>0\) such that_
\[\|c_{\theta}\|_{\infty},\|\partial_{x}c_{\theta}\|_{\infty} \leq C, \tag{3.4}\] \[\|s_{\theta}\|_{\infty},\|\partial_{x}s_{\theta}\|_{\infty},\| \partial_{x}^{2}s_{\theta}\|_{\infty} \leq C.\]
Proof.: Follows directly from Corollary 2.2 and the regularity of \(c_{\theta}\) and \(s_{\theta}\).
**Corollary 3.7**.: _Let \(u,v\in D(\mathcal{S})\). Then \(\left\langle\mathcal{S}u\,,v\right\rangle_{L^{2}}=b[s_{\theta}u,s_{\theta}v]\)._
Proof.: This can be easily proved by applying Plancherel's theorem. Indeed, there holds
\[\langle\mathcal{S}u,v\rangle_{L^{2}} =\int_{\mathbb{R}}s_{\theta}(x)(1+(-\Delta)^{1/2})(s_{\theta}(x)u )v^{*}\,dx\] \[=\int_{\mathbb{R}}(1+(-\Delta)^{1/2})(s_{\theta}(x)u)(s_{\theta}( x)v)^{*}\,dx\] \[=\int_{\mathbb{R}}(1+|\xi|)\widehat{(s_{\theta}u)}(\xi)\widehat{ (s_{\theta}v)}(\xi)^{*}\,d\xi\] \[=b[s_{\theta}u,s_{\theta}v],\]
as claimed.
The following proposition summarizes the basic properties of the linearized operator \(\mathcal{L}\) and the Neel wall's phase, which have already been proved in [1].
**Proposition 3.8**.: _The operator \(\mathcal{L}\) and the static Neel wall's phase \(\overline{\theta}\) satisfy:_
* (a) \(\partial_{x}\overline{\theta}\in D(\mathcal{L})\) _with_ \(\mathcal{L}\partial_{x}\overline{\theta}=0\)_._
* (b) _For all_ \(f\in L^{2}\) _such that_ \(f\perp\partial_{x}\overline{\theta}\) _in_ \(L^{2}\) _there exists a solution_ \(u\in H^{2}\) _to the equation_ \(\mathcal{L}u=f\)_. The solution is unique up to a constant multiple of_ \(\partial_{x}\overline{\theta}\)_._
* (c) _There exists a uniform constant_ \(\Lambda_{0}>0\) _such that if_ \(u\in H^{1}\) _and_ \(\langle u,\partial_{x}\overline{\theta}\rangle_{L^{2}}=0\)_, then_ \[\langle\mathcal{L}u,u\rangle_{L^{2}}\geq\Lambda_{0}\|u\|_{L^{2}}^{2}.\] (3.5)
* (d) _Let_ \(f\in\{\partial_{x}\overline{\theta}\}^{\perp}\subset L^{2}\)_. Then the equation_ \(\mathcal{L}u=f\) _has a strong solution_ \(u\in H^{2}\)_, unique up to a constant multiple of_ \(\partial_{x}\overline{\theta}\)_. Moreover, if_ \(u\in\{\partial_{x}\overline{\theta}\}^{\perp}\)_, then_ \[\|u\|_{H^{2}}\leq C\|f\|_{L^{2}},\] (3.6) _for some_ \(C>0\)_._
Proof.: The proof follows from Lemmata 4 and 5, together with Proposition 1 in [1].
Next, we are going to verify that the operator \(\mathcal{L}\) is self-adjoint. For that purpose, we remind the reader that it is well-known that the Laplacian, \(-\Delta=-\partial_{x}^{2}\), is essentially self-adjoint in \(L^{2}\) when defined on \(C_{0}^{\infty}(\mathbb{R})\), but it is actually self-adjoint on its maximal domain, \(D(-\partial_{x}^{2})=H^{2}\subset L^{2}\). This property can be easily verified using the Fourier transform, which unitarily diagonalizes the Laplacian (see, e.g., Kato [14], section V-5.2, pp. 299-302).
First, we have the following observation.
**Lemma 3.9**.: _The operator \(\mathcal{S}:L^{2}\to L^{2}\) is symmetric._
Proof.: We recall that \(\mathcal{S}\) is symmetric if and only if its domain is dense and \(\langle v,\mathcal{S}u\rangle_{L^{2}}=\langle\mathcal{S}v,u\rangle_{L^{2}}\) for all \(u,v\in D(\mathcal{S})\). Since \(D(\mathcal{S})=H^{1}\) is dense in \(L^{2}\) we only need to verify the latter property. But Corollary 3.7 and the hermiticity of \(b\) yield
\[\langle\mathcal{S}u\,,v\rangle_{L^{2}}=b[s_{\theta}u,s_{\theta}v]=b[s_{\theta} v,s_{\theta}u]^{*}=\langle\mathcal{S}v\,,u\rangle_{L^{2}}^{*}=\langle u\,, \mathcal{S}v\rangle_{L^{2}}\,,\]
for all \(u,v\in H^{1}\), as claimed.
We now verify that the operator \(\mathcal{L}\) is self-adjoint through the application of Kato-Rellich's theorem twice.
**Theorem 3.10**.: _The operator \(\mathcal{L}:L^{2}\to L^{2}\) with domain \(D(\mathcal{L})=H^{2}\) is self-adjoint._
Proof.: First, note that \(\mathcal{L}\) is clearly a symmetric operator, because its domain is dense in \(L^{2}\) and there holds
\[\langle\mathcal{L}u,v\rangle_{L^{2}} =\langle-\partial_{x}^{2}u,v\rangle_{L^{2}}+\langle\mathcal{S}u,v \rangle_{L^{2}}+\langle c_{\theta}u,v\rangle_{L^{2}},\] \[=\langle u,-\partial_{x}^{2}v\rangle_{L^{2}}+\langle u,\mathcal{ S}v\rangle_{L^{2}}+\langle u,c_{\theta}v\rangle_{L^{2}},\] \[=\langle u,\mathcal{L}v\rangle_{L^{2}},\]
for all \(u,v\in H^{2}\), after integration by parts, application of Lemma 3.9 and the fact that \(c_{\theta}\) is real.
Now, it is well-known that for every \(u\in H^{2}\) there holds the estimate
\[\|\partial_{x}u\|_{L^{2}}\leq k\|\partial_{x}^{2}u\|_{L^{2}}+\frac{2}{k}\|u\|_ {L^{2}}, \tag{3.7}\]
for any arbitrary \(k>0\) (see Kato [14], p. 192).
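Estimate (3.7) follows, for instance, from the interpolation bound \(\|\partial_{x}u\|_{L^{2}}^{2}\leq\|u\|_{L^{2}}\|\partial_{x}^{2}u\|_{L^{2}}\) (integration by parts and Cauchy-Schwarz) together with Young's inequality:

\[\|\partial_{x}u\|_{L^{2}}\leq\|u\|_{L^{2}}^{1/2}\|\partial_{x}^{2}u\|_{L^{2}}^{1/2}\leq\frac{k}{2}\|\partial_{x}^{2}u\|_{L^{2}}+\frac{1}{2k}\|u\|_{L^{2}},\]

which is even slightly stronger than (3.7). Let us denote the operator,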
\[\begin{cases}\widetilde{\mathcal{S}}:L^{2}\to L^{2},\\ D(\widetilde{\mathcal{S}})=H^{1},\\ \widetilde{\mathcal{S}}u:=s_{\theta}(-\Delta)^{1/2}(s_{\theta}u),\quad u\in D (\widetilde{\mathcal{S}}),\end{cases}\]
so that \(\mathcal{S}=s_{\theta}^{2}\mathrm{I}+\widetilde{\mathcal{S}}\). Following the arguments of Lemma 3.9, it is easy to verify that \(\widetilde{\mathcal{S}}\) is a symmetric operator. Moreover, from Corollary 2.2, we observe that \(s_{\theta}=\sin\overline{\theta}\) and \(\partial_{x}s_{\theta}=(\partial_{x}\overline{\theta})\cos\overline{\theta}\) are uniformly bounded functions for all \(x\in\mathbb{R}\), and there exists a constant \(C_{0}>0\) such that \(\|s_{\theta}\|_{\infty}\leq 1\) and \(\|\partial_{x}s_{\theta}\|_{\infty}\leq C_{0}\). Therefore,
\[\|\widetilde{\mathcal{S}}u\|_{L^{2}} \leq\left(\int_{\mathbb{R}}|(-\Delta)^{1/2}(s_{\theta}(x)u)|^{2} \,dx\right)^{1/2}=\left(\int_{\mathbb{R}}|\xi|^{2}|\widehat{(s_{\theta}u)}( \xi)|^{2}\,d\xi\right)^{1/2}\] \[\leq\|\partial_{x}(s_{\theta}u)\|_{L^{2}}\leq\|\partial_{x}s_{ \theta}\|_{\infty}\|u\|_{L^{2}}+\|s_{\theta}\|_{\infty}\|\partial_{x}u\|_{L^{2 }}\leq C_{0}\|u\|_{L^{2}}+\|\partial_{x}u\|_{L^{2}}.\]
Inequality (3.7) then yields
\[\|\widetilde{\mathcal{S}}u\|_{L^{2}}\leq k\|-\partial_{x}^{2}u\|_{L^{2}}+ \Big{(}C_{0}+\frac{2}{k}\Big{)}\|u\|_{L^{2}},\]
for all \(u\in H^{2}\) and any arbitrary \(k>0\). Since \(D(-\partial_{x}^{2})=H^{2}\subset D(\widetilde{\mathcal{S}})=H^{1}\) and since \(k>0\) is arbitrary, this shows that the symmetric operator \(\widetilde{\mathcal{S}}\) is relatively bounded with respect to \(-\partial_{x}^{2}\) with relative bound equal to zero. Consequently, we may apply Kato-Rellich's theorem (see Reed and Simon, vol. II [12], Theorem X.12, p. 162) to conclude that the operator \(\widetilde{\mathcal{S}}-\partial_{x}^{2}:L^{2}\to L^{2}\), with domain \(D(\widetilde{\mathcal{S}}-\partial_{x}^{2})=D(-\partial_{x}^{2})=H^{2}\), is self-adjoint.
Finally, let us write \(\mathcal{L}=-\partial_{x}^{2}+\mathcal{S}-c_{\theta}\mathrm{I}=-\partial_{x}^{2}+\widetilde{\mathcal{S}}+\beta\mathrm{I}\), where \(\beta:=s_{\theta}^{2}-c_{\theta}\) is a real, smooth and bounded coefficient. Clearly,
\[\|\beta u\|_{L^{2}}\leq\|\beta\|_{\infty}\|u\|_{L^{2}}\leq\|\beta\|_{\infty}\|u \|_{L^{2}}+k\|(\widetilde{\mathcal{S}}-\partial_{x}^{2})u\|_{L^{2}},\]
for all \(u\in H^{2}\) and for any \(k>0\). Since \(D(\widetilde{\mathcal{S}}-\partial_{x}^{2})=H^{2}\subset D(\beta\mathrm{I})=L^ {2}\), we conclude that the symmetric operator \(\beta\mathrm{I}\) is \((\widetilde{\mathcal{S}}-\partial_{x}^{2})-\)bounded with relative bound equal to zero. Upon application, once again, of Kato-Rellich's theorem we conclude that the operator \(\mathcal{L}=-\partial_{x}^{2}+\widetilde{\mathcal{S}}+\beta\mathrm{I}\), with domain \(D(\mathcal{L})=H^{2}\), is self-adjoint. The theorem is proved.
**Corollary 3.11**.: \(\mathcal{L}\) _is a closed operator._
Proof.: Since every self-adjoint operator is closed (it coincides with its adjoint, which is closed), the conclusion follows from Theorem 3.10.
### The spectrum of \(\mathcal{L}\)
Thanks to Theorem 3.10, we immediately obtain from basic properties of self-adjoint operators that the \(L^{2}\)-spectrum of \(\mathcal{L}\) is real, \(\sigma(\mathcal{L})\subset\mathbb{R}\). Moreover, from Proposition 3.8 (a) and Proposition 2.1 (c), we already know that \(\partial_{x}\overline{\theta}\) is an eigenfunction of \(\mathcal{L}\) associated to the eigenvalue \(\lambda=0\), referred to as _the translation eigenvalue_. This means that any translation of the Neel wall's phase, \(\overline{\theta}(\cdot+\delta)\), remains a Neel wall (that is, a minimizer of the variational problem (2.5), which might no longer be centered at \(x=0\), though).
Now, if we decompose \(L^{2}=\{\partial_{x}\overline{\theta}\}^{\perp}\oplus\mathrm{span}\{\partial _{x}\overline{\theta}\}\), and if we suppose that there exists \(u\in H^{2}\subset L^{2}\), \(u\neq 0\), with \(u=u_{\perp}+\alpha\partial_{x}\overline{\theta}\) with some \(\alpha\in\mathbb{C}\) such that \(\mathcal{L}u=0\), then by Proposition 3.8 (c) there holds
\[0=\langle\mathcal{L}(u_{\perp}+\alpha\partial_{x}\overline{\theta}),u_{\perp} +\alpha\partial_{x}\overline{\theta}\rangle_{L^{2}}=\langle\mathcal{L}u_{ \perp},u_{\perp}\rangle_{L^{2}}\geq\Lambda_{0}\|u_{\perp}\|_{L^{2}}^{2},\]
yielding \(u_{\perp}=0\). This implies that the geometric multiplicity of \(\lambda=0\) is equal to one. Recalling that for self-adjoint operators on Hilbert spaces the algebraic and geometric multiplicities of an eigenvalue coincide (see Kato [14], p. 273), we readily obtain the following result.
**Corollary 3.12**.: \(\lambda=0\) _is a simple eigenvalue of the operator \(\mathcal{L}:L^{2}\to L^{2}\), with eigenfunction \(\partial_{x}\overline{\theta}\in D(\mathcal{L})=H^{2}\)._
We use this information to prove the following spectral bound.
**Lemma 3.13**.: _The \(L^{2}\)-spectrum of \(\mathcal{L}\) satisfies \(\sigma(\mathcal{L})\subset\{0\}\cup[\Lambda_{0},\infty)\), where \(\Lambda_{0}>0\) is the constant from Proposition 3.8 (c)._
Proof.: Corollary 3.12 implies that \(\lambda=0\in\sigma_{\mathrm{pt}}(\mathcal{L})\) and, therefore, it is an isolated simple eigenvalue. Moreover, we also know that \(\sigma(\mathcal{L})_{|L^{2}}\subset\mathbb{R}\). Hence, by the spectral decomposition theorem (see Theorem III-6.17, p. 178, in Kato [14]) we have a decomposition for \(\mathcal{L}\) according to a decomposition of the Hilbert space, \(L^{2}=X^{\prime}\oplus X^{\prime\prime}\), such that \(\sigma(\mathcal{L})_{|X^{\prime}}=\{0\}\) and \(\sigma(\mathcal{L})_{|X^{\prime\prime}}=\sigma(\mathcal{L})_{|L^{2}}\backslash\{0\}\), where \(X^{\prime}=\mathcal{P}_{0}L^{2}\), \(X^{\prime\prime}=(\mathrm{I}-\mathcal{P}_{0})L^{2}\) and \(\mathcal{P}_{0}\) is the spectral projection associated to the eigenvalue \(\lambda=0\), determined by the Dunford integral,
\[\mathcal{P}_{0}=-\frac{1}{2\pi i}\int_{\partial\Gamma}(\lambda\mathrm{I}- \mathcal{L})^{-1}\,d\lambda,\]
with \(\partial\Gamma\) being a simple, rectifiable curve such that \(\partial\Gamma\subset\rho(\mathcal{L})\) and \(\Gamma\cap\sigma(\mathcal{L})_{|L^{2}}=\{0\}\). Actually, since the eigenvalue is simple (the rank of \(\mathcal{P}_{0}\) is equal to one), we have \(X^{\prime}=\mathrm{span}\{\partial_{x}\overline{\theta}\}\subset L^{2}\) and \(X^{\prime\prime}=\{\partial_{x}\overline{\theta}\}^{\perp}\) in \(L^{2}\).
Next, we verify that \(\mathcal{L}_{|X^{\prime\prime}}\) is also self-adjoint. This restriction of \(\mathcal{L}\) is defined as
\[\begin{cases}\mathcal{L}_{|X^{\prime\prime}}:X^{\prime\prime}\to X^{\prime \prime},\\ D(\mathcal{L}_{|X^{\prime\prime}})=D(\mathcal{L})\cap X^{\prime\prime}=H^{2} \cap X^{\prime\prime},\\ \mathcal{L}_{|X^{\prime\prime}}u:=\mathcal{L}u,\quad u\in D(\mathcal{L}_{|X^{ \prime\prime}}).\end{cases}\]
Clearly, \(\mathcal{L}_{|X^{\prime\prime}}\) is symmetric because \(D(\mathcal{L}_{|X^{\prime\prime}})\) is dense in \(X^{\prime\prime}=\{\partial_{x}\overline{\theta}\}^{\perp}\) and \(\mathcal{L}\) is symmetric. Thus, we apply the basic criterion of self-adjointness: in order to show that \(\mathcal{L}_{|X^{\prime\prime}}\) is self-adjoint it suffices to show that \((\mathcal{L}_{|X^{\prime\prime}}\pm i)(D(\mathcal{L})\cap X^{\prime\prime})=X^ {\prime\prime}\) (see, e.g., Theorem VIII.3, p. 256, in Reed and Simon, vol. I [13]). But we already know that \(\mathcal{L}\pm i:D(\mathcal{L})\to L^{2}\) is surjective because \(\mathcal{L}\) is self-adjoint. Therefore, for \(v\in X^{\prime\prime}\subset L^{2}\) there exist elements \(u_{\pm}\in D(\mathcal{L})=H^{2}\) such that \((\mathcal{L}\pm i)u_{\pm}=v\). This implies that
\[(\mathcal{L}\pm i)(\mathrm{I}-\mathcal{P}_{0})u_{\pm}=(\mathcal{ L}\pm i)u_{\pm}-(\mathcal{L}\pm i)\mathcal{P}_{0}u_{\pm} =v-(\mathcal{L}\mathcal{P}_{0}u_{\pm}\pm i\mathcal{P}_{0}u_{\pm})\] \[=v-\mathcal{P}_{0}(\mathcal{L}\pm i)u_{\pm}\] \[=(\mathrm{I}-\mathcal{P}_{0})v\,\in X^{\prime\prime},\]
with \((\mathrm{I}-\mathcal{P}_{0})u_{\pm}\in X^{\prime\prime}\). That is, \((\mathcal{L}_{|X^{\prime\prime}}\pm i):D(\mathcal{L})\cap X^{\prime\prime} \to X^{\prime\prime}\) is surjective, and this proves that \(\mathcal{L}_{|X^{\prime\prime}}\) is self-adjoint.
Finally, from Rayleigh's formula for semi-bounded self-adjoint operators (cf. Kato [12], p. 278), we have the bound
\[\langle\mathcal{L}_{|X^{\prime\prime}}u,u\rangle_{L^{2}}=\langle\mathcal{L}u, u\rangle_{L^{2}}\geq\Lambda_{0}\|u\|_{L^{2}}^{2},\]
for all \(u\in D(\mathcal{L})\cap X^{\prime\prime}=H^{2}\cap\{\partial_{x}\overline{ \theta}\}_{L^{2}}^{\perp}\) (see Proposition 3.8 (c)), which implies, in turn, that \(\sigma(\mathcal{L}_{X^{\prime\prime}})\subset[\Lambda_{0},\infty)\). Kato's decomposition theorem then yields \(\sigma(\mathcal{L})_{L^{2}}\subset\{0\}\cup[\Lambda_{0},\infty)\), as claimed.
### The asymptotic operator \(\mathcal{L}_{\infty}\)
We now examine the following operator, defined by
\[\begin{cases}\mathcal{L}_{\infty}:L^{2}\to L^{2},\\ D(\mathcal{L}_{\infty})=H^{2},\\ \mathcal{L}_{\infty}u:=-\partial_{x}^{2}u+(1+(-\Delta)^{1/2})u,\quad u\in D( \mathcal{L}_{\infty}).\end{cases} \tag{3.8}\]
This operator results from (formally) taking the limit when \(x\to\pm\infty\) in the expression of \(\mathcal{L}\), recalling that \(\overline{\theta}(\pm\infty)=\pm\pi/2\). Let us define the bilinear form
\[a_{\infty}[\cdot,\cdot]:H^{1}\times H^{1}\to\mathbb{C}, \tag{3.9}\] \[a_{\infty}[u,v]:=\langle\partial_{x}u,\partial_{x}v\rangle_{L^{2} }+b[u,v],\qquad u,v\in H^{1},\]
where \(b[\cdot,\cdot]\) is the bilinear form defined in (2.8). It follows from standard facts that if \(f\in L^{2}\), then the equation
\[\mathcal{L}_{\infty}u=f, \tag{3.10}\]
is endowed with a weak formulation in the space \(H^{1}\) in terms of the bilinear form \(a_{\infty}[\cdot,\cdot]\). Indeed, we say that \(u\in H^{1}\) is a weak solution to (3.10) provided that
\[a_{\infty}[u,v]=\langle\mathcal{L}_{\infty}u,v\rangle_{L^{2}}=\langle f,v \rangle_{L^{2}},\qquad\forall\,v\in H^{1}. \tag{3.11}\]
**Lemma 3.14**.: _The bilinear form \(a_{\infty}[\cdot,\cdot]\) defines an inner product in \(H^{1}\) whose induced norm is equivalent to the standard \(H^{1}\)-norm._
Proof.: First, it is to be noticed that \(a_{\infty}[\cdot,\cdot]\) is complex symmetric, \(a_{\infty}[u,v]^{*}=a_{\infty}[v,u]\) for all \(u,v\in H^{1}\), and clearly bilinear. Use Plancherel's identity to observe that
\[b[u,u]=\int_{\mathbb{R}}(1+|\xi|)|\widehat{u}(\xi)|^{2}\,d\xi\geq\|u\|_{L^{2}}^ {2},\]
yielding \(a_{\infty}[u,u]=\|\partial_{x}u\|_{L^{2}}^{2}+b[u,u]\geq\|\partial_{x}u\|_{L^{2}}^{2}+\|u\|_{L^{2}}^{2}=\|u\|_{H^{1}}^{2}\) for all \(u\in H^{1}\). On the other hand, since \(1+|\xi|\leq\tfrac{3}{2}+\tfrac{1}{2}\xi^{2}\) (Young's inequality), it follows that
\[b[u,u]=\int_{\mathbb{R}}(1+|\xi|)|\widehat{u}(\xi)|^{2}\,d\xi\leq\int_{\mathbb{R}}\big(\tfrac{3}{2}+\tfrac{1}{2}\xi^{2}\big)|\widehat{u}(\xi)|^{2}\,d\xi,\]
yielding
\[a_{\infty}[u,u]=\|\partial_{x}u\|_{L^{2}}^{2}+b[u,u]\leq\|\partial_{x}u\|_{L^{2}}^{2}+\int_{\mathbb{R}}\big(\tfrac{3}{2}+\tfrac{1}{2}\xi^{2}\big)|\widehat{u}(\xi)|^{2}\,d\xi\leq\tfrac{3}{2}\|u\|_{H^{1}}^{2}.\]
Finally, we notice that \(a_{\infty}[u,u]=0\) if and only if \(u=0\). This shows the result.
We now apply the previous lemma to show that (3.10) has a unique strong solution in \(H^{2}\).
**Lemma 3.15**.: _For every \(f\in L^{2}\) there exists a unique solution \(u\in H^{2}\) to (3.10). Moreover, there exists a uniform constant \(C>0\) such that_
\[\|u\|_{H^{2}}\leq C\|f\|_{L^{2}}. \tag{3.12}\]
Proof.: The bilinear form is continuous in \(H^{1}\) because
\[|a_{\infty}[u,v]|\leq|\langle\partial_{x}u,\partial_{x}v\rangle_ {L^{2}}|+|b[u,v]| \leq\|\partial_{x}u\|_{L^{2}}\|\partial_{x}v\|_{L^{2}}+\int_{ \mathbb{R}}(1+|\xi|)|\widehat{u}(\xi)||\widehat{v}(\xi)|\,d\xi\] \[\leq\|\partial_{x}u\|_{L^{2}}\|\partial_{x}v\|_{L^{2}}+\|u\|_{L^ {2}}\|v\|_{L^{2}}+\|\partial_{x}u\|_{L^{2}}\|v\|_{L^{2}}\] \[\leq C\|u\|_{H^{1}}\|v\|_{H^{1}},\]
for all \(u,v\in H^{1}\). Moreover, in Lemma 3.14 we have already verified that \(a_{\infty}[\cdot,\cdot]\) is \(H^{1}\)-elliptic. Thus, by the Lax-Milgram theorem, for each \(f\in L^{2}\) there exists a unique weak solution \(u\in H^{1}\) to (3.11). This solution solves
\[\mathcal{L}_{\infty}u=-\partial_{x}^{2}u+(1+(-\Delta)^{1/2})u=f,\]
in the sense of distributions. Therefore, for any test function \(\varphi\in C_{0}^{\infty}(\mathbb{R})\) there holds
\[\langle\partial_{x}u,\partial_{x}\varphi\rangle_{L^{2}}+\langle(1+(-\Delta)^{ 1/2})u,\varphi\rangle_{L^{2}}=\langle f,\varphi\rangle_{L^{2}}.\]
By Plancherel's identity this implies that
\[\int_{\mathbb{R}}\big{[}(1+|\xi|+\xi^{2})\widehat{u}(\xi)-\widehat{f}(\xi) \big{]}\widehat{\varphi}(\xi)^{*}\,d\xi=0,\]
for all \(\varphi\in C_{0}^{\infty}(\mathbb{R})\). Hence, \((1+|\xi|+\xi^{2})\widehat{u}(\xi)=\widehat{f}(\xi)\) a.e. in \(\xi\in\mathbb{R}\). Therefore,
\[\|u\|_{H^{2}}^{2}=\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{u}(\xi)|^{2}\,d \xi=\int_{\mathbb{R}}\Big{(}\frac{1+\xi^{2}}{1+|\xi|+\xi^{2}}\Big{)}^{2}| \widehat{f}(\xi)|^{2}\,d\xi\leq\|f\|_{L^{2}}^{2}<\infty.\]
This yields \(u\in H^{2}\), that \(u\) is a strong solution to (3.10) as well as estimate (3.12). The lemma is proved.
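Although no numerics are needed anywhere in the argument, the multiplier inequality \((1+\xi^{2})/(1+|\xi|+\xi^{2})\leq 1\) behind estimate (3.12) is easy to check on a grid. The following minimal Python sketch is an illustration only; the test profile \(\widehat{f}(\xi)=e^{-\xi^{2}}\) is an arbitrary choice made here, not taken from the paper.

```python
import numpy as np

# Lemma 3.15 in Fourier variables: (xi^2 + 1 + |xi|) u-hat = f-hat, and the
# H^2 bound follows from (1 + xi^2)/(1 + |xi| + xi^2) <= 1.
xi = np.linspace(-50.0, 50.0, 200001)
dxi = xi[1] - xi[0]
m = xi**2 + 1.0 + np.abs(xi)           # symbol of L_infty

assert ((1.0 + xi**2) / m).max() <= 1.0

f_hat = np.exp(-xi**2)                 # arbitrary rapidly decaying test profile
u_hat = f_hat / m                      # solves L_infty u = f in Fourier space
norm_u_H2 = np.sqrt(np.sum((1.0 + xi**2)**2 * u_hat**2) * dxi)
norm_f_L2 = np.sqrt(np.sum(f_hat**2) * dxi)
print(norm_u_H2 <= norm_f_L2)          # True, in agreement with (3.12)
```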
**Lemma 3.16**.: _The asymptotic operator \(\mathcal{L}_{\infty}:L^{2}\to L^{2}\) with domain \(D(\mathcal{L}_{\infty})=H^{2}\) is endowed with the following properties:_
* (a) \(\mathcal{L}_{\infty}\) _is self-adjoint_.
* (b) \(\ker\mathcal{L}_{\infty}=\{0\}\).
* (c) \(\operatorname{ran}(\mathcal{L}_{\infty})=L^{2}\).
* (d) \(\sigma(\mathcal{L}_{\infty})_{|L^{2}}\subset[1,\infty)\).
* (e) \((\mathcal{L}_{\infty})^{-1}\) _exists and is bounded_.
Proof.: Notice that \(\mathcal{L}_{\infty}\) is clearly symmetric with dense domain. Therefore, the proof of property (a) follows a Kato-Rellich argument, very similar to that in the proof of Theorem 3.10, and we omit it. Items (b) and (c) follow directly from Lemma 3.15. Since \(\mathcal{L}_{\infty}\) is self-adjoint, its spectrum is real, \(\sigma(\mathcal{L}_{\infty})_{|L^{2}}\subset\mathbb{R}\). If \(u\in D(\mathcal{L}_{\infty})=H^{2}\), then \(\langle\mathcal{L}_{\infty}u,u\rangle_{L^{2}}=a_{\infty}[u,u]\geq\|u\|_{H^{1} }^{2}\geq\|u\|_{L^{2}}^{2}\), showing that \(\mathcal{L}_{\infty}\) is semi-bounded. By Rayleigh's spectral bound for semi-bounded self-adjoint operators in Hilbert spaces (cf. [10], p. 278) we obtain
\[\inf\sigma(\mathcal{L}_{\infty})_{|L^{2}}=\inf_{0\neq v\in D(\mathcal{L}_{ \infty})}\frac{\langle\mathcal{L}_{\infty}v,v\rangle_{L^{2}}}{\|v\|_{L^{2}}^{ 2}}=\inf_{0\neq v\in D(\mathcal{L}_{\infty})}\frac{a_{\infty}[v,v]}{\|v\|_{L^ {2}}^{2}}\geq 1.\]
This shows (d). Property (e) follows directly from (d), inasmuch as it implies that \(\lambda=0\in\rho(\mathcal{L}_{\infty})\). This completes the proof of the lemma.
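In terms of the Fourier symbol of \(\mathcal{L}_{\infty}\), property (d) is transparent: since \(\mathcal{L}_{\infty}\) is unitarily equivalent to multiplication by \(\xi^{2}+1+|\xi|\),
\[\inf\sigma(\mathcal{L}_{\infty})_{|L^{2}}=\inf_{\xi\in\mathbb{R}}\big(\xi^{2}+1+|\xi|\big)=1,\]
the infimum being attained at \(\xi=0\); in fact, \(\sigma(\mathcal{L}_{\infty})_{|L^{2}}\) coincides with the essential range \([1,\infty)\) of the symbol, although only the inclusion in (d) is used in the sequel.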
### Relative compactness
In this section we prove that the linearized operator around the static Neel wall's phase, \(\mathcal{L}\), is a relatively compact perturbation of \(\mathcal{L}_{\infty}\). This fundamental property will be useful later on. First we establish an elementary result.
**Lemma 3.17**.: _Let \(\mu\in\mathbb{C}\backslash[1,\infty)\) be fixed. Then the function_
\[\begin{cases}\qquad g_{\mu}:[0,\infty)\to\mathbb{R},\\ g_{\mu}(\eta):=\frac{\eta^{2}+1}{|\eta^{2}+\eta+1-\mu|},\qquad\eta\geq 0, \end{cases}\]
_is bounded above, that is, there exists a positive constant \(\widetilde{C}=\widetilde{C}(\mu)>0\) such that \(|g_{\mu}(\eta)|\leq\widetilde{C}(\mu)\) for all \(\eta\geq 0\). Moreover, if \(\operatorname{Re}\mu<0\), then the constant \(\widetilde{C}\) may be chosen independently of \(\mu\)._
Proof.: Fix \(\mu\in\mathbb{C}\backslash[1,\infty)\). Then the function \(g_{\mu}\) is continuous in \(\eta\in[0,\infty)\). Since,
\[g_{\mu}(0)=\frac{1}{|1-\mu|},\qquad\lim_{\eta\to\infty}g_{\mu}(\eta)=1,\]
then from continuity we deduce the existence of \(\widetilde{C}=\widetilde{C}(\mu)>0\) such that \(|g_{\mu}(\eta)|\leq\widetilde{C}(\mu)\) for all \(\eta\geq 0\). If \(\operatorname{Re}\mu<0\), then we have \(1-\operatorname{Re}\mu>1\) and therefore
\[|\eta^{2}+\eta+1-\mu|\geq|\operatorname{Re}\left(\eta^{2}+\eta+1-\mu\right)|= \eta^{2}+\eta+1-\operatorname{Re}\mu>\eta^{2}+1,\]
yielding \(|g_{\mu}(\eta)|\leq 1\) for all \(\eta\geq 0\).
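As a purely illustrative complement to Lemma 3.17 (not part of the proof), \(g_{\mu}\) can be evaluated numerically; the sample values of \(\mu\in\mathbb{C}\backslash[1,\infty)\) below are arbitrary choices made here.

```python
import numpy as np

def g(eta, mu):
    # g_mu(eta) = (eta^2 + 1) / |eta^2 + eta + 1 - mu|, as in Lemma 3.17
    return (eta**2 + 1.0) / np.abs(eta**2 + eta + 1.0 - mu)

eta = np.linspace(0.0, 200.0, 200001)
for mu in [-5.0, -1.0 + 2.0j, 0.5, 0.99 + 0.01j]:  # sample points of C \ [1, oo)
    print(mu, g(eta, mu).max())  # always finite; <= 1 whenever Re(mu) < 0
```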
**Lemma 3.18**.: _Fix \(\mu\in\mathbb{C}\backslash[1,\infty)\subset\rho(\mathcal{L}_{\infty})\). Then for every \(f\in L^{2}\) there exists a unique solution \(u\in H^{2}\) to the equation \((\mathcal{L}_{\infty}-\mu)u=f\). Moreover, there exists a constant \(C=C(\mu)>0\) such that_
\[\|u\|_{H^{2}}\leq C(\mu)\|f\|_{L^{2}}. \tag{3.13}\]
Proof.: From Lemma 3.16 (d), if we fix \(\mu\in\mathbb{C}\backslash[1,\infty)\), then \(\mu\in\rho(\mathcal{L}_{\infty})\) and \((\mathcal{L}_{\infty}-\mu):D(\mathcal{L}_{\infty})=H^{2}\subset L^{2}\to L^{2}\) is onto. Hence, for every \(f\in L^{2}\) there exists \(u\in H^{2}\) such
that \((\mathcal{L}_{\infty}-\mu)u=f\). This implies that \((\xi^{2}+1+|\xi|-\mu)\widehat{u}(\xi)=\widehat{f}(\xi)\). Noticing that for \(\mu\in\mathbb{C}\backslash[1,\infty)\) we have \(\xi^{2}+1+|\xi|-\mu\neq 0\) for all \(\xi\in\mathbb{R}\), we obtain the estimate
\[\|u\|_{H^{2}}^{2} =\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{u}(\xi)|^{2}\,d\xi=\int _{\mathbb{R}}|g_{\mu}(|\xi|)|^{2}|\widehat{f}(\xi)|^{2}\,d\xi\] \[\leq\widetilde{C}(\mu)^{2}\int_{\mathbb{R}}|\widehat{f}(\xi)|^{2 }\,d\xi=C(\mu)\|f\|_{L^{2}}^{2},\]
thanks to Lemma 3.17.
**Lemma 3.19**.: \(\mathcal{L}_{\infty}-\mathcal{L}\) _continuously maps \(H^{2}\) into \(H^{1}\)._
Proof.: Take any \(u\in H^{2}\). We then have
\[(\mathcal{L}_{\infty}-\mathcal{L})u=(1+(-\Delta)^{1/2})u-s_{\theta}(1+(- \Delta)^{1/2})(s_{\theta}u)+c_{\theta}u. \tag{3.14}\]
Apply bounds (3.4) to obtain
\[\|c_{\theta}u\|_{H^{1}}^{2} =\|c_{\theta}u\|_{L^{2}}^{2}+\|\partial_{x}(c_{\theta}u)\|_{L^{2}} ^{2}\] \[\leq\|c_{\theta}\|_{\infty}^{2}\big{(}\|u\|_{L^{2}}^{2}+\| \partial_{x}u\|_{L^{2}}^{2}\big{)}+\|\partial_{x}c_{\theta}\|_{\infty}^{2}\|u \|_{L^{2}}^{2}\leq C\|u\|_{H^{2}}^{2},\]
for some \(C>0\). Moreover,
\[\|(1+(-\Delta)^{1/2})u\|_{H^{1}}^{2} =\int_{\mathbb{R}}(1+\xi^{2})\big{|}\big{(}(1+(-\Delta)^{1/2})u \big{)}^{\wedge}(\xi)\big{|}^{2}\,d\xi\] \[\leq 2\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{u}(\xi)|^{2}\,d \xi=2\|u\|_{H^{2}}^{2}.\]
Apply bounds (3.4) once again to obtain
\[\|s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u)\|_{H^{1}}^{2} \leq\|s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u)\|_{L^{2}}^{2}+\] \[\quad+\|(\partial_{x}s_{\theta})(1+(-\Delta)^{1/2})(s_{\theta}u )\|_{L^{2}}^{2}+\] \[\quad+\|s_{\theta}\partial_{x}(1+(-\Delta)^{1/2})(s_{\theta}u)\| _{L^{2}}^{2}\] \[\leq C\|(1+(-\Delta)^{1/2})(s_{\theta}u)\|_{H^{1}}^{2}\] \[\leq 2C\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{(s_{\theta}u)}( \xi)|^{2}\,d\xi=2C\|s_{\theta}u\|_{H^{2}}^{2}\leq\widetilde{C}\|u\|_{H^{2}}^{2},\]
for some \(\widetilde{C}>0\). Combine these estimates to conclude that there exists a constant \(C>0\) such that
\[\|(\mathcal{L}_{\infty}-\mathcal{L})u\|_{H^{1}}\leq C\|u\|_{H^{2}}, \tag{3.15}\]
for all \(u\in D(\mathcal{L})=H^{2}\). This shows the result.
At this point, let us recall two useful theorems, one due to Pego [10] and the other due to Kolmogorov [11] and Riesz [14] (see, for example, [1] and the references therein), describing totally bounded sets in \(L^{2}\) and in \(L^{p}\), respectively.
**Theorem 3.20** (Pego [10]).: _Let \(\mathcal{F}\) be a bounded set of \(L^{2}(\mathbb{R}^{n})\) and \(\widehat{\mathcal{F}}:=\{\widehat{u}\,|\,u\in\mathcal{F}\}\). The functions in \(\mathcal{F}\) are \(L^{2}\)-equicontinuous if and only if the functions in \(\widehat{\mathcal{F}}\) decay uniformly in \(L^{2}\), and vice versa._
Proof.: See Theorem 1 in [10].
**Theorem 3.21** (Kolmogorov-Riesz [11, 14]).: _A bounded set \(\mathcal{F}\subset L^{p}(\mathbb{R}^{n})\) with \(1\leq p<\infty\) is totally bounded if and only if_
1. (\(L^{p}\)-equicontinuity) \(\lim_{h\to 0}\int_{\mathbb{R}^{n}}|u(x+h)-u(x)|^{p}\,dx=0\) uniformly for \(u\in\mathcal{F}\), and
2. (\(L^{p}\)-uniform decay) \(\lim_{R\to\infty}\int_{|x|>R}|u(x)|^{p}\,dx=0\) uniformly for \(u\in\mathcal{F}\).
Proof.: See the proof of Theorem 5 in Hanche-Olsen and Holden [1].
We now prove a result which will be helpful in the proof of Theorem 3.23 below.
**Proposition 3.22**.: _Let \(\mathcal{F}\) be a bounded set in \(H^{1}\) and let \(\phi\in H^{1}\) be a fixed function such that \(\|\partial_{x}\phi\|_{L^{\infty}}<\infty\). Then the set \(\phi\mathcal{F}:=\{\phi u\,|\,u\in\mathcal{F}\}\) is totally bounded in \(L^{2}\)._
Proof.: First, we prove that \(\lim_{|x|\to\infty}\phi(x)=0\). By density, there exists a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\subset C_{0}^{\infty}(\mathbb{R})\) that converges to \(\phi\) in \(H^{1}\). Thanks to the Sobolev inequality, \(\|v\|_{L^{\infty}}^{2}\leq 2\,\|v\|_{L^{2}}\,\|\partial_{x}v\|_{L^{2}}\) for \(v\in H^{1}\), the \(H^{1}\)-convergence of \(\{u_{n}\}\) to \(\phi\) is improved to \(L^{\infty}\)-convergence, and for every \(\epsilon>0\) there exists \(N\in\mathbb{N}\) such that
\[\|\phi-u_{n}\|_{L^{\infty}}<\epsilon\quad\text{for }n>N.\]
Since each \(u_{n}\) has compact support, there exists \(R>0\) such that \(u_{n}(x)=0\) for \(|x|>R\). Hence, for \(|x|>R\),
\[|\phi(x)|=|\phi(x)-u_{n}(x)|\leq\|\phi-u_{n}\|_{L^{\infty}}<\epsilon.\]
Therefore, \(\lim_{|x|\to\infty}\phi(x)=0\). It is also easy to see that \(\phi\mathcal{F}\) is bounded in \(L^{2}\). Indeed, by hypothesis, there exists \(\widetilde{M}>0\) such that \(\sup_{u\in\mathcal{F}}\|u\|_{H^{1}}<\widetilde{M}\). Moreover, since \(\|\phi\|_{L^{\infty}}<\infty\), by the Sobolev inequality we obtain
\[\sup_{v\in\phi\mathcal{F}}\|v\|_{L^{2}}\leq\sup_{u\in\mathcal{F}}\|\phi u\|_{ L^{2}}\leq\widetilde{M}\|\phi\|_{L^{\infty}}.\]
Second, we prove that \(\phi\mathcal{F}\) is \(L^{2}\)-equicontinuous. By Sobolev imbedding theorems, we can assume that \(\phi\in C^{0}\cap L^{\infty}\). Also, by hypothesis \(\|\partial_{x}\phi\|_{L^{\infty}}<\infty\). Hence,
\[\|\phi u\|_{H^{1}}^{2}\leq 2\int_{\mathbb{R}}(\phi^{2}+(\partial_{x}\phi)^{2})(u^{2}+(\partial_{x}u)^{2})\,dx\leq 2\big(\|\phi\|_{L^{\infty}}^{2}+\|\partial_{x}\phi\|_{L^{\infty}}^{2}\big)\,\|u\|_{H^{1}}^{2}<M^{2},\]
for every \(u\in\mathcal{F}\), where \(M:=\sqrt{2\big(\|\phi\|_{L^{\infty}}^{2}+\|\partial_{x}\phi\|_{L^{\infty}}^{2}\big)}\,\widetilde{M}\). Thus \(\phi\mathcal{F}\) is bounded in \(H^{1}\). Then, for every \(v\in\phi\mathcal{F}\)
\[\int_{\{|\xi|>R\}}|\hat{v}(\xi)|^{2}\,d\xi\leq\frac{1}{1+R^{2}}\int_{\mathbb{R }}(1+\xi^{2})|\hat{v}(\xi)|^{2}\,d\xi=\frac{\|v\|_{H^{1}}^{2}}{1+R^{2}}\leq \frac{M^{2}}{1+R^{2}}.\]
Thus, the functions in \(\widehat{\phi\mathcal{F}}\) are \(L^{2}\)-uniformly decaying. By Theorem 3.20, the functions in \(\phi\mathcal{F}\) are \(L^{2}\)-equicontinuous.
Finally, we prove that the functions in \(\phi\mathcal{F}\) are \(L^{2}\)-uniformly decaying. Indeed, if \(v\in\phi\mathcal{F}\) then \(v=\phi u\) for some \(u\in\mathcal{F}\subset H^{1}\). This yields,
\[\int_{|x|>R}|v(x)|^{2}\,dx=\left\|\mathbf{1}_{\{|x|>R\}}(\phi u)\right\|_{L^{2 }}^{2}\leq\|\mathbf{1}_{\{|x|>R\}}\phi\|_{L^{\infty}}^{2}\,\|u\|_{L^{2}}^{2}.\]
Again, since \(\|u\|_{L^{2}}\leq\widetilde{M}\) for every \(u\in\mathcal{F}\) and \(\phi(x)\to 0\) as \(|x|\to\infty\), we conclude that
\[\lim_{R\to\infty}\int_{|x|>R}|\phi u|^{2}\,dx\leq\lim_{R\to\infty}2\widetilde{ M}^{2}\|\mathbf{1}_{\{|x|>R\}}\phi\|_{L^{\infty}}^{2}=0\]
uniformly for \(u\in\mathcal{F}\). Thus, by Theorem 3.21, the set \(\phi\mathcal{F}\) is totally bounded in \(L^{2}\).
We now prove the main result of this section.
**Theorem 3.23**.: _The operator \(\mathcal{L}_{\infty}-\mathcal{L}:H^{2}\to H^{1}\) is compact in \(L^{2}\)._
Proof.: Let \(\mathcal{F}\) be a bounded subset of \(H^{2}\), namely \(\sup_{u\in\mathcal{F}}\|u\|_{H^{2}}<M\) for some \(M>0\). Then, fix \(\delta>0\) and let \(g_{\delta}\in C^{\infty}\) be an increasing and antisymmetric function such that \(g_{\delta}(x)=x/|x|\) for \(|x|\geq\delta\). With these tools at hand and assuming that \(\mathcal{T}\) stands for the operator \((1+(-\Delta)^{1/2})\), the operator \(\mathcal{L}_{\infty}-\mathcal{L}\) is easily recast, by adding and subtracting the terms \(g_{\delta}(x)\mathcal{T}(s_{\theta}u)+g_{\delta}(x)\mathcal{T}(g_{\delta}(x)u)\), as
\[(\mathcal{L}_{\infty}-\mathcal{L})u=\mathcal{Q}_{1}u+\mathcal{Q}_{2}u+ \mathcal{Q}_{3}(s_{\theta}u)+\mathcal{Q}_{4}u, \tag{3.16}\]
where
\[\mathcal{Q}_{1}u :=\mathcal{T}u-g_{\delta}\mathcal{T}(g_{\delta}u), \mathcal{Q}_{2}u :=g_{\delta}\mathcal{T}[(g_{\delta}-s_{\theta})u], \tag{3.17}\] \[\mathcal{Q}_{3}u :=[g_{\delta}-s_{\theta}]\mathcal{T}u, \mathcal{Q}_{4}u :=c_{\theta}u. \tag{3.18}\]
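Indeed, (3.16) is a telescoping identity: expanding the definitions (3.17)-(3.18) and comparing with (3.14) gives
\[\mathcal{Q}_{1}u+\mathcal{Q}_{2}u+\mathcal{Q}_{3}(s_{\theta}u)+\mathcal{Q}_{4}u=\big[\mathcal{T}u-g_{\delta}\mathcal{T}(g_{\delta}u)\big]+\big[g_{\delta}\mathcal{T}(g_{\delta}u)-g_{\delta}\mathcal{T}(s_{\theta}u)\big]+\big[g_{\delta}\mathcal{T}(s_{\theta}u)-s_{\theta}\mathcal{T}(s_{\theta}u)\big]+c_{\theta}u=(\mathcal{L}_{\infty}-\mathcal{L})u.\]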
Since the set of compact operators between two Banach spaces is a linear manifold, we shall prove that each operator \(\mathcal{Q}_{i}:H^{2}\to H^{1}\) is compact in \(L^{2}\) by showing that the set \(\mathcal{Q}_{i}\mathcal{F}:=\{\mathcal{Q}_{i}u\,|\,u\in\mathcal{F}\}\subset H^ {1}\) is totally bounded in \(L^{2}\), for each \(1\leq i\leq 4\).
First, we analyze \(\mathcal{Q}_{4}\). Notice that \(\mathcal{Q}_{4}\mathcal{F}\) is totally bounded by Proposition 3.22, since \(\mathcal{F}\subset H^{2}\) is bounded, \(c_{\theta}\) is a smooth function which belongs to \(H^{2}\), and \(\lim_{x\to\pm\infty}c_{\theta}(x)=0\) (see the beginning of the proof of Proposition 3.22).
Second, we examine \(\mathcal{Q}_{3}\). Indeed, the set \(\mathcal{T}\mathcal{F}:=\{\mathcal{T}u\,|\,u\in\mathcal{F}\}\subset H^{1}\) satisfies \(\sup_{v\in\mathcal{T}\mathcal{F}}\|v\|_{H^{1}}\leq\sqrt{2}M\) because \(\|\mathcal{T}u\|_{H^{1}}^{2}\leq 2\,\|u\|_{H^{2}}^{2}\). Then, \(\mathcal{T}\mathcal{F}\) is bounded in \(H^{1}\) and, consequently, also in \(L^{2}\). Now, observe that
\[\mathcal{Q}_{3}\mathcal{F}=\{[g_{\delta}-s_{\theta}]\mathcal{T}u\,|\,u\in \mathcal{F}\}=\{[g_{\delta}-s_{\theta}]v\,|\,v\in\mathcal{T}\mathcal{F}\}=[g_{ \delta}-s_{\theta}]\mathcal{T}\mathcal{F},\]
and that \(\lim_{x\to\pm\infty}(g_{\delta}-s_{\theta})(x)=0\) since \(\lim_{x\to\pm\infty}\overline{\theta}(x)=\pm\pi/2\). In order to apply Proposition 3.22 and to conclude that \(\mathcal{Q}_{3}\mathcal{F}\) is totally bounded we only need to show that \(g_{\delta}-s_{\theta}\in H^{1}\) and \(\partial_{x}(g_{\delta}-s_{\theta})\in L^{\infty}\). This follows by standard calculus. It is easily seen that \(|\theta/|\theta|-\sin\theta|<\cos\theta\) for every \(\theta\in(-\pi/2,0)\cup(0,\pi/2)\), and since \(\overline{\theta}\) is a strictly increasing function with \(\overline{\theta}(0)=0\), one concludes that \(x/|x|=\operatorname{sgn}\left(\overline{\theta}(x)\right)\) for every \(x\neq 0\). These two facts readily imply that \((x/|x|-s_{\theta}(x))^{2}<\cos^{2}\overline{\theta}(x)\) a.e. in \(x\in\mathbb{R}\), and \(x/|x|-s_{\theta}\in L^{2}\). Recalling that \(g_{\delta}(x)=x/|x|\) for \(|x|\geq\delta\) and that \(|g_{\delta}(x)|\leq 1\) for \(|x|<\delta\), one concludes that \(g_{\delta}-s_{\theta}\in L^{2}\cap L^{\infty}\). In the same fashion, and using Proposition 2.1, we can readily prove that \(\partial_{x}(g_{\delta}-s_{\theta})\in L^{2}\cap L^{\infty}\). Therefore, we conclude that \([g_{\delta}-s_{\theta}]\mathcal{T}\mathcal{F}\) is totally bounded. The compactness of the linear operator \(\mathcal{Q}_{3}\) in \(L^{2}\) then follows. Moreover, observe that \(s_{\theta}\in H^{1}\cap L^{\infty}\) and the compactness of \(\mathcal{Q}_{3}\) imply the compactness of the linear operator \(u\mapsto\mathcal{Q}_{3}(s_{\theta}u)\) involved in (3.16).
Let us now study the operator \(\mathcal{Q}_{2}\). We claim that \(\mathcal{T}^{-1}:H^{2}\to H^{3}\) is continuous. Indeed, since \((1+|\xi|)^{2}\geq 1+|\xi|^{2}\), we have
\[\|\mathcal{T}^{-1}u\|_{H^{3}}^{2}=\int_{\mathbb{R}}\frac{1+\xi^{6}}{(1+|\xi|)^ {2}}|\hat{u}(\xi)|^{2}\,d\xi\leq\int_{\mathbb{R}}(1-\xi^{2}+|\xi|^{4})|\hat{u} (\xi)|^{2}\,d\xi\leq\|u\|_{H^{2}}^{2}\,.\]
Notice that \(\mathcal{Q}_{2}=g_{\delta}(x)\mathcal{T}\mathcal{Q}_{3}\mathcal{T}^{-1}\), and since \(g_{\delta}\mathcal{T}:H^{1}\to L^{2}\) is bounded, the compactness of \(\mathcal{Q}_{2}\) is proved by showing that \(\mathcal{Q}_{3}:H^{3}\to H^{2}\) is compact in \(H^{1}\). Let \(\{u_{j}\}_{j>0}\subset H^{3}\) be a bounded sequence; then, by the second step (compactness of \(\mathcal{Q}_{3}\) in \(L^{2}\)), there exists a
subsequence \(\{u_{j_{k}}\}_{k>0}\) and \(u\in L^{2}\) such that \(\left\|u-\mathcal{Q}_{3}u_{j_{k}}\right\|_{L^{2}}\to 0\) as \(k\to\infty\). Since \(\{u_{j_{k}}\}_{k>0}\subset H^{3}\), then \(\{\mathcal{Q}_{3}u_{j_{k}}\}_{k>0}\subset H^{2}\) and
\[\begin{split}\partial_{x}(\mathcal{Q}_{3}u_{j_{k}})=\partial_{x}( [g_{\delta}-s_{\theta}]\mathcal{T}u_{j_{k}})=&\partial_{x}(g_{ \delta}-s_{\theta})\mathcal{T}u_{j_{k}}+[g_{\delta}-s_{\theta}]\mathcal{T} \partial_{x}u_{j_{k}}\\ =&\partial_{x}(g_{\delta}-s_{\theta})\mathcal{T}u_{j _{k}}+\mathcal{Q}_{3}\partial_{x}u_{j_{k}},\end{split} \tag{3.19}\]
where the first equality follows by noticing that \(\partial_{x}\) and \(\mathcal{T}\) commute,
\[\widehat{\partial_{x}\mathcal{T}u}=i\xi(1+|\xi|)\hat{u}(\xi)=(1+|\xi|)i\xi\hat{u}(\xi)=(1+|\xi|)\widehat{\partial_{x}u}=\widehat{\mathcal{T}\partial_{x}u}.\]
It is not difficult to see that \(\partial_{x}(g_{\delta}-s_{\theta})\in H^{1}\) with \(\|\partial_{x}(g_{\delta}-s_{\theta})\|_{L^{\infty}}<\infty\). Hence, by Proposition 3.22, the linear operator \(\partial_{x}(g_{\delta}-s_{\theta})\mathcal{T}:H^{3}\to H^{2}\) is compact in \(L^{2}\). Therefore, there exist two functions, \(v\) and \(w\), both in \(L^{2}\), and a subsequence denoted as \(\{u_{\ell}\}_{\ell>0}\), such that
\[\lim_{\ell\to\infty}\left\|v-\partial_{x}(g_{\delta}-s_{\theta})\mathcal{T}u_ {\ell}\right\|_{L^{2}}=0,\quad\text{and}\quad\lim_{\ell\to\infty}\left\|w- \mathcal{Q}_{3}\partial_{x}u_{\ell}\right\|_{L^{2}}=0.\]
We will prove that \(u\in H^{1}\) and \(\partial_{x}u=v+w\). The argument follows by density: let \(\phi\in C_{0}^{\infty}\) and write
\[\left\langle u\,,\partial_{x}\phi\right\rangle_{L^{2}}=\left\langle u-[g_{\delta}-s_{\theta}]\mathcal{T}u_{\ell}\,,\partial_{x}\phi\right\rangle_{L^{2}}+\left\langle[g_{\delta}-s_{\theta}]\mathcal{T}u_{\ell}\,,\partial_{x}\phi\right\rangle_{L^{2}}.\]
Now, take the limit when \(\ell\to\infty\) and use the facts that \(\left\|u-(g_{\delta}-s_{\theta})\mathcal{T}u_{\ell}\right\|_{L^{2}}\to 0\) and that strong convergence implies weak convergence, in order to obtain
\[\left\langle u\,,\partial_{x}\phi\right\rangle_{L^{2}}=\lim_{ \ell\to\infty}\left\langle[g_{\delta}-s_{\theta}]\mathcal{T}u_{\ell}\,, \partial_{x}\phi\right\rangle_{L^{2}}= -\lim_{\ell\to\infty}\left\langle\partial_{x}([g_{\delta}-s_{ \theta}]\mathcal{T}u_{\ell})\,,\phi\right\rangle_{L^{2}}\] \[= -\lim_{\ell\to\infty}\left\langle\partial_{x}(g_{\delta}-s_{ \theta})\mathcal{T}u_{\ell}+\mathcal{Q}_{3}\partial_{x}u_{\ell}\,,\phi\right\rangle _{L^{2}}\] \[= -\left\langle v+w\,,\phi\right\rangle_{L^{2}}.\]
Whence, for every bounded sequence \(\{u_{j}\}_{j>0}\subset H^{3}\) there exists a convergent subsequence \(\{\mathcal{Q}_{3}u_{\ell}\}_{\ell>0}\) in \(H^{1}\). In other words, \(\mathcal{Q}_{3}:H^{3}\to H^{2}\) is compact in \(H^{1}\).
Finally, we study the operator \(\mathcal{Q}_{1}\). From the definition of \(\mathcal{Q}_{1}\), we have
\[\mathcal{Q}_{1}u=\mathcal{T}u-g_{\delta}(x)\mathcal{T}(g_{\delta}(x)u)=(1-g_{ \delta}^{2})\mathcal{T}u+g_{\delta}(g_{\delta}\mathcal{T}-\mathcal{T}g_{\delta })u=(1-g_{\delta}^{2})\mathcal{T}u+g_{\delta}[g_{\delta},\mathcal{T}]u.\]
Notice that \(1-g_{\delta}^{2}(x)=0\) for \(|x|\geq\delta\), and that both \(1-g_{\delta}^{2}\) and its derivative are bounded for \(|x|<\delta\). Hence by Proposition 3.22, the operator \((1-g_{\delta}^{2})\mathcal{T}:H^{2}\to H^{1}\) is compact in \(L^{2}\). For the last term, it will be enough to prove that the commutator \([g_{\delta},\mathcal{T}]:H^{2}\to H^{1}\) is compact in \(L^{2}\), since the multiplication by \(g_{\delta}\) is a continuous operation. Indeed, we notice that the term \([g_{\delta},\mathcal{T}]\) can be written in terms of the Hilbert transform \(\mathcal{H}\) (see Lemma 3.5) as
\[[g_{\delta},\mathcal{T}]u=[g_{\delta},(-\Delta)^{1/2}]u=[g_{\delta },\mathcal{H}\circ\partial_{x}]u =g_{\delta}\mathcal{H}(\partial_{x}u)-\mathcal{H}(\partial_{x}(g_{ \delta}u))\] \[=g_{\delta}\mathcal{H}(\partial_{x}u)-\mathcal{H}(g_{\delta} \partial_{x}u)-\mathcal{H}((\partial_{x}g_{\delta})u)\] \[=[g_{\delta},\mathcal{H}](\partial_{x}u)-\mathcal{H}((\partial_{x }g_{\delta})u).\]
Observe that \((\partial_{x}g_{\delta})\mathrm{I}:H^{2}\to H^{1}\) is compact in \(L^{2}\) since the hypothesis in Proposition 3.22 is satisfied by choosing \(\phi=\partial_{x}g_{\delta}\). Also, since the Hilbert transform is continuous on \(L^{2}\), we conclude that \(\mathcal{H}\circ(\partial_{x}g_{\delta})\mathrm{I}:H^{2}\to H^{1}\) is compact on \(L^{2}\).
Thus we must prove that the linear operator \([g_{\delta},\mathcal{H}]\partial_{x}:H^{2}\to H^{1}\) is compact in \(L^{2}\). Notice that \(\{[g_{\delta},\mathcal{H}]\partial_{x}u\,|\,u\in\mathcal{F}\}\) is \(L^{2}\)-equicontinuous since this set is bounded in \(H^{1}\). This readily follows by applying the properties of the Hilbert transform to the terms in \(\left\|[g_{\delta},\mathcal{H}]\partial_{x}u\right\|_{H^{1}}^{2}\). Indeed, we have the estimates
\[\left\|[g_{\delta},\mathcal{H}]\partial_{x}u\right\|_{L^{2}}^{2}\leq 2\left\|g_{\delta}\mathcal{H}\partial_{x}u\right\|_{L^{2}}^{2}+2\left\|\mathcal{H}(g_{\delta}\partial_{x}u)\right\|_{L^{2}}^{2}\leq 4\|g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}u\right\|_{L^{2}}^{2},\]
and
\[\left\|\partial_{x}([g_{\delta},\mathcal{H}]\partial_{x}u)\right\|_{L^{2}}^{2}\leq 2\left\|\partial_{x}(g_{\delta}\mathcal{H}\partial_{x}u)\right\|_{L^{2}}^{2}+2\left\|\partial_{x}(\mathcal{H}(g_{\delta}\partial_{x}u))\right\|_{L^{2}}^{2}\leq 4\left\|(\partial_{x}g_{\delta})\mathcal{H}\partial_{x}u\right\|_{L^{2}}^{2}+4\left\|g_{\delta}\mathcal{H}\partial_{x}^{2}u\right\|_{L^{2}}^{2}+2\left\|\mathcal{H}\partial_{x}(g_{\delta}\partial_{x}u)\right\|_{L^{2}}^{2}\leq 8\|\partial_{x}g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}u\right\|_{L^{2}}^{2}+8\|g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}^{2}u\right\|_{L^{2}}^{2}.\]
It remains to show that functions in the set \(\{[g_{\delta},\mathcal{H}]\partial_{x}u\,|\,u\in\mathcal{F}\}\) are \(L^{2}\)-uniformly decaying. For simplicity, let \(v\) denote \(u_{x}\). Hence, \(v\in\mathcal{F}^{\prime}:=\{u_{x}\,:\,u\in\mathcal{F}\}\), which is a bounded set in \(H^{1}\). We recall that
\[\pi[g_{\delta},\mathcal{H}]\partial_{x}u=\pi[g_{\delta},\mathcal{H }]v= \,\text{P.V. }\int_{\mathbb{R}}\frac{g_{\delta}(x)-g_{\delta}(y)}{x-y}v(y)\,dy\] \[= \lim_{\epsilon\to 0}\int_{|y-x|>\epsilon}\frac{g_{\delta}(x)-g_{ \delta}(y)}{x-y}v(y)\,dy\] \[= \lim_{\epsilon\to 0}\int_{|h|>\epsilon}\frac{g_{\delta}(x+h)-g_{ \delta}(x)}{h}v(x+h)\,dh\] \[= \lim_{\epsilon\to 0}\int_{|h|>\epsilon}\frac{1}{h}\int_{x}^{x+h}g_{ \delta}^{\prime}(t)\,dt\,v(x+h)\,dh.\]
Since we are interested in the behavior of \(\left\|\mathbf{1}_{|x|>R}[g_{\delta},\mathcal{H}]\partial_{x}u\right\|_{L^{2}}^ {2}\) for \(R\to\infty\), we assume that \(R>2\delta\) and \(\epsilon<\delta\). For \(x>R\) the integral is split as
\[\pi[g_{\delta},\mathcal{H}]v(x)= \int_{-\infty}^{-x+\delta}\frac{1}{h}\int_{x}^{x+h}g_{\delta}^{ \prime}(t)\,dt\,v(x+h)\,dh+\] \[+\lim_{\epsilon\to 0}\left[\int_{-x+\delta}^{-\epsilon}\frac{1}{h} \int_{x}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh+\int_{\epsilon}^{\infty }\frac{1}{h}\int_{x}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh\right].\]
Notice that the last two integrals are equal to zero since \(\operatorname{supp}g_{\delta}^{\prime}\subset[-\delta,\delta]\) and \(\delta<x+h\) for \(h>\delta-x\). Moreover, if \(C:=\int_{|x|\leq\delta}g_{\delta}^{\prime}(x)\,dx\) then
\[\pi[g_{\delta},\mathcal{H}]v(x)= \int_{-\infty}^{-x-\delta}\frac{1}{h}\int_{x}^{x+h}g_{\delta}^{ \prime}(t)\,dt\,v(x+h)\,dh+\int_{-x-\delta}^{-x+\delta}\frac{1}{h}\int_{x}^{x+ h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh\] \[= \int_{-\infty}^{-x-\delta}\frac{1}{h}\int_{\delta}^{-\delta}g_{ \delta}^{\prime}(t)\,dt\,v(x+h)\,dh+\int_{-x-\delta}^{-x+\delta}\frac{1}{h} \int_{\delta}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh\] \[= -C\int_{-\infty}^{-x-\delta}\frac{v(x+h)}{h}\,dh+\int_{-x-\delta}^ {-x+\delta}\frac{1}{h}\int_{\delta}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh.\]
Now, we use the variable change \(y=x+h\), the fundamental theorem of calculus, and the fact that \(g_{\delta}(\delta)=1\), to obtain
\[\pi[g_{\delta},\mathcal{H}]v(x)=-C\int_{-\infty}^{-\delta}\frac{v(y)}{y-x}\,dy+\int_{-\delta}^{\delta}\frac{1}{y-x}\int_{\delta}^{y}g_{\delta}^{\prime}(t)\,dt\,v(y)\,dy=-C\int_{-\infty}^{-\delta}\frac{v(y)}{y-x}\,dy-\int_{-\delta}^{\delta}\frac{1-g_{\delta}(y)}{y-x}\,v(y)\,dy.\]
A similar analysis applies for \(x<-R\). Thus, for \(|x|>R>2\delta\) there holds
\[\pi[g_{\delta},\mathcal{H}]v(x)=\begin{cases}C{\int_{\delta}^{ \infty}\frac{v(y)}{y-x}\,dy}+{\int_{-\delta}^{\delta}\frac{g_{ \delta}(y)+1}{y-x}\,v(y)\,dy}&\text{for }x<-R,\\ -C{\int_{-\infty}^{-\delta}\frac{v(y)}{y-x}\,dy}-{\int_{- \delta}^{\delta}\frac{1-g_{\delta}(y)}{y-x}\,v(y)\,dy}&\text{for }x>R.\end{cases}\]
These expressions can be recast as,
\[\pi[g_{\delta},\mathcal{H}]v(x)=C\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,( x)y)}{y+|x|}\,dy+{\int_{-\delta}^{\delta}\frac{g_{\delta}(y)-\text{sgn}\,(x)}{y-x }\,v(y)\,dy}. \tag{3.20}\]
Notice that both integrals are convergent. Indeed, since \(v=u_{x}\), then an integration by parts yields
\[{\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,(x)y)}{y+|x|}\,dy} ={\int_{\delta}^{\infty}\frac{u^{\prime}(-\text{sgn}\,(x)y)}{y+|x|} \,dy}\] \[=\frac{-\text{sgn}\,(x)u(-\text{sgn}\,(x)y)}{y+|x|}\bigg{|}_{ \delta}^{\infty}+{\int_{\delta}^{\infty}\frac{-\text{sgn}\,(x)u(-\text{sgn} \,(x)y)}{(y+|x|)^{2}}\,dy}\] \[=-\,\text{sgn}\,(x)\left[-\frac{u(-\text{sgn}\,(x)\delta)}{ \delta+|x|}+{\int_{\delta}^{\infty}\frac{u(-\text{sgn}\,(x)y)}{(y+|x|)^{2}}\, dy}\right]\] \[=-\,\text{sgn}\,(x)\int_{\delta}^{\infty}\frac{u(-\text{sgn}\,( x)y)-u(-\text{sgn}\,(x)\delta)}{(y+|x|)^{2}}\,dy.\]
Since \(\mathcal{F}\) is bounded in \(H^{2}\), then \(\|u\|_{L^{\infty}}\leq\|u\|_{H^{1}}\leq M\) for every \(u\in\mathcal{F}\), which implies that
\[\left|{\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,(x)y)}{y+|x|}\,dy}\right|\leq 2 M\int_{\delta}^{\infty}\frac{1}{(y+|x|)^{2}}\,dy=\frac{2M}{\delta+|x|}.\]
This yields
\[\int_{|x|>R}\left(\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,(x)y)}{y+|x|}\,dy\right)^{2}dx\leq 4M^{2}\int_{|x|>R}\frac{dx}{(\delta+|x|)^{2}}\leq\frac{8M^{2}}{\delta+R}. \tag{3.21}\]
Now, we analyze the second integral in (3.20). By Jensen's inequality and the fact that \(\|g_{\delta}\|_{L^{\infty}}\leq 1\), one gets
\[{\int_{|x|>R}\left({\int_{-\delta}^{\delta}\frac{g_{\delta}(y)- \text{sgn}\,(x)}{y-x}\,v(y)\,dy}\right)^{2}dx} \leq 2\delta\int_{|x|>R}{\int_{-\delta}^{\delta}\frac{\left(g_{ \delta}(y)-\text{sgn}\,(x)\right)^{2}}{\left(y-x\right)^{2}}\,v(y)^{2}\,dydx}\] \[\leq 8\delta\int_{|x|>R}{\int_{-\delta}^{\delta}\frac{1}{\left(y-x \right)^{2}}\,v(y)^{2}\,dydx}.\]
Since \(v\in L^{2}\) and \((y-x)^{2}\geq(|x|-\delta)^{2}\) for every \(y\in(-\delta,\delta)\), we obtain
\[{\int_{|x|>R}\left({\int_{-\delta}^{\delta}\frac{g_{\delta}(y)- \text{sgn}\,(x)}{y-x}\,v(y)\,dy}\right)^{2}dx}\leq 8\delta\left\|v\right\|_{L^{2}}^ {2}\int_{|x|>R}\frac{dx}{\left(|x|-\delta\right)^{2}}\leq\frac{16\delta M^{2} }{R-\delta}. \tag{3.22}\]
We easily see that Young's inequality implies that
\[\int_{|x|>R}([g_{\delta},\mathcal{H}]v(x))^{2}\,dx\leq\frac{2C^{2}}{\pi^{2}}\int_{|x|>R}\left(\int_{\delta}^{\infty}\frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|}\,dy\right)^{2}dx+\frac{2}{\pi^{2}}\int_{|x|>R}\left(\int_{-\delta}^{\delta}\frac{g_{\delta}(y)-\mathrm{sgn}\,(x)}{y-x}\,v(y)\,dy\right)^{2}dx\leq\frac{16M^{2}(C^{2}+2\delta)}{\pi^{2}(R-\delta)}.\]
Therefore, it follows that the set \(\{[g_{\delta},\mathcal{H}]\partial_{x}u\,|\,u\in\mathcal{F}\}\) is totally bounded and the operator \([g_{\delta},\mathcal{H}]\partial_{x}:H^{2}\to H^{1}\) is compact in \(L^{2}\). Consequently, \([g_{\delta},\mathcal{T}]:H^{2}\to H^{1}\) and \(\mathcal{Q}_{1}:H^{2}\to H^{1}\) are both compact in \(L^{2}\). This completes the proof.
**Theorem 3.24**.: _The operator \(\mathcal{L}\) is a relatively compact perturbation of \(\mathcal{L}_{\infty}\)._
Proof.: Let \(\mu\in\rho\,(\mathcal{L}_{\infty})\), hence \((\mu-\mathcal{L}_{\infty})^{-1}:L^{2}\to H^{2}\) is a continuous linear operator and, by Theorem 3.23, \(\mathcal{L}_{\infty}-\mathcal{L}:H^{2}\to H^{1}\) is compact in \(L^{2}\). This implies that the operator \((\mathcal{L}_{\infty}-\mathcal{L})(\mu-\mathcal{L}_{\infty})^{-1}\) is compact on \(L^{2}\) (see, e.g., [10] p. 158).
**Remark 3.25**.: An immediate consequence of this relative compactness result is that the essential spectrum of \(\mathcal{L}\) and the spectrum of \(\mathcal{L}_{\infty}\) coincide, by virtue of Weyl's essential spectrum theorem (see, e.g., [11], p. 29). Although we do not apply the latter to the operator \(\mathcal{L}\) _per se_, Theorem 3.24 will play a key role in the location of the essential spectrum of a block operator matrix, as we shall see below.
## 4. Perturbation equations and spectral stability
In order to establish the perturbation equations, consider a solution \(\overline{\theta}(x)+u(x,t)\) to the reduced dynamic equation (2.15). Here \(u\) is the perturbation of the static Neel wall's phase which, by the boundary conditions on the real line, must satisfy
\[u(\pm\infty,t)=0,\qquad t>0. \tag{4.1}\]
Upon substitution into (2.15), we obtain the following nonlinear equation for the perturbation,
\[\partial_{t}^{2}u+\nu\partial_{t}u+\nabla\mathcal{E}(\overline{\theta}+u)=0. \tag{4.2}\]
In view of (3.1), equation (4.2) can be recast as
\[\partial_{t}^{2}u+\nu\partial_{t}u+\mathcal{L}u+\mathcal{N}(u)=0,\]
where \(\mathcal{L}u\) is the linearization around \(\overline{\theta}\) of \(\nabla\mathcal{E}(\overline{\theta}+u)\) acting on the perturbation \(u\), and
\[\mathcal{N}(u):=\nabla\mathcal{E}(\overline{\theta}+u)-\mathcal{L}u=O(u^{2}),\]
comprises the nonlinear terms. In view of the form of the operator (3.1) we regard the perturbation equation as a nonlinear wave equation. By making the (standard) change of variables \(v=\partial_{t}u\), solving the perturbation equation (4.2) is equivalent to solving the nonlinear hyperbolic system
\[\partial_{t}\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}&-\nu\mathrm{I}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}-\begin{pmatrix}0\\ \mathcal{N}(u)\end{pmatrix}, \tag{4.3}\]
in an appropriate space, which will be determined later.
### The spectral problem
By linearizing equation (4.2) around the Neel wall's phase, we obtain the following equation for the perturbation,
\[\partial_{t}^{2}u+\nu\partial_{t}u+\mathcal{L}u=0, \tag{4.4}\]
which is equivalent to the following linear system in the \((u,v)\) variables,
\[\partial_{t}\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}&-\nu\mathrm{I}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}. \tag{4.5}\]
Specialize the linearized equation (4.4) to perturbations of the form \(e^{\lambda t}u(x)\), with \(\lambda\in\mathbb{C}\) and \(u\in X\), where \(X\) is a Banach space to be determined below. Substituting into (4.4), we obtain the following spectral problem
\[(\lambda^{2}+\nu\lambda)u+\mathcal{L}u=0. \tag{4.6}\]
**Remark 4.1**.: Under the substitution \(\lambda=i\zeta\), equation (4.6) can be written in terms of a _quadratic operator pencil_, \(\widetilde{\mathcal{T}}u=0\) (cf. Markus [13]), with \(\widetilde{\mathcal{T}}=\widetilde{\mathcal{T}}_{0}+\zeta\widetilde{\mathcal{ T}}_{1}+\zeta^{2}\widetilde{\mathcal{T}}_{2}\), and \(\widetilde{\mathcal{T}}_{0}=\mathcal{L}\), \(\widetilde{\mathcal{T}}_{1}=i\nu\mathrm{I}\), \(\widetilde{\mathcal{T}}_{2}=-\mathrm{I}\). The transformation \(v=\lambda u\) (the spectral equivalent of the change of variables \(v=\partial_{t}u\)) defines an appropriate Cartesian product of the base space which allows us to write equation (4.6) as a genuine eigenvalue problem of the form
\[\lambda\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}&-\nu\mathrm{I}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}=:\mathcal{A}\begin{pmatrix}u\\ v\end{pmatrix}. \tag{4.7}\]
The matrix operator \(\mathcal{A}\) is often called the companion matrix to the pencil \(\widetilde{\mathcal{T}}\) (see [1, 1] for further information). Clearly, equation (4.7) is the spectral equation associated to the linear system (4.5). We shall refer to both (4.6) and (4.7) as the spectral problem making no distinction.
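For completeness, we record the one-line equivalence between (4.6) and (4.7): if \((u,v)\) satisfies (4.7), the first row gives \(v=\lambda u\), and substitution into the second row yields
\[-\mathcal{L}u-\nu\lambda u=\lambda^{2}u,\qquad\text{that is,}\qquad(\lambda^{2}+\nu\lambda)u+\mathcal{L}u=0,\]
which is precisely (4.6); conversely, given a solution \(u\) of (4.6), the pair \((u,\lambda u)\) solves (4.7).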
In the present stability analysis, we are interested in the spectral properties of the block operator,
\[\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2},\]
regarded as a linear, densely defined operator in \(H^{1}\times L^{2}\) with domain \(D(\mathcal{A}):=H^{2}\times H^{1}\). In other words, we choose our energy base space as
\[X:=H^{1}\times L^{2}.\]
This choice is not only consistent with the boundary conditions (4.1) for perturbations of the Neel wall's phase but, more importantly, it relates to the appropriate energy space encoding perturbations of the energy functional defined in (2.5), which requires variations \(u\in H^{1}\). In addition, the condition \(v\in L^{2}\) implies that those perturbations have finite kinetic energy because \(v\) is the spectral equivalent to \(\partial_{t}u\). Thus, the stability analysis pertains to localized perturbations with finite energy in \(X=H^{1}\times L^{2}\). For shortness, let us introduce the notation
\[U=(u,v)\in H^{2}\times H^{1},\qquad\mathcal{A}U=(v,-\mathcal{L}u-\nu v)\in H^ {1}\times L^{2}.\]
In addition, the standard scalar product in \(H^{1}\times L^{2}\) will be denoted as
\[\langle U,F\rangle_{X}:=\langle(u,v),(f,g)\rangle_{H^{1}\times L^{2}}=\langle u,f\rangle_{H^{1}}+\langle v,g\rangle_{L^{2}},\]
for any \(U=(u,v)\) and \(F=(f,g)\) in \(X\).
**Remark 4.2**.: It is to be observed that this choice of the energy space conveys a slight abuse of notation. Indeed, the operator \(\mathcal{L}\) in the expression for \(\mathcal{A}\) in (4.7) refers actually to its restriction to \(H^{1}\), namely, to the operator
\[\widetilde{\mathcal{L}}:=\mathcal{L}_{|H^{1}},\qquad\widetilde{\mathcal{L}}:H^{1}\to L^{2},\qquad D(\widetilde{\mathcal{L}})=H^{2}\subset H^{1},\qquad\widetilde{\mathcal{L}}u:=\mathcal{L}u,\quad\forall\,u\in H^{2},\]
where, rigorously speaking, \(\mathcal{L}\) is the operator from \(L^{2}\) to \(L^{2}\) defined in (3.1). However, since the original properties remain (for example, its closedness and its spectral bounds, as the reader may easily verify), for simplicity we keep the notation \(\mathcal{L}:H^{1}\to L^{2}\) with the same dense domain \(D(\mathcal{L})=H^{2}\) in the definition of the operator \(\mathcal{A}\) under consideration. In the sequel, we shall remind the reader of this distinction at the steps of the proofs where it is explicitly required.
The first property of the block operator \(\mathcal{A}\) that we verify is its closedness, so that the definitions of resolvent and spectra, as well as their basic properties, apply.
**Lemma 4.3**.: _The matrix block operator \(\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) is closed._
Proof.: Let \(U_{j}=(u_{j},v_{j})\in D(\mathcal{A})=H^{2}\times H^{1}\), \(j\in\mathbb{N}\), be a Cauchy sequence in \(X=H^{1}\times L^{2}\) such that \(\{\mathcal{A}U_{j}\}_{j\in\mathbb{N}}\) is a Cauchy sequence in \(X\) as well. Let us denote their limits as \(U=(u,v)=\lim_{j\to\infty}U_{j}\) and \(F=(f,g)=\lim_{j\to\infty}\mathcal{A}U_{j}\), both in \(X\). This implies that
\[v_{j} \to f,\quad\text{in}\;\;H^{1},\] \[-\mathcal{L}u_{j}-\nu v_{j} \to g,\quad\text{in}\;\;L^{2},\]
as \(j\to\infty\). Since \(v_{j}\to f\) in \(H^{1}\) implies that \(v_{j}\to f\) in \(L^{2}\), and since \(v_{j}\to v\) in \(L^{2}\), we get \(v=f\in H^{1}\) and \(-\mathcal{L}u_{j}\to g+\nu f\) in \(L^{2}\). Because \(\{u_{j}\}_{j\in\mathbb{N}}\) is a Cauchy sequence in \(H^{1}\), it is also a Cauchy sequence in \(L^{2}\), so \(u_{j}\to u\) in \(L^{2}\). By virtue of the closedness of the operator \(\mathcal{L}\) when regarded as an operator from \(L^{2}\) to \(L^{2}\) (see Corollary 3.11), we deduce that \(u\in D(\mathcal{L})=H^{2}\) and \(-\mathcal{L}u=g+\nu f\). Therefore, \(U=(u,v)\in D(\mathcal{A})\) and
\[\mathcal{A}U=(v,-\mathcal{L}u-\nu v)=(f,g)=F.\]
This proves that \(\mathcal{A}\) is a closed operator.
Another important property is that the translation eigenvalue remains.
**Lemma 4.4**.: \(\lambda=0\) _is a simple eigenvalue of \(\mathcal{A}\) with eigenfunction_
\[\Theta:=(\partial_{x}\overline{\theta},0)\in D(\mathcal{A})=H^{2}\times H^{1}. \tag{4.8}\]
Proof.: Since \(\partial_{x}\overline{\theta}\in H^{2}\) (see Proposition 2.1 (c)) we clearly notice that \(\Theta\in D(\mathcal{A})\). Moreover, \(\mathcal{A}\Theta=(0,-\mathcal{L}\partial_{x}\overline{\theta})=0\). Hence \(\lambda=0\in\sigma_{\mathrm{pt}}(\mathcal{A})\) with eigenfunction \(\Theta\). To verify that it spans the whole kernel, let \(0\neq U=(u,v)\in\ker\mathcal{A}\). Since \(u\in H^{2}\subset L^{2}\), writing \(u=u_{\perp}\oplus\alpha\partial_{x}\overline{\theta}\) with \(\langle u_{\perp},\partial_{x}\overline{\theta}\rangle=0\) and some \(\alpha\in\mathbb{C}\), yields
\[0=\mathcal{A}U=\mathcal{A}(u_{\perp},v)+\alpha\mathcal{A}(\partial_{x} \overline{\theta},0)=(v,-\mathcal{L}u_{\perp}-\nu v).\]
Therefore \(v=0\) and \(\mathcal{L}u_{\perp}=0\). By Corollary 3.12, \(u_{\perp}=0\) and this shows that the geometric multiplicity is equal to one.
Finally, the algebraic multiplicity is equal to one. Otherwise, there would exist a nontrivial Jordan chain \(\mathcal{A}U=\alpha\Theta\), \(\alpha\in\mathbb{C}\setminus\{0\}\), with \(U\neq 0\). This implies that
\[\mathcal{A}U=(v,-\mathcal{L}u-\nu v)=(\alpha\partial_{x}\overline{\theta},0).\]
Therefore \(v=\alpha\partial_{x}\overline{\theta}\) and \(-\mathcal{L}u=\nu\alpha\partial_{x}\overline{\theta}\). Then \(\mathcal{L}\) has a nontrivial Jordan chain which contradicts Corollary 3.12.
### Point spectral stability
After these preparations, we are ready to prove that the operator \(\mathcal{A}\) is point spectrally stable.
**Lemma 4.5**.: _Let \(\lambda\in\sigma_{\mathrm{pt}}(\mathcal{A})\), \(\lambda\neq 0\). Then_
\[\mathrm{Re}\,\lambda\leq-\tfrac{1}{2}\nu+\tfrac{1}{2}\mathbf{1}_{[2\sqrt{ \Lambda_{0}},\infty)}(\nu)\sqrt{\nu^{2}-4\Lambda_{0}}<0, \tag{4.9}\]
_where \(\Lambda_{0}\) is given by Proposition 3.8 (c) and \(\mathbf{1}_{\Omega}(\cdot)\) denotes the characteristic function of any measurable set \(\Omega\subset\mathbb{R}\)._
Proof.: Suppose that \(\lambda\in\sigma_{\mathrm{pt}}(\mathcal{A})\) with \(\lambda\neq 0\). Hence, there exists \(U=(u,v)\in D(\mathcal{A})=H^{2}\times H^{1}\) such that \(\mathcal{A}U=\lambda U\). This yields \(\lambda u=v\) and \((\lambda+\nu)v+\mathcal{L}u=0\). Upon substitution, we obtain
\[\mathcal{L}u+\lambda(\lambda+\nu)u=0,\qquad u\in H^{2}=D(\mathcal{L}).\]
Therefore, \(-\lambda(\lambda+\nu)\in\sigma_{\mathrm{pt}}(\mathcal{L})\) with eigenfunction \(u\). Since \(\mathcal{L}\) is self-adjoint we obtain \(\lambda(\lambda+\nu)\in\mathbb{R}\). Due to \(u\in H^{2}\subset L^{2}\) and \(v\in H^{1}\subset L^{2}\) we may decompose \(u=u_{\perp}\oplus\alpha\partial_{x}\overline{\theta}\), \(v=v_{\perp}\oplus\beta\partial_{x}\overline{\theta}\), for some \(\alpha,\beta\in\mathbb{C}\), and \(\langle u_{\perp},\partial_{x}\overline{\theta}\rangle_{L^{2}}=\langle v_{ \perp},\partial_{x}\overline{\theta}\rangle_{L^{2}}=0\). Substituting, one arrives at the relations
\[\lambda u_{\perp}=v_{\perp},\quad\beta=\lambda\alpha,\] \[\mathcal{L}u_{\perp}+\lambda(\lambda+\nu)(u_{\perp}+\alpha \partial_{x}\overline{\theta})=0.\]
Take the \(L^{2}\)-product of the last equation with \(u_{\perp}\). The result is
\[0=\langle\mathcal{L}u_{\perp},u_{\perp}\rangle_{L^{2}}+\lambda(\lambda+\nu) \|u_{\perp}\|_{L^{2}}^{2}+\lambda(\lambda+\nu)\langle\alpha\partial_{x} \overline{\theta},u_{\perp}\rangle_{L^{2}}\geq(\Lambda_{0}+\lambda^{2}+\lambda \nu)\|u_{\perp}\|_{L^{2}}^{2},\]
because \(\langle u_{\perp},\partial_{x}\overline{\theta}\rangle_{L^{2}}=0\) and \(\lambda(\lambda+\nu)\in\mathbb{R}\). Note that \(u_{\perp}\neq 0\): otherwise the previous relation reduces to \(\lambda(\lambda+\nu)\alpha\partial_{x}\overline{\theta}=0\) with \(\alpha\neq 0\) (since \(U\neq 0\)), forcing \(\lambda=-\nu\), which already satisfies (4.9). Dividing by \(\|u_{\perp}\|_{L^{2}}^{2}>0\), we therefore obtain the bound
\[\lambda(\lambda+\nu)\leq-\Lambda_{0}. \tag{4.10}\]
Hence, we arrive at the relations
\[\mathrm{Im}\,(\lambda(\lambda+\nu)) =(\mathrm{Im}\,\lambda)(\nu+2\mathrm{Re}\,\lambda)=0, \tag{4.11a}\] \[-\Lambda_{0}\geq\mathrm{Re}\,(\lambda(\lambda+\nu)) =(\mathrm{Re}\,\lambda)^{2}-(\mathrm{Im}\,\lambda)^{2}+\nu \mathrm{Re}\,\lambda. \tag{4.11b}\]
Since \(\nu>0\) is a given physical constant,1 we have two parameter regimes: (i) \(\nu\in(0,2\sqrt{\Lambda_{0}})\), or (ii) \(\nu\in[2\sqrt{\Lambda_{0}},\infty)\). Let us examine the first case. From (4.11a) we either have \(\lambda\in\mathbb{R}\) or \(\mathrm{Re}\,\lambda=-\tfrac{1}{2}\nu\). Assuming \(\lambda\in\mathbb{R}\), we readily observe that (4.11b) has no real \(\lambda\)-solutions if \(\nu\in(0,2\sqrt{\Lambda_{0}})\). Indeed, with basic calculus tools one can easily verify that the real polynomial \(q(\lambda)=\lambda^{2}+\nu\lambda+\Lambda_{0}\) has a unique global minimum at \(\lambda=-\tfrac{1}{2}\nu\) with \(q(-\tfrac{1}{2}\nu)=\Lambda_{0}-\tfrac{1}{4}\nu^{2}>0\). Thus, we are left with the case \(\mathrm{Re}\,\lambda=-\tfrac{1}{2}\nu\) which clearly satisfies (4.9).
Footnote 1: Notice that \(\mathcal{L}\) and its spectral bound \(\Lambda_{0}\) do not depend on \(\nu\).
In the second parameter regime with \(\nu\in[2\sqrt{\Lambda_{0}},\infty)\), again we either have \(\lambda\in\mathbb{R}\) or \(\mathrm{Re}\,\lambda=-\tfrac{1}{2}\nu\). If \(\lambda\) is real then \(\lambda^{2}+\nu\lambda\leq-\Lambda_{0}\) holds only for
\[\lambda\in\big{[}-\tfrac{1}{2}\nu-\tfrac{1}{2}\sqrt{\nu^{2}-4\Lambda_{0}},- \tfrac{1}{2}\nu+\tfrac{1}{2}\sqrt{\nu^{2}-4\Lambda_{0}}\big{]}.\]
Clearly in both cases the bound (4.9) holds. This shows the lemma.
**Corollary 4.6** (point spectral stability).: \[\sigma_{\mathrm{pt}}(\mathcal{A})\subset\{0\}\cup\{\lambda\in\mathbb{C}\,: \,\mathrm{Re}\,\lambda\leq-\tfrac{1}{2}\nu+\tfrac{1}{2}\sqrt{\nu^{2}-4 \Lambda_{0}}\,\mathbf{1}_{[2\sqrt{\Lambda_{0}},\infty)}(\nu)\}.\] (4.12)
Proof.: Follows immediately from Lemmata 4.4 and 4.5.
### Stability of the essential spectrum
In this section, we study the essential spectrum of the block operator \(\mathcal{A}\). To that end, we define the following auxiliary asymptotic matrix block operator,
\[\mathcal{A}_{\infty}:H^{1}\times L^{2}\to H^{1}\times L^{2},\qquad\mathcal{A}_{ \infty}:=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}_{\infty}&-\nu\mathrm{I}\end{pmatrix}, \tag{4.13}\]
with dense domain \(D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\). Once again, with a slight abuse in notation the operator \(\mathcal{L}_{\infty}\) in (4.13) refers to the restriction of the operator defined in (3.8) to the space \(H^{1}\), namely, to the operator
\[\widetilde{\mathcal{L}}_{\infty}:=\mathcal{L}_{\infty|H^{1}},\qquad\widetilde{\mathcal{L}}_{\infty}:H^{1}\to L^{2},\qquad D(\widetilde{\mathcal{L}}_{\infty})=H^{2}\subset H^{1},\qquad\widetilde{\mathcal{L}}_{\infty}u:=\mathcal{L}_{\infty}u,\quad\forall\,u\in H^{2},\]
so that the energy base space of the asymptotic operator \(\mathcal{A}_{\infty}\) is \(H^{1}\times L^{2}\). In the sequel, we write \(\mathcal{L}_{\infty}\) to denote this restriction. Therefore, for any \(U=(u,v)\in D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\) we clearly have \(\mathcal{A}_{\infty}U=(v,-\mathcal{L}_{\infty}u-\nu v)\in H^{1}\times L^{2}\).
**Lemma 4.7**.: _The asymptotic block operator \(\mathcal{A}_{\infty}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) is closed and onto._
Proof.: The proof of the closedness of \(\mathcal{A}_{\infty}\) is the same as that of Lemma 4.3 and we omit it. To show that \(\mathcal{A}_{\infty}\) is onto, notice that for any \(F=(f,g)\in H^{1}\times L^{2}\) the equation \(\mathcal{A}_{\infty}U=F\) with \(U=(u,v)\in D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\) is equivalent to the system
\[v=f,\qquad-\mathcal{L}_{\infty}u=g+\nu f.\]
By defining \(v:=f\in H^{1}\) and by virtue of Lemma 3.15, given \(g+\nu f\in L^{2}\) there exists a unique solution \(u\in H^{2}\) to the equation \(-\mathcal{L}_{\infty}u=g+\nu f\). Hence, \(H^{1}\times L^{2}=\mathcal{R}(\mathcal{A}_{\infty})\), as claimed.
In this fashion, \(\mathcal{A}_{\infty}\) is a closed, densely defined operator with full range. The following result determines the location of its spectrum.
**Lemma 4.8**.: _If \(\lambda\in\sigma(\mathcal{A}_{\infty})\) then_
\[\operatorname{Re}\lambda\leq-\tfrac{1}{2}\nu+\tfrac{1}{2}\mathbf{1}_{[2, \infty)}(\nu)\sqrt{\nu^{2}-4}<0. \tag{4.14}\]
Proof.: Assume \(\lambda\in\mathbb{C}\), \(U=(u,v)\in D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\) and \(F=(f,g)\in X=H^{1}\times L^{2}\) are such that \((\lambda-\mathcal{A}_{\infty})U=F\). This equation is equivalent to the system
\[\lambda u-v=f,\qquad\mathcal{L}_{\infty}u+(\lambda+\nu)v=g.\]
Substituting the first equation into the second, we arrive at
\[\big{(}\mathcal{L}_{\infty}+\lambda(\lambda+\nu)\big{)}u=g+(\lambda+\nu)f.\]
For any \(\nu>0\) and \(\lambda\in\mathbb{C}\) fixed, we have \(g+(\lambda+\nu)f\in L^{2}\). Thus, from Lemma 3.16 (d) and the resolvent estimate from Lemma 3.18, this equation has a unique solution \(u\in H^{2}\) provided that \(\lambda(\lambda+\nu)\in\mathbb{C}\backslash(-\infty,-1]\). Moreover, by Young's inequality
\[\|U\|_{H^{1}\times L^{2}}^{2}=\|u\|_{H^{1}}^{2}+\|v\|_{L^{2}}^{2}=\|u\|_{H^{1}}^{2}+\|f+\lambda u\|_{L^{2}}^{2}\leq(1+2|\lambda|^{2})\|u\|_{H^{1}}^{2}+2\|f\|_{L^{2}}^{2}.\]
From Lemma 3.18, if \(u\in H^{2}\) solves \((\mathcal{L}_{\infty}+\lambda(\lambda+\nu))u=g+(\lambda+\nu)f\) with \(\mu=-\lambda(\lambda+\nu)\in\mathbb{C}\backslash[1,\infty)\), then there exists a constant \(C=C(\lambda,\nu)>0\) such that
\[\|u\|_{H^{1}}\leq\|u\|_{H^{2}}\leq C(\lambda,\nu)\|g+(\lambda+\nu)f\|_{L^{2}}.\]
Therefore, we obtain that
\[\|U\|_{H^{1}\times L^{2}}^{2}\leq(1+2|\lambda|^{2})\|u\|_{H^{1}}^{2}+2\|f\|_{L^{2}}^{2}\leq(1+2|\lambda|^{2})C(\lambda,\nu)^{2}\|g+(\lambda+\nu)f\|_{L^{2}}^{2}+2\|f\|_{L^{2}}^{2}\leq\overline{C}(\lambda,\nu)\big(\|f\|_{H^{1}}^{2}+\|g\|_{L^{2}}^{2}\big)=\overline{C}(\lambda,\nu)\|F\|_{H^{1}\times L^{2}}^{2},\]
for some \(\overline{C}(\lambda,\nu)>0\). This shows that \(\lambda\in\rho(\mathcal{A}_{\infty})\). To sum up, we have proved that \(\lambda(\lambda+\nu)\in\mathbb{C}\backslash(-\infty,-1]\,\Rightarrow\,\lambda \in\rho(\mathcal{A}_{\infty})\), or, equivalently, that
\[\sigma(\mathcal{A}_{\infty})\subset\big{\{}\lambda\in\mathbb{C}\,:\,\lambda( \lambda+\nu)\in(-\infty,-1]\big{\}}. \tag{4.15}\]
Now, notice that the relation that defines the set on the right hand side of (4.15) can be recast as
\[\begin{split}\operatorname{Im}\left(\lambda(\lambda+\nu)\right)& =(\operatorname{Im}\lambda)(\nu+2\mathrm{Re}\,\lambda)=0,\\ -1&\geq\mathrm{Re}\left(\lambda(\lambda+\nu)\right) =(\mathrm{Re}\,\lambda)^{2}-(\operatorname{Im}\lambda)^{2}+\nu\mathrm{Re}\, \lambda.\end{split} \tag{4.16}\]
First, let us assume that \(\nu\in(0,2)\). Then the first equation in (4.16) implies that either \(\operatorname{Im}\lambda=0\) or \(\mathrm{Re}\,\lambda=-\frac{1}{2}\nu\). In the latter case there is nothing to prove, whereas if \(\lambda\in\mathbb{R}\), then the second relation in (4.16), namely \(\lambda^{2}+\nu\lambda\leq-1\), has no real solutions. Thus, (4.14) holds if \(\nu\in(0,2)\).
Second, suppose that \(\nu\geq 2\). Once again, we have two cases, either \(\lambda\in\mathbb{R}\) or \(\mathrm{Re}\,\lambda=-\frac{1}{2}\nu\). In the latter case (4.14) clearly holds. In the former case, the inequality \(\lambda^{2}+\lambda\nu\leq-1\) is satisfied only if
\[\lambda\in\big{[}-\tfrac{1}{2}\nu-\tfrac{1}{2}\sqrt{\nu^{2}-4},-\tfrac{1}{2} \nu+\tfrac{1}{2}\sqrt{\nu^{2}-4}\big{]},\]
determining values of \(\lambda\) for which (4.14) also holds. The proof is complete.
The following lemma is the key ingredient to locate the essential spectrum of the block operator \(\mathcal{A}\).
**Lemma 4.9**.: _The block operator \(\mathcal{A}\) is a relatively compact perturbation of \(\mathcal{A}_{\infty}\)._
Proof.: Suppose \(\lambda\in\rho(\mathcal{A}_{\infty})\) and let \(\{U_{j}\}_{j\in\mathbb{N}}\) be a bounded sequence in \(H^{1}\times L^{2}\). Therefore, \((\lambda-\mathcal{A}_{\infty})^{-1}U_{j}\in D(\mathcal{A}_{\infty})=H^{2} \times H^{1}\) is a bounded sequence in \(H^{2}\times H^{1}\) because \((\lambda-\mathcal{A}_{\infty})^{-1}\) is a bounded operator. Hence, if we denote
\[\begin{pmatrix}f_{j}\\ g_{j}\end{pmatrix}:=(\lambda-\mathcal{A}_{\infty})^{-1}U_{j},\]
we have,
\[(\mathcal{A}_{\infty}-\mathcal{A})(\lambda-\mathcal{A}_{\infty})^{-1}U_{j}= \begin{pmatrix}0&0\\ \mathcal{L}_{\infty}-\mathcal{L}&0\end{pmatrix}\begin{pmatrix}f_{j}\\ g_{j}\end{pmatrix}=\begin{pmatrix}0\\ (\mathcal{L}_{\infty}-\mathcal{L})f_{j}\end{pmatrix}.\]
Since \(\{f_{j}\}_{j\in\mathbb{N}}\) is bounded in \(H^{2}\) and \(\mathcal{L}_{\infty}-\mathcal{L}:H^{2}\to H^{1}\) is compact in \(L^{2}\) (see Theorem 3.23 above), the bounded sequence \(\{(\mathcal{L}_{\infty}-\mathcal{L})f_{j}\}\subset H^{1}\) has a convergent subsequence in \(L^{2}\). This implies that \((\mathcal{A}_{\infty}-\mathcal{A})(\lambda-\mathcal{A}_{\infty})^{-1}U_{j}\) has a convergent subsequence in \(H^{1}\times L^{2}\). Thus, the operator \((\mathcal{A}_{\infty}-\mathcal{A})(\lambda-\mathcal{A}_{\infty})^{-1}\) is compact on \(H^{1}\times L^{2}\) for every \(\lambda\in\rho(\mathcal{A}_{\infty})\), and the proof is complete.
The most important consequence of the last result is the location of the essential spectrum of \(\mathcal{A}\).
**Corollary 4.10**.: \(\sigma(\mathcal{A}_{\infty})=\sigma_{\rm ess}(\mathcal{A})\)_. Moreover, any \(\lambda\in\sigma_{\rm ess}(\mathcal{A})\) satisfies estimate (4.14)._
Proof.: This is a direct consequence of Weyl's essential spectrum theorem (see [13], p. 29) and Lemma 4.8.
### Spectral stability with uniform spectral gap
Let us summarize the content of Corollaries 4.6 and 4.10 into the following result, which conveys the spectral stability of the Neel wall's phase in the appropriate energy space with a uniform spectral gap, that is, a positive distance from the eigenvalue zero to the rest of the spectrum.
**Theorem 4.11**.: _For each fixed \(\nu>0\) there exists a uniform positive constant_
\[\zeta_{0}(\nu):=\tfrac{1}{2}\nu-\max\Big{\{}\tfrac{1}{2}\mathbf{1}_{[2,\infty )}(\nu)\sqrt{\nu^{2}-4},\tfrac{1}{2}\mathbf{1}_{[2\sqrt{\Lambda_{0}},\infty)}( \nu)\sqrt{\nu^{2}-4\Lambda_{0}}\Big{\}}>0,\]
_such that_
\[\sigma(\mathcal{A})\subset\{0\}\cup\big{\{}\lambda\in\mathbb{C}\,:\,\mathrm{ Re}\,\lambda\leq-\zeta_{0}(\nu)<0\big{\}}.\]
**Remark 4.12**.: The positive constant \(\zeta_{0}(\nu)\) is uniform because the spectral bound \(\Lambda_{0}\) does not depend on the parameter \(\nu\). This spectral gap determines an exponential decay for the solutions to the evolutionary equation, as we shall see in the sequel.
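As a purely illustrative computation (the value of \(\Lambda_{0}\) is not computed in this paper, so the number below is a placeholder of ours), the gap \(\zeta_{0}(\nu)\) of Theorem 4.11 can be tabulated as follows.

```python
import numpy as np

def spectral_gap(nu, Lambda0):
    # zeta_0(nu) from Theorem 4.11; Lambda0 is the spectral bound of
    # Proposition 3.8 (c) and does not depend on nu (Remark 4.12).
    a = 0.5 * np.sqrt(nu**2 - 4.0) if nu >= 2.0 else 0.0
    b = 0.5 * np.sqrt(nu**2 - 4.0 * Lambda0) if nu >= 2.0 * np.sqrt(Lambda0) else 0.0
    return 0.5 * nu - max(a, b)

Lambda0 = 0.3  # placeholder value for illustration only
for nu in [0.5, 1.0, 2.0, 4.0, 10.0]:
    print(nu, spectral_gap(nu, Lambda0))  # positive for every nu > 0
```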
## 5. Semigroup generation and decay
### The adjoint operator
It is known (see Kato [14], Remark 6.23, p. 184) that if \(\lambda\in\mathbb{C}\) is an eigenvalue of a closed operator \(\mathcal{T}:D(\mathcal{T})\subset H\to H\) on a Hilbert space \(H\), then \(\lambda^{*}\) is an eigenvalue of \(\mathcal{T}^{*}\) (formal adjoint) with the same geometric and algebraic multiplicities. In the present context, since \(H^{1}\) and \(L^{2}\) are reflexive Hilbert spaces, the operator \(\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) with \(D(\mathcal{A})=H^{2}\times H^{1}\) has a formal adjoint which is also densely defined and closed. Moreover, \(\mathcal{A}^{**}=\mathcal{A}\) (cf. [14], Theorem 5.29, p. 168). Upon these observations we immediately have the following
**Lemma 5.1**.: \(\lambda=0\) _is an isolated, simple eigenvalue of \(\mathcal{A}^{*}:X^{*}\to X^{*}\)._
The following result determines the form of the adjoint of the linearized block matrix operator around the Neel wall's phase.
**Lemma 5.2**.: _The formal adjoint \(\mathcal{A}^{*}\), restricted to the domain \(D(\mathcal{A})\), is given by_
\[\mathcal{A}^{*}|_{D(\mathcal{A})}=\begin{pmatrix}0&\mathcal{F}\\ -\partial_{x}^{2}+\mathrm{I}&-\nu\mathrm{I}\end{pmatrix} \tag{5.1}\]
_where the operator \(\mathcal{F}:H^{1}\to H^{-1}\) is formally defined as the map_
\[v\mapsto-(\mathcal{S}v-c_{\theta}v,\partial_{x}v)=:\mathcal{F}v.\]
_Moreover, \(\mathcal{F}|_{H^{2}}=[1+(-\Delta)]^{-1}\,\mathcal{L}\), where \([1+(-\Delta)]^{-1}\,\mathcal{L}\,v\) denotes the convolution of the Bessel potential for \(k=2\) with \(\mathcal{L}\,v\)._
Proof.: First, let \(U=(u,v)\) and \(V=(w,z)\) be both in \(D(\mathcal{A})=H^{2}\times H^{1}\). Then by definition of the inner product in \(X\), we have
\[\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle v\,,w\right\rangle _{H^{1}}-\left\langle\mathcal{L}\,u+\nu v\,,z\right\rangle_{L^{2}}=\left\langle v \,,w-\nu z\right\rangle_{L^{2}}-\left\langle\mathcal{L}\,u\,,z\right\rangle_{ L^{2}}+\left\langle\partial_{x}v\,,\partial_{x}w\right\rangle_{L^{2}}.\]
Since \(w\in H^{2}\), integration by parts on the last term leads us to
\[\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle v\,,-\partial_{x}^ {2}w+w-\nu z\right\rangle_{L^{2}}-\left\langle\mathcal{L}\,u\,,z\right\rangle_{ L^{2}}. \tag{5.2}\]
Also, by the symmetry of the linear operator \(\mathcal{S}\) (see Lemma 3.9), we recast the last identity as
\[\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle v\,,-\partial_{x}^{2}w +w-\nu z\right\rangle_{L^{2}}-\left\langle\partial_{x}u\,,\partial_{x}z\right \rangle_{L^{2}}-\left\langle u\,,\mathcal{S}z-c_{\theta}z\right\rangle_{L^{2}},\]
since \(z\in H^{1}\). Therefore, \(\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle U\,,\mathcal{A}^{* }V\right\rangle_{X}\) for \(\mathcal{A}^{*}\) as in (5.1) where \(\mathcal{F}z\in H^{-1}\) is represented by the pair \(-(\mathcal{S}z-c_{\theta}z,\partial_{x}z)\in L^{2}\times L^{2}\).
Finally, assume that \(z\in H^{2}\) and let \(\mathcal{K}\) be the Bessel potential with parameter \(k=2\) on \(L^{2}\) functions, defined by the Fourier symbol \((1+|\xi|^{2})^{-1}\). Apply Plancherel's identity twice to the last term of (5.2) in order to get
\[-\left\langle\mathcal{L}\,u\,,z\right\rangle_{L^{2}}=-\left\langle u\,,\mathcal{L}\,z\right\rangle_{L^{2}}=-\int_{\mathbb{R}}(1+|\xi|^{2})\,\widehat{u}(\xi)\big((1+|\xi|^{2})^{-1}\widehat{\mathcal{L}z}(\xi)\big)^{*}\,d\xi=-\left\langle u\,,\mathcal{K}\,\mathcal{L}\,z\right\rangle_{H^{1}}.\]
The last equality holds because \(\mathcal{K}\,\mathcal{L}\,z\in H^{1}\) with \(\left\|\mathcal{K}\,\mathcal{L}\,z\right\|_{H^{1}}^{2}\leq\left\|\mathcal{L} \,z\right\|_{L^{2}}^{2}\). This shows the result.
**Corollary 5.3**.: _Let \(\mathcal{A}^{*}\) be the formal adjoint of \(\mathcal{A}\), let \(\left.\mathcal{A}^{*}\right|_{D(\mathcal{A})}\) and \(\mathcal{F}\) be as in Lemma 5.2, and define_
\[\Phi:=(\nu[1+(-\Delta)]^{-1}\ \partial_{x}\overline{\theta},\partial_{x} \overline{\theta}). \tag{5.3}\]
_Then \(\Phi\in X^{*}\) is an eigenvector of the adjoint \(\mathcal{A}^{*}:X^{*}\to X^{*}\), associated to the isolated, simple eigenvalue \(\lambda=0\)._
Proof.: First, we claim that \([1+(-\Delta)]^{-1}\,\partial_{x}\overline{\theta}\in H^{2}\). This can be easily seen by Plancherel's identity, since
\[\left\|\nu[1+(-\Delta)]^{-1}\,\partial_{x}\overline{\theta}\right\|_{H^{2}}^{2}=\nu^{2}\int_{\mathbb{R}}(1+|\xi|^{2})^{2}(1+|\xi|^{2})^{-2}\left|\widehat{\partial_{x}\overline{\theta}}\right|^{2}d\xi=\nu^{2}\left\|\partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}. \tag{5.4}\]
Thanks to property (c) in Proposition 2.1, we know that \(\partial_{x}\overline{\theta}\in H^{2}\); therefore \(\Phi\in H^{2}\times H^{2}\subset D(\mathcal{A})\). Since \(H^{2}\subset H^{1}\subset L^{2}\subset H^{-1}\) and \((H^{1}\times L^{2})^{*}=H^{-1}\times L^{2}\) holds due to the norm used in \(X\), it follows that \(\Phi\in X^{*}\). Also, Lemma 5.2 yields
\[\mathcal{A}^{*}\Phi=\left.\mathcal{A}^{*}\right|_{D(\mathcal{A})}\Phi=( \mathcal{F}\partial_{x}\overline{\theta},0)=(\mathcal{K}\,\mathcal{L}\, \partial_{x}\overline{\theta},0)=(0,0).\]
The last equality holds since the Bessel potential is an invertible linear operator on \(L^{2}\) and \(\mathcal{L}\,\partial_{x}\overline{\theta}=0\) in \(L^{2}\).
If we define \(\Phi_{0}:=(\nu\partial_{x}\overline{\theta},\partial_{x}\overline{\theta})\) then it is clear that \(\Phi_{0}\in L^{2}\times L^{2}\). The following result shows that \(\left\langle\cdot\,,\Phi\right\rangle_{X}\in(H^{1}\times L^{2})^{*}\) has a natural extension to the dual of \(L^{2}\times L^{2}\).
**Corollary 5.4**.: _Let \(F\in H^{1}\times L^{2}\) and \(\Phi_{0}=(\nu\partial_{x}\overline{\theta},\partial_{x}\overline{\theta})^{\top}\), then_
\[\left\langle F\,,\Phi\right\rangle_{X}=\left\langle F\,,\Phi_{0}\right\rangle_{ L^{2}}.\]
Proof.: The result follows by a straightforward calculation in the Fourier space; indeed, for any \(F=(f,g)\in H^{1}\times L^{2}\) there holds
\[\left\langle F\,,\Phi\right\rangle_{X}= \left\langle f\,,\nu[1+(-\Delta)]^{-1}\partial_{x}\overline{ \theta}\right\rangle_{H^{1}}+\left\langle g\,,\partial_{x}\overline{\theta} \right\rangle_{L^{2}}\] \[= \nu\int_{\mathbb{R}}\widehat{f}(\xi)(\widehat{\partial_{x} \overline{\theta}}(\xi))^{*}d\xi+\left\langle g\,,\partial_{x}\overline{ \theta}\right\rangle_{L^{2}}\] \[= \left\langle f\,,\nu\partial_{x}\overline{\theta}\right\rangle_{L^ {2}}+\left\langle g\,,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}= \left\langle F\,,\Phi_{0}\right\rangle_{L^{2}}.\]
Now, let us denote the inner product
\[\Xi:=\left\langle\Theta\,,\Phi\right\rangle_{X}=\nu\left\|\partial_{x}\overline{ \theta}\right\|_{L^{2}}^{2}>0, \tag{5.5}\]
and define the Hilbert space \(X_{1}\subset H^{1}\times L^{2}\) as the range of the spectral projection
\[\mathcal{P}U:=U-\Xi^{-1}\left\langle U\,,\Phi\right\rangle_{X}\Theta,\qquad U \in H^{1}\times L^{2}, \tag{5.6}\]
that is, \(X_{1}:=\mathcal{R}(\mathcal{P})\). In this fashion we project out the eigenspace spanned by the single eigenfunction, \(\Theta=(\partial_{x}\overline{\theta},0)\). We shall verify that, outside this eigenspace, the associated semigroup decays exponentially. First, it is to be observed that Corollary 5.4 implies the following explicit characterization of the space \(X_{1}\).
**Lemma 5.5**.: _Let \(\mathcal{P}\) be the spectral projector defined in (5.6) and let \(X_{1}\) be its range. Then_
\[X_{1}=\left\{F\in H^{1}\times L^{2}\ \big{|}\ \left\langle F\,,\Phi_{0} \right\rangle_{L^{2}}=0\right\}. \tag{5.7}\]
Proof.: Let \(F\in X_{1}\). Hence, \(F=\mathcal{P}F\) because \(\mathcal{P}\) is a projector. By (5.6), we have \(F=F-\Xi^{-1}\left\langle F\,,\Phi\right\rangle_{X}\Theta\), which implies \(0=\left\langle F\,,\Phi\right\rangle_{X}=\left\langle F\,,\Phi_{0}\right\rangle _{L^{2}}\), due to Corollary 5.4. The converse holds trivially.
### Generation of the semigroup and decay estimates
In this section we prove that a restriction of the linearized block operator around the Neel wall's phase is the infinitesimal generator of an exponentially-decaying semigroup. For that purpose we need to show some resolvent estimates. Let us recall the growth bound for a semigroup \(e^{t\mathcal{T}}\) (where \(\mathcal{T}\) denotes its infinitesimal generator),
\[\omega_{0}=\inf\{\omega\in\mathbb{R}\,:\,\lim_{t\to+\infty}e^{-\omega t}\|e^{t\mathcal{T}}\|=0\}.\]
We say a semigroup is uniformly (exponentially) stable whenever \(\omega_{0}<0\). The spectral bound of the generator is defined as
\[s(\mathcal{T}):=\sup\{\operatorname{Re}\lambda\,:\,\lambda\in\sigma( \mathcal{T})\}.\]
Since the spectral mapping theorem (that is, \(\sigma(e^{t\mathcal{T}})\backslash\{0\}=e^{t\sigma(\mathcal{T})}\) for all \(t\geq 0\)) is not true in general for \(C_{0}\)-semigroups (see [10]), for stability purposes we rely on the Gearhart-Pruss theorem (cf. [11, 12]), which restricts our attention to semigroups on Hilbert spaces (see also [13, 14] and the references therein). It states that any \(C_{0}\)-semigroup \(\{e^{t\mathcal{T}}\}_{t\geq 0}\) on a Hilbert space \(H\) is uniformly exponentially stable if and only if \(s(\mathcal{T})<0\) and the resolvent satisfies \(\sup_{\operatorname{Re}\lambda>0}\|(\mathcal{T}-\lambda)^{-1}\|<\infty\) (see Lemma 5.21 below).
It is well-known that the generalized Hille-Yosida theorem (see, e.g., [14], p. 69) requires all powers of the resolvent to conclude the existence of a \(C_{0}\)-semigroup unless it is quasi-contractive. Therefore, we apply the classical Lumer-Philips theorem instead. For that purpose we need some preparations.
Following Capella _et al._[13], we define \(L^{2}_{\perp}:=\{\partial_{x}\overline{\theta}\}_{L^{2}}^{\perp}\). For \(k=1\) and \(2\), we define \(H^{k}_{\perp}\) as \(H^{k}\cap L^{2}_{\perp}\). The next lemma describes the structure of these subspaces.
**Lemma 5.6**.: _Let \(L^{2}_{\perp}\) be the \(L^{2}\)-orthogonal complement of \(\partial_{x}\overline{\theta}\). For \(k=1,2\) define \(H^{k}_{\perp}\) as the intersection between \(H^{k}\) and \(L^{2}_{\perp}\). Then, for every \(\bar{u}\in H^{k}\),_
\[\bar{u}=u+\alpha\partial_{x}\overline{\theta} \tag{5.8}\]
_for some \(u\in H^{k}_{\perp}\) and \(\alpha\in\mathbb{C}\)._
Notice that this lemma needs to be proved since, in general, intersection does not distribute over a direct sum.
Proof.: Assume \(k\) is fixed and \(\bar{u}\in H^{k}\). The spectral decomposition theorem (see Theorem III-6.17, p. 178 in [10]) and Corollary 3.12 yield \(L^{2}=L^{2}_{\perp}\oplus\operatorname{Span}\{\partial_{x}\overline{\theta}\}\) and because \(H^{k}\subset L^{2}\) there exist \(u\in L^{2}_{\perp}\) and \(\alpha\in\mathbb{C}\) such that \(\bar{u}=u+\alpha\partial_{x}\overline{\theta}\). Since \(\partial_{x}\overline{\theta}\in H^{k}\), by Proposition 2.1 (c) there holds \(u=\bar{u}-\alpha\partial_{x}\overline{\theta}\in H^{k}\). Thus \(u\in H^{k}_{\perp}\).
This splitting also extends to the working (product) space \(H^{1}\times L^{2}\). The proof of the following corollary is omitted.
**Corollary 5.7**.: _For every \(\bar{U}\in H^{1}\times L^{2}\) there exist \(U\in H^{1}_{\perp}\times L^{2}\) and \(\alpha\in\mathbb{C}\) such that \(\bar{U}=U+\alpha\Theta\)._
**Lemma 5.8**.: _Define \(a:H^{1}_{\perp}\times H^{1}_{\perp}\to\mathbb{C}\) as_
\[a\left[u,v\right]:=\left\langle\partial_{x}u\,,\partial_{x}v\right\rangle_{L^{ 2}}+b[s_{\theta}u,s_{\theta}v]-\left\langle c_{\theta}u\,,v\right\rangle_{L^{ 2}}, \tag{5.9}\]
_with \(b\) as in (2.8). Then, \(a[\cdot,\cdot]\) is a positive, Hermitian, sesquilinear form. Moreover, if \(u\in H^{2}_{\perp}\)_
\[\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}=a[u,v]\quad\text{for every $v\in H^{1}_{\perp}$.} \tag{5.10}\]
Proof.: The sesquilinearity and hermiticity of \(a\) follow trivially from its definition, while positive definiteness follows from item (c) in Proposition 3.8. Finally, relation (5.10) follows from an integration by parts and from Corollary 3.7.
With slight changes to the arguments presented in [1], we can prove that \(a[\cdot,\cdot]\) induces an inner product in \(H^{1}_{\perp}\) equivalent to the \(H^{1}\)-inner product. The norm induced by this sesquilinear form is denoted by \(\|\cdot\|_{a}:H^{1}_{\perp}\to[0,\infty)\). In other words,
\[\|u\|_{a}:=\sqrt{a[u,u]},\qquad\text{for every $u\in H^{1}_{\perp}$.}\]
**Proposition 5.9**.: _Let us define_
\[Z:=H^{1}_{\perp}\times L^{2}. \tag{5.11}\]
_Then \(\left(Z,\left\langle\cdot\,,\cdot\right\rangle_{X}\right)\) is a Hilbert space. In addition, if \(\|\cdot\|_{Z}:Z\to[0,\infty)\) and \(\|\cdot\|_{2}:Z\to[0,\infty)\) are defined by_
\[\|U\|_{Z}:=\sqrt{\|u\|_{a}^{2}+\|v\|_{L^{2}}^{2}}, \tag{5.12}\]
\[\|U\|_{2}:=\|u\|_{a}+\|v\|_{L^{2}}\,, \tag{5.13}\]
_where \(U=(u,v)\in Z\), then \(\|\cdot\|_{Z}\) and \(\|\cdot\|_{2}\) are norms in \(Z\), both equivalent to \(\left\|\cdot\right\|_{X}\)._
Proof.: It suffices to show that \(Z\) is a closed linear subspace of \(X=H^{1}\times L^{2}\). The linearity of \(Z\) follows from the linearity of \(L^{2}\) and the linearity of \(H^{1}_{\perp}\). Now, assume \(\{U_{j}=(u_{j},v_{j})\}_{j\in\mathbb{N}}\) is a Cauchy sequence in \(Z\). Therefore, \(\{u_{j}\}_{j\in\mathbb{N}}\) and \(\{v_{j}\}_{j\in\mathbb{N}}\) are Cauchy sequences in \(H^{1}\) and in \(L^{2}\), respectively, and \(u_{j}\to u\) and \(v_{j}\to v\) for some \(u\in H^{1}\) and \(v\in L^{2}\). Note that \(u\in H^{1}_{\perp}\) since \(H^{1}\)-convergence implies weak \(L^{2}\)-convergence and \(0=\left\langle u_{j}\,,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}\) for every \(j\in\mathbb{N}\). Therefore \(Z\) is a closed linear subspace of \(X\).
Next, we will show that \(\|\cdot\|_{Z}\) and \(\|\cdot\|_{2}\) are norms in \(Z\). Clearly, both functions are positive definite and absolutely homogeneous, since \(\|\cdot\|_{L^{2}}\) and \(\|\cdot\|_{a}\) are norms in \(L^{2}\) and \(H^{1}_{\perp}\), respectively. Also, subadditivity of \(\|\cdot\|_{2}\) readily follows from the subadditivity of \(\|\cdot\|_{a}\) and of \(\left\|\cdot\right\|_{L^{2}}\). To verify the subadditivity of \(\|\cdot\|_{Z}\), let \(U=(u_{1},v_{1})\) and \(V=(u_{2},v_{2})\) belong to \(Z\); then we obtain
\[\begin{split}\|U+V\|_{Z}^{2}&=\|u_{1}+u_{2}\|_{a}^{ 2}+\left\|v_{1}+v_{2}\right\|_{L^{2}}^{2}\\ &\leq(\|u_{1}\|_{a}+\|u_{2}\|_{a})^{2}+(\left\|v_{1}\right\|_{L^ {2}}+\left\|v_{2}\right\|_{L^{2}})^{2}\\ &=(\|u_{1}\|_{a}^{2}+\left\|v_{1}\right\|_{L^{2}}^{2})+(\|u_{2}\|_ {a}^{2}+\|v_{2}\|_{L^{2}}^{2})+2(\|u_{1}\|_{a}\|u_{2}\|_{a}+\left\|v_{1}\right\| _{L^{2}}\left\|v_{2}\right\|_{L^{2}})\\ &\leq(\|U\|_{Z}+\left\|V\right\|_{Z})^{2}\,.\end{split}\]
Finally, we prove that both norms are equivalent to \(\left\|\cdot\right\|_{X}\). Indeed, since \(\|\cdot\|_{a}\) and \(\left\|\cdot\right\|_{H^{1}}\) are equivalent in \(H^{1}_{\perp}\), there exist \(k_{0}\) and \(K_{0}\) two positive constants such that \(k_{0}\|u\|_{a}\leq\|u\|_{H^{1}}\leq K_{0}\|u\|_{a}\) for each \(u\in H^{1}_{\perp}\). Hence
\[k_{0}^{2}\|u\|_{a}^{2}+\|v\|_{L^{2}}^{2}\leq\left\|(u,v)^{\top}\right\|_{X}^{ 2}\leq K_{0}^{2}\|u\|_{a}^{2}+\left\|v\right\|_{L^{2}}^{2}.\]
By choosing \(k_{1}=\sqrt{\min\{1,k_{0}^{2}\}}\) and \(K_{1}=\sqrt{\max\{1,K_{0}^{2}\}}\)
\[k_{1}\|U\|_{Z}\leq\left\|U\right\|_{X}\leq K_{1}\|U\|_{Z},\qquad\text{for every $U=(u,v)^{\top}\in Z$.}\]
Thus, \(\|\cdot\|_{Z}\) and \(\left\|\cdot\right\|_{X}\) are equivalent in \(Z\). Since, clearly,
\[(\left\|u\right\|_{H^{1}}+\left\|v\right\|_{L^{2}})^{2}\leq 2(\left\|u\right\|_{H^{1}}^{2}+\left\|v\right\|_{L^{2}}^{2})\leq 2(\left\|u\right\|_{H^{1}}+\left\|v\right\|_{L^{2}})^{2},\]
taking the square root and using the equivalence between \(\|\cdot\|_{a}\) and \(\left\|\cdot\right\|_{H^{1}}\) one obtains
\[(k_{0}\|u\|_{a}+\left\|v\right\|_{L^{2}})\leq\sqrt{2}\left\|U\right\|_{X}\leq \sqrt{2}(K_{0}\|u\|_{a}+\left\|v\right\|_{L^{2}}).\]
Again, choosing \(k_{2}=\min\{1,k_{0}\}/\sqrt{2}\) and \(K_{2}=\max\{1,K_{0}\}\), we get
\[k_{2}\|U\|_{2}\leq\left\|U\right\|_{X}\leq K_{2}\|U\|_{2},\qquad\text{for every $U=(u,v)^{\top}\in Z$.}\]
**Remark 5.10**.: Note that \(\|\cdot\|_{Z}\) is induced by the inner product \(\langle\cdot,\cdot\rangle_{Z}:Z\times Z\to\mathbb{C}\) given by
\[\langle U,V\rangle_{Z}:=a[u,w]+\langle v\,,z\rangle_{L^{2}}\,,\quad\text{with $U=(u,v),\;V=(w,z)$.}\]
Hence, \(\langle\cdot,\cdot\rangle_{Z}\) is equivalent to \(\langle\cdot\,,\cdot\rangle_{X}\) in \(Z\).
**Lemma 5.11**.: _Let \(\langle\cdot,\cdot\rangle_{\tilde{X}}:X\times X\to\mathbb{C}\) be defined as_
\[\langle\bar{U},\bar{V}\rangle_{\tilde{X}}:=\langle U,V\rangle_{Z}+\left\langle \bar{U}\,,\beta\Theta\right\rangle_{X}+\left\langle\alpha\Theta\,,\bar{V} \right\rangle_{X}-\alpha\beta^{*}\left\|\Theta\right\|_{X}^{2},\]
_where \(\bar{U}=U+\alpha\Theta\) and \(\bar{V}=V+\beta\Theta\) for some \(U,V\in Z\) and \(\alpha,\beta\in\mathbb{C}\) (see Corollary 5.7). Then \(\langle\cdot,\cdot\rangle_{\tilde{X}}\) is an inner product in \(X\), equivalent to \(\langle\cdot\,,\cdot\rangle_{X}\)._
Proof.: First, we prove that \(\langle\cdot,\cdot\rangle_{\tilde{X}}:X\times X\to\mathbb{C}\) is an inner product. It is clearly a Hermitian sesquilinear form, because it is a sum of four terms, each of which is linear in the first argument and antilinear in the second. In view of Corollary 5.7, if \(\bar{U}\in X\), then \(\bar{U}=U+\alpha\Theta\) for some \(U\in Z\) and \(\alpha\in\mathbb{C}\), which yields
\[\langle\bar{U},\bar{U}\rangle_{\tilde{X}}=\left\|U\right\|_{Z}^{2}+2\mathrm{ Re}\langle U\,,\alpha\Theta\rangle_{X}+\left\|\alpha\Theta\right\|_{X}^{2}.\]
Thus, by adding and subtracting \(\left\|U\right\|_{X}^{2}\), one gets
\[\langle\bar{U},\bar{U}\rangle_{\tilde{X}}=\left\|\bar{U}\right\|_{X}^{2}-\left\| U\right\|_{X}^{2}+\left\|U\right\|_{Z}^{2}\geq\left\|U\right\|_{Z}^{2}. \tag{5.14}\]
The last inequality holds since \(\left\|\bar{U}\right\|_{X}^{2}\geq\left\|U\right\|_{X}^{2}\), with equality if and only if \(\alpha=0\). Hence, \(\langle\bar{U},\bar{U}\rangle_{\tilde{X}}=0\) if and only if \(\bar{U}=0\).
Second, we prove that \(\left\langle\cdot,\cdot\right\rangle_{\tilde{X}}\) and \(\left\langle\cdot\,,\cdot\right\rangle_{X}\) are equivalent. Since \(\left\langle\cdot,\cdot\right\rangle_{Z}\) and \(\left\langle\cdot\,,\cdot\right\rangle_{X}\) are equivalent in \(Z\), there exist two positive constants \(k,K>0\) such that \(0<k\leq 1\leq K\) and \(k\left\|U\right\|_{X}\leq\|U\|_{Z}\leq K\left\|U\right\|_{X}\) (see the proof of Proposition 5.9). By applying this relation to the equality in (5.14), we obtain
\[\left(k^{2}-1\right)\left\|U\right\|_{X}^{2}+\left\|\bar{U}\right\|_{X}^{2}\leq\left\langle\bar{U},\bar{U}\right\rangle_{\tilde{X}}\leq\left(K^{2}-1\right)\left\|U\right\|_{X}^{2}+\left\|\bar{U}\right\|_{X}^{2}.\]
Since \(\left\|U\right\|_{X}\leq\left\|\bar{U}\right\|_{X}\), we conclude that
\[k^{2}\left\|\bar{U}\right\|_{X}^{2}\leq\left\langle\bar{U},\bar{U}\right\rangle_{\tilde{X}}\leq K^{2}\left\|\bar{U}\right\|_{X}^{2},\]
and the proof is complete.
The following resolvent estimate is the key ingredient to apply the Lumer-Phillips theorem. We use an appropriate choice of metric in order to prove it.
**Lemma 5.12**.: _There exists \(\eta_{0}\in\mathbb{R}\) such that_
\[\operatorname{Re}\left\langle\mathcal{A}\bar{U}\,,\bar{U}\right\rangle_{X} \leq\eta_{0}\|\bar{U}\|_{X}^{2}\]
_for every \(\bar{U}\in D(\mathcal{A})\)._
Proof.: Note that if \(\bar{U}\in D(\mathcal{A})\subset X\), then \(\bar{U}=U+\alpha\Theta\) for some \(U\in Z\) and \(\alpha\in\mathbb{C}\), due to Corollary 5.7. Moreover, \(U=(u,v)\) with \(u\in H_{\perp}^{2}\) and \(v\in H^{1}\); also, by Lemma 5.6, \(v=w+\beta\partial_{x}\overline{\theta}\) for some \(w\in H_{\perp}^{1}\) and \(\beta\in\mathbb{C}\). Since \(\lambda=0\) is an eigenvalue of \(\mathcal{A}\) with eigenfunction \(\Theta\) (see Lemma 4.4), we have that
\[\mathcal{A}\bar{U}=\mathcal{A}U=V+\beta\Theta,\quad\text{and}\quad\ V:= \begin{pmatrix}w\\ -\nu v-\mathcal{L}\,u\end{pmatrix}\in Z.\]
Then,
\[\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}=\left\langle V,U\right\rangle_{Z}+\left\langle V\,,\alpha\Theta\right\rangle_{X}+\left\langle\beta\Theta\,,U\right\rangle_{X}+\left\langle\beta\Theta\,,\alpha\Theta\right\rangle_{X}.\]
In view of Remark 5.10 and (5.10), the term \(\langle V,U\rangle_{Z}\) is recast as
\[\left\langle V,U\right\rangle_{Z}=a[w,u]-\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}-\nu\left\|v\right\|_{L^{2}}^{2}=2i\operatorname{Im}a[w,u]-\nu\left\|v\right\|_{L^{2}}^{2}.\]
Upon substitution into the expression for \(\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}\), one gets
\[\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}=2i\operatorname{Im}a[w,u]-\nu\left\|v\right\|_{L^{2}}^{2}+\left\langle V\,,\alpha\Theta\right\rangle_{X}+\left\langle\beta\Theta\,,U\right\rangle_{X}+\left\langle\beta\Theta\,,\alpha\Theta\right\rangle_{X}.\]
Now, using the explicit form of \(\Theta\) and the fact that \(\left\langle w\,,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}=0\) we obtain
\[\left\langle V\,,\alpha\Theta\right\rangle_{X}=\left\langle w\,,\alpha \partial_{x}\overline{\theta}\right\rangle_{H^{1}}=\left\langle\partial_{x}w \,,\alpha\partial_{x}^{2}\overline{\theta}\right\rangle_{L^{2}}=-\left\langle w \,,\alpha\partial_{x}^{3}\overline{\theta}\right\rangle_{L^{2}},\]
where the last equality follows upon integration by parts. Hence,
\[\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}=2i \operatorname{Im}a[w,u]-\nu\left\|v\right\|_{L^{2}}^{2}+\left\langle\beta \Theta\,,U\right\rangle_{X}+\left\langle\beta\Theta\,,\alpha\Theta\right\rangle _{X}-\left\langle w\,,\alpha\partial_{x}^{3}\overline{\theta}\right\rangle_{L^{2 }}. \tag{5.15}\]
Taking the real part of (5.15) and applying the Cauchy-Schwarz inequality yields
\[2\operatorname{Re}\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{ \tilde{X}}\leq-2\nu\left\|v\right\|_{L^{2}}^{2}+\left\|U\right\|_{X}^{2}+2 \left\|\beta\Theta\right\|_{X}^{2}+\left\|\alpha\Theta\right\|_{X}^{2}+\left\| \alpha\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}^{2}+\left\|w\right\|_ {L^{2}}^{2}.\]
Note that \(\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}<\infty\) and \(\left\|\partial_{x}\overline{\theta}\right\|_{L^{2}}\neq 0\) due to Proposition 2.1. Thereby, we may define the positive constants \(C_{1}:=\left\|\Theta\right\|_{X}^{2}/\left\|\partial_{x}\overline{\theta} \right\|_{L^{2}}^{2}\) and \(C_{2}:=\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}^{2}/\left\| \partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}\)
depending only on \(\overline{\theta}\), so that
\[2\mathrm{Re}\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{ \tilde{X}}\leq -2\nu\left\|v\right\|_{L^{2}}^{2}+\left\|U\right\|_{X}^{2}+2C_{1} \left\|\beta\partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}+\left\|\alpha \Theta\right\|_{X}^{2}+C_{2}\left\|\alpha\partial_{x}\overline{\theta}\right\| _{L^{2}}^{2}+\left\|w\right\|_{L^{2}}^{2}\] \[\leq -2\nu\left\|v\right\|_{L^{2}}^{2}+\left(2+C_{2}\right)\left\| \bar{U}\right\|_{X}^{2}+\left(1+2C_{1}\right)\left\|v\right\|_{L^{2}}^{2}\] \[\leq \left(3+2C_{1}+C_{2}\right)\left\|\bar{U}\right\|_{X}^{2}\]
The last two inequalities hold because \(\left\|\bar{U}\right\|_{X}\geq\left\|U\right\|_{X}\geq\left\|v\right\|_{L^{2}}\geq\max\{\left\|w\right\|_{L^{2}},\left\|\beta\partial_{x}\overline{\theta}\right\|_{L^{2}}\}\) and \(\left\|\bar{U}\right\|_{X}\geq\left\|\alpha\Theta\right\|_{X}\geq\left\|\alpha\partial_{x}\overline{\theta}\right\|_{L^{2}}\). Finally, the equivalence between \(\left\langle\cdot\,,\cdot\right\rangle_{X}\) and \(\left\langle\cdot,\cdot\right\rangle_{\tilde{X}}\) implies the existence of \(K>0\) such that
\[\mathrm{Re}\,\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{X}\leq\tfrac{1}{2}K(3+2C_{1}+C_{2})\|\bar{U}\|_{X}^{2},\]
yielding the result with \(\eta_{0}=\tfrac{1}{2}K(3+2C_{1}+C_{2})>0\).
**Lemma 5.13**.: _There exists \(\tau>\eta_{0}\) such that \(\mathcal{A}-\tau\) is onto._
Proof.: First we notice that, from the proof of Lemma 5.12, \(\eta_{0}>0\). In addition, we know that every \(\lambda>0\) belongs to \(\rho\left(\mathcal{A}\right)\) due to Theorem 4.11. Therefore, the proof is complete by choosing any \(\tau>\eta_{0}\).
As an immediate consequence of Lemmata 5.12 and 5.13, we are now able to apply the classical Lumer-Phillips theorem (see, e.g., Theorem 12.22, p. 407, in [10]) and to claim the following result.
**Lemma 5.14**.: _The operator \(\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) with \(D(\mathcal{A})=H^{2}\times H^{1}\) is the infinitesimal generator of a \(C_{0}\)-semigroup of quasicontractions \(\{e^{t\mathcal{A}}\}_{t\geq 0}\)._
**Corollary 5.15**.: _For each \(U\in D(\mathcal{A})=H^{2}\times H^{1}\) there holds_
\[\frac{d}{dt}\big{(}e^{t\mathcal{A}}U\big{)}=e^{t\mathcal{A}}\mathcal{A}U= \mathcal{A}(e^{t\mathcal{A}}U).\]
Proof.: Follows from Lemma 5.14 and basic properties of semigroups (cf. [11, 12]).
We now observe that on a reflexive Banach space, weak and weak\({}^{*}\) topologies coincide, and therefore the family of dual operators \(\{(e^{t\mathcal{A}})^{*}\}_{t\geq 0}\), consisting of all the formal adjoints in \(L^{2}\) is a \(C_{0}\)-semigroup as well (cf. [11], p. 44). Moreover, the infinitesimal generator of this semigroup is simply \(\mathcal{A}^{*}\) (see Corollary 10.6 in [12]), so we denote \((e^{t\mathcal{A}})^{*}=e^{t\mathcal{A}^{*}}\). By semigroup properties we readily have
\[e^{t\mathcal{A}}\Theta=\Theta,\qquad\text{and}\qquad e^{t\mathcal{A}^{*}}\Phi=\Phi.\]
As a result of these identities and of the definition of the projector, we have
**Lemma 5.16**.: _For all \(t\geq 0\) there holds \(e^{t\mathcal{A}}\mathcal{P}=\mathcal{P}e^{t\mathcal{A}}\)._
Proof.: Let \(U\in H^{2}\times H^{1}\); then
\[\mathcal{P}e^{t\mathcal{A}}U=e^{t\mathcal{A}}U-\Xi^{-1}\left\langle e ^{t\mathcal{A}}U\,,\Phi\right\rangle_{X}\Theta =e^{t\mathcal{A}}U-\Xi^{-1}\left\langle U\,,e^{t\mathcal{A}^{*}} \Phi\right\rangle_{X}\Theta\] \[=e^{t\mathcal{A}}U-\Xi^{-1}\left\langle U\,,\Phi\right\rangle_{X }e^{t\mathcal{A}}\Theta\] \[=e^{t\mathcal{A}}\mathcal{P}U,\]
as claimed.
The last result implies that \(X_{1}\) is an \(e^{t\mathcal{A}}\)-invariant, closed (Hilbert) subspace of \(X=H^{1}\times L^{2}\). Hence, we define the domain
\[D_{1}:=\{U\in D(\mathcal{A})\cap X_{1}\,:\,\mathcal{A}U\in X_{1}\},\]
and the operator
\[\mathcal{A}_{1}:D_{1}\subset X_{1}\to X_{1},\]
\[\mathcal{A}_{1}U:=\mathcal{A}U,\qquad U\in D_{1},\]
as the restriction of \(\mathcal{A}\) on \(X_{1}\). Therefore, \(\mathcal{A}_{1}\) is a closed, densely defined operator on the Hilbert space \(X_{1}\). Moreover,
**Lemma 5.17**.: \(\lambda=0\) _is not in the spectrum of \(\mathcal{A}_{1}\)._
Proof.: It suffices to verify that \(\Theta\notin X_{1}\). We compute \(\mathcal{P}\Theta=\Theta-\Xi^{-1}\langle\Theta\,,\Phi\rangle_{X}\Theta=0\). Hence \(0\neq\Theta\in\ker\mathcal{P}\), and therefore the eigenfunction associated to the eigenvalue \(\lambda=0\) is not in \(\mathcal{R}(\mathcal{P})=X_{1}\).
In this fashion we project out \(\lambda=0\) from the spectrum. As a consequence of spectral stability (see Theorem 4.11 above), we obtain the following
**Corollary 5.18**.: \(\sigma(\mathcal{A}_{1})\) _is a strict subset of the stable complex half-plane,_
\[\sigma(\mathcal{A}_{1})\subset\{\lambda\in\mathbb{C}\,:\,\mathrm{Re}\,\lambda \leq-\zeta_{0}(\nu)<0\},\]
_and the spectral bound of \(\mathcal{A}_{1}\) is strictly negative, \(s(\mathcal{A}_{1})<0\)._
**Lemma 5.19**.: _The family of operators \(\{e^{t\mathcal{A}_{1}}\}_{t\geq 0}\), \(e^{t\mathcal{A}_{1}}:X_{1}\to X_{1}\), defined as_
\[e^{t\mathcal{A}_{1}}U:=e^{t\mathcal{A}}U,\quad U\in X_{1},\;t\geq 0,\]
_is a \(C_{0}\)-semigroup of quasicontractions in the Hilbert space \(X_{1}\) with infinitesimal generator \(\mathcal{A}_{1}\)._
Proof.: The semigroup properties are inherited from those of \(e^{t\mathcal{A}}\) in \(X=H^{1}\times L^{2}\). That \(\mathcal{A}_{1}\) is the infinitesimal generator follows from the Corollary in Section 2.2 of [1], p. 61.
Finally, in order to prove that the semigroup is exponentially decaying, we rely on Gearhart-Pruss theorem and we need to show that
\[\sup_{\mathrm{Re}\,\lambda>0}\|(\lambda-\mathcal{A}_{1})^{-1}\|_{X_{1}\to X_{ 1}}<\infty.\]
This condition is satisfied if any solution \(U\) to the linear equation \((\lambda-\mathcal{A}_{1})U=F\) for \(F\in H^{1}\times L^{2}\) satisfies a resolvent estimate, \(\|U\|_{X}\leq C(\lambda)\left\|F\right\|_{X}\), in which the constant \(C(\lambda)\) remains bounded in \(\mathrm{Re}\,\lambda>0\). The next result goes in that direction.
**Lemma 5.20**.: _Let \(\lambda\in\rho\left(\mathcal{A}\right)\) and \(f,g,u,v\in L^{2}_{\perp}\) be such that \(F=(f,g)^{\top}\in X_{1}\), \(U=(u,v)^{\top}\in D_{1}\) and \((\lambda-\mathcal{A}_{1})U=F\). Then,_
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq \|U\|_{2}\|F\|_{2} \tag{5.16}\]
_Moreover, if \(C_{0}\) and \(C_{1}\) are two fixed positive numbers, then there exists a constant \(K(C_{0},C_{1})>0\) such that \(\left\|U\right\|_{X}\leq K(C_{0},C_{1})\left\|F\right\|_{X}\) for all \(\lambda\) such that \(\mathrm{Re}\,\lambda>C_{0}\) or \(\left|\mathrm{Im}\,\lambda\right|>C_{1}\)._
Proof.: First, we write the vectorial equation as a system of linear equations,
\[\lambda u-v=f, \tag{5.17}\]
\[\mathcal{L}\,u+(\lambda+\nu)v=g. \tag{5.18}\]
Take the \(L^{2}\)-product on the left of (5.17) with \(\mathcal{L}\,u\), and the \(L^{2}\)-product on the right of (5.18) with \(v\). The result is
\[\lambda^{*}\left\langle\mathcal{L}\,u\,,u\right\rangle_{L^{2}}-\left\langle \mathcal{L}\,u\,,v\right\rangle_{L^{2}}=\left\langle\mathcal{L}\,u\,,f\right \rangle_{L^{2}},\quad\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}+( \lambda+\nu)\left\|v\right\|_{L^{2}}^{2}=\left\langle g\,,v\right\rangle_{L^{2 }}.\]
Notice that \(u\in H^{2}_{\perp}\) and \(v,f\in H^{1}_{\perp}\). By Lemma 5.8, these equations can be written in terms of the sesquilinear form \(a[\cdot,\cdot]\) as
\[\lambda^{*}a[u,u]-a[u,v]=a[u,f],\]
\[a[u,v]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}=\left\langle g\,,v\right\rangle _{L^{2}}.\]
Then, the complex modulus of the sum of these equations satisfies
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq\left|a[u,f]\right|+\left|\left\langle g\,,v\right\rangle_{L^{2}}\right|.\]
Since \(a\) is a nonnegative, Hermitian, sesquilinear form, the Cauchy-Schwarz inequality remains valid for \(a\) in \(H^{1}_{\perp}\), just as it does for the classical inner product in \(L^{2}\). Hence,
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq a ^{1/2}[u,u]a^{1/2}[f,f]+\left\|g\right\|_{L^{2}}\left\|v\right\|_{L^{2}}.\]
Also note that the right-hand side of the last inequality is bounded by
\[\left[\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right]\left[\left\|g\right\|_{L^{ 2}}+a^{1/2}[f,f]\right]=\|U\|_{2}\|F\|_{2}.\]
Thus, inequality (5.16) follows.
Second, use (5.16) to get
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq \left[a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\right]\|F\|_{2}.\]
Notice that \(\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|=0\) if and only if \((u,v)=(0,0)\) because \(\operatorname{Re}\lambda\geq 0\) and \(\nu>0\). Hence, if \((u,v)\neq 0\), we have
\[a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\leq\frac{\left(a^{1/2}[u,u]+\left\|v \right\|_{L^{2}}\right)^{2}}{\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v \right\|_{L^{2}}^{2}\right|}\|F\|_{2}.\]
If \(\operatorname{Re}\lambda>C_{0}>0\), for some \(C_{0}>0\), then
\[\frac{\left(\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right)^{2}}{\left|\lambda^{ *}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq\frac{2}{ \operatorname{Re}\lambda}\ \frac{a[u,u]+\left\|v\right\|_{L^{2}}^{2}}{a[u,u]+\left\|v\right\|_{L^{2}}^ {2}+\frac{\nu}{\operatorname{Re}\lambda}\left\|v\right\|_{L^{2}}^{2}}\leq \frac{2}{C_{0}}.\]
Now, if \(|\operatorname{Im}\lambda|\geq C_{1}>0\) and \(\operatorname{Re}\lambda\geq 0\) then
\[\frac{\left(a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\right)^{2}}{\left|\lambda^{ *}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq 2\frac{a[u,u]+\left\|v \right\|_{L^{2}}^{2}}{\sqrt{C_{1}^{2}(a[u,u]-\left\|v\right\|_{L^{2}}^{2})^{2} +\nu^{2}\left\|v\right\|_{L^{2}}^{4}}}.\]
Let us write \(a[u,u]=r^{2}\cos^{2}t\) and \(\left\|v\right\|_{L^{2}}^{2}=r^{2}\sin^{2}t\) for some \(r>0\) and \(t\in[0,\pi/2]\). This change of variables implies that
\[\frac{\left(\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right)^{2}}{ \left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq \frac{2}{\sqrt{C_{1}^{2}\cos^{2}2t+\nu^{2}\cos^{4}t}}\] \[\leq \frac{2}{\sqrt{C_{1}^{2}\cos^{2}2t+\frac{1}{4}\nu^{2}(1+\cos 2t)^{ 2}}}\] \[\leq \frac{2}{\sqrt{\left(C_{1}^{2}+\frac{1}{4}\nu^{2}\right)\cos^{2}2 t+\frac{1}{2}\nu^{2}\cos 2t+\frac{1}{4}\nu^{2}}}.\]
Let us denote,
\[h(t):=\left(C_{1}^{2}+\tfrac{1}{4}\nu^{2}\right)\cos^{2}2t+\tfrac{1}{2}\nu^{2 }\cos 2t+\tfrac{1}{4}\nu^{2},\qquad t\in[0,\pi/2].\]
This is a non-vanishing \(C^{1}\)-function with a global minimum at \(t_{c}\in(\pi/4,\pi/2)\), determined by the relation \(\cos 2t_{c}=-\nu^{2}/(4C_{1}^{2}+\nu^{2})\). Thus, a straightforward computation implies that
\[\frac{\left(\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right)^{2}}{\left|\lambda^{* }a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq\frac{2}{\sqrt{h (t_{c})}}=\frac{2\sqrt{\nu^{2}+4C_{1}^{2}}}{\nu C_{1}}.\]
Therefore, if \(K=2\max\{\sqrt{\nu^{2}+4C_{1}^{2}}/(\nu C_{1}),1/C_{0}\}\), we obtain
\[\|U\|_{2}\leq K\|F\|_{2}.\]
Finally, we conclude the existence of a constant \(K(C_{0},C_{1})>0\) such that \(\left\|U\right\|_{X}\leq K(C_{0},C_{1})\left\|F\right\|_{X}\), due to the equivalence between the norms \(\|\cdot\|_{2}\) and \(\left\|\cdot\right\|_{X}\); see Proposition 5.9. Thus, the second statement also holds. This completes the proof.
We are left to prove the following estimate.
**Lemma 5.21**.: \[\sup_{\operatorname{Re}\lambda>0}\|(\lambda-\mathcal{A}_{1})^{-1}\|_{X_{1} \to X_{1}}<\infty.\]
Proof.: Let \(\operatorname{Re}\lambda\geq 0\), so that \(\lambda\in\rho(\mathcal{A}_{1})\) by Corollary 5.18, and choose two positive numbers \(C_{0}\) and \(C_{1}\). Then, we split the set \(\{\lambda\in\mathbb{C}\ |\ \operatorname{Re}\lambda\geq 0\}\) into three disjoint sets, namely
\[\begin{array}{l}S_{0}=\{\lambda\in\mathbb{C}\ |\ 0\leq\operatorname{Re} \lambda\leq C_{0},\ |\operatorname{Im}(\lambda)\,|\leq C_{1}\},\\ S_{1}=\{\lambda\in\mathbb{C}\ |\ 0\leq\operatorname{Re}\lambda\leq C_{0},\ C_{1} <|\operatorname{Im}(\lambda)\,|\},\\ S_{2}=\{\lambda\in\mathbb{C}\ |\ C_{0}<\operatorname{Re}\lambda\}.\end{array}\]
In the rest of the proof, we will show that for every \(\bar{F}\in X_{1}\) the solution \(\bar{U}\in D_{1}\subset X_{1}\) to the equation \((\lambda-\mathcal{A}_{1})\bar{U}=\bar{F}\) is uniformly bounded for \(\lambda\in S_{k}\) with \(k=0,1\), or \(2\).
We analyze the behavior on \(S_{0}\). We claim that \(\lambda\to\|(\lambda-\mathcal{A}_{1})^{-1}\|\) is a continuous mapping. Indeed, this follows from the continuity of the mapping \(\lambda\to(\lambda-\mathcal{A}_{1})^{-1}\) and the reverse triangle inequality, since for every \(\lambda,\ \mu\in\rho(\mathcal{A}_{1})\) there holds
\[\big{|}\ \|(\lambda-\mathcal{A}_{1})^{-1}\|-\|(\mu-\mathcal{A}_{1})^{-1}\|\ \big{|}\leq\|(\lambda-\mathcal{A}_{1})^{-1}-(\mu-\mathcal{A}_{1})^{-1}\|.\]
Now, we observe that \(S_{0}\) is a compact subset contained in \(\rho(\mathcal{A}_{1})\), where the mapping \(\lambda\to\|(\lambda-\mathcal{A}_{1})^{-1}\|\) is continuous. Then, it follows that there exists \(K_{1}>0\) such that \(\|(\lambda-\mathcal{A}_{1})^{-1}\|\leq K_{1}\).
The analysis on \(S_{1}\) and \(S_{2}\) is as follows. Since \(H^{k}\subset L^{2}=H^{0}\) for \(k>0\), we write the entries in \(\bar{F}\) and \(\bar{U}\) as the sum of two terms, one in \(\operatorname{Span}\{\partial_{x}\overline{\theta}\}\) and the other in \(H^{k}_{\perp}\). More precisely, by Lemma 5.5 we know that there exist \(u\in H^{2}_{\perp}\), \(v,f\in H^{1}_{\perp}\), \(g\in L^{2}_{\perp}\) and \(\alpha,\gamma\in\mathbb{C}\) such that \(\bar{U}=(u,v)+\alpha(1,-\nu)\partial_{x}\overline{\theta}\) and \(\bar{F}=(f,g)+\gamma(1,-\nu)\partial_{x}\overline{\theta}\). The vectorial equation \((\lambda-\mathcal{A}_{1})\bar{U}=\bar{F}\) translates into three equations:
\[\lambda u-v=f,\qquad\mathcal{L}\,u+(\lambda+\nu)v=g,\qquad\text{and}\qquad \alpha(\lambda+\nu)=\gamma.\]
Now let \(U=(u,v)\) and \(F=(f,g)\). Since \(u,v,f\), and \(g\) satisfy the hypotheses of Lemma 5.20, then
\[\left\|U\right\|_{X}\leq K(C_{0},C_{1})\left\|F\right\|_{X}.\]
Thus,
\[\left\|\bar{U}\right\|_{X}\leq\left\|U\right\|_{X}+\frac{\left\|\gamma(1,-\nu )\partial_{x}\overline{\theta}\right\|_{X}}{|\lambda+\nu|}\leq\left(K(C_{0},C _{1})+\frac{1}{|\lambda+\nu|}\right)\left\|\bar{F}\right\|_{X}.\]
Hence, \((\lambda-\mathcal{A}_{1})^{-1}\) is bounded on \(S_{1}\cup S_{2}\) and the proof is complete.
Now from Lemma 5.21 and Corollary 5.18, we may apply Gearhart-Pruss theorem directly to conclude the following:
**Theorem 5.22**.: _There exist uniform constants \(M\geq 1\) and \(\omega_{1}>0\) such that_
\[\|e^{t\mathcal{A}_{1}}U\|_{H^{1}\times L^{2}}\leq Me^{-\omega_{1}t}\|U\|_{H^{ 1}\times L^{2}}, \tag{5.19}\]
_for all \(t\geq 0\), \(U\in X_{1}\)._
## 6. Nonlinear (orbital) stability
In this section we study the stability of the solution \(\theta(x,t)\), if it exists, to the Cauchy problem (2.15),
\[\begin{split}\partial_{t}^{2}\theta+\nu\partial_{t}\theta+\nabla \mathcal{E}(\theta)=0,&\quad x\in\mathbb{R},\ t>0,\\ \theta(x,0)=u_{0}(x),&\quad x\in\mathbb{R},\\ \partial_{t}\theta(x,0)=v_{0}(x),&\quad x\in\mathbb{ R},\end{split} \tag{6.1}\]
when the initial conditions are close to the static Neel wall \(\overline{\theta}\). This problem can be rewritten as a nonlinear vector system of equations by setting \(\varphi=\partial_{t}\theta\). Hence, if \(W=(\theta,\varphi)\), \(W_{0}=(u_{0},v_{0})\), and \(F(W)=(\varphi,-\nu\varphi-\nabla\mathcal{E}(\theta))^{\top}\), we get
\[\begin{split}\partial_{t}W&=F(W),\qquad x\in \mathbb{R},\ t>0,\\ W(x,0)&=W_{0}(x),\qquad x\in\mathbb{R}.\end{split} \tag{6.2}\]
**Remark 6.1**.: It is known that the nonlinear term in (6.1) is invariant under translations in the spatial variable (see Lemma 2.6 in [10]). Thus, if \(\overline{\theta}\) denotes the phase of the static Neel wall, then \(\nabla\mathcal{E}(\overline{\theta}(\cdot+\delta))=0\) for every \(\delta\in\mathbb{R}\). This symmetry is inherited by equation (6.2). Indeed,
\[F(\phi(\delta))=0,\quad\text{for}\quad\ \phi(\delta)=(\overline{\theta}(\cdot+ \delta),0)^{\top}.\]
Hence, taking the derivative with respect to \(\delta\), we get \(DF(\phi(\delta))\phi^{\prime}(\delta)=0\). Therefore, zero is an eigenvalue of \(DF(\phi(\delta))\) with eigenfunction \(\phi^{\prime}(\delta)\), expressing, once again, translation invariance.
The linearized system around \(\phi(\delta)\) now reads,
\[\partial_{t}V=\mathcal{A}^{\delta}V \qquad x\in\mathbb{R},\ t>0, \tag{6.3}\] \[V(x,0)=V_{0}(x) x\in\mathbb{R},\]
where,
\[\mathcal{A}^{\delta}:=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}^{\delta}&-\nu\mathrm{I}\end{pmatrix},\qquad\text{and}\qquad \mathcal{L}^{\delta}\,u=\left.\frac{d}{d\epsilon}\nabla\mathcal{E}\left( \overline{\theta}(\cdot+\delta)+\epsilon u\right)\right|_{\epsilon=0}.\]
These operators are defined on the same base spaces as before: \(H^{1}\times L^{2}\) and \(L^{2}\), respectively. Notice that \(\mathcal{L}^{\delta}\) is similar to \(\mathcal{L}\), but the only difference lies on the dependence on \(\delta\) due to the translation in the argument of the Neel wall's phase. Then, the following identification is well justified,
\[\mathcal{A}^{0}:=\mathcal{A},\qquad\text{and}\qquad\mathcal{L}^{0}:=\mathcal{ L}\,.\]
Due to previous results, the system (6.3) for \(\delta=0\) has a unique solution in \(X_{1}\) given by the action of a \(C_{0}\)-semigroup, generated by \(\mathcal{A}_{1}\), on the initial condition \(V_{0}\in X_{1}\). It is not difficult to see that all the arguments before Section 6 are easily adapted to the case \(\delta\neq 0\), since the translation by \(\delta\) in the argument of \(\overline{\theta}\) can be interpreted as the action of the left translation operator \(T_{l}(\delta)\), which is an \(L^{2}\)-isometry and a \(C_{0}\)-semigroup with generator \(\partial_{x}\) (see [10]). Therefore, since \(\overline{\theta}\in H^{1}\), there holds
\[\left\|\partial_{x}\overline{\theta}(x+\delta)\right\|_{L^{2}}=\left\|\partial _{x}T_{l}(\delta)\overline{\theta}(x)\right\|_{L^{2}}=\left\|T_{l}(\delta) \partial_{x}\overline{\theta}(x)\right\|_{L^{2}}=\left\|\partial_{x}\overline {\theta}(x)\right\|_{L^{2}},\]
which implies that the \(H^{1}\)-norm and the \(L^{2}\)-norm remain invariant. Thus, we must emphasize this \(\delta\)-dependence in all the terms that depend on the profile \(\overline{\theta}\). For example, we replace \(\overline{\theta}\) by \(\overline{\theta}_{\delta}=\overline{\theta}(\cdot+\delta)\), as well as the vector \(\Theta\), the projector \(\mathcal{P}\) and the space \(X_{1}\), which are replaced by \(\Theta(\delta)\), \(\mathcal{P}(\delta)\) and \(X_{1}(\delta)\), respectively. We represent these functions, operators and spaces for the case \(\delta\neq 0\) with the explicit dependence on this variable. It is important to point out that the spectral and growth bounds _do not change_, because they depend on the invariant \(H^{1}\) and \(L^{2}\) norms.
As a result, from the previous analysis we know that system (6.3) has a unique solution in \(X=H^{1}\times L^{2}\) given by the action of the \(C_{0}\)-semigroup \(\{e^{t\mathcal{A}^{\delta}}\}_{t>0}\), on the initial condition \(V_{0}\in X\). Moreover, due to Theorem 5.22, there exist uniform constants \(M\geq 1\) and \(\tilde{\omega}>0\) such that
\[\|e^{t\mathcal{A}^{\delta}_{1}}V_{0}\|_{H^{1}\times L^{2}}\leq Me^{-\tilde{\omega}t}\|V_{0}\|_{H^{1}\times L^{2}}.\]
Notice that if \(\mathcal{P}(\delta)\) is the projector defined in (5.6) for \(\delta\neq 0\) and \(V_{0}\in H^{1}\times L^{2}\), then \(\mathcal{P}(\delta)V_{0}\in X_{1}(\delta)\) and the linear system (6.3) has at least one solution given by
\[V=e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta)V_{0}+(\mathrm{I}-\mathcal{P}( \delta))V_{0}, \tag{6.4}\]
since
\[\partial_{t}V= \partial_{t}\left[e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta)V_{ 0}+(\mathrm{I}-\mathcal{P}(\delta))V_{0}\right]\] \[= \mathcal{A}^{\delta}[e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta) V_{0}]\] \[= \mathcal{A}^{\delta}\left[e^{t\mathcal{A}^{\delta}}\mathcal{P}( \delta)V_{0}+(\mathrm{I}-\mathcal{P}(\delta))V_{0}\right]\] \[= \mathcal{A}^{\delta}V.\]
Moreover, due to standard properties of \(C_{0}\)-semigroups, it follows that
\[\lim_{t\to 0}V=\lim_{t\to 0}\left[e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta)V_{0}+( \mathrm{I}-\mathcal{P}(\delta))V_{0}\right]=\mathcal{P}(\delta)V_{0}+(\mathrm{I }-\mathcal{P}(\delta))V_{0}=V_{0}.\]
In order to establish nonlinear stability we rely on an application of the implicit function theorem in Hilbert spaces given by Lattanzio _et al._[13] based on a similar result for Banach spaces presented by Sattinger [14]. We present this result here to ease the reading.
**Theorem 6.2**.: _Let \(X\) be a Hilbert space and \(I\subset\mathbb{R}\) be an open neighborhood of \(\delta=0\). Assume that \(F:\mathcal{D}\subset X\to X\) and \(\phi:I\subset\mathbb{R}\to\mathcal{D}\) satisfy \(F(\phi)=0\). If \(\mathcal{P}(\delta)\) is the projector onto \(\{\phi^{\prime}(\delta)\}_{X}^{\perp}\) and there exist positive constants \(C_{0},\delta_{0},M,\omega,\) and \(\gamma\) such that_
1. _for every solution_ \(V=V(t,V_{0},\delta)\) _to (_6.3_),_ \[\|\mathcal{P}(\delta)V(t,V_{0},\delta)\|_{X}\leq C_{0}e^{-\omega t}\|\mathcal{ P}(\delta)V_{0}\|_{X},\] (6.5)
2. \(\phi\) _is differentiable at_ \(\delta=0\) _with_ \[\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\|_{X}\leq C_{0}|\delta|^{1+\gamma},\] (6.6) _for_ \(|\delta|<\delta_{0}\)_, and_
3. \(F\) _is differentiable at_ \(\phi(\delta)\) _for every_ \(\delta\in(-\delta_{0},\delta_{0})\) _with_ \[\|F(\phi(\delta)+W)-F(\phi(\delta))-DF(\phi(\delta))W\|_{X}\leq C_{0}\|W\|_{X} ^{1+\gamma},\] (6.7) _for_ \(|\delta|<\delta_{0}\) _and_ \(\|W\|_{X}\leq M\)_._
_Then there exists \(\epsilon>0\) such that for any \(W_{0}\in B_{\epsilon}(\phi(0))\subset X\) there exist \(\delta\in I\) and a positive constant \(C\) for which the solution \(W(t;W_{0})\) to the nonlinear system (6.2) satisfies_
\[\|W(t,W_{0})-\phi(\delta)\|_{X}\leq C\ \|W_{0}-\phi(0)\|_{X}\ e^{-\omega t}. \tag{6.8}\]
We proceed with the nonlinear stability result for the Neel wall's phase.
### Proof of Theorem 2.3
We begin the proof by setting \(X=H^{1}\times L^{2}\) and
\[\phi(\delta)=(T_{l}(\delta)\overline{\theta},0),\qquad F(W)=\begin{pmatrix} \varphi\\ -\nu\varphi-\nabla\mathcal{E}(\theta)\end{pmatrix},\qquad\mathcal{D}:=H^{2} \times H^{1}.\]
Due to Remark 6.1, we know that \(F(\phi(\delta))=0\) for every \(\delta\in\mathbb{R}\).
Now, let \(V_{0}\in\mathcal{D}\) be an initial condition such that \(V(t,V_{0},\delta)\) is a solution to the linear system (6.3). By setting \(\mathcal{P}(\delta)\) as the projector in Theorem 6.2, it follows that (6.5) is satisfied (see Theorem 5.22).
We turn our attention to the second hypothesis in Theorem 6.2. We know that \(\overline{\theta}\in H^{2}\) is a smooth real-valued function. Hence \(\phi\in H^{1}\times L^{2}\) and
\[\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\|_{H^{1}\times L^{2}}=\|T_{l}( \delta)\overline{\theta}-\overline{\theta}-\partial_{x}\overline{\theta} \delta\|_{H^{1}}.\]
This term is easily estimated with the integral representation of the remainder for Taylor polynomials, yielding
\[|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial_{x} \overline{\theta}\delta|^{2} =\delta^{4}\left|\int_{0}^{1}(1-t)\,\partial_{x}^{2}\overline{ \theta}(x+t\delta)\,dt\right|^{2}\] \[\leq\delta^{4}\int_{0}^{1}(1-t)^{2}\,\left(\partial_{x}^{2} \overline{\theta}(x+t\delta)\right)^{2}\,dt,\]
where the last inequality follows from Jensen's inequality. Now, integrating in \(x\) leads us to
\[\left\|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial_{x}\overline{ \theta}\delta\right\|_{L^{2}}^{2}\leq\delta^{4}\int_{\mathbb{R}}\int_{0}^{1}(1 -t)^{2}\,\left(\partial_{x}^{2}\overline{\theta}(x+t\delta)\right)^{2}\,dt\,dx.\]
Since the integrand is nonnegative, we can interchange the order of integration. Also, by noticing that \(\partial_{x}\) is the generator of the left translation semigroup, we have
\[\partial_{x}^{2}\overline{\theta}(x+t\delta)=\partial_{x}^{2}T_{l}(t\delta) \overline{\theta}=T_{l}(t\delta)\partial_{x}^{2}\overline{\theta}.\]
Therefore,
\[\left\|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial _{x}\overline{\theta}\delta\right\|_{L^{2}}^{2} \leq\delta^{4}\int_{0}^{1}(1-t)^{2}\int_{\mathbb{R}}\,\left( \partial_{x}^{2}\overline{\theta}(x+t\delta)\right)^{2}\,dx\,dt\] \[=\delta^{4}\int_{0}^{1}(1-t)^{2}\int_{\mathbb{R}}\,\left(T_{l}(t \delta)\partial_{x}^{2}\overline{\theta}\right)^{2}\,dx\,dt\] \[=\delta^{4}\left\|\partial_{x}^{2}\overline{\theta}\right\|_{L^ {2}}^{2}\int_{0}^{1}(1-t)^{2}\,dt,\]
where the last equality follows because \(T_{l}(t\delta)\) is an isometry in \(L^{2}\). A similar argument is applied to \(\partial_{x}\overline{\theta}\). Indeed,
\[\left\|T_{l}(\delta)\partial_{x}\overline{\theta}-\partial_{x} \overline{\theta}-\partial_{x}^{2}\overline{\theta}\delta\right\|_{L^{2}}^{2} \leq\delta^{4}\int_{0}^{1}(1-t)^{2}\,\int_{\mathbb{R}}\left( \partial_{x}^{3}\overline{\theta}(x+t\delta)\right)^{2}\,dx\,dt\] \[=\delta^{4}\int_{0}^{1}(1-t)^{2}\int_{\mathbb{R}}\,\left(T_{l}(t \delta)\partial_{x}^{3}\overline{\theta}\right)^{2}\,dx\,dt\] \[=\delta^{4}\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{ 2}}^{2}\int_{0}^{1}(1-t)^{2}\,dt.\]
With the last two results, we conclude that
\[\left\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\right\|_{X}\leq\frac{ \left\|\partial_{x}^{2}\overline{\theta}\right\|_{H^{1}}}{\sqrt{3}}\,\delta^{ 2}.\]
Finally, we prove that (6.7) holds. If \(W=(w_{1},w_{2})^{\top}\in H^{2}\times H^{1}\), the expressions for \(F\) and \(\phi(\delta)\) imply that
\[F(\phi(\delta))=\begin{pmatrix}0\\ -\nabla\mathcal{E}\left(T_{l}(\delta)\overline{\theta}\right)\end{pmatrix}, \quad F(\phi(\delta)+W)=\begin{pmatrix}w_{2}\\ -\nu w_{2}-\nabla\mathcal{E}\left(w_{1}+T_{l}(\delta)\overline{\theta}\right) \end{pmatrix},\]
and
\[DF(\phi(\delta))W=\mathcal{A}^{\delta}W=\begin{pmatrix}w_{2}\\ -\nu w_{2}-\mathcal{L}^{\delta}w_{1}\end{pmatrix}.\]
In order to simplify the notation, we denote \(T_{l}(\delta)\overline{\theta}\) by \(\overline{\theta}_{\delta}\). Then, a substitution on the left hand side of (6.7) implies
\[\left\|F(\phi(\delta)+W)-F(\phi(\delta))-DF(\phi(\delta))W\right\|_{X}=\left\| \nabla\mathcal{E}(\overline{\theta}_{\delta}+w_{1})-\nabla\mathcal{E}( \overline{\theta}_{\delta})-\mathcal{L}^{\delta}\,w_{1}\right\|_{L^{2}}.\]
From Proposition 2.1 we have that
\[\nabla\mathcal{E}(\overline{\theta}_{\delta}+w_{1}) =-\partial_{x}^{2}\overline{\theta}_{\delta}-\partial_{x}^{2}w_{1 }-\sin(\overline{\theta}_{\delta}+w_{1})\,\left(1+(-\Delta)^{1/2}\right)\cos( \overline{\theta}_{\delta}+w_{1}),\] \[-\nabla\mathcal{E}(\overline{\theta}_{\delta}) =\partial_{x}^{2}\overline{\theta}_{\delta}+\sin\overline{\theta }_{\delta}\,\left(1+(-\Delta)^{1/2}\right)\cos\overline{\theta}_{\delta},\] \[-\mathcal{L}^{\delta}\,w_{1} =\partial_{x}^{2}w_{1}-\sin\overline{\theta}_{\delta}(1+(-\Delta)^ {1/2})\sin\overline{\theta}_{\delta}w_{1}+w_{1}\cos\overline{\theta}_{\delta}( 1+(-\Delta)^{1/2})\cos\overline{\theta}_{\delta}.\]
By letting \(\mathcal{K}:=\nabla\mathcal{E}(\overline{\theta}_{\delta}+w_{1})-\nabla\mathcal{E}( \overline{\theta}_{\delta})-\mathcal{L}^{\delta}\,w_{1}\), we have
\[\mathcal{K}= -\sin(\overline{\theta}_{\delta}+w_{1})\,\left(1+(-\Delta)^{1/2} \right)\cos(\overline{\theta}_{\delta}+w_{1})+\sin\overline{\theta}_{\delta} \,\left(1+(-\Delta)^{1/2}\right)\cos\overline{\theta}_{\delta}+\] \[-\sin\overline{\theta}_{\delta}(1+(-\Delta)^{1/2})\sin\overline{ \theta}_{\delta}w_{1}+w_{1}\cos\overline{\theta}_{\delta}(1+(-\Delta)^{1/2}) \cos\overline{\theta}_{\delta}.\]
Next, we rearrange the last expression by adding and subtracting the term \(\sin(\overline{\theta}_{\delta}+w_{1})\,\left[1+(-\Delta)^{1/2}\right](\cos( \overline{\theta}_{\delta})-w_{1}\sin(\overline{\theta}_{\delta}))\). Hence \(\mathcal{K}=A_{1}+A_{2}+A_{3}\) where
\[A_{1} :=-\sin(\overline{\theta}_{\delta}+w_{1})\,\left[1+(-\Delta)^{1 /2}\right]\left(\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{ \delta}+w_{1}\sin\overline{\theta}_{\delta}\right),\] \[A_{2} :=(\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{ \delta})[1+(-\Delta)^{1/2}](w_{1}\sin\overline{\theta}_{\delta}),\] \[A_{3} :=-(\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{ \delta}-w_{1}\cos\overline{\theta}_{\delta})[1+(-\Delta)^{1/2}]\cos\overline{ \theta}_{\delta}.\]
From standard calculus, we know that
\[\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{\delta} =w_{1}\int_{0}^{1}\cos(\overline{\theta}_{\delta}+\xi w_{1})d\xi,\] \[\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{\delta} =-w_{1}\int_{0}^{1}\sin(\overline{\theta}_{\delta}+\xi w_{1})d\xi.\]
Then, by applying the same procedure, we obtain
\[\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{\delta}-w_{1}\cos\overline{\theta}_{\delta}= w_{1}\int_{0}^{1}\left[\cos(\overline{\theta}_{\delta}+\xi w_{1})-\cos\overline{\theta}_{\delta}\right]\,d\xi= -w_{1}^{2}\int_{0}^{1}\int_{0}^{1}\xi\sin(\overline{\theta}_{\delta}+\xi\eta w_{1})\,d\eta d\xi,\]
and
\[\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{\delta}+w_{1}\sin\overline{\theta}_{\delta}= -w_{1}\int_{0}^{1}\left[\sin(\overline{\theta}_{\delta}+\xi w_{1})-\sin\overline{\theta}_{\delta}\right]\,d\xi= -w_{1}^{2}\int_{0}^{1}\int_{0}^{1}\xi\cos(\overline{\theta}_{\delta}+\xi\eta w_{1})\,d\eta d\xi.\]
Therefore, we have that
\[|\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{\delta}| \leq|w_{1}|,\] \[|\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{\delta}-w_{1}\cos\overline{\theta}_{\delta}| \leq\tfrac{1}{2}|w_{1}|^{2}, \tag{6.9}\] \[|\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{\delta}+w_{1}\sin\overline{\theta}_{\delta}| \leq\tfrac{1}{2}|w_{1}|^{2}.\]
Notice that \(w_{1}\in L^{\infty}\), due to \(w_{1}\in H^{2}(\mathbb{R})\) and the Sobolev imbedding theorems. This fact and the Hölder inequality imply that \(w_{1}^{2}\in L^{2}\), due to \(\left\|w_{1}^{2}\right\|_{L^{2}}^{2}\leq\left\|w_{1}\right\|_{L^{2}}^{2}\left\|w_{1}\right\|_{L^{\infty}}^{2}\). Moreover, \(w_{1}^{2}\in H^{1}\) with \(\left\|w_{1}^{2}\right\|_{H^{1}}\leq 2\left\|w_{1}\right\|_{H^{1}}^{2}\), since
\[\left\|w_{1}^{2}\right\|_{H^{1}}^{2}= \left\|w_{1}^{2}\right\|_{L^{2}}^{2}+\left\|2w_{1}\partial_{x}w_{ 1}\right\|_{L^{2}}^{2}\] \[\leq \left\|w_{1}\right\|_{L^{\infty}}^{2}\left(\left\|w_{1}\right\|_{ L^{2}}^{2}+4\left\|\partial_{x}w_{1}\right\|_{L^{2}}^{2}\right)\] \[\leq 4\left\|w_{1}\right\|_{L^{\infty}}^{2}\left\|w_{1}\right\|_{H^{ 1}}^{2}\] \[\leq 4\left\|w_{1}\right\|_{H^{1}}^{4}.\]
This property allows us to easily estimate the \(L^{2}\)-norm of \(A_{1}\), since
\[\left\|A_{1}\right\|_{L^{2}} \leq C\left\|\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{ \theta}_{\delta}+w_{1}\sin\overline{\theta}_{\delta}\right\|_{H^{1}}\] \[\leq C\left\|\frac{w_{1}^{2}}{2}\right\|_{H^{1}}\] \[\leq C\left\|w_{1}\right\|_{H^{1}}^{2},\]
where the first inequality follows since \(\left\|[1+(-\Delta)^{1/2}]u\right\|_{L^{2}}\leq C\left\|u\right\|_{H^{1}}\) for every \(u\in H^{1}\). The \(L^{2}\)-norms of the terms \(A_{2}\) and \(A_{3}\) can also be bounded using (6.9),
\[\left\|A_{2}\right\|_{L^{2}}^{2} \leq \left\||w_{1}|[1+(-\Delta)^{1/2}](w_{1}\sin\overline{\theta}_{ \delta})\right\|_{L^{2}}^{2}\] \[\leq \left\|w_{1}\right\|_{L^{\infty}}^{2}\left\|[1+(-\Delta)^{1/2}]( w_{1}\sin\overline{\theta}_{\delta})\right\|_{L^{2}}^{2}\] \[\leq C^{2}\left\|w_{1}\right\|_{L^{\infty}}^{2}\left\|w_{1}\sin \overline{\theta}_{\delta}\right\|_{H^{1}}^{2},\]
\[\left\|A_{3}\right\|_{L^{2}}^{2} \leq \left\|\frac{\left|w_{1}\right|^{2}}{2}[1+(-\Delta)^{1/2}]\cos \overline{\theta}_{\delta}\right\|_{L^{2}}^{2}\] \[\leq \frac{1}{4}\left\|w_{1}\right\|_{L^{\infty}}^{4}\left\|[1+(- \Delta)^{1/2}]\cos\overline{\theta}_{\delta}\right\|_{L^{2}}^{2}\] \[\leq \frac{C^{2}}{4}\left\|w_{1}\right\|_{L^{\infty}}^{4}\left\|\cos \overline{\theta}_{\delta}\right\|_{H^{1}}^{2}.\]
Due to the Sobolev inequality \(\left\|w_{1}\right\|_{L^{\infty}}^{2}\leq 2\left\|w_{1}\right\|_{L^{2}}\left\| \partial_{x}w_{1}\right\|_{L^{2}}\), we have that \(\left\|w_{1}\right\|_{L^{\infty}}\leq\left\|w_{1}\right\|_{H^{1}}\). Also, we notice that
\[\left\|w_{1}\sin\overline{\theta}_{\delta}\right\|_{H^{1}}^{2}\leq\left\|w_{1 }\right\|_{L^{2}}^{2}+\left\|\partial_{x}w_{1}\right\|_{L^{2}}^{2}+\left\|w_{ 1}\partial_{x}\overline{\theta}_{\delta}\right\|_{L^{2}}^{2}\leq\left(1+ \left\|\partial_{x}\overline{\theta}_{\delta}\right\|_{L^{\infty}}^{2}\right) \left\|w_{1}\right\|_{H^{1}}^{2}.\]
Thus, we obtain
\[\left\|A_{2}\right\|_{L^{2}}\leq C\sqrt{1+\left\|\partial_{x}\overline{\theta} _{\delta}\right\|_{L^{\infty}}^{2}}\left\|w_{1}\right\|_{H^{1}}^{2},\quad \text{and}\quad\left\|A_{3}\right\|_{L^{2}}\leq\frac{C\left\|\cos\overline{ \theta}_{\delta}\right\|_{H^{1}}}{2}\left\|w_{1}\right\|_{H^{1}}^{2}.\]
Gluing together these three inequalities, we have that
\[\left\|\mathcal{K}\right\|_{L^{2}}\leq\left\|A_{1}\right\|_{L^{2}}+\left\|A_{2 }\right\|_{L^{2}}+\left\|A_{3}\right\|_{L^{2}}\leq\tilde{C}\left\|w_{1} \right\|_{H^{1}}^{2}\leq\tilde{C}\left\|W\right\|_{X}^{2}\]
and condition 3 in Theorem 6.2 is verified. The proof is complete.
## Acknowledgements
A. Capella and R. G. Plaza thank Professors Yuri Latushkin and Jaime Angulo Pava for enlightening conversations and useful suggestions during a workshop at the Casa Matematica Oaxaca (BIRS-CMO). The work of A. Capella and R. G. Plaza was partially supported by CONAHCyT, Mexico, grant CF-2023-G-122. The work of L. Morales was supported by CONAHCyT, Mexico, through the Program "Estancias Postdoctorales por Mexico 2022".
In this paper, the nonlinear (orbital) stability of static 180° Neel walls in ferromagnetic films is verified under the reduced wave-type dynamics proposed by Capella, Melcher, and Otto [CMO07]. The spectrum of the operator linearized around the static Neel wall is analyzed quantitatively, and it is shown to lie in the stable complex half-plane, with nonpositive real part. Based on this information, it is shown that small perturbations of the static Neel wall converge to a translated orbit belonging to the manifold generated by the static wall.
2309.14621 | Confidence Intervals for the F1 Score: A Comparison of Four Methods | In Natural Language Processing (NLP), binary classification algorithms are
often evaluated using the F1 score. Because the sample F1 score is an estimate
of the population F1 score, it is not sufficient to report the sample F1 score
without an indication of how accurate it is. Confidence intervals are an
indication of how accurate the sample F1 score is. However, most studies either
do not report them or report them using methods that demonstrate poor
statistical properties. In the present study, I review current analytical
methods (i.e., Clopper-Pearson method and Wald method) to construct confidence
intervals for the population F1 score, propose two new analytical methods
(i.e., Wilson direct method and Wilson indirect method) to do so, and compare
these methods based on their coverage probabilities and interval lengths, as
well as whether these methods suffer from overshoot and degeneracy. Theoretical
results demonstrate that both proposed methods do not suffer from overshoot and
degeneracy. Experimental results suggest that both proposed methods perform
better, as compared to current methods, in terms of coverage probabilities and
interval lengths. I illustrate both current and proposed methods on two
suggestion mining tasks. I discuss the practical implications of these results,
and suggest areas for future research. | Kevin Fu Yuan Lam, Vikneswaran Gopal, Jiang Qian | 2023-09-26T02:20:13 | http://arxiv.org/abs/2309.14621v2 | # Confidence Intervals for the \(F_{1}\) Score: A Comparison of Four Methods
###### Abstract
In Natural Language Processing (NLP), binary classification algorithms are often evaluated using the \(F_{1}\) score. Because the sample \(F_{1}\) score is an estimate of the population \(F_{1}\) score, it is not sufficient to report the sample \(F_{1}\) score without an indication of how accurate it is. Confidence intervals are an indication of how accurate the sample \(F_{1}\) score is. However, most studies either do not report them or report them using methods that demonstrate poor statistical properties. In the present study, I review current analytical methods (i.e., Clopper-Pearson method and Wald method) to construct confidence intervals for the population \(F_{1}\) score, propose two new analytical methods (i.e., Wilson direct method and Wilson indirect method) to do so, and compare these methods based on their coverage probabilities and interval lengths, as well as whether these methods suffer from overshoot and degeneracy. Theoretical results demonstrate that both proposed methods do not suffer from overshoot and degeneracy. Experimental results suggest that both proposed methods perform better, as compared to current methods, in terms of coverage probabilities and interval lengths. I illustrate both current and proposed methods on two suggestion mining tasks. I discuss the practical implications of these results, and suggest areas for future research.
Confidence Intervals, Delta Method, \(F_{1}\) Score, Natural Language Processing, Supervised Learning.
## I Introduction
### _Background_
Natural Language Processing (NLP) is a subfield of computer science which uses computational techniques to learn, understand and produce human language content [1]. In NLP, computational techniques are used to address problems which include, but are not limited to, supervised learning [2].
In supervised learning, all of the data are labelled: the problem is a regression problem if the label is quantitative; and it is a classification problem if the label is qualitative [3]. In a classification problem, the labels can include two (i.e., binary classification problem) or more (i.e., multi-class classification problem) categories [4].
Regardless of whether the problem is a regression problem or a classification problem, the data are often split into a training set, a validation set and a test set: the training set is used to train the model, the validation set is used to tune the hyperparameters in the model, and the test set is used to evaluate the model [4].
In a binary classification problem, some metrics used to evaluate the performance of the model are accuracy, precision, recall and the \(F_{1}\) score [4]. Although accuracy might seem to be a natural metric, it is seldom used in NLP because it does not perform well if the categories are imbalanced. Instead, precision, recall and the \(F_{1}\) score, a single metric that incorporates both precision and recall, are preferred [4, 5, 6]. In fact, the \(F_{1}\) score has been used to evaluate the most recent developments in NLP, including Large Language Models (LLMs), from Bidirectional Encoder Representations from Transformers (BERT) by Google [7] to Large Language Model Meta AI-2 (LLaMA-2) by Meta [8]. Given both its prevalence in and its relevance to NLP, the \(F_{1}\) score will be the focus of the present paper.
### _Problem_
Regardless of the metric that is used to evaluate the performance of a model on the test set, it is assumed that both the observations in the test set and the observations in the future are drawn from the same distribution [9]. In other words, the test set is a sample from a population, and the metrics (e.g., accuracy, precision, recall, \(F_{1}\) score) are sample estimates of the corresponding population parameters. If so, then it is not sufficient to report the sample estimate of the population parameter without an indication of how accurate it is [10].
Confidence intervals are an indication of how accurate a sample estimate is [10]. However, most studies do not report confidence intervals for the population \(F_{1}\) score, both in NLP [11] and elsewhere [5, 12]. Even among the studies that do, the confidence intervals either have coverage probabilities that are far from the nominal confidence level, have long interval lengths, suffer from overshoot and degeneracy or are computationally intensive [11, 12, 13, 14, 15, 16].
### _Contribution_
Given the limitations of current methods to construct confidence intervals for the population \(F_{1}\) score, I propose two analytical methods to do so. In the process, I answer the following three research questions:
* **Research Question 1**: What are the current analytical methods to construct confidence intervals for the population \(F_{1}\) score?
* **Research Question 2**: What are proposed analytical methods (i.e., Wilson direct method and Wilson indirect method) to construct confidence intervals for the population \(F_{1}\) score?
* **Research Question 3**: How do the proposed analytical methods perform, as compared to the current analytical methods, to construct confidence intervals for the population \(F_{1}\) score?
### _Outline_
The outline of the present paper is as follows: In Section II, I review the literature to define the \(F_{1}\) score, to define the \(F^{*}\) score and to describe the relationship between the \(F_{1}\) score and the \(F^{*}\) score. In Section III, I review the literature to define the sample \(F_{1}\) score, to define the sample \(F^{*}\) score, to demonstrate that the sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score, and to derive the asymptotic distribution of the sample \(F_{1}\) score.
In Section IV, I review the literature to describe the current analytical methods to construct confidence intervals for the population \(F_{1}\) score, describe the proposed analytical methods to construct confidence intervals for the population \(F_{1}\) score, and prove that the proposed methods do not suffer from overshoot and degeneracy.
In Section V, I perform a simulation study to compare the confidence intervals constructed using the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, across the different simulation conditions, based on different evaluation criteria.
In Section VI, I illustrate the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, to construct confidence intervals for the population \(F_{1}\) score, to evaluate the performance of Bidirectional Encoder Representations from Transformers (BERT) on two suggestion mining tasks using a public dataset and a private dataset.
In Section VII, I discuss the theoretical and practical implications of the study, and suggest directions for future research. In Section VIII, I conclude the present study.
## II \(F_{1}\) Score
The \(F_{1}\) score is the weighted harmonic mean of precision and recall in which both precision and recall are given equal weights [4, 6]:
\[F_{1} = \left(\frac{\text{precision}^{-1}+\text{recall}^{-1}}{2}\right)^{-1} \tag{1}\]
In the literature, the \(F_{1}\) score is also known as Sorensen-Dice coefficient [17, 18]. In this section, I review the literature to define the \(F_{1}\) score, to define the \(F^{*}\) score and to describe the relationship between the \(F_{1}\) score and the \(F^{*}\) score.
### \(F_{1}\)_Score_
Flores _et al._[12] demonstrated that the \(F_{1}\) score can be stated in terms of either unconditional probabilities or conditional probabilities.
#### Ii-A1 Unconditional Probabilities
Let the unconditional probabilities \(\boldsymbol{p}=(p_{11},p_{10},p_{01},p_{00})\), where \(p_{11}\) is the proportion of true positives, \(p_{10}\) is the proportion of false positives, \(p_{01}\) is the proportion of false negatives and \(p_{00}\) is the proportion of true negatives, among all documents. Table 1 summarises the unconditional probabilities \(\boldsymbol{p}\) in a confusion matrix.
The unconditional probabilities \(\boldsymbol{p}\) can be used to obtain both precision and recall.
Precision, also known as the positive predictive value [19], is the proportion of true positives among all documents that are predicted positives [4]:
\[\text{precision}=\frac{p_{11}}{p_{11}+p_{10}} \tag{2}\]
Recall, also known as sensitivity [19], is the proportion of true positives among all documents that are actual positives [4]:
\[\text{recall}=\frac{p_{11}}{p_{11}+p_{01}} \tag{3}\]
Then the \(F_{1}\) score can be stated in terms of the unconditional probabilities \(\boldsymbol{p}\):
\[F_{1} = \left(\frac{\text{precision}^{-1}+\text{recall}^{-1}}{2}\right)^{-1} \tag{4}\] \[= \left(\frac{1}{2}\cdot\frac{p_{11}+p_{10}}{p_{11}}+\frac{1}{2} \cdot\frac{p_{11}+p_{01}}{p_{11}}\right)^{-1}\] (5) \[= \frac{2p_{11}}{2p_{11}+p_{10}+p_{01}} \tag{6}\]
#### Ii-A2 Conditional Probabilities
Let the conditional probabilities \(\boldsymbol{\pi}=(\pi_{11},\pi_{10},\pi_{01})\), where \(\pi_{11}=p_{11}/(p_{11}+p_{10}+p_{01})\) is the proportion of true positives, \(\pi_{10}=p_{10}/(p_{11}+p_{10}+p_{01})\) is the proportion of false positives, \(\pi_{01}=p_{01}/(p_{11}+p_{10}+p_{01})\) is the proportion of false negatives, among all relevant documents (i.e., all documents that are either actual positives or predicted positives). Table 2 summarises the conditional probabilities \(\boldsymbol{\pi}\) in a confusion matrix.
Then the \(F_{1}\) score can also be stated in terms of the conditional probabilities \(\boldsymbol{\pi}\):
\[F_{1} = \frac{2p_{11}}{2p_{11}+p_{10}+p_{01}} \tag{7}\] \[= \frac{2\pi_{11}(p_{11}+p_{10}+p_{01})}{(2\pi_{11}+\pi_{10}+\pi_{ 01})(p_{11}+p_{10}+p_{01})}\] (8) \[= \frac{2\pi_{11}}{1+\pi_{11}} \tag{9}\]
Table 1: Confusion Matrix For All Documents

| Predicted \ Actual | Positive | Negative |
| :---: | :---: | :---: |
| Positive | \(p_{11}\) | \(p_{10}\) |
| Negative | \(p_{01}\) | \(p_{00}\) |

Table 2: Confusion Matrix For All Relevant Documents

| Predicted \ Actual | Positive | Negative |
| :---: | :---: | :---: |
| Positive | \(\pi_{11}\) | \(\pi_{10}\) |
| Negative | \(\pi_{01}\) | – |
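As a concrete illustration of (6) and (9), the following Python sketch (an illustrative addition, assuming numpy) computes the \(F_{1}\) score from raw counts in both forms and checks that they agree:

```python
import numpy as np

def f1_from_counts(n11, n10, n01):
    """F1 from true-positive, false-positive and false-negative counts."""
    # Unconditional form (6): the common denominator n cancels, so raw
    # counts can be used in place of the proportions p11, p10, p01.
    f1_direct = 2 * n11 / (2 * n11 + n10 + n01)

    # Conditional form (9): F* = n11 / nu over the nu relevant documents,
    # then F1 = 2F* / (1 + F*).
    nu = n11 + n10 + n01
    f_star = n11 / nu
    f1_via_fstar = 2 * f_star / (1 + f_star)

    assert np.isclose(f1_direct, f1_via_fstar)
    return f1_direct

# Example: the BERT counts on the public test set in Section VI.
print(f1_from_counts(77, 44, 10))  # ~0.740
```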
### \(F^{*}\) _Score_
Hand _et al._[20] termed \(\pi_{11}\) as the \(F^{*}\) score, also known as the Jaccard coefficient [21]. In other words, the \(F_{1}\) score can be stated in terms of the \(F^{*}\) score:
\[F_{1} = \frac{2F^{*}}{1+F^{*}} \tag{10}\]
### \(F_{1}\) _Score and \(F^{*}\) Score_
Hand _et al._[20] demonstrated that the \(F_{1}\) score is a monotonic function of the \(F^{*}\) score on the interval [0,1].
**Proposition 1.** The \(F_{1}\) score is a monotonic function of the \(F^{*}\) score on the interval [0,1].
**Proof.** From (10), the first derivative of the \(F_{1}\) with respect to \(F^{*}\) is as follows:
\[\frac{\partial F_{1}}{\partial F^{*}} = \frac{2}{1+F^{*}}-\frac{2F^{*}}{(1+F^{*})^{2}} \tag{11}\] \[= \frac{2}{(1+F^{*})^{2}} \tag{12}\]
By the First Derivative Test (e.g., [22, p. 250]), because \(\frac{\partial F_{1}}{\partial F^{*}}>0\) at each point \(F^{*}\in(0,1)\), \(F_{1}\) is increasing on [0,1]. Therefore, the \(F_{1}\) score is a monotonic function of the \(F^{*}\) score on the interval [0,1]. Figure 1 illustrates the relationship between the \(F_{1}\) score and the \(F^{*}\) score on the interval [0,1].
## III Sample \(F_{1}\) Score
In this section, I review the literature to define the sample \(F_{1}\) score, to define the sample \(F^{*}\) score, to demonstrate that the sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score, and to derive the asymptotic distribution of the sample \(F_{1}\) score.
### _Sample \(F_{1}\) Score_
Flores _et al._[12] demonstrated that the sample \(F_{1}\) score can be stated in terms of either unconditional probabilities or conditional probabilities.
#### Iii-A1 Unconditional Probabilities
Let \(\boldsymbol{n}=(n_{11},n_{10},n_{01},n_{00})\), where \(n_{11}\) is the number of true positives, \(n_{10}\) is the number of false positives, \(n_{01}\) is the number of false negatives, and \(n_{00}\) is the number of true negatives, among all \(n\) documents in a sample.
Then \(\boldsymbol{n}\) can be assumed to follow a multinomial distribution with parameters \(n\) and \(\boldsymbol{p}\) [12, 16, 5, 23]:
\[\boldsymbol{n}\sim Multinomial(n;\boldsymbol{p}) \tag{13}\]
Both \(n\) and \(\boldsymbol{n}\) can be used to obtain the unconditional probabilities \(\hat{\boldsymbol{p}}=(\hat{p}_{11},\hat{p}_{10},\hat{p}_{01},\hat{p}_{00})\), where \(\hat{p}_{11}=n_{11}/n\) is the proportion of true positives, \(\hat{p}_{10}=n_{10}/n\) is the proportion of false positives, \(\hat{p}_{01}=n_{01}/n\) is the proportion of false negatives and \(\hat{p}_{00}=n_{00}/n\) is the proportion of true negatives, among all \(n\) documents in the sample.
Then the sample \(F_{1}\) score can be stated in terms of the unconditional probabilities \(\hat{\textbf{{p}}}\):
\[\hat{F}_{1} = \frac{2\hat{p}_{11}}{2\hat{p}_{11}+\hat{p}_{10}+\hat{p}_{01}} \tag{14}\]
#### Iii-A2 Conditional Probabilities
Let \(\boldsymbol{\nu}=(n_{11},n_{10},n_{01})\), where \(n_{11}\) is the number of true positives, \(n_{10}\) is the number of false positives and \(n_{01}\) is the number of false negatives, among all \(\nu=n-n_{00}\) documents in a sample.
Then \(\nu\) can be assumed to follow a binomial distribution with parameters \(n\) and \(p_{11}+p_{10}+p_{01}\)[12]:
\[\nu\sim Binomial(n,p_{11}+p_{10}+p_{01}) \tag{15}\]
And \(n_{11}\) can be assumed to follow a binomial distribution with parameters \(\nu\) and \(F^{*}\), conditioned on the observed value of \(\nu\)[12]:
\[n_{11}\sim Binomial(\nu,F^{*}) \tag{16}\]
Both \(\nu\) and \(\boldsymbol{\nu}\) can be used to obtain the conditional probabilities \(\hat{\boldsymbol{\pi}}=(\hat{\pi}_{11},\hat{\pi}_{10},\hat{\pi}_{01})\), where \(\hat{\pi}_{11}=n_{11}/\nu\) is the proportion of true positives, \(\hat{\pi}_{10}=n_{10}/\nu\) is the proportion of false positives and \(\hat{\pi}_{01}=n_{01}/\nu\) is the proportion of false negatives, among all \(\nu\) relevant documents in the sample.
Then the sample \(F_{1}\) score can also be stated in terms of the conditional probabilities \(\hat{\boldsymbol{\pi}}\):
\[\hat{F}_{1} = \frac{2\hat{\pi}_{11}}{1+\hat{\pi}_{11}} \tag{17}\]
### _Sample \(F^{*}\) Score_
If \(\hat{\pi}_{11}\) is termed the sample \(F^{*}\) score, then the sample \(F_{1}\) score can also be stated in terms of the sample \(F^{*}\) score:
\[\hat{F}_{1} = \frac{2\hat{F}^{*}}{1+\hat{F}^{*}} \tag{18}\]
Figure 1: \(F_{1}\) is a monotonic function of \(F^{*}\) on the interval [0,1].
### _Maximum Likelihood Estimation of the Population \(F_{1}\) Score_
Regardless of whether the sample \(F_{1}\) score is stated in terms of the unconditional probabilities \(\hat{\mathbf{p}}\) or the conditional probabilities \(\hat{\mathbf{\pi}}\) (i.e., the sample \(F^{*}\) score), the sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score.
**Proposition 2.** The sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score.
**Proof.** By the invariance property of maximum likelihood estimators, (e.g., [24, p. 320]), because the sample proportion (i.e., \(\hat{F}^{*}\)) is the maximum likelihood estimator of the population proportion (i.e., \(F^{*}\)) (e.g., [24, p. 318]), and \(F_{1}\) is a one-to-one function of \(F^{*}\) on the interval [0,1] (Proposition 1), the sample \(F_{1}\) score is also the maximum likelihood estimator of the population \(F_{1}\) score.
### _Asymptotic Distribution of the Sample \(F_{1}\) Score_
Flores _et al._[12] used the Delta Method to derive the asymptotic distribution of the sample \(F_{1}\) score.
**Proposition 3.** If \(\nu\) is sufficiently large, then \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately the standard normal distribution.
**Proof.** By the Central Limit Theorem (e.g., [25, p. 107]), or the De Moivre-Laplace Theorem (e.g., [25, p. 108]), \((\hat{F}^{*}-F^{*})/\sigma_{\hat{F}^{*}}\) converges in distribution to the standard normal distribution:
\[\frac{(\hat{F}^{*}-F^{*})}{\sigma_{\hat{F}^{*}}}\xrightarrow{D}N(0,1) \tag{19}\]
where \(\sigma_{\hat{F}^{*}}^{2}\) is the variance of \(\hat{F}^{*}\):
\[\sigma_{\hat{F}^{*}}^{2} = \frac{F^{*}(1-F^{*})}{\nu} \tag{20}\]
By the Delta Method (e.g., [25, p. 131; 26, p. 637]), \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) also converges in distribution to the standard normal distribution:
\[\frac{(\hat{F}_{1}-F_{1})}{\sigma_{\hat{F}_{1}}}\xrightarrow{D}N(0,1) \tag{21}\]
where \(\sigma_{\hat{F}_{1}}^{2}\) is the variance of \(\hat{F}_{1}\):
\[\sigma_{\hat{F}_{1}}^{2} = \left[\frac{\partial F_{1}}{\partial F^{*}}\right]^{2}\sigma_{\hat{F}^{*}}^{2} \tag{22}\] \[= \left[\frac{2}{(1+F^{*})^{2}}\right]^{2}\frac{F^{*}(1-F^{*})}{\nu}\] (23) \[= \frac{4F^{*}(1-F^{*})}{\nu(1+F^{*})^{4}}\] (24) \[= \frac{F_{1}(1-F_{1})(2-F_{1})^{2}}{2\nu} \tag{25}\]
Therefore, if \(\nu\) is sufficiently large, then \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately the standard normal distribution.
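Proposition 3 can also be checked numerically. The following Python sketch (an illustrative addition, not part of the formal argument; numpy is assumed) simulates confusion matrices under Scenario 1 of the simulation study in Section V and standardises the sample \(F_{1}\) scores using (25); the resulting values should be approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scenario 1 of the simulation study in Section V.
p = [0.4, 0.1, 0.1, 0.4]                       # (p11, p10, p01, p00)
f_star = p[0] / (p[0] + p[1] + p[2])           # population F* score
f1 = 2 * f_star / (1 + f_star)                 # population F1 score (= 0.8)

n, reps = 5000, 20000
draws = rng.multinomial(n, p, size=reps)       # (n11, n10, n01, n00) per replicate
n11, n10, n01 = draws[:, 0], draws[:, 1], draws[:, 2]
nu = n11 + n10 + n01
f1_hat = 2 * n11 / (2 * n11 + n10 + n01)

# Standardise with the null standard deviation from (25).
sigma = np.sqrt(f1 * (1 - f1) * (2 - f1) ** 2 / (2 * nu))
z = (f1_hat - f1) / sigma
print(z.mean(), z.std())                       # close to 0 and 1, respectively
```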
## IV Confidence Intervals for the \(F_{1}\) Score
A \((1-\alpha)100\%\) confidence interval for a population parameter, where \(\alpha\) is the level of statistical significance (e.g., 0.05), is an interval constructed so that, if repeated samples were taken from the population and a confidence interval were constructed from each sample using the same method, \((1-\alpha)100\%\) of these intervals would be expected to include the true population parameter (e.g., [10]).
In this section, I review the literature to describe the current analytical methods to construct confidence intervals for the population \(F_{1}\) score (i.e., Clopper-Pearson method and Wald method), describe the proposed analytical methods to construct confidence intervals for the population \(F_{1}\) score (i.e., Wilson direct method and Wilson indirect method), and prove that the proposed methods do not suffer from both overshoot and degeneracy.
### _Clopper-Pearson Method_
The Clopper-Pearson method assumes that \(n_{11}\) has a binomial distribution with parameters \(\nu\) and \(F^{*}\) (16) and inverts the binomial test for the sample \(F^{*}\) score [14].
The endpoints of the confidence interval for the population \(F^{*}\) score are the solutions in \(F^{*}\) to the following equations:
\[P_{F^{*}}(n_{11}<\tilde{n}_{11})=1-\alpha/2 \tag{26}\]
and
\[P_{F^{*}}(n_{11}>\tilde{n}_{11})=1-\alpha/2 \tag{27}\]
where \(\tilde{n}_{11}\) is the realisation of \(n_{11}\).
In particular, the lower endpoint is the \(\alpha/2\) quantile of a beta distribution with parameters \(\tilde{n}_{11}\) and \(\nu-\tilde{n}_{11}+1\), and the upper endpoint is the \(1-\alpha/2\) quantile of a beta distribution with parameters \(\tilde{n}_{11}+1\) and \(\nu-\tilde{n}_{11}\) (e.g., [14]).
The endpoints of the confidence interval for the population \(F_{1}\) score are obtained by transforming the abovementioned solutions via (10).
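A minimal implementation of the Clopper-Pearson method (an illustrative sketch assuming scipy; the edge cases \(n_{11}=0\) and \(n_{11}=\nu\) are handled by convention) is:

```python
from scipy.stats import beta

def clopper_pearson_f1(n11, n10, n01, alpha=0.05):
    """Clopper-Pearson interval for F1: exact interval for F*, then (10)."""
    nu = n11 + n10 + n01
    # Standard Clopper-Pearson beta quantiles for the binomial parameter F*.
    lo_star = beta.ppf(alpha / 2, n11, nu - n11 + 1) if n11 > 0 else 0.0
    hi_star = beta.ppf(1 - alpha / 2, n11 + 1, nu - n11) if n11 < nu else 1.0
    # The transformation (10) is monotonic, so endpoints map to endpoints.
    return 2 * lo_star / (1 + lo_star), 2 * hi_star / (1 + hi_star)

print(clopper_pearson_f1(77, 44, 10))  # approximately (0.665, 0.805)
```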
### _Wald Method_
The Wald method assumes that \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately a standard normal distribution (Proposition 3) and inverts the Wald test for the sample \(F_{1}\) score [14]:
\[\left|\frac{\hat{F}_{1}-F_{1}}{\hat{\sigma}_{\hat{F}_{1}}}\right|<z_{\alpha/2} \tag{28}\]
where \(z_{\alpha/2}=\Phi^{-1}(1-\alpha/2)\), \(\Phi(z)\) is the standard normal distribution function, and \(\hat{\sigma}_{\hat{F}_{1}}^{2}\) is the estimated variance of the sample \(F_{1}\) score using the maximum likelihood estimator of the population \(F_{1}\) score (Proposition 2):
\[\hat{\sigma}_{\hat{F}_{1}}^{2} = \frac{\hat{F}_{1}(1-\hat{F}_{1})(2-\hat{F}_{1})^{2}}{2\nu} \tag{29}\]
The endpoints of the confidence interval for the population \(F_{1}\) score using the Wald method are the solutions in \(F_{1}\) to the
following equations:
\[F_{1} = \hat{F}_{1}-z_{\alpha/2}\times\hat{\sigma}_{\hat{F}_{1}} \tag{30}\]
and
\[F_{1} = \hat{F}_{1}+z_{\alpha/2}\times\hat{\sigma}_{\hat{F}_{1}} \tag{31}\]
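A minimal implementation of the Wald method (illustrative, assuming numpy and scipy) is:

```python
import numpy as np
from scipy.stats import norm

def wald_f1(n11, n10, n01, alpha=0.05):
    """Wald interval for F1, using the estimated variance in (29)."""
    nu = n11 + n10 + n01
    f1_hat = 2 * n11 / (2 * n11 + n10 + n01)
    se = np.sqrt(f1_hat * (1 - f1_hat) * (2 - f1_hat) ** 2 / (2 * nu))
    z = norm.ppf(1 - alpha / 2)
    # Centred on the point estimate, so it can overshoot [0, 1] or degenerate.
    return f1_hat - z * se, f1_hat + z * se

print(wald_f1(77, 44, 10))  # approximately (0.674, 0.807)
```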
### _Wilson Direct Method_
The Wilson direct method also assumes that \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately a standard normal distribution (Proposition 3). However, it uses the null variance, as provided in (25), instead of the estimated variance, as provided in (29), when inverting the score test for the sample \(F_{1}\) score [14]:
\[\left|\frac{\hat{F}_{1}-F_{1}}{\sigma_{\hat{F}_{1}}}\right|<z_{\alpha/2} \tag{32}\]
The endpoints of the confidence interval for the population \(F_{1}\) score are the real (i.e., not imaginary) solutions in \(F_{1}\) to the following quartic equation, most conveniently solved by iteration (e.g., [27]):
\[\left(\frac{\hat{F}_{1}-F_{1}}{\sigma_{\hat{F}_{1}}}\right)^{2} = z_{\alpha/2}^{2} \tag{33}\] \[\left(\hat{F}_{1}-F_{1}\right)^{2} = z_{\alpha/2}^{2}\frac{F_{1}(1-F_{1})(2-F_{1})^{2}}{2\nu} \tag{34}\]
If \(k=z_{\alpha/2}^{2}/\nu\), then
\[2F_{1}^{2}-4\hat{F}_{1}F_{1}+2\hat{F}_{1}^{2}=k(F_{1}-F_{1}^{2})(4-4F_{1}+F_{ 1}^{2}) \tag{35}\]
\[2F_{1}^{2}-4\hat{F}_{1}F_{1}+2\hat{F}_{1}^{2}=k(4F_{1}-8F_{1}^{2} +5F_{1}^{3}-F_{1}^{4}) \tag{36}\]
\[kF_{1}^{4}-5kF_{1}^{3}+2(4k+1)F_{1}^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad-4(k+\hat{F}_{1})F_{1}+2\hat{ F}_{1}^{2}=0 \tag{37}\]
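A minimal implementation of the Wilson direct method (illustrative, assuming numpy and scipy; the quartic is solved here via the companion-matrix roots rather than by iteration) is:

```python
import numpy as np
from scipy.stats import norm

def wilson_direct_f1(n11, n10, n01, alpha=0.05):
    """Wilson direct interval for F1: the real roots of (37) in [0, 1]."""
    nu = n11 + n10 + n01
    f1_hat = 2 * n11 / (2 * n11 + n10 + n01)
    k = norm.ppf(1 - alpha / 2) ** 2 / nu
    # Coefficients of k*F^4 - 5k*F^3 + 2(4k+1)*F^2 - 4(k+F1_hat)*F + 2*F1_hat^2.
    coeffs = [k, -5 * k, 2 * (4 * k + 1), -4 * (k + f1_hat), 2 * f1_hat ** 2]
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    # By Theorem 1 there are exactly two such roots in [0, 1] when 0 < k < 16/11.
    inside = real[(real >= 0) & (real <= 1)]
    return inside[0], inside[-1]

print(wilson_direct_f1(77, 44, 10))  # approximately (0.664, 0.799)
```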
### _Wilson Indirect Method_
The Wilson indirect method assumes that \((\hat{F}^{*}-F^{*})/\sigma_{\hat{F}^{*}}\) has approximately the standard normal distribution (19) and inverts the score test for the sample \(F^{*}\) score:
\[\left|\frac{\hat{F}^{*}-F^{*}}{\sigma_{\hat{F}^{*}}}\right|<z_{\alpha/2} \tag{38}\]
where \(\sigma_{\hat{F}^{*}}^{2}\) is the null variance of \(\hat{F}^{*}\) as provided in (20).
The endpoints of the confidence interval for the population \(F^{*}\) score using the Wilson indirect method are the solutions in \(F^{*}\) to the following quadratic equation:
\[\left(\frac{\hat{F}^{*}-F^{*}}{\sigma_{\hat{F}^{*}}}\right)^{2} = z_{\alpha/2}^{2} \tag{39}\] \[\left(\hat{F}^{*}-F^{*}\right)^{2} = z_{\alpha/2}^{2}\frac{F^{*}(1-F^{*})}{\nu} \tag{40}\]
If \(k=z_{\alpha/2}^{2}/\nu\), then
\[\hat{F}^{*^{2}}-2\hat{F}^{*}F^{*}+{F^{*}}^{2}=kF^{*}-k{F^{*}}^{2} \tag{41}\] \[(1+k){F^{*}}^{2}-(2\hat{F}^{*}+k)F^{*}+\hat{F}^{*^{2}}=0 \tag{42}\]
The endpoints of the confidence interval for the population \(F_{1}\) score using the Wilson indirect method are obtained by transforming the abovementioned solutions via (10).
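A minimal implementation of the Wilson indirect method (illustrative, assuming numpy and scipy) is:

```python
import numpy as np
from scipy.stats import norm

def wilson_indirect_f1(n11, n10, n01, alpha=0.05):
    """Wilson indirect interval: score interval for F*, transformed via (10)."""
    nu = n11 + n10 + n01
    f_star_hat = n11 / nu
    k = norm.ppf(1 - alpha / 2) ** 2 / nu
    # Quadratic (42): (1+k)F*^2 - (2*F*_hat + k)F* + F*_hat^2 = 0.
    a, b, c = 1 + k, -(2 * f_star_hat + k), f_star_hat ** 2
    root = np.sqrt(b ** 2 - 4 * a * c)
    lo_star, hi_star = (-b - root) / (2 * a), (-b + root) / (2 * a)
    return 2 * lo_star / (1 + lo_star), 2 * hi_star / (1 + hi_star)

print(wilson_indirect_f1(77, 44, 10))  # approximately (0.669, 0.801)
```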
### _Overshoot and Degeneracy_
Because confidence intervals for the population \(F_{1}\) score constructed using the Wald method produce intervals centred on the point estimate, these intervals suffer from both overshoot, in which either the upper limit is greater than 1 or the lower limit is less than 0, and degeneracy, in which the confidence interval has zero width [5, 12, 15]. And because confidence intervals for the population \(F^{*}\) score, constructed using either the Clopper-Pearson method or the Wilson score method, do not suffer from overshoot and degeneracy [15], those for the population \(F_{1}\) score, constructed using either the Clopper-Pearson method or the Wilson indirect method, obtained from a strictly monotonic transformation via (10), also do not suffer from overshoot and degeneracy.
However, the properties of confidence intervals constructed using the Wilson direct method are less apparent and, to the best of my knowledge, have not been studied in the literature. In the remainder of the section, I prove that confidence intervals for the population \(F_{1}\) score constructed using the Wilson direct method also do not suffer from overshoot and degeneracy, because the defining quartic has exactly 2 distinct real roots in [0,1] under almost all conditions met in practice (e.g., at least 3 relevant observations in the test set when the level of statistical significance is 0.05).
In the proof, let \(f(F_{1})\) be the quartic polynomial in the left-hand-side of (37). Then \(f(F_{1})\) can also be expressed as follows, from (34), in order to facilitate the proof:
\[f(F_{1})=kF_{1}^{4}-5kF_{1}^{3}+2(4k+1)F_{1}^{2}\\ -4(k+\hat{F}_{1})F_{1}+2\hat{F}_{1}^{2} \tag{43}\]
\[=kF_{1}(F_{1}-1)(F_{1}-2)^{2}+2(F_{1}-\hat{F}_{1})^{2} \tag{44}\]
The first derivative is as follows:
\[f^{\prime}(F_{1})=\frac{\partial}{\partial F_{1}}f(F_{1}) \tag{45}\]
\[=4kF_{1}^{3}-15kF_{1}^{2}+16kF_{1}+4F_{1}-4(k+\hat{F}_{1}) \tag{46}\]
\[=4kF_{1}^{3}-15kF_{1}^{2}+16kF_{1}+4F_{1}-4k-4\hat{F}_{1} \tag{47}\]
\[=4kF_{1}^{3}-15kF_{1}^{2}+16kF_{1}-4k+4(F_{1}-\hat{F}_{1}) \tag{48}\]
\[=k(4F_{1}^{3}-15F_{1}^{2}+16F_{1}-4)+4(F_{1}-\hat{F}_{1}) \tag{49}\]
\[=k(F_{1}-2)(4F_{1}^{2}-7F_{1}+2)+4(F_{1}-\hat{F}_{1}) \tag{50}\]
And the second derivative is as follows:
\[f^{\prime\prime}(F_{1}) =\frac{\partial^{2}}{\partial F_{1}^{2}}f(F_{1}) \tag{51}\] \[=12kF_{1}^{2}-30kF_{1}+16k+4 \tag{52}\]
**Lemma 1.**\(f(F_{1})\) has at least 2 distinct real roots in the interval [0,1] if \(k>0\).
**Proof.** First, suppose that \(\hat{F}_{1}=0\). Then \(F_{1}=0\) is a root. By the Intermediate Value Theorem (e.g., [22, p. 127]), because \(f^{\prime}(0)<0\) (i.e., \(f(0+h)<0\) for some small increment \(h\)) if \(k>0\), and \(f(1)>0\), \(f(F_{1})\) has a real root in the interval (0,1) if \(k>0\).
Second, suppose that \(0<\hat{F}_{1}<1\). By the Intermediate Value Theorem, because \(f(\hat{F}_{1})<0\) if \(k>0\), and \(f(0)>0\), \(f(F_{1})\) has a root in the interval (0, \(\hat{F}_{1}\)) if \(k>0\). And because \(f(1)>0\), \(f(F_{1})\) also has a root in the interval (\(\hat{F}_{1}\), 1) if \(k>0\).
Last, suppose that \(\hat{F}_{1}=1\). Then \(F_{1}=1\) is a root. By the Intermediate Value Theorem, because \(f^{\prime}(1)>0\) (i.e., \(f(1-h)<0\) for some small decrement \(h\)) if \(k>0\), and \(f(0)>0\), \(f(F_{1})\) has a real root in the interval (0,1) if \(k>0\).
Therefore \(f(F_{1})\) has at least 2 distinct real roots in the interval [0,1] if \(k>0\).
**Lemma 2.**\(f(F_{1})\) has at most 2 distinct real roots if \(0<k<16/11\).
**Proof.** If \(f(F_{1})\) is concave up, then it has at most 2 distinct real roots. By the Second Derivative Test for Concavity (e.g., [22, p. 256]), \(f(F_{1})\) is concave up if \(f^{\prime\prime}(F_{1})>0\). And \(f^{\prime\prime}(F_{1})>0\) for all \(F_{1}\) if its leading coefficient is positive (i.e., \(12k>0\)) and its discriminant is strictly negative:
\[(-30k)^{2}-4(12k)(16k+4) <0 \tag{53}\] \[132k^{2}-192k <0 \tag{54}\]
Which implies that
\[0<k<16/11 \tag{55}\]
Therefore, \(f(F_{1})\) has at most 2 distinct real roots if \(0<k<16/11\).
**Theorem 1.**\(f(F_{1})\) has exactly 2 distinct real roots in the interval [0,1] if \(0<k<16/11\).
**Proof.** The theorem follows from Lemma 1 and Lemma 2.
Therefore, the confidence intervals for the population \(F_{1}\) score constructed using the Wilson direct method do not suffer from overshoot and degeneracy if \(\nu>(11/16)\times z_{\alpha/2}^{2}\) and \(z_{\alpha/2}^{2}>0\). For example, if the level of statistical significance is 0.05, then the number of relevant observations in the test set should be at least 3.
## V Simulation Study
In this section, I perform a simulation study to compare the 95% confidence intervals for the population \(F_{1}\) score constructed using the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, across the different simulation conditions, based on different evaluation criteria. In particular, I describe the simulation conditions, the evaluation criteria and the study results.
### _Simulation Conditions_
The simulation conditions were adapted from Takahashi _et al._[5]. In particular, 18 simulation conditions were obtained from crossing six \(n\) (i.e., 25, 50, 100, 500, 1000, 5000) against three scenarios:
* **Scenario 1:** The positive class has moderate prevalence (50%), high precision (80%) and high recall (80%). Therefore, \(p_{11}=0.4\), \(p_{10}=0.1\), \(p_{01}=0.1\), \(p_{00}=0.4\) and \(F_{1}=0.8\).
* **Scenario 2:** The positive class has high prevalence (80%), high precision (80%) and high recall (80%). Therefore, \(p_{11}=0.64\), \(p_{10}=0.16\), \(p_{01}=0.16\), \(p_{00}=0.04\) and \(F_{1}=0.8\).
* **Scenario 3:** The positive class has high prevalence (80%), high precision (80%) and low recall (20%). Therefore, \(p_{11}=0.16\), \(p_{10}=0.04\), \(p_{01}=0.64\), \(p_{00}=0.16\) and \(F_{1}=0.32\).
For each simulation condition, I generated \(10^{6}\) replicates. Each replicate was generated from a multinomial distribution with parameters corresponding to the \(n\), \(p_{11}\), \(p_{10}\), \(p_{01}\) and \(p_{00}\) for that condition. The generation of replicates from multinomial distributions has been applied in previous studies which examined the properties of confidence intervals for the population \(F_{1}\) score [5, 12].
### _Evaluation Criteria_
In each simulation condition, I evaluated each method based on their coverage probabilities, expected lengths, overshoot probabilities and degeneracy probabilities. The coverage probability is the proportion of times the confidence interval includes the true population parameter [15]. The expected length is the expected difference between the upper and lower limits of the confidence interval [14]. The overshoot probability is the proportion of times the confidence interval has either an upper limit that is greater than 1 or a lower limit that is less than 0 [15]. The degeneracy probability is the proportion of times the confidence interval has zero width (i.e., the difference between the upper and lower limits of the confidence interval is zero) [15]. It is desirable to have confidence intervals with coverage probabilities near the nominal confidence level, with small expected lengths, and that do not suffer from overshoot and degeneracy.
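To make these criteria concrete, the following sketch (an illustrative reimplementation, not the original simulation code; numpy and scipy are assumed) estimates all four criteria for the Wald method under Scenario 1 with \(n=100\):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps, alpha = 100, 100_000, 0.05
p = [0.4, 0.1, 0.1, 0.4]                         # Scenario 1
f1_true = 2 * p[0] / (2 * p[0] + p[1] + p[2])    # = 0.8

draws = rng.multinomial(n, p, size=reps)
n11, n10, n01 = draws[:, 0], draws[:, 1], draws[:, 2]
nu = n11 + n10 + n01
f1_hat = 2 * n11 / (2 * n11 + n10 + n01)
se = np.sqrt(f1_hat * (1 - f1_hat) * (2 - f1_hat) ** 2 / (2 * nu))
z = norm.ppf(1 - alpha / 2)
lo, hi = f1_hat - z * se, f1_hat + z * se

print("coverage:  ", np.mean((lo <= f1_true) & (f1_true <= hi)))
print("length:    ", np.mean(hi - lo))
print("overshoot: ", np.mean((lo < 0) | (hi > 1)))
print("degeneracy:", np.mean(hi == lo))
```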
### _Study Results_
#### V-C1 Coverage Probabilities
For small \(n\), across all scenarios, the Clopper-Pearson method demonstrated coverage probabilities that were far greater than the nominal confidence level, the Wald method demonstrated coverage probabilities that were far less than the nominal confidence level, and both the Wilson direct method and the Wilson indirect method demonstrated coverage probabilities that were near the nominal confidence level. For large \(n\), across all scenarios, all four methods demonstrated coverage probabilities that were near the nominal confidence level. Table III summarises the coverage probabilities for each method across all simulation conditions.
#### V-C2 Expected Lengths
For small \(n\), in Scenarios 1 and 2, the Wilson indirect method demonstrated the shortest expected lengths as compared to the Clopper-Pearson method, the Wald method and the Wilson direct method; but in Scenario 3, the Wilson direct method demonstrated the shortest expected lengths as compared to the Clopper-Pearson method, the Wald method and the Wilson indirect method. For large \(n\), across all scenarios, all four methods demonstrated comparable expected lengths. For each scenario, all methods' expected lengths decreased as \(n\) increased, as expected. Table IV summarises the expected lengths for each method across all simulation conditions.
The difference in expected lengths between the confidence intervals constructed using the Wilson direct method and the Wilson indirect method occurs because both methods' interval lengths depend on both \(\nu\) and \(\hat{F}_{1}\). Figure 2 illustrates a comparison of the interval lengths between the Wilson direct method, obtained from the absolute difference between the roots to (37), and the Wilson indirect method, obtained from the absolute difference between the transformed roots to (42) (i.e., via (10)), across different \(\nu\) and \(\hat{F}_{1}\).
If \(\nu\) is small and \(\hat{F}_{1}\) is small, then the Wilson direct method produces confidence intervals that are shorter as compared to the Wilson indirect method. However, if \(\nu\) is small and \(\hat{F}_{1}\) is large, then the Wilson indirect method produces confidence intervals that are shorter as compared to the Wilson direct method. And if \(\nu\) is large, then both methods produce confidence intervals that are of comparable lengths.
For example, in Scenario 3 where \(n=25\) and \(F_{1}=0.32\), the Wilson direct method demonstrated a shorter expected length (0.395) as compared to the Wilson indirect method (0.414) (Table IV). This occurred because \(n\) was small, and therefore \(\nu\leq n\) was small, and because the population \(F_{1}\) score was small, and therefore \(\hat{F}_{1}\) tended to be small.
#### V-C3 Overshoot Probabilities
For small \(n\), across all scenarios, the Wald method demonstrated overshoot. However, for large \(n\), across all scenarios, the Wald method did not demonstrate overshoot. Regardless of \(n\), across all scenarios, neither the Clopper-Pearson method, the Wilson direct method nor the Wilson indirect method demonstrated overshoot. Table V summarises the overshoot probabilities for each method across all simulation conditions.
## VI Illustrative Examples

In this section, I illustrate the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, to construct confidence intervals for the population \(F_{1}\) score, to evaluate the performance of Bidirectional Encoder Representations from Transformers (BERT) on two suggestion mining tasks using a public dataset and a private dataset.
On the one hand, suggestion mining is the automatic extraction of suggestions using techniques in NLP [28]. In suggestion mining, an explicit suggestion directly suggests or recommends an action or entity, and an implicit suggestion indirectly suggests or recommends an action or entity [29]. In the literature, suggestion mining often focuses on explicit suggestions, and has been applied to extract suggestions across a number of industries including, but not limited to, education, software and tourism (e.g., [30, 31]).
On the other hand, BERT is a language model built on a multi-layer bidirectional transformer encoder architecture based on the original implementation described in Vaswani _et al._[32]. BERT was pre-trained on a masked language modelling task and a next sentence prediction task using data from BooksCorpus and English Wikipedia [7]. BERT\({}_{\text{BASE}}\) comprises 12 layers (i.e., transformer blocks), 768 hidden units and 12 self-attention heads (i.e., 110 million parameters); and BERT\({}_{\text{LARGE}}\) comprises 24 layers, 1024 hidden units and 16 attention heads (i.e., 340 million parameters).
In the present examples, I used the fine-tuning approach on BERT\({}_{\text{BASE}}\) in which a simple classification layer was added to the pre-trained model and all parameters were jointly fine-tuned on the corresponding downstream task [7]. As suggested by Devlin et al. [7], I ran an exhaustive search over the following hyperparameters and chose the model that performed best on the validation set:
* **Batch size:** 16, 32
* **Learning rate (Adam):** 5e-5, 3e-5, 2e-5
* **Number of epochs:** 2, 3, 4
### _Public Dataset_
Yamamoto and Sekiya [33] used the fine-tuning approach on BERT\({}_{\text{BASE}}\) and applied it to the Semantic Evaluation 2019 (SemEval2019) Task 9 Subtask A dataset [31]. The dataset was scraped from a software forum. It was annotated in two phases. In Phase 1, crowdsourced annotators labelled all observations as either a suggestion or a non-suggestion. And in Phase 2, expert annotators labelled some samples, in particular only those labelled as a suggestion in Phase 1, as either a suggestion or a non-suggestion. In the final dataset, an observation was labelled as a suggestion if it was labelled as such in both Phase 1 and Phase 2. Otherwise, the observation was labelled as a non-suggestion.
The final dataset comprised 9925 observations among which 2468 (25%) were suggestions. The dataset was split into a training set, a validation set and a test set. The training set comprised 8500 observations among which 2085 (25%) were suggestions. The validation set comprised 592 observations among which 296 (50%) were suggestions. And the test set comprised 833 observations among which 87 (10%) were suggestions.
In Yamamoto and Sekiya [33], BERT\({}_{\text{BASE}}\) achieved an \(F_{1}\) score of 0.731 on the test set. In the present example, BERT\({}_{\text{BASE}}\) achieved 77 true positives, 44 false positives, 10 false negatives and 702 true negatives on the same test set. This corresponds to an \(F_{1}\) score of 0.740.
Using the Clopper-Pearson method, the 95% confidence interval is [0.665,0.805] and has an interval length of 0.139. Using the Wald method, the 95% confidence interval is [0.674,0.807] and has an interval length of 0.134. Using the Wilson direct method, the 95% confidence interval is [0.664,0.799] and has an interval length of 0.135. And using the Wilson indirect method, the 95% confidence interval is [0.669,0.801] and has an interval length of 0.133.
As expected, the confidence interval constructed using the Clopper-Pearson method was the longest. The confidence interval constructed using the Wilson indirect method was shorter as compared to that constructed using the Wilson direct method because the sample \(F_{1}\) score was relatively large.
### _Private Dataset_
I applied BERT\({}_{\text{BASE}}\) to the course feedback for a programme offered, to more than 4300 executive and administrative staff, as well as laboratory technicians, at a university. Upon completion of the programme, all participants were required to complete questionnaires which comprised, among others, the following questions:
1. What do you like about the onsite workshop?
2. Were there any issues or problems that you encountered in the onsite workshop?
3. Overall, what is the most useful topic you learnt in the programme?
4. Have the skills that you learnt in the programme helped you in your role at the university? Please give a brief description.
I labelled all responses as either a suggestion or a non-suggestion based on the definition provided in Negi _et al._[29]. The final dataset comprised 12303 observations among which 802 (7%) were suggestions. The final dataset was split into a training set, a validation set and a test set. The training set comprised 9841 samples among which 634 (6%) were suggestions. The validation set comprised 1231 observations among which 71 (6%) were suggestions. And the test set comprised 1231 observations among which 97 (8%) were suggestions.
In the present example, BERT\({}_{\text{BASE}}\) achieved 83 true positives, 9 false positives, 14 false negatives and 1125 true negatives on the test set. This corresponds to an \(F_{1}\) score of 0.878.
Using the Clopper-Pearson method, the 95% confidence interval is [0.818,0.923] and has an interval length of 0.105. Using the Wald method, the 95% confidence interval is [0.829,0.928] and has an interval length of 0.099. Using the Wilson direct method, the 95% confidence interval is [0.817,0.918] and has an interval length of 0.102. And using the Wilson indirect method,
the 95% confidence interval is [0.820,0.919] and has an interval length of 0.099.
As expected, the confidence interval constructed using the Clopper-Pearson method was the longest. The confidence interval constructed using the Wilson indirect method was shorter as compared to that constructed using the Wilson direct method because the sample \(F_{1}\) score was relatively large.
## VII Discussion
In the present paper, I reviewed the current analytical methods to construct confidence intervals for the population \(F_{1}\) score, proposed two analytical methods to do so, and compared their performances. The answers to the three research questions formulated in Section I are as follows:
* First, the current analytical methods to construct the confidence interval for the population \(F_{1}\) score include the Clopper-Pearson method, which inverts the Binomial test for the sample count, and the Wald method, which inverts the Wald test for the sample \(F_{1}\) score.
* Second, the proposed analytical methods to construct confidence intervals for the population \(F_{1}\) score include the Wilson direct method, which inverts the score test for the sample \(F_{1}\) score, and the Wilson indirect method, which inverts the score test for the sample \(F^{*}\) score.
* Last, both the Wilson direct method and the Wilson indirect method perform better, in terms of coverage probabilities and interval lengths, as compared to the Clopper-Pearson method and the Wald method to construct confidence intervals for the population \(F_{1}\) score. In addition, unlike the Wald method, neither the Wilson direct method nor the Wilson indirect method suffers from overshoot and degeneracy.
In accordance with these findings, Takahashi _et al._[5] also reported that the coverage probabilities for the Wald method to construct confidence intervals for the population \(F_{1}\) score tended to be far smaller as compared to the nominal confidence level when \(n<100\) (p. 10).
Because the sample \(F_{1}\) score is an estimate of the population \(F_{1}\) score [10], confidence intervals should be used to indicate how accurate the estimate is. In the construction of confidence intervals for the population \(F_{1}\) score, if the test set is small (i.e., \(n\leq 100\)), then both the Wilson direct method and the Wilson indirect method are recommended, especially since both methods demonstrate better coverage probabilities and interval lengths as compared to the Clopper-Pearson method and the Wald method. Because both methods demonstrate comparable coverage probabilities for small \(n\), the choice between them depends on the interval length. If the test set is large (i.e., \(n>100\)), then both the Wilson direct method and the Wilson indirect method are also recommended, especially since both methods do not suffer from overshoot and degeneracy. Because both methods demonstrate comparable coverage probabilities and interval lengths for large \(n\), the choice between them depends on individual preference.
The recommendation of methods which construct confidence intervals using the score test (i.e., Wilson direct method and Wilson indirect method), as compared to using either the binomial test (i.e., Clopper-Pearson method) or the Wald test (i.e., Wald method), is also consistent with the literature. In the construction of confidence intervals for the population proportion, Brown et al. [14] recommended the Wilson method regardless of sample size. The authors also argued that the Clopper-Pearson method is "wastefully conservative and is not a good choice for practical use" (p. 113), and that the Wald method is "persistently chaotic and unacceptably poor" (p. 115). And in the comparisons of predictive values for binary diagnostic tests for paired designs, Leisenring et al. [34] argued that although "the gains are small, the score statistic has consistently better size and power than a generalised Wald statistic" (p. 349).
To the best of my knowledge, the present paper is the first to propose the Wilson direct method and the Wilson indirect method to construct confidence intervals for the population \(F_{1}\) score, to prove that the Wilson direct method does not suffer from both overshoot and degeneracy, and to compare the performance of both methods against the Clopper-Pearson method and the Wald method. Nonetheless, it is not without limitations. First, the present paper focused on constructing confidence intervals for a single population \(F_{1}\) score but not for the difference between two or more population \(F_{1}\) scores. Confidence intervals for the difference between two or more population \(F_{1}\) scores should account for non-independence between observations, especially if the confidence intervals are constructed using the same test set. Future research can investigate this, perhaps through the use of Generalised Estimating Equations (GEEs; e.g., [34]). Second, the present paper focused on analytical but not computational, and frequentist but not Bayesian, methods to construct either confidence or credible intervals for the population \(F_{1}\) score. Future research can investigate this, perhaps building on current research focused on computational and/or Bayesian methods [11, 12, 16, 35]. Last, the present paper focused on the \(F_{1}\) score but not the \(F\)-beta score. The \(F\)-beta score is the generalised form of the \(F_{1}\) score in which the weights for both precision and recall are not constrained to be equal [4, 6]. Future research can investigate this, perhaps through the use of the multivariate Delta Method [25, 26].
## VIII Conclusion
In conclusion, both the Wilson direct method and the Wilson indirect method are promising alternatives to both the Clopper-Pearson method and the Wald method for analytically constructing confidence intervals for the population \(F_{1}\) score. Given the stochastic nature of evaluation in machine learning, it is recommended to construct and report confidence intervals when evaluating the performance of NLP algorithms.
## Acknowledgements
I would like to extend my heartfelt gratitude to both Dr. Vik Gopal and Dr. Qian Jiang for their invaluable inputs to improve this paper.
2310.01429 | Chatmap : Large Language Model Interaction with Cartographic Data | The swift advancement and widespread availability of foundational Large
Language Models (LLMs), complemented by robust fine-tuning methodologies, have
catalyzed their adaptation for innovative and industrious applications.
Enabling LLMs to recognize and interpret geospatial data, while offering a
linguistic access to vast cartographic datasets, is of significant importance.
OpenStreetMap (OSM) is the most ambitious open-source global initiative
offering detailed urban and rural geographic data, curated by a community of
over 10 million contributors, which constitutes a great potential for LLM
applications. In this study, we demonstrate the proof of concept and details of
the process of fine-tuning a relatively small scale (1B parameters) LLM with a
relatively small artificial dataset curated by a more capable teacher model, in
order to provide a linguistic interface to the OSM data of an arbitrary urban
region. Through this interface, users can inquire about a location's
attributes, covering a wide spectrum of concepts, such as its touristic appeal
or the potential profitability of various businesses in that vicinity. The
study aims to provide an initial guideline for such generative artificial
intelligence (AI) adaptations and demonstrate early signs of useful emerging
abilities in this context even in minimal computational settings. The
embeddings of artificially curated prompts including OSM data are also
investigated in detail, which might be instrumental for potential geospatially
aware urban Retrieval Augmented Generation (RAG) applications. | Eren Unlu | 2023-09-28T15:32:36 | http://arxiv.org/abs/2310.01429v1 | # Chatmap : Large Language Model Interaction with Cartographic Data
###### Abstract
The swift advancement and widespread availability of foundational Large Language Models (LLMs), complemented by robust fine-tuning methodologies, have catalyzed their adaptation for innovative and industrious applications. Enabling LLMs to recognize and interpret geospatial data, while offering a linguistic access to vast cartographic datasets, is of significant importance. OpenStreetMap (OSM) is the most ambitious open-source global initiative offering detailed urban and rural geographic data, curated by a community of over 10 million contributors, which constitutes a great potential for LLM applications. In this study, we demonstrate the proof of concept and details of the process of fine-tuning a relatively small scale (1B parameters) LLM with an artificial dataset curated by a more capable teacher model, in order to provide a linguistic interface to the OSM data of an arbitrary urban region. Through this interface, users can inquire about a location's attributes, covering a wide spectrum of concepts, such as its touristic appeal or the potential profitability of various businesses in that vicinity. The study aims to provide an initial guideline for such generative artificial intelligence (AI) adaptations and demonstrate early signs of useful emerging abilities in this context even in minimal computational settings. The embeddings of artificially curated prompts including OSM data are also investigated in detail, which might be instrumental for potential geospatially aware urban Retrieval Augmented Generation (RAG) applications.
Generative AI, Cartographic Data, Large Language Models
## 1 Introduction
In recent years, the explosive growth in the capabilities and utilities of Large Language Models (LLMs) has brought forth a paradigm shift in how we interact with and leverage data [2][3]. Traditionally, extracting value from vast datasets, especially those with specialized content like cartographic data, required a combination of domain expertise, time-consuming analysis, and specialized tools. With the advent of LLMs, there is an enticing opportunity to simplify this extraction process, making it more accessible and intuitive. The concept of linguistically querying geospatial datasets presents a confluence of the advances in natural language processing (NLP) and the ubiquity of current open digital cartographic data.
The implications of such advancements are vast and transformational. Corporate officials with no technical background can seek insights about whether a commercial venture they are considering is viable in an area they have simply selected by clicking on a map. Tourists could ask about the historical significance of regions they plan to visit, and automatic touristic itinerary-generation applications powered by LLMs with linguistic interfaces can be developed. Policy makers can profit from such a framework to optimally plan infrastructure investments, such as new subway lines, through more human-centric and geospatially aware interactions. Therefore, the integration of LLMs with OSM-like cartographic data is crucial.
In this paper, we present a basic framework aligned with this goal requiring very minimal computational budget and human labeled data in order to constitute a pioneering guideline and demonstrate that such productive applications
in the future can be developed with a reasonable amount of effort and time. The central idea of the paper is to fine-tune a foundational language model to comprehend the OSM features in an arbitrary area and align the content with human intentions. The OSM database contains a vast number of highly variant attributes, from detailed mappings of power lines, road networks, buildings and designated areas to entities of any type, such as cafes, schools and public drinking water fountains [1][4]. Consequently, for a minimal proof-of-concept demonstration, we have strictly limited the OSM content we use in this study.
In our exemplary study, without loss of generality, circular areas with a 300-meter radius in the most densely tagged districts of Istanbul, Turkey, were selected. Specific quantitative aspects of selected OSM attributes within these circular areas are described verbally, which we refer to as 'preprompts'. Using a foundational LLM as a competent teacher, various prompt-response pairs were generated to form a training dataset for our fine-tuned model. Details on preprompt construction and guidance from the teacher model for effective artificial dataset curation are provided in the article.
An unaligned, open-source foundational model of approximately 1 billion parameters, effectively pretrained on vast datasets, is preferred for fine-tuning in order to demonstrate the development of such abilities with significantly low amounts of resources. Using Low Rank Adaptation (LoRA) and 8-bit quantization, the fine-tuning process is performed on a single, moderate GPU [5][6].
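A minimal sketch of such a setup is shown below; since the base model checkpoint and the LoRA hyperparameters are not specified here, the values below are illustrative assumptions (the Hugging Face transformers, peft and bitsandbytes libraries are assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative assumption: the paper does not name its ~1B-parameter base
# model, so any causal LM of that scale can stand in here.
base_model = "EleutherAI/pythia-1b"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,   # 8-bit quantization via bitsandbytes
    device_map="auto",   # place the quantized weights on the available GPU
)

# Low Rank Adaptation: only small rank-decomposition matrices are trained,
# not the full weights. Rank and dropout below are illustrative choices.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a tiny fraction of the ~1B parameters
```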
Thanks to the effective training dataset and the relatively advanced linguistic capabilities of the pretrained base LLM for its moderate size, our fine-tuned model shows early signs of emergent abilities in this context. For locations which were not included in the fine-tuning dataset, the model is queried on concepts mostly absent from the training data. In addition to LLM development, we have also investigated the embeddings of artificially curated prompts including OSM data, which reflect the latent structure of the urban landscape.
We believe that the minimal framework presented in this paper can encourage researchers to develop much more advanced cartographic-data-aware generative AI applications. The prospects of such potential paradigms are also discussed.
## 2 OSM Data and Preprompts
OSM contains an immense amount of detailed data about specific geolocations. For our minimal setting, we have limited the quantitative data to a few key urban details. The concept presented in this paper is to define a point within an urban environment and then consider a circular area around it. For simplicity, we consistently chose radii of 300 meters in this study. However, varying radii or alternative methods of area selection could be considered. Urban attributes within this circular area are then retrieved from the OSM database. These attributes are subsequently articulated verbally. This verbal articulation serves as a "pre-prompt," establishing the context for the questions posed to the LLM. For this study's purposes, we have limited our scope to the more densely tagged central districts of Istanbul, Turkey.
The urban data chosen to be included in the preprompts are:
* Number of different types of amenities : Amenities are primary categories that denote facilities and services vital for both visitors and residents. Examples of amenities include but are not limited to schools, ATMs, banks, pharmacies, libraries, and bus stops. An amenity essentially highlights features that offer convenience or serve essential functions in a particular geographical area. The number of each type of amenity in the area is verbally described.
* Intersecting administrative divisions : OSM database includes boundaries of all levels of administrative divisions of that particular country of interest. In order to incorporate urban geolocation awareness in the fine-tuning process, we have included the names of administrative bodies intersecting this area. For this case, only "districts" and "provinces" are considered. Note that, all geolocations in this study are in the province of "Istanbul".
* Number and Surface Area of Buildings : Areas tagged in detail include polygon representations of buildings. As a measure of dense urbanization, we have included the number of buildings residing in the circular area and the percentage of building surface area relative to the total circular area.
* Landuses : Landuses are polygon areas in OSM database defined according to their usage such as "military", "park", "construction", "commercial", "industrial" etc. As tagged residential areas are most of the time lacking in the region, they are excluded in this study. The percentages of each type of landuse surface to the total area are verbally expressed. For the sake of efficient contextualisation only landuse areas exceeding 2% of the total surface area are included.
* Leisures : Leisures are polygon areas where they are defined based on their recreational purposes such as "water park", "stadium", "marina" etc. For the sake of efficient contextualisation only leisure areas exceeding 2% of the total surface area are included.
* Roads and Railways : OSM offers a very detailed description of all types of land roads with their types such as "motorway", "residential street", "secondary road", "pedestrian road" etc and railways also including tram and subway lines. We calculate the total length in meters of each types of roads and railways in the circular area and verbally express it.
For instance, the preprompt of an arbitrary area is as follows :
```
This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Fatih. There are 3 atm(s), 2 bank(s), 1 bureau_de_change(s), 15 cafe(s), 2 clinic(s), 1 court_house(s), 2 dentist(s), 1 driving_school(s), 2 events_venue(s), 11 fast_food(s), 1 guest_house(s), 3 hospital(s), 11 parking(s), 33 pharmacy(s), 9 place_of_worship(s), 1 post_office(s), 43 restaurant(s), 5 school(s), 1 shower(s). There are 525 buildings which cover 81% of the total area. It contains 289 meters of platform rail, 100 meters of footway road, 80 meters of pedestrian road, 44 meters of primary_link road, 2786 meters of residential road, 283 meters of service road, 20 meters of steps road, 1005 meters of tertiary road, 62 meters of tertiary_link road, 249 meters of unclassified road.
```
Note that the native text of OSM values is used, taking into consideration that the advanced teacher model is able to process it properly (being also familiar with OSM nomenclature) to generate proper prompt-answer pairs.
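A minimal sketch of how such a preprompt could be assembled is shown below; the osmnx library and the helper name are illustrative assumptions, as the retrieval tooling is not specified here, and only the amenity counts are handled:

```python
import osmnx as ox

def amenity_preprompt(lat, lon, dist=300):
    """Sketch of a preprompt builder: verbalise OSM amenity counts near a point.

    Note: osmnx retrieves features within a square bounding box of half-width
    `dist`; a true circular area would need an extra distance filter. Only
    amenities are handled here; buildings, landuses, leisures and road or
    railway lengths would be appended analogously.
    """
    gdf = ox.features_from_point((lat, lon), tags={"amenity": True}, dist=dist)
    counts = gdf["amenity"].value_counts().sort_index()
    parts = [f"{n} {name}(s)" for name, n in counts.items()]
    return (
        f"This is a circular area of radius of {dist} meters. "
        "There are " + ", ".join(parts) + "."
    )

# Example: a point in the Fatih district of Istanbul (coordinates illustrative).
print(amenity_preprompt(41.0165, 28.9639))
```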
## 3 Artificial Dataset Curation with a Teacher Model
Using advanced publicly available LLMs to curate supervised datasets for training or fine-tuning smaller models is a well-established concept; [7] is one of the most widely known examples of such approaches. By properly leveraging the detailed information and cognitive power of these models, one can generate a quasi-infinite number of datapoints of sufficient quality. We have used OpenAI ChatGPT 3.5-turbo [8] to generate prompt-answer pairs for given preprompts. Some examples of prompts queried to ChatGPT 3.5-turbo are as follows:
```
I will give these types of preprompts and you will generate prompt-answer pairs in python list of dictionaries format. These prompts should be questions that businessmen, citizens, tourists would demand based on the data in the preprompt. Generate 60 prompt-answer pairs with very diverse topics. Important: Do not generate prompts that data in preprompt is not sufficient to answer!

Preprompt = 'This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Fatih. There are 10 atm(s), 2 bank(s), 1 bar(s), 1 bench(s), 1 bureau_de_change(s), 22 cafe(s), 1 car_rental(s), 1 clinic(s), 7 fast_food(s), 5 fountain(s), 1 ice_cream(s), 4 library(s), 1 money_transfer(s), 5 parking(s), 7 pharmacy(s), 14 place_of_worship(s), 2 police(s), 1 post_office(s), 66 restaurant(s), 1 school(s), 1 theatre(s), 6 toilet(s), 1 university(s), 2 waste_basket(s), 1 waste_disposal(s). There are 200 buildings which cover 2% of the total area. It contains 112 meters of platform rail, 291 meters of footway road, 730 meters of pedestrian road, 227 meters of residential road, 236 meters of service road, 270 meters of steps road, 330 meters of tertiary road.'
```

```
I will give these types of preprompts and you will generate prompt-answer pairs in python list of dictionaries format. These prompts should be questions that businessmen, citizens, tourists would demand based on the data in the preprompt. Generate 60 prompt-answer pairs with very diverse topics. Important: Do not generate prompts that data in preprompt is not sufficient to answer!

Generate 50 prompt-answer pairs, but be creative. Try to cover very different aspects in prompts, such as which type of commercial venture would suit here, whether it is residential or touristic, how you can describe this area etc.
Important Warning: "Do not include questions that we do not have sufficient data in preprompt to answer." Before generating, repeat this last Important Warning I gave.
```
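A minimal sketch of such a curation call is shown below; it uses the current openai Python client, which may differ from the client version used at the time, and the parsing step is only indicated:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

instruction = (
    "I will give these types of preprompts and you will generate "
    "prompt-answer pairs in python list of dictionaries format. [...] "
    "Important: Do not generate prompts that data in preprompt is not "
    "sufficient to answer!"
)
preprompt = "This is a circular area of radius of 300 meters [...]"  # as above

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": instruction + "\n\nPreprompt = '" + preprompt + "'"}],
)
pairs_text = response.choices[0].message.content
# pairs_text is then parsed (e.g., with ast.literal_eval) into a list of
# {"prompt": ..., "answer": ...} dictionaries and appended to the dataset.
```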
## 5 Results
The fine-tuned model is queried with different geolocations in the same districts, with types of questions present in the training dataset and sometimes with a different semantic context. The results suggest early signs of emerging abilities in this type of paradigm and the applicability of similar frameworks even with a very minimal amount of resources. Some examples of the responses received from the model are as follows:
``` Area:Thisisacircularareaofradiusof300metersthatintersectsprovince(s) ofistanbulanddistrict(s)ofAtagheir,Kadiky.Thereare10at(s),1bank(s),4bench(s),2caffe(s),1clanic(s),1dentist(s),1festfood(s),1fountain(s),9parking(s),5pharmacy(s),1placeofworship(s),3restaurant(s),3school(s),1wastebasket(s).Thereare140buildingswhichcover18%ofthetotalarea.Itcontains205metersoffootwayread,66metersofpedestrianroad,94metersofresidentialroad,532metersofserviceroad,64metersoftertiaryroad.Question:HowcanweclassifythisregionofIstanbulbasedonthedatainpreprevent?ExplainintdetailAnswer:Basedonthedata,thisareaappearstobeasresidentialareawithabylofresidentialandcommercialbuildings.Itcouldbeclassified
Figure 2: Training (blue) and validation (orange) losses during approximately 2 epochs long fine-tuning process.
markeplace(s), 1 place_of_worthy(s), 1 restaurant(s), 1 politics(s), There are 26 buildings which cover 4% of the total area. The area is covered by 7% construction. It contains 603 meters of platform trail, 60 meters of tram trail, 1361 meters of footway road, 287 meters of residential road, 164 meters of steps road. Question : Which age group of people would enjoy visiting here 7 answer : The presence of cafes and restaurants suggests that this area could be suitable for a variety of age groups.
```
Area : This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Istanbul. There are 1 bank(s), 3 cafe(s), 4 fast_food(s), 2 hospital(s), 13 pharmacy(s), 2 place_of_worship(s), 1 post_office(s), 5 restaurant(s), 1 theatre(s). There are 106 buildings which cover 1% of the total area. It contains 83 meters of footway road, 72 meters of pedestrian road, 1233 meters of primary road, 157 meters of primary_link road, 504 meters of residential road, 37 meters of secondary_link road, 210 meters of service road, 78 meters of tertiary road. Question : I m a lawyer. Should I open an office in this area ? Answer : The presence of a bank and a hospital suggests a need for legal services. Opening an office may be a good option.
```
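Responses like those above can be obtained by prompting the fine-tuned model with the area description followed by a question. A minimal sketch with Hugging Face transformers, assuming the fine-tuned 1B-parameter checkpoint is available locally (the path and generation settings are illustrative):

```python
# Sketch: query a fine-tuned causal LM with an OSM preprompt plus a question.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/finetuned-1b-osm-model"  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

area = ("Area : This is a circular area of radius of 300 meters that intersects "
        "province(s) of Istanbul and district(s) of Fatih. There are 3 cafe(s), "
        "5 restaurant(s), 1 bank(s).")
prompt = area + " Question : Which age group of people would enjoy visiting here ? Answer :"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Print only the newly generated answer tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```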
## 6 Embeddings of Preprompts
It is a convenient idea to inspect the embeddings of the curated preprompts to gain insight into the general attributes of various geolocations. Without loss of generality, we have used average GloVe word embeddings [14] and projected them onto a two-dimensional space using the Uniform Manifold Approximation and Projection (UMAP) algorithm [15]. Fig. 3 shows the plot of the projected UMAP values, where the color channels are set linearly proportional to the UMAP coordinates for illustrative purposes. Locations corresponding to these preprompt embeddings are displayed on the map with the same color values in Fig. 4. As expected, even with basic embedding mechanisms and models, the latent structure of verbal descriptions of OSM data yields insightful patterns. Bright red colors indicate more touristic locations, dark red colors indicate business/commercial districts, and bright greenish colors indicate relatively empty spaces and residential areas. Note that, as mentioned previously, this study is a preliminary attempt at a minimalistic proof of concept, thus much more complex frameworks can be imagined, enabling creative Retrieval Augmented Generation (RAG) applications in this context.
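A minimal sketch of this projection, assuming gensim's downloadable GloVe vectors and the umap-learn package stand in for the embeddings and UMAP implementation used here (the preprompt texts below are synthetic placeholders):

```python
# Sketch: average GloVe embeddings of each preprompt, then a 2-D UMAP projection.
import numpy as np
import gensim.downloader as api
import umap  # from the umap-learn package

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional word vectors

def embed(text):
    # Average the vectors of the in-vocabulary words; OOV tokens are skipped.
    words = [w for w in text.lower().split() if w in glove]
    return np.mean([glove[w] for w in words], axis=0)

# Synthetic placeholder preprompts; in practice, one description per location.
texts = [f"this area contains {i} cafes and {20 - i} restaurants" for i in range(20)]
X = np.vstack([embed(t) for t in texts])

# Project to two dimensions; the coordinates can then be mapped to colours.
xy = umap.UMAP(n_components=2, random_state=0).fit_transform(X)
print(xy.shape)  # (20, 2)
```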
Figure 3: Two dimensional UMAP values of various preprompts, where color values are linearly proportional to UMAP scores.
## 7 Conclusion and Prospects
In this research, we explored the integration of LLMs with intricate cartographic datasets. Expressing the OSM data of an urban region verbally and using these phrases as pre-context for queries, we have shown that such frameworks can be developed even with very minimal resources. Advanced publicly available LLMs can be used to generate artificial datasets automatically for this purpose, as was done in this study. Blending cartographic data with generative linguistic AI models offers a vast amount of possibilities.
| The swift advancement and wide availability of foundation large language models (LLMs), complemented by robust fine-tuning methodologies, have catalyzed their adaptation for innovative and industrious applications. Enabling LLMs to recognize and interpret geospatial data, while offering linguistic access to vast cartographic datasets, is of significant importance. OpenStreetMap (OSM) is the most ambitious open-source global initiative offering detailed urban and rural geographic data, curated by a community of over 10 million contributors, which constitutes a great potential for LLM applications. In this study, we demonstrate the proof of concept and details of the process of fine-tuning a relatively small scale (1B parameters) LLM with a relatively small artificial dataset curated by a more capable teacher model, in order to provide a linguistic interface to the OSM data of an arbitrary urban region. Through this interface, users can inquire about a location's attributes, covering a wide spectrum of concepts, such as its touristic appeal or the potential profitability of |
2301.00076 | Higgs Boson Mass Corrections at Three-Loops in the Top-Yukawa Sector of
the Standard Model | The search for new physics signals in Higgs precision measurements plays a
pivotal role in the High-Luminosity Large Hadron Collider (HL-LHC) and future
colliders programs. The Higgs properties are expected to be measured with great
experimental precision, implying higher-order perturbative computations of the
electroweak parameters from the theoretical side. In particular, the
renormalized Higgs boson mass parameter in the Standard Model shows significant
variation around the electroweak scale, resulting in a lower-bound theoretical
uncertainty that exceeds future collider expectations. A more stable result
under the renormalization group can be computed from a non-zero external
momentum Higgs self-energy, for which available calculations include 3-loop
corrections in the QCD sector. In this work, we present an additional
contribution by estimating the leading non-QCD 3-loop corrections to the mass
of the Higgs boson in the top-Yukawa sector of order $y_t^6$. The
momentum-dependent Higgs self-energy is computed in the tadpole-free scheme for
the Higgs vacuum expectation value in the Landau gauge, and the explicit
dependence upon the Higgs boson and top quark masses is shown. The obtained
result is expressed in dimensional regularization as a superposition of a set
of master integrals with coefficients that are free of poles in four space-time
dimensions, and the corrections are evaluated numerically by the sector
decomposition method. | E. A. Reyes R., A. R. Fazio | 2022-12-31T00:03:11 | http://arxiv.org/abs/2301.00076v3 | # Higgs Boson Mass Corrections at N\({}^{3}\)LO in the Top-Yukawa Sector of the Standard Model
###### Abstract
The search for new physics signals in Higgs precision measurements plays a pivotal role in the High-Luminosity Large Hadron Collider (HL-LHC) and future collider programs. The Higgs properties are expected to be measured with great experimental precision, implying higher-order perturbative computations of the electroweak parameters from the theoretical side. In particular, the Higgs boson mass parameter in the Standard Model runs over several tens of MeV, with a correspondingly large theoretical uncertainty. A more stable result under the renormalization group can be computed from a non-zero external momentum Higgs self-energy, for which the available calculations include three-loop corrections in the QCD sector. In this work we present an additional contribution by estimating the leading non-QCD three-loop corrections to the mass of the Higgs boson in the Top-Yukawa sector of order \(y_{t}^{6}\). The momentum-dependent Higgs self-energy is computed in the tadpole-free scheme for the Higgs vacuum expectation value in the Landau gauge, and the explicit dependence upon the Higgs boson and top quark masses is shown. The obtained result is expressed in dimensional regularization as a superposition of a set of master integrals with coefficients that are free of poles in four space-time dimensions, and the corrections are evaluated numerically by the sector decomposition method.
## I Introduction
Experiments have recently shown that high-precision measurements of the observables in the electroweak (EW) sector of the Standard Model (SM) are moving away from theoretical expectations. In the past year, the Fermilab MUON g-2 collaboration [1] published its results concerning the muon anomalous magnetic moment, showing a discrepancy between the experimental value and the SM predictions corresponding to a \(4.2\sigma\) difference. Recently, another EW observable joined this list of anomalous measurements, namely the mass of the \(W\)-boson. The CDF collaboration [2] reported a new and more precise value, \(M_{W}=80433.5\pm 9.4\,MeV\), together with the complete dataset collected by the CDF II detector at the Fermilab Tevatron. The current SM prediction shows a tension of \(7\sigma\) compared with the CDF measurement, suggesting the possibility of improving the SM calculations or of extending the SM. New and more precise measurements of the SM observables can help to explain the origin of those discrepancies, but this also requires an improvement in the precision of the theoretical calculations. In particular, the Higgs boson mass is an input parameter in the theoretical expressions for the above mentioned observables, and a reduction of its theoretical uncertainty can lead to more precise predictions to be compared with measurements at future accelerators. The improvement can come from the computation of the missing higher-order corrections to the Higgs mass, which are left out due to the assumption of some kinematic limit or due to the truncation of the perturbative expansions at some level. In the SM, the truncation is done at three-loop order. The one- and two-loop corrections to the Higgs self-energy have been completely computed [3; 4; 5; 6; 7] and implemented in the public computer codes mr[8] and SMDR[9]. In the mr code the renormalized vacuum expectation value of the Higgs field is defined as the minimum of the tree-level Higgs potential. The corrections to the mass parameters are consequently gauge invariant due to the explicit insertion of the tadpole diagrams. The disadvantage of this approach is that the Higgs tadpoles can include negative powers of the Higgs quartic self-coupling, leading to very large corrections in \(\overline{MS}\) schemes that deteriorate the perturbative stability. On the other hand, the corrections included in SMDR typically lead to stable perturbative predictions but suffer from gauge dependence, since the vacuum is defined as the minimum of the Higgs effective potential and therefore the tadpoles are removed by imposing an appropriate renormalization condition. It would be convenient to have a gauge-independent prediction with a stable perturbative behaviour, as highlighted in [10; 11], where the longstanding discussion about a suitable prescription for tadpole contributions in EW renormalization is solved at one-loop level. Additionally, the three-loop corrections have been evaluated in the gauge-less limit where the EW contributions are disregarded. In this computation the external momentum dependence of the contributions proportional to \(g_{s}^{4}y_{t}^{2}M_{t}^{2}\) is included, where \(g_{s}\) is the strong coupling constant, \(y_{t}\) is the top quark Yukawa coupling and \(M_{t}\) is the top quark mass. Also included are the three-loop contributions proportional to \(g_{s}^{2}y_{t}^{4}M_{t}^{2}\) and \(y_{t}^{6}M_{t}^{2}\), obtained using the 1PI effective potential, from which the 1PI self-energies at vanishing external momenta can be derived. All those three-loop corrections are implemented in the last version of SMDR [12; 13]. Although these SMDR predictions are rather precise, they contain a renormalization scale dependence of several tens of MeV, implying theoretical uncertainties larger than the expected experimental ones, of about 10-20 MeV, for the Higgs boson mass measurements at the HL-LHC, ILC and FCCee [14]. A more refined calculation including the missing higher-order contributions is therefore required.
In this paper we compute an additional three-loop contribution to the mass of the Higgs boson coming from the non-QCD Top-Yukawa sector in the gaugeless limit, where the three-loop Higgs self-energy corrections at order \(y_{t}^{6}\) are calculated. These three-loop corrections are meant to be included in the prediction of the physical Higgs boson mass (\(M_{h}\)), which comes from the complex pole of the Higgs propagator in an on-shell scheme, and therefore the Higgs self-energies are evaluated at non-vanishing external momentum, \(p^{\mu}\neq 0\). Since the ratio \(M_{h}/M_{t}\approx 0.6\) is not a particularly small expansion parameter, the leading three-loop corrections may receive significant contributions from the external momentum dependent terms evaluated at \(p^{2}=M_{h}^{2}\). Additionally, the inclusion of the non-vanishing external momentum self-energies is expected to cancel the renormalization scale dependence introduced in the propagator pole by the running Higgs mass computed in the effective potential approach [15; 16].
Finally, we point out that the electroweak contributions at three-loop level are still missing, but the analytic results for all master integrals contributing to the three-loop Higgs self-energy diagrams in the mixed EW-QCD sector at order \(\alpha^{2}\alpha_{s}\), including terms proportional to the product of the bottom and top Yukawa couplings, \(y_{b}y_{t}\), have been presented in [17]. Besides, additional identities satisfied by three-loop self-energy Master Integrals (MIs) with four and five propagators, which enable a straightforward numerical evaluation for a generic configuration of the masses in the propagators, have recently been reported in [18].
The paper is organized as follows. In Section II we show the technical details about the generation and regularization of the amplitudes for the three-loop Higgs self-energies involved in our calculation. In Section III a Feynman integral reduction procedure is presented and the choice of a good basis of master integrals is discussed. A numerical analysis, where the obtained three-loop corrections to the Higgs mass at O(\(y_{t}^{6}\)) are evaluated as a function of the renormalization scale, is shown in Section IV. Finally, we give our conclusions and a further research outlook in Section V.
## II Regularized Higgs self-energies
In this work we have focused our attention on the contributions coming from the three-loop self-energy corrections to the Higgs boson mass including the external momentum dependence. The Higgs self-energies have been computed at order \(y_{t}^{6}\) in the non-QCD sector of the SM. Thus, the non-light fermion limit is assumed and therefore the Yukawa couplings and the masses of the other fermions are disregarded with respect to the top quark ones. The complete expression is written as
\[\Sigma_{hh}^{(3l)} = y_{t}^{6}\left(\Delta_{0}+t\Delta_{1}+t^{2}\Delta_{2}+t^{3}\Delta_{3}\right) + s\,y_{t}^{6}\left(\Delta_{0}^{s}+t\Delta_{1}^{s}+t^{2}\Delta_{2}^{s}\right), \tag{1}\]
where \(t\) represents the squared top mass, \(t=M_{t}^{2}\), while \(s\) stands for the squared momentum in the external lines of the Higgs self-energies, \(s=p^{2}\).
In order to obtain the expressions of \(\Delta_{i}\) and \(\Delta_{j}^{s}\) it is necessary to generate the Higgs self-energy diagrams and their corresponding amplitudes. This has been done with the help of the Mathematica package FeynArts[19; 20]. At the considered perturbative order, only the nine different self-energy topologies depicted in FIG. 1 contribute. Note that topologies with just cubic vertices are required; this is equivalent to imposing an adjacency of three lines in the CreateTopologies function of FeynArts. Moreover, the computation was done in the so-called _Parameter Renormalized Tadpole_ (PRT) scheme [21; 22; 10; 23], where the renormalized vacuum expectation value of the Higgs field is the minimum of the Higgs effective potential and therefore the self-energies are made of 1PI diagrams that do not contain tadpole insertions. Although this scheme is known to be numerically stable, as terms with negative powers of the Higgs self-coupling are not included, it has the unpleasant feature that self-energies are gauge-dependent quantities. In this work we have adopted the
Figure 1: Examples of diagrams contributing to the O(\(y_{t}^{6}\)) Higgs self-energy corrections in the non-QCD sector. The external dashed lines represent the Higgs boson field. The internal dashed lines represent all possible contributing scalar fields, while the solid lines represent a top or a bottom quark. Only propagators with a top quark line are massive.
Landau gauge, where the Goldstone bosons are massless, in order to minimize the number of energy scales appearing in the Feynman integrals.
Once the particle content is inserted into the nine topologies, with the help of the InsertFields function of FeynArts, the number of generated self-energy diagrams whose amplitudes are different from zero at order \(y_{t}^{6}\) increases to \(125\). Examples of such diagrams are also shown in FIG. 1. Note that the external dashed lines propagate only the Higgs field (\(h\)), while the internal lines in the non-light fermion limit of the non-QCD sector can propagate fermions (solid lines), namely the top quark (\(t\)) and bottom quark (\(b\)) fields, as well as scalars, namely the Higgs and the Goldstone boson (\(G^{0}\) and \(G^{\pm}\)) fields. The cubic vertices involved in the computation are \(hht\), \(G^{0}G^{0}t\) and \(G^{\pm}tb\). The contribution of the bottom mass to the latter vertex is disregarded when it appears in the numerators of the integrands.
The considered three-loop self-energy integrals are ultraviolet divergent in four dimensions since all of them contain two scalar and six fermionic propagators; therefore, they are analytically continued to \(D=4-2\varepsilon\) dimensions using the dimensional regularization (DREG) scheme [24; 25; 26; 27]. In order to implement the regularization prescription, the FeynArts amplitudes are exported to the language of FeynCalc[28; 29], which is a Mathematica code useful in general for performing the algebraic manipulations involved in the calculation of multi-loop Feynman integrals. The gamma matrices are defined as a set of \(D\) matrices obeying
\[\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}I;\qquad\text{Tr}I=4. \tag{2}\]
Feynman diagrams involving the charged Goldstone bosons, \(G^{\pm}\), where traces with \(\gamma_{5}\) and an arbitrary number of gamma matrices appear, require some care. In that case we use the practical non-cyclicity prescription [30; 31] where \(\gamma_{5}\) is an anticommuting object satisfying
\[\{\gamma_{5},\,\gamma^{\mu}\}=0;\qquad\gamma_{5}^{2}=1, \tag{3}\]
together with the condition that the use of cyclicity in traces involving an odd number of \(\gamma_{5}\) matrices is forbidden. Using the above anticommutation relation and the Clifford algebra in eq. (2), any product of Dirac matrices can be ordered in a canonical way. That is, the \(\gamma_{5}\) matrices are completely eliminated for an even number of them, while for an odd number only one \(\gamma_{5}\) survives and it is always moved to the right of the product. In particular, due to the presence of four independent momentum scales, namely the external momentum \(p\) and the loop-momenta \(q_{1}\), \(q_{2}\) and \(q_{3}\), diagrams can contain traces with a single \(\gamma_{5}\) and at most four \(\gamma\) matrices. Thus, the following relations are also required:
\[\text{Tr}\left[\gamma_{5}\right]=\text{Tr}\left[\gamma^{\mu_{1}}\dots\gamma^ {\mu_{2n-1}}\gamma_{5}\right]=0, \tag{4}\]
\[\text{Tr}\left[\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}\gamma_{5} \right]=\begin{cases}-4i\epsilon^{\mu\nu\rho\sigma}&\mu,\nu,\rho,\sigma\in\{ 0,1,2,3\}\\ 0&\text{otherwise}\end{cases}. \tag{5}\]
A further examination of all the Feynman diagrams for each topology in FIG. 1 shows that topologies 1, 4, 6 and 9 do not contain traces with the matrix \(\gamma_{5}\). Topologies 5 and 8 contain traces with one \(\gamma_{5}\) and at most three \(\gamma\) matrices, which vanish according to eq. (4). For topologies 2 and 7 the sum of the amplitudes produces a cancellation of the terms with any trace involving the matrix \(\gamma_{5}\). Finally, topology 3 contains contributions with a trace of a single \(\gamma_{5}\) and four \(\gamma\) matrices that have to be evaluated according to eq. (5).
In addition, it is worth mentioning that for amplitudes with closed fermion-loops, which is the case of all the topologies in FIG. 1, the usual Breitenlohner-Maison scheme [32; 33] and the non-cyclicity scheme considered in our calculation produce identical results.
## III Good master integrals
Once the amplitudes are regularized, each of them can be written as a superposition of a large set of about one thousand integrals with the following structure:
\[\left\langle\frac{\mathcal{N}(q_{i}\cdot q_{j},q_{i}\cdot p,p^{2 })}{D_{1}^{\nu_{1}}D_{2}^{\nu_{2}}D_{3}^{\nu_{3}}D_{4}^{\nu_{4}}D_{5}^{\nu_{5} }D_{6}^{\nu_{6}}D_{7}^{\nu_{7}}D_{8}^{\nu_{8}}D_{9}^{\nu_{9}}D_{0}^{\nu_{0}}} \right\rangle_{3l}, \tag{6}\] \[\left\langle(\dots)\right\rangle_{3l}=(Q^{2})^{3\varepsilon}\int \frac{d^{D}q_{1}}{(2\pi)^{D}}\int\frac{d^{D}q_{2}}{(2\pi)^{D}}\int\frac{d^{D} q_{3}}{(2\pi)^{D}},\]
where \(Q\) is the renormalization scale defined as in the \(\overline{MS}\) scheme, \(Q^{2}=4\pi e^{-\gamma_{E}}\mu^{2}\), in terms of the unit mass \(\mu\) and of the Euler-Mascheroni constant \(\gamma_{E}\). The denominators \(D_{j}\) are inverse scalar propagators:
\[\begin{array}{ll}D_{1}=\left(q_{1}^{2}-m_{1}^{2}\right),&D_{2}=\left(q_{2}^{2}-m_{2}^{2}\right),\\ D_{3}=\left(q_{3}^{2}-m_{3}^{2}\right),&D_{4}=\left((q_{1}-q_{2})^{2}-m_{4}^{2}\right),\\ D_{5}=\left((q_{1}-q_{3})^{2}-m_{5}^{2}\right),&D_{6}=\left((q_{2}-q_{3})^{2}-m_{6}^{2}\right),\\ D_{7}=\left((q_{1}+p)^{2}-m_{7}^{2}\right),&D_{8}=\left((q_{2}+p)^{2}-m_{8}^{2}\right),\\ D_{9}=\left((q_{3}+p)^{2}-m_{9}^{2}\right),&D_{0}=\left((q_{1}-q_{2}+q_{3}+p)^{2}-m_{0}^{2}\right),\end{array} \tag{7}\]
while the numerator \(\mathcal{N}\) is a function of scalar products involving the three loop momenta and the external momentum. At this point the coefficients of the integrals depend on \(y_{t}\), \(t\) and \(s\), while the masses in the propagators \(D_{j}^{-1}\) can be \(m_{j}=0,M_{t}\). The precise configuration of the masses defines the family to which the integrals belong, while the set of exponents \(\{\nu_{j}\}\) defines sectors within the families. For the planar diagrams, represented by topologies 1 to 8, one must remove the denominator \(D_{0}\), which is equivalent to setting \(\nu_{0}=0\), while the non-planar diagrams contained in topology 9 satisfy \(\nu_{8}=0\). Note that, in order to express any scalar product in \(\mathcal{N}\) as a combination of inverse propagators, we need a basis of nine propagators for each family. Thus, the numerator \(\mathcal{N}\) is rewritten, as usual, in terms of the \(D_{j}\)'s, leading to scalar integrals which can also contain irreducible numerators, that is, denominators with negative integer exponents. The resulting integral families for each topology are listed in Table 1. An individual topology can contain
multiple families and each family can contain at most six massive propagators. Besides, the exponents \(\{\nu_{j}\}\) take values from \(-3\) to \(3\).
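The rewriting of the numerator \(\mathcal{N}\) in terms of the \(D_{j}\) rests on linear identities between scalar products and inverse propagators, for example \(2\,q_{1}\cdot q_{2} = D_{1} + D_{2} - D_{4} + m_{1}^{2} + m_{2}^{2} - m_{4}^{2}\), which follows directly from the definitions in eq. (7). A small sympy check of this identity, treating the Lorentz squares as independent symbols (an illustration of the reduction step, not the authors' code):

```python
# Sketch: verify 2 q1.q2 = D1 + D2 - D4 + m1^2 + m2^2 - m4^2, where
# q1q1, q2q2, q1q2 stand for the Lorentz invariants q1^2, q2^2, q1.q2.
import sympy as sp

q1q1, q2q2, q1q2 = sp.symbols("q1q1 q2q2 q1q2")
m1, m2, m4 = sp.symbols("m1 m2 m4", positive=True)

D1 = q1q1 - m1**2
D2 = q2q2 - m2**2
D4 = (q1q1 - 2*q1q2 + q2q2) - m4**2  # (q1 - q2)^2 - m4^2, expanded

lhs = 2*q1q2
rhs = D1 + D2 - D4 + m1**2 + m2**2 - m4**2
assert sp.simplify(lhs - rhs) == 0
print("identity verified:", sp.simplify(rhs))
```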
The scalar integrals obtained are not independent of each other; they can be related through additional recurrence relations coming from the integration-by-parts (IBP) and Lorentz invariance (LI) identities. We have used the code Reduze[34; 35] to reduce any scalar integral to a linear superposition of a basis of Master Integrals
\[\tilde{G}_{\{\nu_{0},\ldots,\nu_{9}\}}=\left\langle\prod_{j=0}^{9}D_{j}^{-\nu_{j}}\right\rangle_{3l}, \tag{8}\]
with coefficients that are rational functions of polynomials depending on the space-time dimension and all the kinematical invariants involved in the calculation. As expected, in complicated situations like the IBP reduction of three-loop self-energy integrals with at least two energy scales, the basis provided by Reduze, \(\tilde{G}_{\{i\}}\), can be inefficient, since the denominators of some of the MI coefficients are quite cumbersome: they contain big expressions that require long processing times and large amounts of memory, but also kinematical singularities (independent of \(D\)) described by the Landau conditions [36] and/or divergences in \(D-4=2\varepsilon\) (independent of the kinematical invariants) which would imply the evaluation of finite parts of the Laurent expansion in \(\varepsilon\) of the MIs [37; 38; 39]. In order to handle this situation we follow the prescription discussed in [40], based on Sabbah's theorem [41], and therefore we have implemented in Mathematica, with the help of FIRE[42; 43], a transition from the "bad" basis of MIs to an appropriate basis, \(G_{\{j\}}\), whose coefficients have "good" denominators, that is, simple expressions free of kinematical and non-kinematical singularities. Thus, the choice of the new master integrals has been made by imposing that polynomials in the denominators of the coefficients do not vanish in the limit where \(D-4\) goes to zero. Sabbah's theorem guarantees the existence of such a good basis, but in practice this implies finding extra relations between the master integrals, such that
\[\tilde{G}_{\{i\}}=\sum_{j=1}^{|\sigma|}\frac{n_{i,j}}{d_{i,j}}G_{\{j\}}, \tag{9}\]
for a given sector \(\sigma\), where \(|\sigma|\) represents the length of the related multi-index, and where the coefficients \(n_{i,j}\) must contain products of polynomials that cancel the bad denominators of the coefficients of the masters \(\tilde{G}_{\{i\}}\) in the original IBP reduction, while \(d_{i,j}\) must be a good denominator. A simple example can be found in the family \(\{134679\}\) of the first topology (see FIG. 1 and Table 1). A bad choice of the basis in the reduction procedure can lead to coefficients with vanishing denominators at \(D=4\), of the form
\[(-5+D)(-4+D)(-3+D)(-10+3D)st^{2}(-s+2t)\] \[\times(-s+4t)(-s+10t)(s^{2}-16st+24t^{2}) \tag{10}\]
or an even worse coefficient can arise with denominator
\[2(-4+D)(s-4t)^{2}\,t\,(-38997504s^{18}+159422976Ds^{18}\] \[-288550464D^{2}s^{18}+\cdots+244\text{ terms}), \tag{11}\]
manifesting moreover threshold singularities. The denominator of eq. (11) is generated by the sector with the MIs \(\tilde{G}_{\{-1,0,1,1,0,1,1,0,0\}}\), \(\tilde{G}_{\{0,0,2,1,0,2,1,0,0\}}\) and \(\tilde{G}_{\{0,0,1,1,0,1,1,0,0\}}\). A better choice of the basis, with the master integrals \(G_{\{1,-1,1,1,1,1,1,0\}}\), \(G_{\{1,0,1,1,1,1,1,1\}}\), \(G_{\{0,0,1,1,1,1,1,1\}}\), \(G_{\{0,0,1,1,1,1,1,1,1\}}\), can avoid this problem and produce a simpler result of the total amplitude for the first topology:
\[\mathcal{A}_{1}^{\{134679\}}=y_{t}^{6} \left[t\right. \left(4G_{\{0,0,1,1,1,0,1,1\}}+2G_{\{0,0,1,1,1,0,1,1\}}\right.\] \[-4G_{\{0,0,1,1,1,1,1,0,1\}}-4G_{\{1,-1,1,0,1,1,1,1\}}\] \[+2G_{\{1,-1,1,1,1,1,0,1\}}+2G_{\{1,-1,1,1,1,1,1,0\}}\] \[+4G_{\{1,0,0,0,1,1,1,1,1\}}-4G_{\{1,0,0,1,1,1,0,1\}}\] \[+2G_{\{1,0,0,1,1,1,1,1,0\}}-4G_{\{1,0,1,1,0,1,0,1,1\}}\] \[+4G_{\{1,0,1,1,0,1,0,1,0\}}-4G_{\{1,0,1,0,1,1,1,0\}}\] \[+2G_{\{1,0,1,1,1,-1,1,1\}}-2G_{\{1,0,1,1,1,1,0,0,1\}}\] \[-2G_{\{1,0,1,1,1,1,1,0,0\}}+2G_{\{1,0,1,1,1,1,1,1\}}\] \[+t^{2}\left(8G_{\{0,0,1,1,1,1,1,1\}}+8G_{\{1,0,0,1,1,1,1,1\}}\right.\] \[+8G_{\{1,0,0,1,1,1,1,1\}}-16G_{\{1,0,1,0,1,1,1,1\}}\] \[+8G_{\{1,0,1,1,1,0,1,1,1\}}+8G_{\{1,0,1,1,1,1,0,1\}}\] \[\left.-16G_{\{1,0,1,1,1,1,1,0,1\}}+8G_{\{1,0,1,1,1,1,1,1\}}\right)\] \[+t^{3}\,32G_{\{1,0,1,1,1,1,1,1\}}\big{]}\] \[+sy_{t}^{6} \left[t\right. \left(-2G_{\{1,0,1,0,1,1,1,1,1\}}+4G_{\{1,0,1,0,1,1,1\}}\right.\] \[-2G_{\{1,0,1,1,1,0,1,1,1\}}-2G_{\{1,0,1,1,1,0,1\}}\] \[\left.+4G_{\{1,0,1,1,1,1,0,1\}}-2G_{\{1,0,1,1,1,1,1,0\}}\right.\] \[\left.-t^{2}\ 8G_{\{1,0,1,1,1,1,1,1\}}\right] \tag{12}\]
without pathological denominators. Note that master integrals contain 9 indices because \(D_{0}\) is omitted in the
| Topology | Families |
| --- | --- |
| 1 | \(\{134679\}\) |
| 2 | \(\{1278\}\), \(\{12378\}\) |
| 3 | \(\{1379\}\), \(\{123789\}\), \(\{134679\}\) |
| 4 | \(\{24589\}\) |
| 5 | \(\{258\}\), \(\{278\}\), \(\{2578\}\), \(\{24589\}\) |
| 6 | \(\{125678\}\) |
| 7 | \(\{17\}\), \(\{147\}\), \(\{157\}\), \(\{1457\}\) |
| 8 | \(\{17\}\), \(\{127\}\), \(\{157\}\), \(\{1257\}\) |
| 9 | \(\{123790\}\) |

Table 1: Integral families. An integral family is represented by a list \(\{ijk...\}\). Each number in the list gives the position "\(j\)" of a massive propagator \(D_{j}^{-1}\). The missing propagators are massless.
planar topologies while \(D_{8}\) is removed in non-planar diagrams. Analogous simple expressions have been derived for topologies 2, 4 and 6; the results for the amplitudes \(\mathcal{A}_{3}\), \(\mathcal{A}_{5}\), \(\mathcal{A}_{7}\), \(\mathcal{A}_{8}\) and \(\mathcal{A}_{9}\) are instead somewhat lengthy. All the amplitudes can be consulted at the following link [https://github.com/fisicateoticAUDP/HiggsSM](https://github.com/fisicateoticAUDP/HiggsSM), together with the list of good master integrals, the useful IBP reductions and the main Mathematica routines implemented to carry out this computation. In particular, the planar diagrams can be reduced to a superposition of 212 MIs, while the non-planar diagrams can be expressed in terms of 82 masters. Even though a good basis of MIs could be found with the help of Sabbah's theorem in this computation, when the number of energy scales is increased the coefficients of the master integrals get even worse and render any IBP reduction procedure inefficient. This kind of problem also appears in beyond-the-SM theories, as is the case of the SUSY calculations of \(M_{h}\), where the analogous contribution at order \(y_{t}^{6}\) is missing [44] and at least one additional scale (the squark mass scale) has to be included. Analytical approaches where an IBP reduction can be avoided and the amplitudes can be directly evaluated for an arbitrary number of energy scales, as is done for instance with the Loop-Tree Duality technique [45; 46], could be an interesting alternative.
## IV Numerical analysis
In this section we discuss the numerical evaluation of the three-loop Higgs self-energy corrections at O(\(y_{t}^{6}\)) obtained after summing the amplitudes \(\mathcal{A}_{j}\) of the 21 families reported in Table 1. The final amplitude of the genuine three-loop 1PI Higgs self-energy
\[\Sigma_{hh}^{(3l)}(s,Q,M_{t},y_{t})=\sum_{j}\mathcal{A}_{j}, \tag{13}\]
requires the evaluation of 294 MIs which are functions of the top quark mass \(M_{t}\) and the squared external momentum \(s\) of the self-energies. We set the value of the external momentum at the experimental central value of the Higgs boson mass \(M_{h}\), \(\sqrt{s}=125.09\) GeV [47]. In order to numerically generate the Laurent \(\varepsilon\)-expansion of each master integral, we have used the code FIESTA 5.0[48], which implements the sector decomposition approach. The expansion goes up to order \(\varepsilon^{0}\); the evanescent terms of order \(\varepsilon^{n}\) with \(n>0\) are not needed since the coefficients of the good master integrals do not contain poles at \(D=4\). Besides, the evaluation of the amplitude has to include the evolution of the top Yukawa coupling \(y_{t}\) and the mass parameter \(M_{t}\) as a function of the energy scale \(Q\) in the \(\overline{MS}\) scheme.
In this analysis we use the full three-loop \(\overline{MS}\) renormalization group equations (RGEs) of the SM parameters [49; 50; 51; 52; 53; 54; 55; 56] plus the \(O(\alpha_{s}^{5})\) QCD contributions to the strong coupling beta function [57; 58; 59; 60] and the \(O(\alpha_{s}^{5})\) QCD contributions to the beta functions of the Yukawa couplings [61; 62; 63]. This is in order to obtain the running of \(y_{t}\) from 10 to 500 GeV as is shown in FIG. 2. To draw the evolution we chose the initial benchmark model point specified on the top of the plot, which yields at \(Q_{0}=0.1731\) TeV the central values of the SM masses (\(M_{h}=125.1\) GeV, \(M_{t}=173.1\) GeV, etc.) as given in the last edition of the Review of Particle Properties [64].
The next plots also follow this boundary condition.
On the other hand, the top quark pole mass is evolved in the tadpole-free \(\overline{MS}\) scheme with the help of SMDR, as shown in FIG. 3, including the pure QCD 1-loop [65], 2-loop [66], 3-loop [67] and 4-loop [68; 69; 70; 71] contributions plus the non-QCD 1-loop, mixed EW-QCD 2-loop and full 2-loop EW [72] corrections to the top quark mass. Those contributions have also been computed in different renormalization schemes [73; 74; 75]; however, for our first numerical analysis of the Higgs corrections we use the black curve in FIG. 3, which contains all the contributions together in the tadpole-free scheme. A discussion of the differences between all the schemes for a running top quark mass is necessary, as is emphasized in [75], but this is left for a future publication. The other lines represent the theoretical predictions of \(M_{t}\) at different perturbative orders and are pictured to show that the pure QCD predictions have a very large scale dependence of a few GeV when \(Q\) is varied from 60 to 500 GeV; therefore the EW corrections cannot be neglected and must be included in the numerical analysis, as has already been pointed out in [72; 73; 74; 75], since our amplitudes are sensitive to the precise value of \(M_{t}\). When the full 2-loop EW contribution is added, the renormalization scale dependence decreases by about 97% in the range of \(Q\) considered.
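The paper solves the full three-loop RGE system; purely as an illustration of the qualitative running shown in FIG. 2, the following sketch integrates a one-loop truncation that keeps only the dominant QCD and top-Yukawa terms (a textbook simplification with rough \(\overline{MS}\) boundary values at the top-mass scale, not the system used in this work):

```python
# Sketch: one-loop running of g_3 and y_t keeping only QCD + top-Yukawa terms,
#   16 pi^2 dg3/dlnQ = -7 g3^3,   16 pi^2 dyt/dlnQ = yt (9/2 yt^2 - 8 g3^2).
import numpy as np
from scipy.integrate import solve_ivp

def beta(lnQ, y):
    g3, yt = y
    k = 1.0 / (16.0 * np.pi**2)
    return [-7.0 * k * g3**3, yt * k * (4.5 * yt**2 - 8.0 * g3**2)]

lnQ0 = np.log(173.1)          # boundary scale near M_t
y0 = [1.16, 0.93]             # rough MS-bar values of g3(M_t) and y_t(M_t)

up = solve_ivp(beta, (lnQ0, np.log(500.0)), y0, dense_output=True)
down = solve_ivp(beta, (lnQ0, np.log(60.0)), y0, dense_output=True)

for Q in (60.0, 173.1, 500.0):
    sol = up if Q >= 173.1 else down
    print(f"Q = {Q:6.1f} GeV   y_t = {sol.sol(np.log(Q))[1]:.3f}")
```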
Finally, we study the numerical behaviour of the resulting new contributions to the Higgs self-energies
Figure 2: Renormalization group evolution of the top Yukawa coupling \(y_{t}\) in the \(\overline{MS}\) scheme including the full 3-loop RGEs for all the SM parameters and the QCD beta functions of \(y_{t}\) and \(\alpha_{s}\) up to 5 loops. Here \(g_{1}\) and \(g_{2}\) stand for the EW gauge couplings, \(v\) is the Higgs vev and \(\lambda\) represents the quartic Higgs self-coupling.
containing the full momentum dependence, which are obtained from the difference
\[\Delta M_{h}=\text{Re}\left[\Sigma_{hh}^{(3l)}(p^{2}=M_{h}^{2})-\Sigma_{hh}^{(3l)} (p^{2}=0)\right]. \tag{14}\]
In FIG. 4, \(\Delta M_{h}\) is shown as a function of the renormalization scale from \(Q=60\) GeV to \(Q=500\) GeV. The plot includes the real contributions from the finite part (black curve) and from the coefficients of the simple (yellow) \(\frac{1}{\varepsilon}\), double (green) \(\frac{1}{\varepsilon^{2}}\) and triple (red) \(\frac{1}{\varepsilon^{3}}\) poles separately. Note from FIG. 2 that the coupling \(y_{t}\) leaves the perturbative regime below \(Q=60\) GeV, and therefore this region was excluded from the analysis.
The coefficients of the poles have a mild dependence on the renormalization scale. The triple pole coefficient varies by about \(0.5\) MeV for \(60\text{ GeV}\leq Q\leq 500\) GeV; in this case the dependence on \(Q\) is not explicit, the variation being due to the RG evolution of \(y_{t}\) and \(M_{t}\). The double pole coefficient contains an explicit logarithmic dependence on \(Q\), implying a variation of about \(1.5\) MeV. The simple pole coefficient contains a squared logarithmic dependence on \(Q\), which amounts to a variation of about \(6.2\) MeV. Finally, the finite part has a size of about \(51\) MeV for \(Q=173.1\) GeV and contains a significant renormalization scale dependence: it decreases by about \(47\%\) over the complete \(Q\) range considered. In particular, when \(Q\) is varied around the EW scale from \(100\) GeV to \(300\) GeV, the correction is reduced by about \(16\) MeV, which is of the same order of magnitude as the anticipated experimental precision at the HL-LHC (\(10-20\) MeV [76]) and at the future colliders ILC (\(14\) MeV [77]) and FCC-ee (\(11\) MeV [78]). The inclusion of the new three-loop corrections \(\Delta M_{h}\) into the complex pole mass, \(s_{pole}^{h}\), of the SM Higgs boson and the further analysis of the numerical impact on the theoretical prediction of the Higgs boson pole mass are non-trivial tasks. They require the iterative evaluation of the MIs and amplitudes at \(s=\text{Re}(s_{pole}^{h})\), instead of the naive evaluation at \(s=M_{h}^{2}\), and an additional prescription for the renormalization of the UV sub-divergences in order to get the correct values of the \(M_{h}\) predictions at three-loop level. The numerical evaluation of the Higgs boson pole mass, including the pure three-loop corrections presented in this work, will be done in a future analysis.
## V Conclusions and perspectives
In this article we have presented a new contribution to the SM Higgs boson mass perturbative corrections coming from the pure three-loop Higgs self-energies at order \(y_{t}^{6}\), including the external momentum dependence. This implies a Feynman diagrammatic evaluation of eight planar and one non-planar topologies with only cubic vertices and a fermion loop in the internal lines. The Higgs self-energies do not contain the tadpole contributions since the renormalized vev of the Higgs field is considered as the minimum of the Higgs effective potential. As a consequence, the considered contributions have a good perturbative behaviour but acquire an additional gauge dependence; we have used the Landau gauge in order to reduce the number of energy scales in the Feynman amplitudes. Besides, we worked in the gaugeless and non-light fermion limits where the EW vector boson masses and all the light fermion masses are disregarded; thus, the final result is expressed in terms of the top quark mass \(M_{t}\) and the Higgs boson mass \(M_{h}\). The DREG procedure was adopted in order to regularize the Feynman amplitudes associated with the Higgs self-energies; in particular, a non-cyclicity prescription was applied to deal with the regularization of the \(\gamma_{5}\) matrix. The resulting regularized amplitudes are expressed in terms of thousands of scalar integrals which are reduced to a superposition of a basis of master integrals through the IBP and LI identities implemented in the code Reduze. This automated
Figure 4: Renormalization group scale dependence coming from the external momentum contribution to the three-loop Higgs self-energy correction at order \(y_{t}^{6}\) in the SM. The evolution of the finite part and the coefficients of the simple, double and triple poles have been included.
Figure 3: Evolution of the top quark mass \(M_{t}\) as a function of the renormalization scale \(Q\) in the \(\overline{MS}\) scheme. The different perturbative contributions are shown. In particular, the black line contains the 4-loop QCD and the full 2-loop EW corrections.
reduction leads to a set of master integrals whose coefficients are large and exhibit kinematic singularities and non-kinematic divergences at \(D=4\) space-time dimensions. The above-mentioned singular behaviour, as well as the length of the expressions of the coefficients, gets worse when the number of scales is increased. However, we have shown that those divergences are spurious and can be removed with a good redefinition of a suitable basis, whose existence is guaranteed by Sabbah's theorem. The expressions obtained for the amplitudes of the involved topologies are thus linear combinations of a set of 212 planar and 82 non-planar good MIs with coefficients that do not contain poles at \(D\to 4\); this has the advantage that the evanescent terms of the Laurent expansion of the masters are not required. A first numerical analysis allows us to measure the size of the new momentum-dependent Higgs self-energy contributions, showing a value of \(\sim 51\) MeV at the benchmark model point which produces the experimental values of the SM masses, but it also displays a significant renormalization scale dependence of a few tens of MeV, which is of the same order of magnitude as the expected precision at the coming collider experiments.
Several research perspectives are left open for future work. The inclusion of the new momentum-dependent corrections into the complex mass pole of the Higgs propagator and the study of the numerical impact on the theoretical prediction, together with the perturbative stability of the \(\overline{MS}\) renormalization of the Higgs mass, will be addressed in a forthcoming publication. A further numerical analysis including the different renormalization prescriptions for the top quark mass must also be considered. Besides, the routines developed for this computation will be extended to include the quantum corrections to the SM gauge boson masses \(M_{Z}\) and \(M_{W}\) at the same perturbative order considered here. An extension of the momentum-dependent Higgs self-energies at order \(y_{t}^{6}\) to include supersymmetric contributions coming from the stop sector of the MSSM in the Dimensional Reduction scheme [27] is also under consideration. The theoretical uncertainties in the MSSM scenarios amount to between 1 and 5 GeV, which is one order of magnitude greater than the experimental error in \(M_{h}\); in this context the calculation of missing higher-order corrections is mandatory. This implies, nevertheless, the inclusion of at least one additional scale, the SUSY scale, and therefore we finally point out that an alternative approach to the IBP reductions must be considered to deal with the problem of the large divergent MI coefficients; this holds in general for higher-order perturbative calculations involving an arbitrary number of energy scales.
| The search for new physics signals in Higgs precision measurements plays a pivotal role in the High-Luminosity Large Hadron Collider (HL-LHC) and future collider programs. The Higgs properties are expected to be measured with higher precision, requiring higher-order perturbative computations of the electroweak parameters on the theory side. In particular, the renormalized Higgs boson mass parameter in the Standard Model shows significant variation around the electroweak scale, implying a lower-bound theoretical uncertainty that exceeds future collider expectations. A more stable result under the renormalization group can be computed from a non-zero external momentum Higgs self-energy, for which available calculations include 3-loop corrections in the QCD sector. In this work, we present an additional contribution by estimating the leading non-QCD 3-loop corrections to the mass of the Higgs boson in the top-Yukawa sector of order $y_t^6
2303.18040 | Investigating the amplitude and rotation of the phase spiral in the
Milky Way outer disc | Context: With the data releases from the astrometric space mission Gaia, the
exploration of the structure of the Milky Way has developed in unprecedented
detail and unveiled many previously unknown structures in the Galactic disc and
halo. One such feature is the phase spiral where the stars in the Galactic disc
form a spiral density pattern in the $Z-V_Z$ plane. Aims: We aim to
characterize the shape, rotation, amplitude, and metallicity of the phase
spiral in the outer disc of the Milky Way. This will allow us to better
understand which physical processes caused the phase spiral and can give
further clues to the Milky Way's past and the events that contributed to its
current state. Methods: We use Gaia data release 3 (DR3) to get full position
and velocity data on approximately 31.5 million stars, and metallicity for a
subset of them. We then compute the angular momenta of the stars and develop a
model to characterise the phase spiral in terms of amplitude and rotation at
different locations in the disc. Results: We find that the rotation angle of
the phase spiral changes with Galactic azimuth and Galactocentric radius,
making the phase spiral appear to rotate about $3^\circ$ per degree in Galactic
azimuth. Furthermore, we find that the phase spiral in the $2200 - 2400$ kpc km
s$^{-1}$ range of angular momentum is particularly strong compared to the phase
spiral that can be observed in the solar neighbourhood. The metallicity of the
phase spiral appears to match that of the Milky Way disc field stars.
Conclusions: We created a new model capable of fitting several key parameters
of the phase spiral. We have been able to determine the rotation rate of the
phase spiral and found a peak in the phase spiral amplitude which manifests as
a very clear phase spiral when using only stars with similar angular momentum. | S. Alinder, P. J. McMillan, T. Bensby | 2023-03-31T13:20:21 | http://arxiv.org/abs/2303.18040v3 | # Investigating the amplitude and rotation of the phase spiral
###### Abstract
Context:With the data releases from the astrometric space mission _Gaia_, the exploration of the structure of the Milky Way has developed in unprecedented detail and unveiled many previously unknown structures in the Galactic disc and halo. One such feature is the _Gaia_ phase spiral where the stars in the Galactic disc form a spiral density pattern in the \(Z-V_{Z}\) plane. Many questions regarding the phase spiral remain, particularly how its amplitude and rotation change with position in the Galaxy.
Aims:We aim to characterize the shape, rotation, amplitude, and metallicity of the phase spiral in the outer disc of the Milky Way. This will allow us to better understand which physical processes caused the phase spiral and can give further clues to the Milky Way's past and the events that contributed to its current state.
Methods:We use _Gaia_ data release 3 (DR3) to get full position and velocity data on approximately 31.5 million stars, and metallicity for a subset of them. We then compute the angular momenta of the stars and develop a model to characterise the phase spiral in terms of amplitude and rotation at different locations in the disc.
Results:We find that the rotation angle of the phase spiral changes with Galactic azimuth and Galactocentric radius, making the phase spiral appear to rotate with these quantities. Furthermore, we find that the phase spiral in the \(2200-2400\) kpc km s\({}^{-1}\) range of angular momentum is particularly strong compared to the phase spiral that can be observed in the solar neighbourhood. The metallicity of the phase spiral appears to match that of the Milky Way disc field stars.
Conclusions:We created a new model capable of fitting several key parameters of the _Gaia_ phase spiral. We have been able to determine the rotation rate of the phase spiral to be about \(2^{\circ}\) per degree in Galactic azimuth. We find a peak in the amplitude of the phase spiral at \(L_{Z}\approx 2300\) km kpc s\({}^{-1}\) which manifests as a very clear phase spiral when using only stars with similar angular momentum. These results provide insights into the physical processes that led to the formation of the phase spiral and contribute to our understanding of the Milky Way's past and present state.
## 1 Introduction
How large spiral galaxies form and which processes contribute to their formation are open questions. By studying the structure of our own galaxy, the Milky Way, we can find traces of these processes and start to piece together its formation history. However, detailed structures that carry signatures of galaxy evolution and accretion events tend to phase mix and disappear with time. The outer disc of the Galaxy has longer dynamical timescales, meaning that dynamical and physical structures there remain for longer times (Freeman & Bland-Hawthorn, 2002). Therefore, the outer Galactic disc is a good place to study when trying to answer questions about the Milky Way's past.
The European Space Agency's _Gaia_ mission (Gaia Collaboration et al., 2016) has provided accurate astrometric data for almost two billion stars in the Milky Way, and its different data releases (DR1, Gaia Collaboration et al., 2016; DR2, Gaia Collaboration et al., 2018) have allowed us to reveal ever more detailed and delicate structures in our Galaxy. Examples include the _Gaia_-Enceladus-Sausage, the remnants of an ancient merger with a massive galaxy (Belokurov et al., 2018; Helmi et al., 2018); the Radcliffe wave, a large nearby structure of gas that contains several stellar nurseries (Alves et al., 2020); the three-dimensional velocities of stars in the satellite dwarf galaxy Sculptor, allowing a close look at the kinematics of a dark matter dominated system (Massari et al., 2018); many details about the structure of the Galactic halo leading to insights into its formation (Helmi et al., 2017); and the phase spiral (or 'snail shell'), a spiral pattern that was discovered by Antoja et al. (2018) in the phase plane defined by the vertical distance from the Galactic plane (\(Z\)) and the vertical velocity component (\(V_{Z}\)).
The existence of the phase spiral physically means that the distribution of the \(V_{Z}\)-velocities for the stars at certain \(Z\)-positions is uneven in a way that looks like a spiral when plotted on a phase space diagram. For example, when looking at stars in the solar neighbourhood with \(Z\approx 0\) pc, there are more stars with \(V_{Z}\approx-20\) km s\({}^{-1}\) and fewer stars with \(V_{Z}\approx-15\) km s\({}^{-1}\) than expected from a smooth symmetrical distribution. The phase spiral was mapped within a Galactocentric range of \(7.2<R/\) kpc \(<9.2\) and within \(15^{\circ}\) of the anti-centre direction (opposite to the Galactic centre) by Bland-Hawthorn et al. (2019), to \(6.6<R/\) kpc \(<10\) by Laporte et al. (2019), to \(6.34<R/\) kpc \(<12.34\) by Wang et al. (2019), and Xu et al. (2020) extended the furthest outer detection to 15 kpc from the Galactic centre. When investigations and simulations of the phase spiral were done across a larger range of positions in the Galaxy, these studies found that the phase spiral changes shape with Galactocentric radius. Close to the solar radius, it has a greater extent in the \(V_{Z}\) direction, and
at greater Galactocentric radii it has a larger extent in the \(Z\) direction. This increase in vertical extent at greater Galactocentric distances is due to the change in gravitational potential and a reduction in vertical restoring force.
The phase spiral is thought to be a response of the Galactic disc to a perturbation that pushed it out of equilibrium. This response, over time, winds up in the \(Z\)-\(V_{Z}\) plane into a spiral due to phase-mixing. In this simple picture, the time since the perturbation determines how wound the phase spiral has become, while any variation with Galactic azimuth, such as a rotation of the phase spiral in the \(Z\)-\(V_{Z}\) plane, corresponds to a difference in the initial perturbation felt by stars at different azimuths. Wang et al. (2019) looked at the phase spiral at different Galactic azimuths and found that the amplitude of the spiral pattern changes. Widmark et al. (2022) show that the orientation of the phase spiral changes with Galactic azimuth and that the difference across 180\({}^{\circ}\) of the Galactic azimuth in a heliocentric system will be about 140\({}^{\circ}\). They show a very slight positive change in angle with radial distance, but only in cells they have marked as less reliable (see Widmark et al. 2022, Figs. D.1 and D.2 for details). Bland-Hawthorn and Tepper-Garcia (2021) show the rotation of the phase spiral at different galactic azimuths in their N-body simulation of the effects of the passage of the Sagittarius dwarf galaxy on the disc. The rotation of the phase spiral is an important part of any attempt at modelling it directly, and an important property to capture in any simulation because it is tied to the potential of the disc. In this study, we will present measurements of the propagation of the rotation angle of the phase spiral.
The chemical composition of the phase spiral was investigated by Bland-Hawthorn et al. (2019) using elemental abundances from the GALAH survey (Buder et al., 2018). They found no evidence that the phase spiral is a single-age population (such as a star cluster or similar) because the trend in metallicity is smoothly varying. This indicates that the stars in the phase spiral are part of the general population of the Milky Way disc. An (2019), using data from APOGEE DR14 (Abolfathi et al., 2018), examined the metallicity of the disc and found an asymmetry in the \(Z\)-direction, with higher mean metallicity above the plane of the Galaxy than below. They explain this asymmetry as being caused by the phase spiral, as it would push stars to greater \(Z\)-distances. These results are reported as being in agreement with the findings of Bland-Hawthorn et al. (2019). In this study, we use global metallicity data on a large number of stars to investigate the chemical properties of the phase spiral.
Several theories for the origin of the phase spiral exist in the literature. Among the proposed scenarios, the most popular one is that the phase spiral was caused by gravitational interactions between the Milky Way and a massive external object. The primary observational evidence for this scenario is the presence of the Sagittarius dwarf galaxy (Ibata et al., 1994), which is undergoing disruption by the Milky Way (Binney and Schonrich, 2018; Laporte et al., 2019; Bland-Hawthorn et al., 2019). If the Sagittarius dwarf galaxy is the cause, then the properties of the phase spiral and the properties of the Sagittarius dwarf galaxy at the time when the interaction took place are linked, and knowledge of one can be used to derive the properties of the other, for example, the mass history of the Sagittarius dwarf galaxy, and the time of impact (Bland-Hawthorn and Tepper-Garcia, 2021). Darling and Widrow (2019) discuss the possibility that the phase spiral is caused by bending waves (physical displacement of stars). Several phenomena can cause these waves, including dwarf galaxy impacts and gravitational effects from the bar or spiral structure of the Galaxy. Frankel et al. (2022) and Antoja et al. (2022) both find that a simple model with a single cause for the perturbation fails to explain the observations and calls for more complex models. Hunt et al. (2022); Bennett et al. (2022) and Tremaine et al. (2023) suggest, in different ways, that the formation history of the phase spiral cannot be explained with a single impact but perhaps rather originates from several small disturbances.
The primary goal of this paper is to map the rotational angle, amplitude, and chemical composition of the phase spiral. By using the most recent data from _Gaia_, DR3, we aim to investigate these properties in higher definition than before. As we learn more about the extent, amplitude, rotation, and shape of the phase spiral, we might be able to strengthen the evidence for one of the proposed formation scenarios, leading to a greater understanding of the formation history of the Milky Way. We start by presenting how the stellar sample is selected in Sect. 2. In Sect. 3 we develop the model that we use to analyse the phase spiral and how it changes across the Galactic disc. In Sect. 3.6 we examine the chemical composition of the phase spiral and in Sect. 4 we discuss our results. Finally, we summarise our findings and conclusions in Sect. 5.
## 2 Data
To study the phase spiral we need stars with known three-dimensional velocities. We use _Gaia_ DR3 (Gaia Collaboration et al., 2016, 2022) to get positions, proper motions, and radial velocities for the stars. The distances were calculated by Bailer-Jones et al. (2021) who used a Bayesian approach with a direction-dependant prior on distance, the measured parallaxes, and _Gaia_ photometry, exploiting the fact that stars of different colours have different ranges of probable absolute magnitudes. The ADQL-query used to retrieve this data from the public _Gaia_ database1 was:
Footnote 1: [https://gea.esac.esa.int/archive/](https://gea.esac.esa.int/archive/)
```
SELECT source_id, ra, dec, pmra, pmdec, r_med_photogeo, radial_velocity
FROM external.gaiaedr3_distance
JOIN gaiadr3.gaia_source USING (source_id)
WHERE parallax_over_error >= 3
AND radial_velocity IS NOT NULL
AND r_med_photogeo IS NOT NULL
```

This query resulted in 31,552,449 stars being selected. We use parallax_over_error >= 3 as this removes the most uncertain distance measurements.
For the chemical investigation, we use the global metallicity [M/H] data from _Gaia_ DR3 RVS spectra (Recio-Blanco et al., 2022) with the ADQL-query:
```
SELECT source_id, mh_gspspec, flags_gspspec
FROM gaiadr3.astrophysical_parameters
JOIN gaiadr3.gaia_source USING (source_id)
WHERE parallax_over_error >= 3
AND teff_gspspec > 3500
AND logg_gspspec BETWEEN 0 AND 5
AND teff_gspspec_upper - teff_gspspec_lower < 750
AND logg_gspspec_upper - logg_gspspec_lower < 1
AND mh_gspspec_upper - mh_gspspec_lower < 5
AND mh_gspspec IS NOT NULL
AND radial_velocity IS NOT NULL
```

This query resulted in 4,516,021 stars being selected. We use quality cuts as recommended by Recio-Blanco et al. (2022) combined with those used for the main sample. These cuts remove
stars with low temperatures, as these stars are known to have complex, crowded spectra, and stars whose \(\log(g)\) and \(T_{\rm eff}\) confidence intervals are too wide. We also filter out the least reliable K and M-type giant stars using the supplied flag, as there exists a parameterisation problem for cool and metal-rich stars of this type. The final sample for the chemical investigation consists of this table combined with the previous one, to get positions, velocities and spectral data in the same sample, and contains 4,303,484 stars after the quality cuts.
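The combination of the two query results amounts to a simple join on source_id; a minimal pandas sketch, assuming both tables were saved locally (the file names are placeholders):

```python
# Sketch: combine the kinematic and metallicity query results on source_id.
import pandas as pd

kin = pd.read_csv("gaia_dr3_kinematics.csv")   # 31.5M-row kinematic sample
met = pd.read_csv("gaia_dr3_gspspec_mh.csv")   # 4.5M-row metallicity sample

# Inner join keeps only stars present in both tables.
sample = kin.merge(met, on="source_id", how="inner")
print(len(sample))  # ~4.3M stars after the combined quality cuts
```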
For reasons given in Sect. 3, we will base our analysis on samples defined by the angular momenta of the stars. The angular momentum2 is computed as \(L_{Z}=R\,|V_{\phi}|\).
Footnote 2: We use a Galactocentric coordinate system centred on the Galactic centre with the Sun on the (negative) \(X\)-axis at a distance of 8.122 kpc and a height of 20.8 pc, with the \(Y\)-axis pointing towards \(l=90^{\circ}\) and the \(Z\)-axis pointing to \(b=90^{\circ}\). Galactic azimuth (\(\phi\)) is decreasing in the direction of Galactic rotation and the Sun is located at \(\phi=180^{\circ}\). The velocity of the Sun is \([V_{R,\odot}=-12.9,V_{\phi,\odot}=-245.6,V_{Z,\odot}=7.78]\) km s\({}^{-1}\) (Reid & Brunthaler, 2004; Drimmel & Poggio, 2018; GRAVITY Collaboration et al., 2018; Bennett & Bovy, 2019). For the computations and definitions of coordinates, we use Astropy v5.2 (Astropy Collaboration et al., 2022).
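For concreteness, a minimal sketch of this computation with Astropy, using the frame parameters from the footnote (the input star is a placeholder; Astropy's Galactocentric frame uses its own sign convention, with all solar velocity components positive, which is why \(L_Z\) is taken as an absolute value as in the text):

```python
# Sketch: Gaia observables -> Galactocentric frame -> L_Z = R |V_phi|.
import numpy as np
import astropy.units as u
import astropy.coordinates as coord

frame = coord.Galactocentric(
    galcen_distance=8.122 * u.kpc,
    z_sun=20.8 * u.pc,
    galcen_v_sun=coord.CartesianDifferential([12.9, 245.6, 7.78] * u.km / u.s),
)

star = coord.SkyCoord(
    ra=120.0 * u.deg, dec=-30.0 * u.deg, distance=1500.0 * u.pc,
    pm_ra_cosdec=-2.0 * u.mas / u.yr, pm_dec=1.0 * u.mas / u.yr,
    radial_velocity=15.0 * u.km / u.s,
)
gc = star.transform_to(frame)

# R |V_phi| equals |x v_y - y v_x| in the Galactocentric plane.
L_Z = np.abs(gc.x * gc.v_y - gc.y * gc.v_x).to(u.kpc * u.km / u.s)
print(L_Z)
```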
The distribution of the stars in the Galactocentric Cartesian \(X\)-\(Y\) plane is shown in Fig. 1. We can see that the sample mostly contains stars with Galactocentric distances \(5-12\) kpc. This allows us to study the phase spiral in regions far from the solar neighbourhood and measure how it changes with location. The top row shows the full sample to the left and the sample of stars with [M/H] values to the right. The bottom row is split into three bins with different angular momentum. In the bin with the highest angular momentum (right-most panel), most stars are \(\sim\)2 kpc further out than those in the low angular momentum bin (left-most panel).
## 3 The Gaia phase spiral
Figure 2 shows the density of stars within \(5^{\circ}\) of the anti-centre direction plotted in the \(L_{Z}\)-\(V_{Z}\) plane. The thick line is the Galactic disc, and the "\(V_{Z}\) feature 2" at \(L_{Z}~{}\approx~{}2700\) kpc km s\({}^{-1}\) is the bifurcation which was discovered by Gaia Collaboration et al. (2021) and investigated by McMillan et al. (2022), who found that it may be an effect of the passage of the Sagittarius dwarf galaxy. Several other features can be seen, but a particularly clear one that we focus on is the wrinkle labelled "\(V_{Z}\) feature 1" at \(L_{Z}\approx 2300\) kpc km s\({}^{-1}\) and the apparent overdensity centred on \((L_{Z},V_{Z})=(2300\) kpc km s\({}^{-1}\), \(-20\) km s\({}^{-1}\)). These regions and features are marked with lines in Fig. 2. Finding this seemingly isolated overdensity of stars sitting below the thick line was surprising since the stars otherwise show a smooth falloff from the centre in the vertical directions. As we will show, the cause for the highlighted overdensity and \(V_{Z}\) feature 1 in Fig. 2 seems to be that, in the range \(2200<L_{Z}/\) kpc km s\({}^{-1}<2400\), a higher proportion of the stars are part of the phase spiral, giving the stars in that region an unusual \(V_{Z}\) distribution and a very prominent phase spiral.
### Selection of stars
For an investigation of a structure of stars like the phase spiral, a velocity-dependent selection will produce a more sharply defined phase spiral than a position-dependent selection, because the phase spiral is a dynamical structure (Bland-Hawthorn et al., 2019; Li, 2021; Antoja et al., 2022). Samples based on position will contain stars with a wide range of velocities and orbital frequencies, meaning one will indirectly sample a large part of the Galaxy, which can be useful when addressing other research questions (Hunt et al., 2021; Gandhi et al., 2022), such as those in Bensby et al. (2014) where relatively nearby stars were sampled
Figure 1: Top middle panel: Number density of stars used in our investigation in the Galactocentric Cartesian \(X\)-\(Y\) plane. This panel contains stars in the \(2000<L_{Z}/\) kpc km s\({}^{-1}<2600\) range. Top right panel: Number density of stars within our sample that have global metallicity [M/H] values. Bottom panels: Number density of the selected stars in the used angular momentum bins in the \(X\)-\(Y\) plane. The circled red dot is the location of the Sun in all panels. The bin size for all panels is 200 pc by 200 pc.
to map the age and abundance structure of several components of the Milky Way.
Here we do a quick comparison of samples selected by Galactocentric position and by angular momentum. Using the Galactic potential from McMillan (2017), we compute the guiding centres for hypothetical stars with \(L_{Z}=2200\) kpc km s\({}^{-1}\) and \(L_{Z}=2400\) kpc km s\({}^{-1}\) to be \(R\approx 9.5\) kpc and \(R\approx 10.4\) kpc, respectively. The \(Z\)-\(V_{Z}\) phase space for the stars between these Galactocentric radii is shown in Fig. 3a, and the same for stars in the angular momentum range in Fig. 3b. The phase spiral based on stars in the \(9.5<R/\) kpc \(<10.4\) range is visible but less clear, while the phase spiral based on stars in the \(2200<L_{Z}/\) kpc km s\({}^{-1}<2400\) range is more prominent. This is because panel a contains stars that are part of different-looking phase spirals, making the stars fill in the gaps in each other's patterns, whereas panel b mostly contains stars that are part of one single phase spiral, making the pattern clear. Panel a contains a total of 1,045,921 stars, while panel b contains 1,348,886 stars.
### Model of the phase spiral
To quantify the properties of the phase spiral as functions of \(R,\phi\) and \(L_{Z}\) we construct a model inspired by those used by Widmark et al. (2021) and Guo et al. (2022). The model is built by creating a smoothed background from observed data, then a spiral-shaped perturbation that, when multiplied by the background, recreates
Figure 3: a: The number density of stars in the \(Z\)-\(V_{Z}\) phase plane in the \(9.5<R/\) kpc \(<10.4\) range. b: The number density of stars in the \(Z\)-\(V_{Z}\) phase plane in the \(2200<L_{Z}/\) kpc km s\({}^{-1}<2400\) range. a shows a less clearly defined phase spiral than b.
Figure 2: Column normalized histogram of star number density in the \(L_{Z}-V_{Z}\) plane in the Galactic outer disc. The region of interest is marked by solid white lines at \(L_{Z}=[2200,2400]\) kpc km s\({}^{-1}\), with dashed white lines at \(L_{Z}=2000\) and \(2600\) kpc km s\({}^{-1}\) marking the areas used for comparisons in Figs. 1, 11, and 13. Features mentioned in the text are also marked. The figure contains all stars in our sample with \(175^{\circ}<\phi<185^{\circ}\), 12,723,513 in total.
the observed distribution of stars in the \(Z\)-\(V_{Z}\) plane. This way, the spiral can be isolated and quantified. In this model, we compute values for the phase distance \(r\) and the phase angle \(\theta\) by using
\[r=r(Z,V_{Z})=\sqrt{Z^{2}+\left(\frac{V_{Z}}{S}\right)^{2}}, \tag{1}\]
\[\theta=\theta(Z,V_{Z})=\arctan\left(\frac{1}{S}\frac{V_{Z}}{Z}\right), \tag{2}\]
where \(S\) is a scale factor which determines the ratio between the axes and is a free parameter in the model. These coordinates are illustrated in Fig. 4 with a simple diagram. A larger value of \(S\) stretches the \(V_{Z}\) axis, thus controlling the axis ratio of the spiral; see panel E of Fig. 5. A star experiencing harmonic motion in \(Z\) will trace a circle in the phase plane for the right value of \(S\), so \(S\) is closely related to the vertical frequency of oscillations in the Galactic disc. We therefore restrict it to \(30<S<70\), where \(S\) is in units of \(\mathrm{km\,s^{-1}\,kpc^{-1}}\).
Starting from Guo et al. (2022)'s discussion of the shape of the phase spiral, we consider the quadratic spiral
\[r=a+b\phi_{s}+c\phi_{s}^{2}, \tag{3}\]
where \(\phi_{s}\) is the angle of the spiral. They claim that an Archimedean spiral3\((a=0,c=0)\) fits the data well enough. We, however, found that our model fits better when we do not require that \(c=0\). We can assume \(a=0\) without loss of generality. As we construct the model we will be referring to Fig. 5 for illustrations of the effects of each parameter. Figure 5 contains six panels. Panel A shows the spiral perturbation for a certain set of parameters. Each of the other panels shows the spiral perturbation with one parameter increased, and will be referred to as that parameter is introduced. We write the equation for the radial distance of the phase spiral as
Footnote 3: An Archimedean spiral is a spiral with a linear relation between angle and radial distance. Expressed in polar coordinates the spiral traces \(r=b\phi_{s}\).
\[r=b\phi_{s}+c\phi_{s}^{2}, \tag{4}\]
which means
\[\phi_{s}(r)=-\frac{b}{2c}+\sqrt{\left(\frac{b}{2c}\right)^{2}+\frac{r}{c}}. \tag{5}\]
The parameter \(b\) is the linear winding term of the spiral. Higher values of \(b\) mean the spiral winds slower and moves further in \(r\) per turn, see the top-middle panel in Fig. 5. The value of \(b\) has to be positive and by inspection we find it to provide reasonable results between 0.01 and 0.175. \(c\) is the quadratic winding term. It has a similar meaning to \(b\) except it does not act equally at all \(r\), having a smaller effect close to the middle of the spiral and a greater further out, see the top-right panel of Fig. 5. \(c=0\) means that the spiral is Archimedean and its radius has a constant increase with increasing angle. \(c\) has to be positive and, by inspection, we find that by limiting its upper value to 0.005 we get reasonable results.
Following Widmark et al. (2021) we take the form of the perturbation to be
\[f(r,\Delta\theta)=1-\alpha\cdot\mathrm{mask}(r)\cos(\Delta\theta), \tag{6}\]
where \(\alpha\) is a free parameter of the model that defines the amplitude of the phase spiral. This spiral perturbation can have values in the range \(1+\alpha\) to \(1-\alpha\). If \(\alpha=0\), the smoothed background is unperturbed by the spiral; if \(\alpha=1\), there are no stars that are not part of the spiral. We define \(\Delta\theta\) as the phase angle relative to the peak of the perturbation as a function of phase distance as
\[\Delta\theta=\theta-\phi_{s}(r)-\theta_{0}, \tag{7}\]
where \(\theta_{0}\) is the angle offset, which is a free parameter, giving us
\[f(r,\theta)=1-\alpha\cdot\mathrm{mask}(r)\cos(\theta-\phi_{s}(r)-\theta_{0}). \tag{8}\]
| Name | min | max | Description | Unit |
| --- | --- | --- | --- | --- |
| \(\alpha\) | 0 | 1 | Amplitude of spiral pattern | – |
| \(b\) | 0.01 | 0.175 | Linear winding parameter | pc rad\({}^{-1}\) |
| \(c\) | 0.0 | 0.005 | Quadratic winding parameter | pc rad\({}^{-2}\) |
| \(\theta_{0}\) | \(-\pi\) | \(\pi\) | Angle offset | rad |
| \(S\) | 30 | 70 | Scale factor | km s\({}^{-1}\) kpc\({}^{-1}\) |
| \(\rho\) | 0 | 0.3 | Inner mask distance | kpc |

Table 1: Free parameters in the model of the phase spiral.
Figure 4: Illustration of the phase-plane coordinates used. \(r\) is the phase distance and \(\theta\) is the phase angle. In this example, \(\theta=45^{\circ}\). The scale factor \(S\) has been chosen such that the \(r\)-vector could be drawn with constant length regardless of angle.
The innermost part of the phase spiral cannot be accurately fitted with this kind of model because the part of the Galactic disc with low \(Z\)-displacement and velocity is subject to small perturbations which wash out the phase spiral. We therefore apply a mask to reduce the strength of the spiral perturbation in this region. Like Widmark et al. (2021), we use the logistic function (a sigmoid function) for our masking function. The logistic function has the property that it is bounded by zero and one, and smoothly (exponentially) changes between them, thereby bringing any value into the zero-to-one range in a naturalistic way. We define the masking function as
\[\text{mask}_{\text{inner}}(r)=\text{sigm}\left(\frac{r-\rho}{0.1\text{ kpc}}\right), \tag{9}\]
where
\[\text{sigm}(x)=\frac{1}{1+e^{-x}}, \tag{10}\]
is the sigmoid function and \(\rho\) is the radius of the mask, which is a free parameter in the model. The mask reduces the impact of the inner part of the spiral by "flattening" it, bringing it closer to one, see the bottom-right panel of Fig. 5. A larger value of \(\rho\) means a larger part of the spiral is flattened. By inspection, we find that this value should be less than 0.3 which we apply as a prior.
We also use an outer mask to reduce the influence of the most distant regions of the phase plane with very few stars. Similarly
Figure 5: Examples of the effects on the spiral perturbation when changing (increasing) the different parameters in the model. Panel A) shows the spiral perturbation for a certain set of parameters. This is taken as the default for the comparison in this figure. Panel B) shows the spiral perturbation with an increased linear winding parameter. Panel C) shows the spiral perturbation with an increased quadratic winding parameter. Note that the inner part of the spiral is still similar to panel A). Panel D) shows the spiral perturbation with an increased phase angle, rotating it half a revolution. Panel E) shows the spiral perturbation with an increased scale factor which increases the \(V_{Z}\)-\(Z\) axis ratio. Panel F) shows the spiral perturbation with the inner mask distance increased which makes the inner parts less distinct.
Figure 6: Example of the process from data to fitted model. a: Data used for the model, a two–dimensional histogram showing number density consisting of 1,396,320 stars, b: Initial background, c: Data / initial background, d: Extracted spiral perturbation, e: Initial fit spiral. f: Final background, g: Data / final background, h: Best fit spiral. See text for details on individual panels. This example consists of stars with \(8.4<R/\text{kpc}<10.4\) and \(165^{\circ}<\phi<195^{\circ}\).
to Widmark et al. (2021) we use
\[{\rm mask}_{\rm outer}(Z,V_{Z})=\frac{1}{2}\left[1-{\rm sign}\left(\left(\frac{Z}{1\,{\rm kpc}}\right)^{2}+\left(\frac{V_{Z}}{40\,{\rm km\,s}^{-1}}\right)^{2}-1\right)\right], \tag{11}\]
for the outer mask. This mask is applied to both data and model when evaluating how good the fit is. Note that these two masks have different purposes. The inner mask is only applied to the model to reduce the strength of the perturbation in a small area, while the outer mask reduces the importance of the outermost data in our results.
Combining Eqs. 5, 8, and 9 gives the spiral perturbation as
\[f(r,\theta)=1-\alpha\cdot{\rm sigm}\left(\frac{r-\rho}{0.1\,{\rm kpc}}\right)\cos(\theta-\phi_{s}(r)-\theta_{0}), \tag{12}\]
where \(\alpha,\rho\), and \(\theta_{0}\) as well as \(b\) and \(c\) are free parameters of the model. The prior we use is based on observations of the phase spiral and chosen in a way to ensure that the sampler converges to the most reasonable solution. The prior uses uniform probabilities for all parameters between the values listed in Table 1. This table also contains a summary of the parameters with their units.
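To make the model concrete, the following is a minimal sketch (our illustration, not the authors' released code) that assembles Eqs. 1, 2, 5, 9, and 12 into a single function. \(Z\) is assumed to be in kpc and \(V_{Z}\) in km s\({}^{-1}\), and \(c>0\) is assumed so that Eq. 5 stays finite.

```python
# Sketch of the spiral perturbation f(r, theta) of Eq. 12, built from Eqs. 1, 2, 5, and 9.
import numpy as np

def spiral_perturbation(Z, V_Z, alpha, b, c, theta0, S, rho):
    r = np.sqrt(Z**2 + (V_Z / S)**2)        # Eq. 1: phase distance (kpc)
    theta = np.arctan2(V_Z / S, Z)          # Eq. 2: phase angle, full -pi..pi range
    # Eq. 5: spiral angle phi_s at this phase distance (c > 0 assumed)
    phi_s = -b / (2 * c) + np.sqrt((b / (2 * c))**2 + r / c)
    # Eqs. 9-10: logistic inner mask flattening the centre of the spiral
    mask_inner = 1.0 / (1.0 + np.exp(-(r - rho) / 0.1))
    # Eq. 12: multiplicative perturbation of the smooth background
    return 1.0 - alpha * mask_inner * np.cos(theta - phi_s - theta0)
```

Evaluating this function on a grid of \((Z,V_{Z})\) and multiplying by a smooth background gives perturbation maps like those illustrated in Fig. 5.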
### Fitting procedure
The spiral perturbation is found through an algorithm that involves an iterative procedure to create a smooth background. With a smooth background, we can define the perturbation which has the parameters of the phase spiral. This background is not a two-dimensional Gaussian or other simple shape; it is complicated and changes depending on where in the Galaxy you look, in part because of interstellar extinction. The procedure for fitting a spiral perturbation to the data is illustrated in Fig. 6, and the letters in this subsection refer to the panels in the figure. The figure contains stars with \(8.4<R/\,{\rm kpc}<10.4\) and \(165^{\circ}<\phi<195^{\circ}\) in the \(2200<L_{Z}/\,{\rm kpc\,km\,s}^{-1}<2400\) range. The procedure contains the following steps.
1. Collect the data into a two-dimensional number density histogram in the \(Z-V_{Z}\) phase plane (panel a). The model uses a bin size of 25 pc by 2 km s\({}^{-1}\) except in cases with fewer stars when larger bins are used. For example, Fig. 7 uses bins of \(33\frac{1}{3}\) pc by \(2\frac{2}{3}\) km s\({}^{-1}\).
2. Create the first background using the observed data (panel b). The background is created from data that has been smoothed by a SciPy Gaussian kernel-density algorithm using Scott's rule (Scott, 1992) to determine the kernel size, and mirrored along the \(V_{Z}=0\) axis because the background velocity distribution is here assumed to be approximately symmetric.
In panels b and c we can see that this background still contains some structure from the data and that the spiral pattern in panel c is not clear.
3. Find the spiral perturbation (Eq. 12) that, multiplied by this background, fits the data best (panel d). The parameter space is explored and the best fit is found by using a Markov Chain Monte Carlo (MCMC)4 approach; a code sketch of this step is given after the list. To find a fit, we need to define a probability function of a given model that takes the data and our prior into account. Given that we are using an MCMC sampler we can ignore any multiplicative constants and say that the relevant value is \(p\), where Footnote 4: The model is implemented in Python using the package emcee (Foreman-Mackey et al., 2013) as an MCMC sampler. \[\ln p=-\frac{1}{2}\sum\left(\frac{(N-f(r,\theta)\cdot B)^{2}}{f(r,\theta)\cdot B}\right)+\ln(P_{\rm prior}),\] (13) where \(N\) is the data in the form of number counts for each bin, \(B\) is the background, and \(P_{\rm prior}\) is the prior probability.
4. Multiply this perturbation by the background and the outer mask (Eq. 11) and compare it to the data (panel e).
5. Divide the data by the spiral perturbation produced in the fit to create an improved background which lacks some of the structure of the initial one (panel f). This new background is smoothed by averaging every pixel with its nearest neighbours (including diagonals) and is not necessarily symmetric in \(V_{Z}\) anymore.
The process from point 3 to here is repeated until the new background no longer provides a better fit. The background converges quickly, usually not improving further after three iterations. The difference this process makes for the background can be seen by comparing panels c and g and noting in panel g the clearer spiral pattern.
6. When making a new background no longer improves the fit, take the final background and perturbation and make the final best fit (panel h). The final parameters are the median of the final samples found by the MCMC sampler.
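The following sketch shows the shape of the fitting step with emcee (our illustration; `spiral_perturbation` is the function sketched in Sect. 3.2, and `Z`, `V_Z`, `N`, `B`, and `outer_mask` are assumed to be precomputed arrays holding the bin coordinates, the binned data, the background, and the 0/1 weight of Eq. 11).

```python
# Sketch of the MCMC step: flat priors from Table 1 and the log-probability of Eq. 13.
import numpy as np
import emcee

BOUNDS = [(0, 1),            # alpha
          (0.01, 0.175),     # b
          (1e-6, 0.005),     # c (small floor keeps Eq. 5 finite)
          (-np.pi, np.pi),   # theta_0
          (30, 70),          # S
          (0, 0.3)]          # rho

def log_prob(params, Z, V_Z, N, B, outer_mask):
    if any(not (lo <= p <= hi) for p, (lo, hi) in zip(params, BOUNDS)):
        return -np.inf                       # uniform prior (Table 1)
    model = spiral_perturbation(Z, V_Z, *params) * B
    chi2 = np.sum(outer_mask * (N - model) ** 2 / model)
    return -0.5 * chi2                       # Eq. 13, up to an additive constant

nwalkers, ndim = 32, len(BOUNDS)
p0 = np.random.uniform([lo for lo, _ in BOUNDS], [hi for _, hi in BOUNDS],
                       size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(Z, V_Z, N, B, outer_mask))
sampler.run_mcmc(p0, 2000)
best_fit = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
```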
The model is robust and capable of fitting spirals even to regions with relatively few stars. This is because the quality of the fit is judged by how well the smooth background is made to look like the data, and the spiral perturbation is the way in which it can change this background. In Fig. 7 we show an example of the process when dust severely obscures the sample. The figure contains data in the \(8<R/{\rm kpc}<12\), \(150^{\circ}<\phi<155^{\circ}\), and \(2200<L_{Z}/\,{\rm kpc\,km\,s}^{-1}<2400\) ranges. The model still produces a reasonable fit and provides the parameters of the phase spiral.
Figure 7: Example of a selection of stars near the edge of our considered area, containing only about 22,000 stars. The sample contains stars with \(8<R/{\rm kpc}<12\) and \(150^{\circ}<\phi<155^{\circ}\). Upper left: The phase plane showing strong extinction by dust. Upper right: The background produced by the model. Lower left: The spiral perturbation produced by the model (this panel does not share the colour bar with the rest). Lower right: The best fit. We can see that even absent a clear spiral pattern in the data, the model still produces a convincing spiral and fit.
### Rotation of the phase spiral
The animation5 shows the phase spiral smoothly transitioning from stars at \(\phi\approx 210^{\circ}\) to stars at \(\phi\approx 150^{\circ}\) with bins \(10^{\circ}\) wide. Here the phase spiral can be seen to spin clockwise about half a rotation as the Galactocentric azimuthal angle decreases from \(\phi=210^{\circ}\) to \(\phi=150^{\circ}\). At either end of the range, there is a reduction of the number counts of stars in the mid-plane of the disc, because interstellar dust blocks our view of these stars. Figure 8 shows the phase spirals at three different Galactic azimuths for stars with \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\), along with the perturbations fitted to the data. It is evident that the rotation angle of the phase spiral increases (rotating counterclockwise) with Galactic azimuth, changing by roughly \(80^{\circ}\) over the \(30^{\circ}\) change in azimuth from \(\sim\)\(165^{\circ}\) to \(\sim\)\(195^{\circ}\).
Footnote 5: View the animation at
[https://lu.box.com/s/lkd@3c3gzcj29eqfgprsfmuit8rbqyrl](https://lu.box.com/s/lkd@3c3gzcj29eqfgprsfmuit8rbqyrl)
The parameter \(\theta_{0}\) which we fit in our model is not a convenient or particularly helpful description of the rotation of the phase spiral, because the angle parameter (\(\theta_{0}\)) has a degeneracy with the winding parameter (\(b\)) and, to a certain extent, the quadratic winding parameter (\(c\)): different sets of these values can produce very similar spirals except in the most central regions, which are removed by the inner mask. Therefore, we describe the rotation of the phase spiral by the angle which maximises Eq. 12 (i.e. \(\Delta\theta=0\)) at a fixed phase distance of \(r=150\,\mathrm{pc}\) and call this angle \(\theta_{0,\mathrm{model}}\). This angle is shown in Fig. 8 with a red line, and the phase distance is shown with a white ring (scaled to the same axis ratio as the phase spiral) in
Figure 8: Upper row: The phase spiral at low, medium, and high galactic azimuth (\(\phi\)) with the angle \(\theta_{0,\mathrm{model}}\) marked with a red line and \(\theta_{0,\mathrm{model}}=0\) marked with a white dashed line. Lower row: The corresponding spiral perturbations fitted to the data with \(\theta_{0,\mathrm{model}}\) marked with a red line and the measurement distance for \(\theta_{0,\mathrm{model}}\) marked with a white ring.
Figure 9: Normalized \(Z\) distributions for stars at \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) at different Galactic azimuths. Note the seemingly bimodal distribution at Galactic azimuths far from \(180^{\circ}\); this is an effect of dust hiding stars in the middle of the disc.
the lower row. The angle \(0^{\circ}\) is shown with a dashed white line in the upper row.
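Given a set of fitted parameters, this angle can be read off directly from the model; below is a small sketch (our illustration, following the paper's stated convention that the peak lies at \(\Delta\theta=0\)).

```python
# Sketch: theta_0,model, the phase angle of the fitted spiral (Delta-theta = 0 in
# Eq. 12) measured at a fixed phase distance of r = 150 pc = 0.15 kpc.
import numpy as np

def theta0_model(b, c, theta0, r=0.15):
    phi_s = -b / (2 * c) + np.sqrt((b / (2 * c))**2 + r / c)  # Eq. 5
    angle = phi_s + theta0                                    # where Delta-theta = 0
    return (angle + np.pi) % (2 * np.pi) - np.pi              # wrap to [-pi, pi)
```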
Figure 9 shows the \(Z\)-distribution for 6 different ranges of Galactic azimuth, each \(10^{\circ}\) wide, between \(150^{\circ}\) and \(210^{\circ}\), for stars in the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) range. Here, we can clearly see the reduction in the number of stars close to the Galactic plane (\(Z\approx 0\)) at high and low Galactic azimuth. This is because of dust in the plane of the Galactic disc, obscuring the true distribution of stars. Despite this, we see a shift with Galactic azimuth: there are more stars at low \(Z\) at low Galactic azimuth, and more stars at high \(Z\) at higher Galactic azimuth. This is because stars in the phase spiral get pushed to greater \(Z\) distances.
Figure 10 shows a map of the phase spiral's rotation angle (\(\theta_{0,\,\mathrm{model}}\)) on a top-down radial grid of the Galactic disc. Three plots are shown, each containing stars in a different angular momentum and Galactocentric radial distance range. The left plot contains stars in the \(2000<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2200\) and \(7.5<R/\mathrm{kpc}<10\) range, the middle plot has stars in the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) and \(8.5<R/\mathrm{kpc}<11\) range, and the right plot has stars in the \(2400<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2600\) and \(9.5<R/\mathrm{kpc}<12\) range, all between \(150^{\circ}\) and \(210^{\circ}\) in Galactic azimuth. The zero-point of the rotation angle is set to be 0 at the \(V_{Z}=0\) line at \(Z>0\) (the positive \(x\)-axis as indicated in Fig. 8). In Fig. 10, we see a change in this rotation angle from high to low Galactic azimuth. In the left and middle plots, we see a relatively smooth decrease in rotation angle as Galactic azimuth decreases, changing by about \(180^{\circ}\) over \(60^{\circ}\) in Galactic azimuth. The right panel shows the same trend but less smoothly. The left panel shows a radial increase in rotation angle by about \(40^{\circ}\) over \(2.5\,\mathrm{kpc}\) while the middle panel shows a radial decrease in this angle by about \(70^{\circ}\) over \(2.5\,\mathrm{kpc}\). The right panel appears to show an increase in angle with radial distance.
### Amplitude of the phase spiral
Figure 11 shows the \(Z\)-\(V_{Z}\) phase plane for the three regions marked with lines in Fig. 2. The left and right panels contain stars in the \(2000<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2200\) and \(2400<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2600\) ranges, respectively. Both these regions show weak and/or almost dissolved phase spiral patterns. The middle panel, which corresponds to the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) range, shows a clear, single-armed, phase spiral pattern.
Our model contains a parameter for the amplitude, or strength, of the phase spiral pattern (\(\alpha\)). The bottom panel of Figure 11 shows the amplitude of the phase spiral as a function of angular momentum. There is a peak at \(L_{Z}\approx 2300\) kpc km s\({}^{-1}\), which is what we expected from Fig. 2 and the top row of Fig. 11. The shaded region in the plot lies between the 84th and 16th percentiles. These are used to show the statistical uncertainties in the model; the systematic uncertainties are expected to be larger. The jagged part between \(L_{Z}\approx 2000\,\mathrm{kpc\,km\,s^{-1}}\) and \(2100\,\mathrm{kpc\,km\,s^{-1}}\) is an example of the modelling procedure finding an alternate solution. By visual inspection, we can conclude that the phase spirals found at these points are not the best fits. The line rises at the high end of the plot, indicating another peak beyond \(L_{Z}=2600\,\mathrm{kpc\,km\,s^{-1}}\). This seems to correspond to "\(V_{Z}\) feature 2" in Fig. 2 and the bifurcation discussed by McMillan et al. (2022). The bottom plot contains points based on bins that are \(1000\) pc by \(30^{\circ}\). These bins are centred on the guiding centre radius corresponding to that angular momentum. Because the bins are \(30^{\circ}\) wide, we are measuring phase spirals with rotational angles over a \(\sim\)70\({}^{\circ}\) range, as large as that seen in Fig. 8.
Figure 12 shows a map of the amplitude (\(\alpha\)) on a top-down radial grid of the Galactic disc. Three plots are shown, each containing stars in a different angular momentum and Galactocentric radial distance range, the same as in Fig. 10. The figure shows that the brightest region, with the highest amplitude, is the middle panel with stars in the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) range as we would expect from Fig. 11. The figure also shows that the region of the highest amplitude moves outward with \(L_{Z}\), as well as in each bin. There is a slight trend for the amplitude to decrease at higher and lower Galactic azimuths in the figure. This is believed to be an observational effect.
### Chemical composition of the phase spiral
Figure 13 shows the \(Z\)-\(V_{Z}\) phase plane coloured by the mean global metallicity for the sample of stars that we have metallicities for in the same three ranges in angular momentum as in Fig. 11. A similar, but weaker, spiral pattern can be seen here. We observe that stars in the phase spiral have a slightly higher metallicity than those outside the pattern, indicating a common origin between the stars in the arm of the phase spiral and those
Figure 10: Angle (\(\theta_{0,\,\mathrm{model}}\)) of the phase spiral as measured by the model, showing the rotation across the Galactic disc. The plots show data for different regions of the Galaxy as seen from above for the three angular momentum ranges marked in Fig. 2. The colour bar is periodic and the zero point is arbitrary.
in the Galactic thin disc. A clear decreasing trend in mean metallicity with angular momentum can also be seen. The similarities between the spiral patterns in Figs. 11 and 13 are noteworthy but not surprising as stars in the Galactic thin disc are known to have higher metallicity and the phase spiral is assumed to be a perturbation of the Galactic disc which moves stars away from the midplane. They both show stars in the same angular momentum ranges and the same phase spiral pattern appears. In the left panel, the central region of phase space (the thin disc) shows high [M/H] values. In this panel, an arm of the phase spiral can be seen emerging from the top of this central region, at about \(Z\approx-300\) pc, \(V_{Z}\approx 20\) km s\({}^{-1}\). In the middle panel, a one-armed spiral is visible in stars with mean metallicity of \(\langle[{\rm M/H}]\rangle\approx-0.15\) against a background of \(\langle[{\rm M/H}]\rangle\approx-0.22\). This panel lacks the high metallicity region in the centre of the phase plane, instead having the region of highest metallicity be in the arm of the phase spiral. Even the less dense gap between the wraps of the phase spiral arm is visible as a darker curve
Figure 11: Measurements of the amplitude of the phase spiral as a function of angular momentum. Top: Number density of stars at low, medium, and high angular momentum, showing the phase spiral change shape and amplitude. Bottom: Amplitude of the phase spiral pattern as a function of angular momentum. The lines are the same as in Fig. 2. The shaded area shows the 84th and 16th percentiles.
Figure 12: Amplitude (\(\alpha\)) of the phase spiral as measured by the model for different regions of the Galaxy as seen from above for three angular momentum ranges marked in Fig. 2. The brightness of the plots corresponds to the height of the line in the bottom panel in Fig. 11, showing the change in amplitude across the Galactic disc.
near the centre of the phase plane. The right panel shows a faint trace of a spiral arm at \(Z\approx 500\,\mathrm{pc}\), \(V_{Z}\approx-20\,\mathrm{km\,s^{-1}}\). Note that the colour scale in this panel is shifted slightly towards lower metallicity values in order to bring out the remaining structure.
## 4 Discussion
### Formation
The results presented in the previous section challenge certain proposed formation mechanisms for the phase spiral. A smoothly changing angle of the phase spiral across a wide range of different Galactic azimuths and radii, such as we observe in Fig. 10 and our animation6, would seem to indicate a single-impact formation mechanism. However, numerous recent papers are pointing in the opposite direction, that a single-impact origin scenario is too simple to explain all the observations (e.g. Tremaine et al., 2023; Frankel et al., 2022). In this context, more advanced models that consider multiple galactic components and the wider cosmological context may be more suited for studying a complicated system like the phase spiral. Garcia-Conde et al. (2022) look at phase spirals in a cosmological simulation and conclude that phase spirals still appear, even if the interacting satellite galaxies are less massive or more distant than the Sagittarius dwarf galaxy is thought to have been (Niederste-Ostholt et al., 2010).
Footnote 6: [https://lu.box.com/s/lkd@3c3gzcjj29eqfgprsfmuit8rbqyrl](https://lu.box.com/s/lkd@3c3gzcjj29eqfgprsfmuit8rbqyrl)
Knowledge of how the phase spiral shifts across the Galactic disc can be related to the properties of the disc and the cause of the perturbation. Widmark et al. (2021) used the velocities of stars in the disc and phase spiral to infer the potential of the disc, and thereby its mass. Comparing how the phase spiral has propagated through the disc with results from modelling studies can lead to better constraints for these methods. It should be noted that, according to Grand et al. (2022), the torque on the disc caused by the dark matter wake of a passing satellite galaxy can be significantly stronger than the direct interaction with the satellite galaxy. This means that the connection between the perturbing satellite galaxy and the perturbation in the disc may not be as simple as some previous idealised models have assumed. Explaining the physics on scales as small as those focused on in this paper (roughly the area covered by Fig. 10) in the context of a cosmological simulation presents a challenge for the modelling community.
### Hot/cold orbits
Our results can also be seen as being in tension with those of Li & Shen (2020) and Bland-Hawthorn et al. (2019), who argue that the phase spiral is more clearly defined in stars on dynamically cold (close to circular) orbits than in stars on dynamically hot (far from circular) orbits. Li & Shen (2020) argue that stars on hotter orbits should be excluded from samples used in phase-mixing studies to provide clearer samples. The results of this paper combined with those of Frankel et al. (2022) are in tension with these conclusions. The bottom panel of Fig. 11 contains stars on cold orbits, since it only includes stars that are within \(500\,\mathrm{pc}\) radially of where a star with the same angular momentum on a circular orbit would be. This result is similar to that by Frankel et al. (2022), who show that stars on hot orbits at \(L_{Z}\approx 2300\,\mathrm{kpc}\) km s\({}^{-1}\) still produce a phase spiral with higher amplitude than stars on cold orbits at \(L_{Z}\approx 1800\,\mathrm{kpc}\) km s\({}^{-1}\) (see their Fig. 6 for details). Both results show the same feature, a region with a more prominent phase spiral, despite containing separate populations of stars with different dynamics.
Frankel et al. (2022) conduct a similar investigation of the amplitude of the phase spiral pattern as a function of angular momentum in the range \(1250<L_{Z}/\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}<2300\). Their sample consists of stars within a \(0.5\,\mathrm{kpc}\) cylinder centred on the Sun, meaning that the stars included in this volume with high angular momentum (\(>2000\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\)) are all going to be on relatively dynamically hot orbits. Our sample contains stars whose position and guiding centre are further out, meaning that when considering the high angular momentum case, the stars are on dynamically cooler orbits. They show a general increase in amplitude with angular momentum, with the highest peak at \(L_{Z}\approx 2350\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\) (see their Fig. 6 for details). The bins containing stars with high angular momentum in their sample hold few stars, leading to a relatively large scatter in the results. Our results also extend to higher angular momentum, meaning that, in Fig. 11, we can see the dip in amplitude at \(L_{Z}\approx 2500\) kpc km s\({}^{-1}\).
The questions posed in Bland-Hawthorn et al. (2019) are still relevant. How are different populations of stars affected by whatever mechanism caused the phase spiral? How is the gas affected? Were the stars in the phase spiral formed in it or were they swept up into it after they had already formed? These questions are mostly outside the scope of this paper but could bring significant insights into the dynamic processes that shape our galaxy.
Figure 13: Phase spirals coloured by mean global metallicity at low, medium, and high angular momentum, showing that the spiral pattern is visible; compare to Fig. 11. Note that the rightmost panel has different values on the colour bar. The data is split into the three angular momentum ranges marked in Fig. 2.
### Metallicity
Widrow et al. (2012) discovered an asymmetry in the \(Z\)-distribution of stars in the Galactic disc which we now associate with the phase spiral. They found that when looking at the number density of stars as \(\left(\mathrm{North}-\mathrm{South}\right)/\left(\mathrm{North}+\mathrm{South}\right)\) the result is \(<0\) at \(\left|Z\right|\approx 400\,\mathrm{pc}\) and \(>0\) at \(\left|Z\right|\approx 800\,\mathrm{pc}\). An (2019) analysed this asymmetry further, specifically looking at the metallicity of the stars. They found that the vertical metallicity distribution is asymmetric in a similar complicated manner. Our results suggest that the arm of the phase spiral drives stars to greater \(Z\)-distances in the region of the Galaxy we study. This would push stars from the disc vertically away from it, and preferentially in the positive \(Z\)-direction (An, 2019).
Bland-Hawthorn et al. (2019) looked at the difference in the phase spiral when using different cuts in the elemental abundance plane. They found that more metal-rich (\(\mathrm{[Fe/H]}>0.1\)) stars were concentrated in the central part of the phase spiral. As we can see in Fig. 13, we also see that stars with higher mean metallicity can be found in the centre of the phase spiral. The conclusion is that these stars were formed in the Galactic thin disc and then perturbed to move out of it. This would explain the asymmetry in the \(Z\)-distribution and the concentration of metal-rich stars in the phase spiral.
### Effects of rotation
Previous studies have shown that the phase spiral changes shape and becomes more flattened along the \(Z\)-axis with increasing Galactocentric radius (Bland-Hawthorn et al., 2019; Wang et al., 2019; Khanna et al., 2019; Laporte et al., 2019; Xu et al., 2020; Hunt et al., 2021; Li, 2021; Antoja et al., 2022). If the rotation of the phase spiral is not taken into consideration when studying it, some features are at risk of being washed out. For example, in Fig. 2, the sample is restricted to stars with Galactic azimuth of \(175^{\circ}<\phi<185^{\circ}\); otherwise, the feature of interest is not clearly visible. Future authors should be aware of this phenomenon and how it may affect their results.
In Fig. 9 it appears that stars are missing in the centre of the Galactic disc at high or low Galactic azimuth. This is attributed to dust. We also see an asymmetry in the \(Z\)-distribution when comparing regions at high and low Galactic azimuth. This effect could be caused by the rotation of the phase spiral as it brings the phase spiral arm out of the high \(Z\) region at lower Galactic azimuth. We do not believe this is caused by the warp of the Galactic disc, as the warp only starts being measurable at Galactocentric distances greater than those considered here, at about 10 kpc (Cheng et al., 2020). However, it seems like the phase spiral and the Galactic warp overlap in certain regions of the Galaxy and are perhaps related.
## 5 Summary and Conclusions
In this work, we use data from _Gaia_ DR3 to investigate the _Gaia_ phase spiral by making a new model capable of fitting several of its key characteristics. We use a sample of stars with measured radial velocities to get full three-dimensional information on both their position and velocity, a sample of about 31.5 million stars. Using our model, we have been able to determine the rate of rotation of the phase spiral with Galactic azimuth and the amplitude of the phase spiral as a function of angular momentum. We find that, for the data we explore, the phase spiral rotates with Galactic azimuth. We find a peak in the amplitude of the phase spiral at \(L_{Z}\approx 2300\) kpc km s\({}^{-1}\) which manifests as a very clear phase spiral pattern in number density when using only stars with similar angular momentum.
Our main findings in this paper are listed here:
1. The phase spiral changes orientation along both Galactic radial distance and Galactic azimuth, and it rotates at a rate which is three times the rate of the azimuthal angle, a rate of \(\sim\)180\({}^{\circ}\) per 60\({}^{\circ}\) Galactic azimuth, for stars with angular momenta from 2000 kpc km s\({}^{-1}\) to 2400 kpc km s\({}^{-1}\), corresponding to orbits typically found outside the Sun's position in the Galaxy.
2. The amplitude of the phase spiral pattern changes with angular momentum with a peak at about 2300 \(\pm\) 100 kpc km s\({}^{-1}\), producing a substantially clearer spiral pattern in number density.
3. The stars in the phase spiral arm are chemically very similar to those in the \(Z\)-centre of the Galactic disc. This indicates that the stars in the phase spiral originally belonged to the Galactic thin disc.
4. We can confirm the conclusions of An (2019) and Bland-Hawthorn et al. (2019) that the Z-asymmetry of the metallicity gradient of the Galaxy is caused by the metal-rich arm of the phase spiral pushing such stars to greater \(Z\)-positions.
The reason for the change in the \(L_{Z}\)-\(V_{Z}\) distribution between the solid lines in Fig. 2, the overdensity seen below the thick line, was found to be the phase spiral. The line is raised to about 15 km s\({}^{-1}\), corresponding to where the phase spiral first turns towards negative \(Z\)-values; the lower clump sits at \(-20\) km s\({}^{-1}\), which corresponds to where the spiral arm turns back towards positive \(Z\)-values.
By combining the data from _Gaia_ with that coming from the soon-to-be operational spectrographs 4MOST (Bensby et al., 2019; Chiappini et al., 2019) and WEAVE (Jin et al., 2023), more light will be shed on the origins of the phase spiral by revealing detailed chemical abundances for millions of stars in all parts of the Milky Way.
###### Acknowledgements.
PM gratefully acknowledges support from project grants from the Swedish Research Council (Vetenskapsrådet, Reg: 2017-03721; 2021-04153). TB and SA acknowledge support from project grant No. 2018-04857 from the Swedish Research Council. Some of the computations in this project were completed on computing equipment bought with a grant from The Royal Physiographic Society in Lund. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of NASA's Astrophysics Data System. This paper made use of the following software packages for Python: NumPy (Harris et al., 2020), Astropy (Astropy Collaboration et al., 2022), emcee (Foreman-Mackey et al., 2013), and SciPy (Virtanen et al., 2020).
With the release of the _Gaia_ data, the exploration of the structure of the Galaxy has proceeded in unprecedented detail, revealing previously unknown structures in the Galactic disc and halo. One of these is the phase spiral, in which stars form a spiral density pattern in the \(Z\)-\(V_{Z}\) plane. Aims: to characterise the shape, rotation, amplitude, and metallicity of the phase spiral in the outer disc of the Galaxy, in order to understand which physical processes formed the phase spiral and to obtain clearer clues about the events that have contributed to the Galaxy's past and its present state. Methods: using _Gaia_ Data Release 3 (DR3), we obtain position and velocity data for about 31.5 million stars, and metallicities for a subset of them. Based on these, we compute the angular momenta of the stars and construct a model that describes the phase spiral in terms of its amplitude and rotation. Results: the rotation of the phase spiral
2309.12250 | SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References | Evaluation of QA systems is very challenging and expensive, with the most reliable approach being human annotations of correctness of answers for questions. Recent works (AVA, BEM) have shown that transformer LM encoder based similarity metrics transfer well for QA evaluation, but they are limited by the usage of a single correct reference answer. We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference answers (combining multiple correct and incorrect references) for sentence-form QA. We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems, across multiple academic and industrial datasets, and show that it outperforms previous baselines and obtains the highest correlation with human annotations. | Matteo Gabburo, Siddhant Garg, Rik Koncel Kedziorski, Alessandro Moschitti | 2023-09-21T16:51:30 | http://arxiv.org/abs/2309.12250v1 | # SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References
###### Abstract
Evaluation of QA systems is very challenging and expensive, with the most reliable approach being human annotations of correctness of answers for questions. Recent works (AVA, BEM) have shown that transformer LM encoder based similarity metrics transfer well for QA evaluation, but they are limited by the usage of a single correct reference answer. We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference answers (combining multiple correct and incorrect references) for sentence-form QA. We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems, across multiple academic and industrial datasets, and show that it outperforms previous baselines and obtains the highest correlation with human annotations.
## 1 Introduction
Automatic evaluation of Question Answering systems to gauge correctness of an answer for a question is a challenging task. This task is important for maintaining a rapid pace in the evaluation and development of new QA systems, and for creating large, high-quality training corpora for LLM-based QA systems. The most common approach for this task is to obtain human annotations of correctness of answers for questions, which is slow, expensive, and challenging (annotating complete answer sentences for questions has been shown to achieve poor inter-annotator agreement).
Span extraction (MR) based QA systems are typically evaluated using token matching metrics such as EM (Exact Match) or F1, however, these cannot be extended for evaluating complete sentence-form answers coming from Answer Sentence Selection (AS2) systems [1, 13, 14]. Token/segment-level similarity metrics such as EM, F1, BLEU, etc. fail to capture the semantic coherence between entities/concepts of the answer sentence and the question. Recently, AVA [20] and BEM [1] have proposed transformer LM encoder based similarity metrics for sentence-form extractive QA evaluation by encoding the question, target answer (which needs to be evaluated) and a reference answer (which is treated as a gold standard (GS)).
One of the major limitations of AVA/BEM is the use of a single reference answer. There are several types of questions that have multiple diverse correct answers, other questions that have relevant information spread across multiple reference answers, and other ambiguous/under-specified or opinion seeking questions that may have several possible answers (we motivate this with examples in Section 3). Additionally, AVA/BEM only use information from a correct reference answer for evaluating a target answer, but information and semantics from an incorrect reference answer (which are readily available for several datasets) can also help refine the accuracy of the prediction.
Figure 1: An illustration of SQuArE: an automatic question answering evaluation metric that uses multiple references: positive and negative to evaluate the correctness of a target answer for a particular question.
Motivated by the above shortcomings of AVA/BEM, we propose SQuArE (Sentence-level QUestion AnsweRing Evaluation), a supervised transformer LM encoder based automatic QA evaluation metric that uses multiple reference answers by combining multiple correct and incorrect answers to assign a correctness score for an answer to a question. We evaluate SQuArE on four sentence-level extractive QA datasets, and show that it outperforms previous baselines and achieves the highest correlation with human annotations.
The last few years have seen several research works Hsu et al. (2021); Muller et al. (2021); Gabburo et al. (2022) transition from extractive sentence-form QA towards generating natural sounding sentence-form answers. This paradigm (termed GenQA) synthesizes information using different pieces of information spread across many relevant candidates (while suppressing any irrelevant information) to improve the answering accuracy and style suitability. AVA/BEM have only been evaluated on extractive QA, and not for GenQA, so it is unclear if a transformer encoder based semantic matching metric will correlate with human annotations on a sentence-form generated answer. We strengthen the generality of SQuArE as a QA evaluation metric by showing that it outperforms the AVA/BEM baselines for GenQA systems in addition to extractive QA systems. We will release the code and trained model checkpoints for SQuArE at [https://github.com/amazon-science/square](https://github.com/amazon-science/square) for the NLP and QA community to use our automatic QA evaluation metric.
## 2 Related Work
**Automatic Text Similarity Evaluation:** Token/N-grams level similarity metrics like BLEU Papineni et al. (2001) and ROUGE Lin (2004) are not suitable for QA evaluation, and have been shown to achieve poor correlation with human judgements Reiter (2018); Gabburo et al. (2022). Kusner et al. (2015) propose using a distance function between word embeddings for text similarity. Other research works Kusner et al. (2015); Clark et al. (2019) have proposed evaluation metrics based on Wasserstein distance. Recent years have seen a number of automatic evaluation metrics being proposed for Neural Machine Translation (NMT) and summarization tasks like BERT-Score (Zhang et al., 2020), BLEURT Sellam et al. (2020), COMET Rei et al. (2020), etc. that use contextual embeddings from transformer encoders. Similar approaches extend for text style Wegmann and Nguyen (2021) and summarization Cao et al. (2020); Zeng et al. (2021).
**QA Evaluation:** For entity level span-extraction MR tasks, Yang et al. (2018) adapt BLEU, ROUGE for answer comparison, with a focus on "yes-no" and "entity" questions. Si et al. (2021) mine entities from KBs to use them as additional gold answers for MR tasks, our approach shares this intuition of using multiple diverse reference answers for evaluation. Chen et al. (2019) propose a modification of BERTScore for QA by using the question and the paragraph context along with the answer. Empirically however, they demonstrate that for extractive MR tasks, F1 works as a reasonable metric, but this does not transfer well for generative QA. Min et al. (2021) uses human annotations to evaluate correct answers that are not contained in the GS answer. For sentence-level extractive QA (AS2), AVA Vu and Moschitti (2021) and BEM Bulian et al. (2022) are two recently proposed learned metrics.
## 3 Methodology
Being a knowledge-intensive task, automatic QA evaluation typically requires leveraging knowledge from external sources to evaluate the correctness of an answer (e.g., knowledge bases, gold standard reference answers). We can formalize automatic QA evaluation with the notation \(f(q,a,c){\rightarrow}p\), where \(f\) is the automatic evaluation function applied to question \(q\), target answer \(a\), and reference context \(c\), and outputs a correctness score \(p\in[0,1]\).
Previous works (AVA, BEM) show that using a single GS reference answer as the context \(c\) achieves higher correlation with human annotations than only using \(q\) and \(a\). In this paper, we propose a supervised learned metric SQuArE that enriches the reference context \(c\) for QA evaluation using: (i) multiple gold standard references, and (ii) negatively annotated answers as negative references.
**Multiple Reference Answers** In AVA/BEM, using a single correct reference limits the evaluation scope of QA system predictions.
* Several types of questions may have multiple and diverse correct answers: for example _"What is a band?"_ is correctly answered by both _"A flat, thin strip or loop of material, used as a fastener"_ and _"A band is a group of people who perform instrumental and/or vocal music"_
* Knowledge seeking questions may have pieces of relevant information spread across multiple references: for example _"Who is Barack Obama"_ can be answered by combining information across multiple answers _"He served as the 44th president of the U.S. from 2009-2017"_, _"He was a member of the Democratic Party, and served as a U.S. senator from 2005-2008"_, etc.
* For ambiguous/under-specified questions that do not have a single correct answer or opinion seeking questions, using a single GS reference answer can be limiting and provide an incorrect evaluation of the answering capability of a QA system. Consider the question _"When is the next world cup"_ for which both the answers _"The next FIFA football world cup is in 2026"_ and _"The next ICC cricket world cup is in 2023 in India"_ are correct as the questions fails to specify the name of the sport (many more possible answers).
**Negative Reference Answers** An automatic QA evaluation system can use the information and semantics from an incorrect answer to help refine the accuracy of its prediction. Consider the question _"Which movies of Dwayne Johnson released in 2017"_ with the positive reference _"Dwayne The Rock' Johnson starrer Baywatch premiered in 2017"_. Only using this reference, both the answers _"Baywatch and Jungle Cruise"_ and _"The Fate of the Furious and Baywatch"_ appear to be equally correct for this question. However when we add in an incorrect reference for the question _"Jungle Cruise is a movie starring the Rock and Emily Blunt that released in 2021"_, the automatic QA evaluation can identify that the second answer is probably more correct than the first one. Several sentence-form extractive QA datasets such as ASNQ Garg et al. (2020), WikiQA, TREC-QA, etc. have a large number of negatively labeled answer candidates for each question, which can be exploited for automatic evaluation of QA systems for these datasets.
**SQuArE** Motivated by the above reasons, we modify the context \(c\) of automatic evaluation \(f(q,a,c){\rightarrow}p\) to include a combination of \(n_{+}\) correct and \(n_{-}\) incorrect reference answers, i.e., \(c:c^{+}{=}\{c^{+}_{1},...,c^{+}_{n_{+}}\}\cup c^{-}{=}\{c^{-}_{1},...,c^{-}_{n_{-}}\}\). During supervised learning, SQuArE learns to minimize the semantic distance between a correct target answer and the set of correct references \(c^{+}\) while maximizing the semantic distance from the set of incorrect references \(c^{-}\). We prefix a prompt (_Pos_Ref_ / _Neg_Ref_) to each reference to indicate the correctness/incorrectness of the reference to the model. Specifically, a \((q,a,c^{+},c^{-})\) input for SQuArE is encoded as "Question: \(q\) Target: \(a\) Pos_Ref: \(c^{+}_{1}\)\(\cdots\) Pos_Ref: \(c^{+}_{n_{+}}\) Neg_Ref: \(c^{-}_{1}\)\(\cdots\) Neg_Ref: \(c^{-}_{n_{-}}\)", as illustrated in Figure 1.
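As a concrete illustration (ours, not the authors' released implementation), the input serialization can be sketched as follows; the example strings are illustrative only.

```python
# Sketch: flattening a (question, target, references) example into the single
# input string fed to SQuArE's encoder, using the prompts described above.
def encode_square_input(question, target, pos_refs, neg_refs):
    parts = [f"Question: {question}", f"Target: {target}"]
    parts += [f"Pos_Ref: {ref}" for ref in pos_refs]
    parts += [f"Neg_Ref: {ref}" for ref in neg_refs]
    return " ".join(parts)

encoded = encode_square_input(
    question="What is a band?",
    target="A band is a group of people who perform instrumental or vocal music",
    pos_refs=["A flat, thin strip or loop of material, used as a fastener"],
    neg_refs=["Band of Brothers is a war drama miniseries"],  # illustrative negative
)
```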
The choice of reference answers can create biases in automatic QA evaluation. For a given question, collecting a set of diverse reference answers and ensuring they exhaustively cover all the concepts needed to answer the question is challenging and very expensive. In this paper, we utilize existing annotated answer candidates (both positive and negative) in high-quality labeled datasets as references. Extending automatic QA evaluation to previously unseen questions (without any references) is a challenging open problem in NLP QA.
## 4 Experiments and Results
### Datasets
**WQA** Web Question Answers (WQA) is a public dataset Zhang et al. (2021) containing 149,513 questions, each associated with \(\sim\)15 answer candidates retrieved from a large-scale web index with human annotations.
**WikiQA** A small AS2 dataset Yang et al. (2015) with questions from Bing search, and answers extracted from Wikipedia. We use the most popular clean setting (questions having at least one positive and one negative answer).
**TREC-QA** A small AS2 dataset Wang et al. (2007) containing factoid questions. We only retain questions with at least one positive and one negative answer in the development and test sets.
**IQAD** A large scale Industrial QA Dataset containing non-representative de-identified user questions from a commercial personal assistant. IQAD contains 10k questions, and \(\sim\)200 answer candidates retrieved for each question using a large scale web index that contains over 100M web documents. Results on IQAD are presented relative to a baseline to comply with company policies.
**GenQA-MTURK** This dataset is composed of 3k questions from 3 datasets (1k each): MS-MARCO Bajaj et al. (2018), WikiQA, and TREC-QA, using GenQA models evaluated in Hsu et al. (2021); Gabburo et al. (2022). For each question we generate an answer using 8 different GenQA models (details in Appendix B) based on T5-Large. We annotate all the answers of this dataset for their correctness using MTurk, with \(5\) independent annotations for each QA pair. We use majority voting over the 5
annotations for each QA pair.
**Answer Equivalence (AE):** A question answering dataset released by Bulian et al. (2022) where each sample contains a question, a candidate answer (typically short answers), and a positive reference (typically entity-based) carefully selected to avoid the candidate-reference exact match (EM).
### Models and Baselines
We use DeBERTaV3-Large (He et al., 2021) for SQuArE, and compare with three baselines (proposed in AVA/BEM): **QT: Question-Target**, which takes as input a question and the target answer; **TR: Target-Reference**, which takes as input a reference GS answer and the target answer; and **TQR: Target-Question-Reference**, which takes as input a question, the target answer, and a reference GS answer. For our experiments, we set the total number of references \(n_{+}+n_{-}{=}5\) per question.
We also compare SQuArE against two additional baselines: (i) **BEM**(Bulian et al., 2022), a recently released reference-based automatic evaluation metric (trained on the AE dataset), and (ii) a large language model (**LLM**) based approach using two versions of the Falcon (Almazrouei et al., 2023) model. For fair comparison with the baselines, we perform evaluation in the zero-shot setting for the WikiQA and TrecQA datasets, and after fine-tuning on the AE dataset. For more details on the implementation of these baselines, refer to Appendix A.2.
### Results
We present results comparing SQuArE with the baselines on large datasets (from both extractive QA: AS2 and generative QA: GenQA) in Table 1. Using GS human annotations for each dataset, we compute accuracy, Area Under the Curve (AUROC), and Pearson Correlation of each automatic QA metric. We observe that on all datasets, SQuArE significantly outperforms the baselines and achieves the highest accuracy and AUROC with human annotations.
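For reference, these agreement statistics can be computed from a metric's scores and binary gold labels as in the following sketch (using scikit-learn and SciPy; the 0.5 threshold is an illustrative assumption, not the authors' setup):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, roc_auc_score

def agreement_with_gold(scores, gold, threshold=0.5):
    """Compare continuous correctness scores p against binary gold labels."""
    scores, gold = np.asarray(scores), np.asarray(gold)
    preds = (scores >= threshold).astype(int)  # binarize for accuracy
    return {
        "accuracy": accuracy_score(gold, preds),
        "auroc": roc_auc_score(gold, scores),
        "pearson": pearsonr(gold, scores)[0],
    }

print(agreement_with_gold([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]))
```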
**Zero-shot Setting:** To show strong generalization to out-of-distribution datasets (zero-shot setting), we train SQuArE and the other baselines on the WQA dataset, and use this for evaluation on other datasets. Specifically, we use two small datasets: WikiQA and TREC-QA (exploring both extractive: AS2 and generative settings), and one large dataset MS-MARCO. Results presented in Table 2 highlight that SQuArE achieves the highest accuracy and correlation with human annotations.
**Comparison with BEM and LLMs:** We present comparisons with the BEM and LLM baselines in Table 3 on the WikiQA, TrecQA and Answer Equivalence (AE) datasets. On the WikiQA and TrecQA datasets, the results show that SQuArE outperforms both baselines, which stems from (i) the usage of multiple references, and (ii) the references for these datasets being complete sentences, in contrast to the entities/short answers used for training BEM. On the AE dataset, zero-shot SQuArE (which is trained on the WQA dataset) performs worse (0.572 vs 0.897 in accuracy) than the
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
**Dataset** & **Technique** & **\# Refs** & **Accuracy** & **AUROC** & **Correlation** \\ \hline \multicolumn{6}{c}{**Answer Sentence Selection (AS2)**} \\ \hline \multirow{4}{*}{WQA} & AVA-TR & 1 & 0.734 & 0.809 & 0.716 \\ & AVA-QT & 0 & 0.790 & 0.851 & 0.750 \\ & AVA-TQR & 1 & 0.809 & 0.873 & 0.771 \\ & SQuArE & **5** & **0.833** & **0.896** & **0.793** \\ \hline \multirow{4}{*}{IQAD} & AVA-TR & 1 & Baseline & Baseline & Baseline \\ & AVA-QT & 0 & +1.946 & -0.393\% & +0.682\% \\ & AVA-TQR & 1 & +8.02\% & +5.75 & +6.178\% \\ & SQuArE & **5** & **+22.24\% & **+14.01\%** & **+16.062\%** \\ \hline \multicolumn{6}{c}{**Answer Generation (GenQA)**} \\ \hline \multirow{4}{*}{MS-MARCO} & AVA-TR & 1 & 0.882 & 0.768 & 0.610 \\ & AVA-QT & 0 & 0.882 & 0.777 & 0.623 \\ & AVA-TQR & 1 & 0.878 & 0.790 & **0.636** \\ & SQuArE & 5 & **0.895** & **0.832** & 0.629 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on WQA, IQAD, MS-MARCO measured using Accuracy, Area under the curve and Pearson Correlation with gold labels. Results on IQAD are relative to AVA-TR baseline (due to data being internal). # Refs refers to the total number of reference answers used for the metric.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Dataset** & **Technique** & **Accuracy** & **AUROC** & **Correlation** \\ \hline \multicolumn{6}{c}{**Answer Sentence Selection (AS2)**} \\ \hline \multirow{4}{*}{WikiQA} & AVA-TR & 0.701 & 0.633 & 0.532 \\ & AVA-QT & 0.900 & 0.804 & 0.637 \\ & AVA-TQR & 0.903 & 0.805 & 0.632 \\ & SQuArE & **0.919** & **0.851** & **0.676** \\ \hline \multirow{4}{*}{TrecQA} & AVA-TR & 0.911 & 0.913 & 0.816 \\ & AVA-QT & 0.885 & 0.927 & 0.737 \\ & AVA-TQR & 0.906 & **0.972** & 0.797 \\ & SQuArE & **0.924** & 0.969 & **0.842** \\ \hline \multicolumn{6}{c}{**Answer Generation (GenQA)**} \\ \hline \multirow{4}{*}{MS-MARCO} & AVA-TR & 0.843 & 0.683 & 0.587 \\ & AVA-QT & 0.772 & 0.693 & 0.580 \\ & AVA-TQR & 0.839 & 0.738 & 0.601 \\ & SQuArE & **0.845** & **0.773** & **0.620** \\ \hline \multirow{4}{*}{WikiQA} & AVA-TR & 0.692 & 0.670 & 0.602 \\ & AVA-QT & 0.627 & 0.798 & 0.667 \\ & AVA-TQR & 0.671 & 0.811 & 0.678 \\ & SQuArE & **0.694** & **0.819** & **0.690** \\ \hline \multirow{4}{*}{TrecQA} & AVA-TR & 0.847 & 0.784 & 0.615 \\ & AVA-QT & 0.709 & 0.816 & 0.612 \\ \cline{1-1} & AVA-TQR & 0.779 & **0.857** & 0.647 \\ \cline{1-1} & SQuArE & **0.890** & 0.818 & **0.671** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Zero-shot evaluation using QA evaluation models trained on WQA. Same metrics used as Table 1.
BEM baseline (which is trained on the AE dataset). This drop in zero-shot performance of SQuArE compared to BEM can be attributed to (i) the lack of multiple references, and (ii) the references in AE being of a different style/format than those used for training SQuArE (entities/short answers vs complete sentences). In a fair comparison, when SQuArE(AE) is fine-tuned on the AE dataset, it beats the BEM baseline in both accuracy (0.908 vs 0.897) and AUROC (0.966 vs 0.859).
**Comparison with text similarity metrics:** We also compare SQuArE with learned text similarity metrics, BLEURT and BERTScore, in Table 4. The results show that SQuArE achieves a higher correlation with manual annotations than BLEURT and BERTScore. For complete details, see Appendix C.
### Ablation studies
To assess the improvements from the different design choices used in SQuArE, we conduct ablation studies to show how the use of negative and multiple references improves the performance and correlation with human annotations. To perform these studies, we pick one dataset (WQA) and present comparisons in Table 5.
**Usage of Negative references:** To support our claim that using negative references can improve automatic QA evaluation, we compare two additional models/baselines: (i) AVA-TQR(-), an AVA baseline which only uses a single negative reference, and (ii) SQuArE(+), a SQuArE model which only uses multiple positive references. Compared with the results in Table 1, AVA-TQR(-) outperforms both AVA-QT (the model without references) and AVA-TR (the model without the question). This validates our intuition on the importance of negative references. SQuArE(+) outperforms the AVA-TQR baseline, but performs worse than SQuArE using a combination of both positive and negative references, thereby validating our claim that combining positive and negative references improves the accuracy and generalizability of SQuArE.
**Number of references:** We hypothesize that a higher number of labeled references improves the correlation of SQuArE with human evaluation. To support this intuition, we present an ablation study where we vary the total number of references per question from 5 to: (i) 3 references per question, and (ii) a random sample of \(\in[1,5]\) references per question. We observe that SQuArE using 5 references outperforms SQuArE using 3 references (0.833 vs 0.821 in accuracy), while SQuArE using a random sample of \(\in[1,5]\) references (0.820 accuracy) performs comparably to SQuArE using 3 references.
## 5 Conclusion
In this paper, we propose SQuArE, a transformer LM encoder-based learned metric that uses multiple reference answers (positive + negative) for automatically evaluating sentence-level QA systems. We evaluate sentence-level extractive QA (AS2) and answer generation (GenQA) systems across multiple academic and industrial datasets and show that SQuArE achieves the highest correlation with human annotations, beating previous baselines.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Dataset** & **SQuArE** & **BLEURT** & **BERTScore** \\ \hline MS-MARCO & **0.238** & 0.142 & 0.168 \\ WikiqA & **0.425** & 0.219 & 0.233 \\ TreeQA & **0.862** & 0.341 & 0.646 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Pearson Correlation of evaluation metrics with human annotations on GenQA-MTURK.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Dataset** & **Approach** & **\# Refs** & **Accuracy** & **AUROC** \\ \hline \multirow{4}{*}{WikiQA} & BEM & 1 & 0.863 & 0.553 \\ & Falcon-7B & 1 & 0.081 & 0.448 \\ & Falcon-40B & 1 & **0.963** & 0.499 \\ & SQuArE & 5 & 0.919 & **0.851** \\ \hline \multirow{4}{*}{TreeQA} & BEM & 1 & 0.866 & 0.819 \\ & Falcon-7B & 1 & 0.601 & 0.529 \\ \cline{1-1} & Falcon-40B & 1 & 0.848 & 0.509 \\ & SQuArE & 5 & **0.924** & **0.969** \\ \hline \multirow{4}{*}{AE} & BEM & 1 & 0.897 & 0.959 \\ & SQuArE & 1 & 0.572 & 0.718 \\ \cline{1-1} & SQuArE(AE) & 1 & **0.908** & **0.966** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparing SQuArE against BEM and LLM baselines on the WikiQA, TrecQA and AE datasets. The BEM baseline is trained on the AE dataset. We use the same metrics as Table 1.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Technique** & **\# Refs** & **Accuracy** & **AUROC** & **Correlation** \\ \hline AVA-TQR(-) & 1 & 0.800 & 0.864 & 0.763 \\ SQuArE(+) & 5 & 0.815 & 0.885 & 0.783 \\ SQuArE & 3 & 0.821 & 0.889 & 0.787 \\ SQuArE & [1,5] & 0.820 & 0.889 & 0.786 \\ SQuArE & 5 & **0.833** & **0.896** & **0.793** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies evaluating the benefits of using negative references, and the impact of number of references on the performance of SQuArE. AVA-TQR(-) and SQuArE(+) refer to an AVA model only using negative references and a SQuArE model only using positive references. # Refs is the total number of references used for the metric. [1,5] refers to the number of references being randomly sampled \(\in[1,5]\).
### Limitations
Our approach of training QA evaluation metrics requires access to large GPU resources for training large transformer encoders such as DeBERTa. For the experiments in this paper, we only consider datasets in the English language; however, we conjecture that our techniques should work similarly for languages with a similar morphology. Since SQuArE is a learned evaluation metric based on large transformers, it might be challenging to train effectively in a scarce-data setting. While we have shown impressive zero-shot evaluation results in Table 2, extending to a completely new data domain or new language might be challenging for SQuArE without access to any labeled data. As visible from Tables 1 and 2, SQuArE's accuracy on human annotations is in the range of 80-90%, highlighting that there is still a gap with respect to human evaluation. For safety-critical applications, human evaluation still remains the best means to evaluate Question Answering systems.
| Evaluating QA systems is difficult and expensive, and the most reliable approach remains human annotation of answer correctness. Recent work (AVA, BEM) has shown that transformer LM encoder-based similarity metrics that capture semantics can be adapted for QA evaluation. However, these metrics are limited in that they use a single correct reference answer. We propose SQuArE (Sentence-level QUestion AnsweRing Evaluation), a new evaluation metric that captures semantics by using multiple reference answers (both correct and incorrect references). We evaluate SQuArE on sentence-level extractive (answer selection) and generative (GenQA) QA systems across multiple academic and industrial datasets, where it demonstrates superior performance over the aforementioned baselines.
2305.00581 | Multimodal Graph Transformer for Multimodal Question Answering | Despite the success of Transformer models in vision and language tasks, they
often learn knowledge from enormous data implicitly and cannot utilize
structured input data directly. On the other hand, structured learning
approaches such as graph neural networks (GNNs) that integrate prior
information can barely compete with Transformer models. In this work, we aim to
benefit from both worlds and propose a novel Multimodal Graph Transformer for
question answering tasks that require performing reasoning across multiple
modalities. We introduce a graph-involved plug-and-play quasi-attention
mechanism to incorporate multimodal graph information, acquired from text and
visual data, into the vanilla self-attention as an effective prior. In particular,
we construct the text graph, dense region graph, and semantic graph to generate
adjacency matrices, and then compose them with input vision and language
features to perform downstream reasoning. Such a way of regularizing
self-attention with graph information significantly improves the inferring
ability and helps align features from different modalities. We validate the
effectiveness of Multimodal Graph Transformer over its Transformer baselines on
GQA, VQAv2, and MultiModalQA datasets. | Xuehai He, Xin Eric Wang | 2023-04-30T21:22:35 | http://arxiv.org/abs/2305.00581v1 | # Multimodal Graph Transformer for Multimodal Question Answering
###### Abstract
Despite the success of Transformer models in vision and language tasks, they often learn knowledge from enormous data implicitly and cannot utilize structured input data directly. On the other hand, structured learning approaches such as graph neural networks (GNNs) that integrate prior information can barely compete with Transformer models. In this work, we aim to benefit from both worlds and propose a novel Multimodal Graph Transformer for question answering tasks that require performing reasoning across multiple modalities. We introduce a graph-involved plug-and-play quasi-attention mechanism to incorporate multimodal graph information, acquired from text and visual data, into the vanilla self-attention as an effective prior. In particular, we construct the text graph, dense region graph, and semantic graph to generate adjacency matrices, and then compose them with input vision and language features to perform downstream reasoning. Such a way of regularizing self-attention with graph information significantly improves the inferring ability and helps align features from different modalities. We validate the effectiveness of Multimodal Graph Transformer over its Transformer baselines on the GQA, VQAv2, and MultiModalQA datasets.
## 1 Introduction
A myriad of complex real-world tasks require both prior knowledge and reasoning intelligence Yi et al. (2018); Ilievski and Feng (2017). These days, vision-and-language reasoning tasks such as visual question answering (VQA) Antol et al. (2015) and multimodal question answering (MultimodalQA) Talmor et al. (2021) pose further needs for integrating structured information from different input modalities to perform reasoning. This raises two questions: What is the best way to integrate prior knowledge and reasoning components from multiple modalities in a single model? How would such an integration lead to accurate models, while being more computationally efficient and allowing for significantly more interpretability? Such questions are important to address when scaling reasoning systems to real-world use cases.
In recent years, a spectrum of methods in the literature has explored different ways of integrating structured prior information. Graph neural networks (GNNs) Wu et al. (2020) have been widely used for representation learning on graphs, and some works have investigated embedding structured information by resorting to them. However, GNNs are inefficient Wu et al. (2020) and can barely compete with Transformer models. Besides, most GNNs are designed to learn node representations on fixed and homogeneous graphs. It is therefore suboptimal to operate GNNs on vision-and-language tasks such as visual question answering (VQA), where the graphs
Figure 1: Overview of Multimodal Graph Transformer. It takes visual features, text features, and their corresponding generated graphs as inputs. The generated graph is first converted to an adjacency matrix to induce the mask matrix \(\mathbf{G}\). The modified quasi-attention score in the Transformer is computed to infer the answer. In the formula, \(\mathbf{G}\) is the graph-induced matrix constructed by concatenating adjacency matrices from both the vision and the language end. \(\mathbf{\hat{G}}\) is the trainable bias. The input features from different modalities are fused along with graph information to perform downstream reasoning.
in these problems (e.g., scene graphs) can be more complex. Alternatively, knowledge graphs (KGs), such as Freebase Bollacker et al. (2008), which represent world-level factoid information about entities and their relations in a graph-based format, have surfaced in recent years. They have been successfully used in vision and language applications including VQA Marino et al. (2019). However, they were not designed for our scenario; more concretely, we aim at filling the gap of capturing prior knowledge in Transformer models.
To mitigate the deficiencies of existing methods, this paper proposes a novel plug-and-play graph-involved Transformer-based method for multimodal question answering tasks. Our method is a _Multimodal Graph Transformer_ in the sense that it is built upon the well-established Transformer Vaswani et al. (2017) backbone, albeit with several fundamental differences. First, we introduce a systematic scheme to convert text graphs, dense region graphs, and semantic graphs from vision and language tasks into adjacency matrices to be used in our method. Second, instead of directly computing the attention score, we learn a newly proposed quasi-attention score with graph-induced adjacency matrices at its heart, treating graph structure as a highly effective inductive bias. Third, different from previous Transformer methods, where self-attention is fully learned from data, we introduce graph-structured information into the self-attention computation to guide the training of Transformers, as shown in Figure 1.
The main contributions are summarized below:
* We propose a novel Multimodal Graph Transformer learning framework that combines multimodal graph learning from unstructured data with Transformer models.
* We introduce a modular plug-and-play graph-involved quasi-attention mechanism with a trainable bias term to guide the information flow during training.
* The effectiveness of the proposed methods is empirically validated on GQA, VQA-v2, and MultiModalQA tasks.
## 2 Related Works
### Multimodal question answering
Visual Question Answering (VQA) Antol et al. (2015) has been a prominent topic in the field of multimodal question answering, garnering significant attention and advancing substantially since the introduction of the first large-scale VQA dataset by Antol et al. (2015). To answer VQA questions, models typically leverage variants of attention to obtain a representation of the image that is relevant to the question Andreas et al. (2016); Yang et al. (2015); Xu and Saenko (2016); Fukui et al. (2016); Lu et al. (2016). A plethora of works Liang et al. (2021); Hudson and Manning (2018); Yi et al. (2018); Xiong et al. (2016); Kim et al. (2018); Teney et al. (2017) have attempted to enhance the reasoning capability of VQA models, with Teney et al. (2017) proposing to improve VQA using structured representations of the scene contents and questions. They developed a deep neural network that leverages the structure in these representations and builds graphs over scene objects and question words. The recent release of MultiModalQA Talmor et al. (2021), a dataset that demands joint reasoning over texts, tables, and images, has received widespread attention. However, similar to VQA, existing MultiModalQA methods have not fully utilized structured information from the input concepts. To address this, we propose a combination of multimodal graph learning and Transformer models to improve question answering across inputs from multiple different modalities.
### Attention mechanisms
The attention mechanism Xu et al. (2015); Devlin et al. (2018) has dramatically advanced the field of representation learning in machine learning. Introduced in Vaswani et al. (2017), it is widely used in language tasks such as abstractive summarization Xu et al. (2020), machine translation Bahdanau et al. (2014), reading comprehension Dai et al. (2020), and question answering Min et al. (2019). Zhang et al. (2020) propose using syntax to guide text modeling by incorporating explicit syntactic constraints into attention mechanisms. Meanwhile, attention has seen increasing application in multimodal tasks Li et al. (2020); Nam et al. (2017); Lu et al. (2016), where it is usually used for learning interactions between multiple inputs. Following their success, Transformer models have also shown impressive results
on several vision-and-language tasks Chen et al. (2019); Hu et al. (2020); He et al. (2022); Sun et al. (2019). Yun et al. (2019) proposes Graph Transformer Networks (GTNs) that can generate new graph structures and learn effective node representation on the new graphs in an end-to-end fashion. Different from these works, our work incorporates graph information from different modalities into the Transformer to improve the reasoning ability.
### Exploiting graphs in multimodal reasoning
Considering that graph priors can transfer commonalities and mitigate the gap between visual and language domains, researchers explore how to use graphs Teney et al. (2017); Yu et al. (2020) properly in both tasks. In recent years, many classes of GNNs have been developed for both tasks which are divided into two approaches: spectral Bruna et al. (2013) and non-spectral methods Chen et al. (2018). Graphs can also be transferred into latent variables by GCN Yang et al. (2019); Yao et al. (2018), which can be directly utilized by models. However, the need for aligning graph priors from different modalities to do reasoning limits the use of graph priors. Our work addresses this problem via the graph-involved quasi-attention mechanism.
### Pretraining
Pretrained models in computer vision Simonyan and Zisserman (2014); He et al. (2016) and NLP Devlin et al. (2018); Yang et al. (2019); Liu et al. (2019), have achieved state-of-the-art performances in many downstream tasks Zhu et al. (2017); Jiang et al. (2022); Karpathy and Fei-Fei (2015); Lee et al. (2018). Other pretrained models Lu et al. (2019); Sun et al. (2019) based on BERT Devlin et al. (2018) and ViLT Kim et al. (2021) also demonstrate their effectiveness on downstream vision-language tasks. Recent works on vision-language pretraining such as OSCAR Li et al. (2020) perform cross-modal alignment in their visual-language pretraining models. Likewise, our proposed method includes cross-modality alignment, which is critical for reasoning. Our proposed modular plug-and-play graph-involved quasi-attention mechanism is also model-agnostic and can be also applied to other pretrained Transformer-based vision and language models.
## 3 Multimodal Graph Transformer
### Background on Transformers
The Transformer layer Vaswani et al. (2017) consists of two modules: multi-head attention and a feed-forward network (FFN). Specifically, each head is represented by four main matrices: the query matrix \(\mathbf{W}_{i}^{q}\in\mathbb{R}^{d^{m}\times\frac{d^{q}}{h}}\), the key matrix \(\mathbf{W}_{i}^{k}\in\mathbb{R}^{d^{m}\times\frac{d^{k}}{h}}\), the value matrix \(\mathbf{W}_{i}^{v}\in\mathbb{R}^{d^{m}\times\frac{d^{v}}{h}}\), and the output matrix \(\mathbf{W}_{i}^{o}\in\mathbb{R}^{\frac{d^{v}}{h}\times d^{o}}\), and takes the hidden states \(\mathbf{H}\in\mathbb{R}^{l\times d^{m}}\) of the previous layer as input, where \(d\) denotes the model dimension, \(h\) the number of heads, and \(i\) the head index. The output of attention is given by:
\[\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}=\mathbf{H}\mathbf{W}_{i}^{q},\mathbf{H}\mathbf{W}_{i}^{k}, \mathbf{H}\mathbf{W}_{i}^{v} \tag{1}\]
\[\mathrm{Attention}\left(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}\right)=\mathrm{SoftMax }\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{\frac{d^{q,k}}{h}}}\right)\mathbf{V}_ {i} \tag{2}\]
\[\mathbf{H}_{i}=\mathrm{Attention}\left(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}\right)\mathbf{W }_{i}^{o} \tag{3}\]
where \(\mathbf{Q}_{i}\in\mathbb{R}^{l\times\frac{d^{q}}{h}},\mathbf{K}_{i}\in\mathbb{R}^{l\times\frac{d^{k}}{h}},\mathbf{V}_{i}\in\mathbb{R}^{l\times\frac{d^{v}}{h}}\) are obtained by the linear transformations \(\mathbf{W}_{i}^{q},\mathbf{W}_{i}^{k},\mathbf{W}_{i}^{v}\), respectively. \(\mathrm{Attention}(\cdot)\) is the scaled dot-product attention operation. The output of each head is then transformed to \(\mathbf{H}_{i}\in\mathbb{R}^{l\times d^{o}}\) by \(\mathbf{W}_{i}^{o}\).
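For concreteness, Eqs. (1)-(3) for a single head amount to the following sketch (PyTorch-style, with illustrative dimensions; not the authors' implementation):

```python
import torch

def single_head_attention(H, Wq, Wk, Wv, Wo):
    """One attention head per Eqs. (1)-(3): project the hidden states H,
    apply scaled dot-product attention, then the output transform Wo."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv              # Eq. (1)
    scale = Q.shape[-1] ** 0.5                    # sqrt(d^{q,k}/h)
    A = torch.softmax(Q @ K.T / scale, dim=-1)    # Eq. (2)
    return (A @ V) @ Wo                           # Eq. (3)

l, d_m, d_head, d_o = 6, 768, 64, 768
H = torch.randn(l, d_m)
Wq, Wk, Wv = (torch.randn(d_m, d_head) for _ in range(3))
Wo = torch.randn(d_head, d_o)
out = single_head_attention(H, Wq, Wk, Wv, Wo)    # shape (l, d_o)
```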
### Framework overview
The entire framework of the proposed Multimodal Graph Transformer method is depicted in Figure 2. Without loss of generality, we assume the end task is VQA in the following discussion while noting that our framework can be applied to other vision-language tasks, such as multimodal question answering.
Given the input images and questions, the framework first constructs three graphs, including the semantic graph, dense region graph, and text graph, which will be described in more detail in the following sections. The graph \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) represents the set of nodes in the graph and \(\mathcal{E}\) represents the edges connecting them, is fed into Transformers to guide the training process.
### Multimodal graph construction
We build three types of graphs and feed them into Transformers: _text graph_, _semantic graph_, and _dense region graph_. We now introduce them in detail.
**Text graph.** The task of Visual Question Answering involves a combination of an image, a question, and its corresponding answer. To process the question, we extract the entities and create a text graph representation. We then build the graph \(G=(\mathcal{V},\mathcal{E})\) as shown in the left of Figure 2. The set of nodes, \(\mathcal{V}\), represents the entities and the set of edges, \(\mathcal{E}\), represents the relationships between the pairs of entities. This results in:
* A set of \(N\) entities, each represented by a vector of token embeddings, that constitute the nodes of the graph.
* A set of pairwise relations between entities, forming the edges of the text graph. The relationship between entities \(i\) and \(j\) is represented by a vector \(e_{ij}\) which encodes the relative relationships.
**Semantic graph.** In tasks such as multimodal question answering, there might be additional inputs in the form of tables or lengthy paragraphs. To handle these inputs, a linear representation of the table can be created and a semantic graph can be constructed using a similar approach. They are processed using the scene graph parser Zhong et al. (2021), which transforms the text sentence into a graph of entities and relations, as depicted in Figure 3. The output of the scene graph parser includes:
* A set of \(N\) words that constitute the nodes of the semantic graph, where \(N\) is the number of parsed words in the texts.
* A set of possible pairwise relations between words, such as "left" and "on" as shown in Figure 3, which constitute the edges of our graph. An edge between words connecting \(j\) to \(i\) is represented by \(e_{ij}\), namely, the connectivity is indicated as: \(e_{ij}=\begin{cases}0,&i,j\ \text{ not connected}\\ 1,&i,j\ \text{ connected}\end{cases}\).
**Dense region graph.** The visual features are extracted by slicing the input images into patches and flattening them. A dense region graph \(G=(\mathcal{V},\mathcal{E})\) is then converted into masks, with \(\mathcal{V}\) being the set of extracted visual features and \(\mathcal{E}\) being the set of edges connecting each feature node, following the method described in Kim et al. (2021). This results in a graph that is nearly fully connected.
The resulting three graphs are then transformed into adjacency matrices, where the elements are either -\(\infty\) or zero. The conversion process is depicted in Figure 3 using the semantic graph as an
Figure 3: A naive demonstration of converting a semantic graph into an adjacency matrix. Cells in blue denote '0's for that element in the graph matrix, while white ones denote '-inf's. We employ the matrix as the mask when computing the quasi-attention.
Figure 2: The figure illustrates the overall framework of our Multimodal Graph Transformer. The input from different modalities are processed and transformed into corresponding graphs, which are then converted into masks and combined with their features to be fed into Transformers for downstream reasoning. In detail, semantic graphs are created through scene graph generation methods, dense region graphs are extracted as densely connected graphs, and text graphs are generated through parsing.
example. These adjacency matrices are used inside the scaled dot-product attention to control the flow of information, by masking out (setting to \(-\infty\)) the values.
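A minimal sketch of this conversion, assuming a 0/1 adjacency matrix as input (illustrative only):

```python
import torch

def adjacency_to_mask(adj):
    """Turn a 0/1 adjacency matrix into an additive attention mask:
    connected pairs map to 0, disconnected pairs to -inf, so the softmax
    assigns them zero weight."""
    mask = torch.zeros_like(adj, dtype=torch.float)
    return mask.masked_fill(adj == 0, float("-inf"))

adj = torch.tensor([[1, 1, 0],
                    [1, 1, 1],
                    [0, 1, 1]])
print(adjacency_to_mask(adj))  # 0 where connected, -inf elsewhere
```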
### Graph-involved quasi-attention
In order to effectively utilize structured graph knowledge in our self-attention computation, we incorporate the graph as an extra constraint in each attention head by converting it into an adjacency matrix. The graph matrix, denoted as \(\mathbf{G}\), is constructed by combining various masks. An illustration of this process can be seen in Figure 4. The visual mask is generated from the dense region graph, while the text mask is derived from the text graph. Additionally, the cross-modal mask is set to an all-zero matrix to encourage the model to learn the cross-attention between visual and text features, thereby promoting alignment across the different modalities.
Within the context of adding graph information, when the vision graph mask and the text graph mask are concatenated and aligned with image and text features, we believe that a more flexible masking-out mechanism is beneficial, rather than keeping a single constant mask matrix inside the Softmax operation. Drawing insights from Liu et al. (2021), who add a relative position bias to each head when computing similarity, we also parameterize a trainable bias \(\hat{\mathbf{G}}\) and involve it in the training process. Finally, we compute the quasi-attention as follows:
\[\mathrm{Attention}=\mathrm{SoftMax}\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{\frac{d^{q,k}}{h}}}+\mathbf{G}+\lambda\hat{\mathbf{G}}\right)\mathbf{V}_{i}, \tag{4}\]
where \(\lambda\) is the tradeoff hyper-parameter that controls the contribution of \(\hat{\mathbf{G}}\), and \(\mathbf{G}\) is our graph-induced matrix constructed by concatenating graph matrices from both the vision and the language end. For clarity, we use \(\mathbf{G}\) and \(\hat{\mathbf{G}}\) to distinguish the fixed and trainable graph matrices, respectively. During training, \(\mathbf{G}\) is frozen and does not receive gradient updates, while \(\hat{\mathbf{G}}\) contains trainable parameters.
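Putting Eq. (4) into code, a single quasi-attention head might look as follows (a sketch following the description above: \(\mathbf{G}\) is the frozen graph-induced mask and \(\hat{\mathbf{G}}\) a trainable bias; the dimensions and value of \(\lambda\) are illustrative):

```python
import torch

def quasi_attention(Q, K, V, G, G_hat, lam=0.1):
    """Eq. (4): scaled dot-product scores plus the frozen graph mask G
    and the trainable bias lam * G_hat, followed by softmax pooling."""
    scale = Q.shape[-1] ** 0.5
    scores = Q @ K.T / scale + G + lam * G_hat
    return torch.softmax(scores, dim=-1) @ V

l, d = 8, 64
Q, K, V = torch.randn(l, d), torch.randn(l, d), torch.randn(l, d)
G = torch.zeros(l, l)                          # graph-induced mask (no gradient)
G_hat = torch.nn.Parameter(torch.zeros(l, l))  # trainable bias
out = quasi_attention(Q, K, V, G, G_hat)       # shape (l, d)
```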
We now introduce the motivation behind adding two types of graph matrices. We perform the masking process by adding \(\mathbf{G}\) when computing the quasi-attention because it can be interpreted as a form of attentional pooling (learning to align), in which each element of \(\mathbf{G}\) pools all relevant information across all elements of the relative importance matrix computed by \(\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{\frac{d^{q,k}}{h}}}\right)\). Hence, during fine-tuning, the model ignores redundant features and only focuses on useful information. The mask also forces the model to learn the cross attention between features from the images and questions and to align across them. The trainable bias \(\hat{\mathbf{G}}\) captures information gained during the training process. Such information is valuable for fine-tuning, making the Transformer more robust and helping it gain numerical stability.
### Training
The interdependence of output features from various modalities calls for a unified optimization approach for the Transformers in both the visual question answering and multimodal question answering tasks. To accomplish this, we train the models end-to-end, which ensures their joint optimality. The final output of our models is a set of classification logits, from which the VQA models select the best answer among the candidate answers. To train the models, we compute the cross-entropy loss Zhang and Sabuncu (2018) on the output logits produced by the Transformer, which measures the difference between the predicted class probabilities and the actual class labels.
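In code, this objective is the standard classification cross-entropy over the answer vocabulary (a minimal PyTorch sketch; the batch size and random logits are placeholders):

```python
import torch
import torch.nn.functional as F

num_answers = 3129                                        # VQA v2 answer classes
logits = torch.randn(4, num_answers, requires_grad=True)  # Transformer outputs
labels = torch.randint(0, num_answers, (4,))              # gold answer indices
loss = F.cross_entropy(logits, labels)                    # classification loss
loss.backward()                                           # end-to-end gradients
```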
## 4 Experiments
### Datasets
#### VQA v2
The VQA v2 dataset Goyal et al. (2017) extends the VQA Antol et al. (2015) dataset to better balance visual and textual information through the collection of complementary images. Each
Figure 4: A naive demonstration of adding the graph-induced mask while computing the quasi-attention when the inputs are from two modalities. The visual mask is the mask converted from the dense region graph and the text mask is converted from the text graph. The cross-modal mask, which is always set as an all-zero matrix, is imposed to encourage the model to learn the cross-attention between the image features and text features, thus facilitating the alignment across them.
question in VQA v2 is associated with a pair of similar images with different answers, resulting in a total of 1.1 million QA pairs and 204,000 images. The data split for VQA v2 includes a training set with 83,000 images and 444,000 questions, a validation set with 41,000 images and 214,000 questions, and a test set with 81,000 images and 448,000 questions. The annotated answers are in natural language, but they are commonly converted to a classification task with 3,129 answer classes. As described by Anderson et al. (2018), the model selects the answer to each question from a set of 3,129 most frequent answers. Following this convention, we fine-tune the multimodal graph transformer model on the VQAv2 training and validation sets, while reserving 1,000 validation images and related questions for internal validation.
#### GQA

The GQA dataset contains 22M questions over 113K images. The questions in GQA are designed to require multi-hop reasoning to test the reasoning skills of VQA models. GQA greatly increases the complexity of the semantic structure of questions, leading to a more diverse function set. The real-world images in GQA also bring a bigger challenge in visual understanding. We treat the task as a classification task, following the VQA v2 setting.
#### MultiModalQA

MultiModalQA (MMQA) contains 29,918 questions. We split the dataset with reference to the public split. Around 60% of the questions in MMQA are compositional. The answer for each question can be a single answer or a list of answers.
### Baselines
We compare with four state-of-the-art VQA models: LXMERT Tan and Bansal (2019), NSM Hudson and Manning (2019), OSCAR Li et al. (2020), and VinVL Zhang et al. (2021).
* LXMERT Tan and Bansal (2019) designs five pretraining tasks: masked language modeling, feature regression, label classification, cross-modal matching, and image question answering to pretrain a large Transformer model. Towards this, a large-scale Transformer Vaswani et al. (2017) model is built that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modal encoder.
* NSM Hudson and Manning (2019) predicts a probabilistic graph that represents the underlying semantics of an image and performs sequential reasoning over the graph, traversing its nodes to make the inference.
* OSCAR Li et al. (2020) uses object tags detected in images as anchor points to significantly ease the learning of alignments, improving previous methods and using self-attention to learn image-text semantic alignments.
* VinVL Zhang et al. (2021) developed a new object detection model to create better visual features of images than previous classical object detection models.
We compare with four baselines introduced in the MultiModalQA paper Talmor et al. (2021):
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Method & Open questions & Binary questions & Overall accuracy \\ \hline \multirow{6}{*}{GQA} & LXMERT Tan and Bansal (2019) & - & - & 60.0 \\ & LXMERT w/ Graph Tan and Bansal (2019) & - & - & 61.4 \\ & HANS Kim et al. (2020) & - & - & 69.4 \\ & NSM Hudson and Manning (2019) & 49.3 & 78.9 & 63.2 \\ & OSCAR Li et al. (2020) & - & - & 61.6 \\ & VinVL Zhang et al. (2021) & - & - & 65.1 \\ & Multimodal Graph Transformer (Ours) & 59.4 & 80.5 & 68.7 \\ \hline \multirow{6}{*}{VQA v2} & LXMERT Tan and Bansal (2019) & - & - & 72.4 \\ & HANS Kim et al. (2020) & - & - & 65.1 \\ \cline{1-1} & NSM Hudson and Manning (2019) & - & - & 63.0 \\ \cline{1-1} & OSCAR Li et al. (2020) & - & - & 73.8 \\ \cline{1-1} & VinVL Zhang et al. (2021) & - & - & 76.6 \\ \cline{1-1} & Multimodal Graph Transformer (Ours) & 66.5 & 87.0 & 74.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy (%) comparison of different methods on the VQA task. Ours has the second best performance and is comparable to state-of-the-art methods. After applying our proposed quasi-attention mechanism and exploiting the use of graphs, there is also a 2% improvement of overall accuracy on the LXMERT baseline, suggesting the generalization ability of our method.
Question-only (Kaushik and Lipton, 2018), Context-only (Kaushik and Lipton, 2018), AutoRouting, and ImplicitDecomp.
* Question-only is a sequence-to-sequence model that directly generates the answer given the question.
* Context-only first predicts the question type using the classifier and then feeds in the relevant context to predict the answer.
* AutoRouting first determines the modality where the answer is expected to occur, and then runs the corresponding single-modality module.
* ImplicitDecomp is a 2-hop implicit decomposition baseline and so far the state-of-the-art method on the MultiModalQA dataset.
### Implementation details
The input texts undergo preprocessing using a scene graph parser which extracts entities and their relationships. The text features are obtained through a pre-trained BERT tokenizer, allowing us to extract text spans of individual entities and text spans containing two related entities. As for images, we employ the methods described in Dosovitskiy et al. (2020); Kim et al. (2021) to extract visual features and create graph masks. This involves resizing the shorter edge of the input images while preserving the aspect ratio and limiting the longer edge, followed by patch projection and padding for batch training. The resulting patch embeddings are used as inputs along with constructed dense region graph that is densely connected. The Transformer backbone used in this setting is the pretrained VIT-B-32 (Dosovitskiy et al., 2020) version, consisting of 12 layers with a hidden size of \(H\) = 768, layer depth of \(D\) = 12, patch size of \(P\) = 32, a multi-layer perceptron size of 3072, and 12 attention heads. To test this setting, all inputs and graphs are merged and processed by the Transformer backbone, which learns from features from different modalities.
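The patch projection step described above is commonly realized as a strided convolution; a minimal sketch (illustrative shapes, not the exact preprocessing pipeline):

```python
import torch

patch, hidden = 32, 768                        # ViT-B/32 patch size and width
proj = torch.nn.Conv2d(3, hidden, kernel_size=patch, stride=patch)
img = torch.randn(1, 3, 224, 320)              # resized image, sides multiples of 32
tokens = proj(img).flatten(2).transpose(1, 2)  # (1, num_patches, hidden)
print(tokens.shape)                            # torch.Size([1, 70, 768])
```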
#### 4.3.1 MultiModalQA
We further investigate the effectiveness of our proposed method on MultiModalQA (Talmor et al., 2021), a recently introduced and demanding task that requires joint reasoning across various modalities such as texts, images, and tables. We employ a Multimodal Graph Transformer to tackle the task, using the same approach for extracting vision and text features as in VQA. Additional modalities, such as tables, are encoded by linearizing them and utilizing pre-trained models like RoBERTa-large (Liu et al., 2019). After generating text graphs, semantic graphs, and dense region graphs from the input questions, text, tables, and images, we feed them along with the extracted features into the Transformer.
### Results and analysis
Table 1 presents a comparison of the accuracy of our proposed method on the GQA dataset with previous state-of-the-art methods. Our proposed method ranks second in terms of accuracy and outperforms the third best method by a substantial margin, with an absolute improvement of over 3% in overall accuracy. The performance of our method is comparable to the state-of-the-art method.
We also conducted experiments on the VQA v2 dataset, and the results are summarized in Table 1 and Table 3. As shown, there are significant improvements over methods without graphs, suggesting that incorporating graph information into the Transformer is effective.
Additionally, after incorporating our proposed graph method into LXMERT, we can observe a boost in overall accuracy on the GQA dataset, demonstrating the generalization ability of the proposed method in incorporating graph information into quasi-attention computation.
Table 2 compares the Exact Match (EM) and average F1 score of our proposed method on the
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & EM & F1 \\ \hline Question-only & 16.9 & 19.5 \\ Context-only & 6.6 & 8.5 \\ \hline AutoRouting & 32.0 & 38.2 \\ ImplicitDecomp & 46.5 & 51.7 \\ \hline Human & 84.8 & 90.1 \\ \hline \hline Multimodal Transformer w/o Graph & 50.1 & 56.4 \\ Multimodal Graph Transformer (Ours) & 52.1 & 57.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: EM (%) and F1 (%) of Multimodal Graph Transformer and its Transformer baseline on questions in MultiModalQA that require reasoning over multiple modalities. We also quote the results from the MultiModalQA (Talmor et al., 2021) paper. Incorporating graph information into the Multimodal Graph Transformer can boost about 2% F1 and 4% EM performance.
MultiModalQA dataset with the baselines. The results show that our proposed method outperforms the baseline that lacks graph information, demonstrating that our method generalizes to more complicated vision-and-language reasoning tasks.
### Ablation studies
We perform ablation studies to verify the necessity of using two-stream inputs with the help of graphs to deal with input from different modalities, with the GQA dataset as our testing bed. For all experiments, we use the overall accuracy as the evaluation metric.
The results presented in Table 3 show the superiority of our proposed Multimodal Graph Transformer over the variant where a single-modality input is fed into a Transformer. Our method, which involves dividing the input streams into two separate parts and processing each part through a Transformer, outperforms the Multimodal Transformer without Graph. This demonstrates the beneficial effect of incorporating graph information into the processing of the input data during training. The use of different input features with the help of graphs allows for a better alignment of the information from different modalities, which is reflected in the improved performance of our proposed method.
### Qualitative results
One qualitative example is shown in Figure 5. As can be seen, predictions from the Multimodal Graph Transformer are more relevant to the contents of the input image, as the graph information improves the inferring ability of the Transformer, further indicating the effectiveness of the Multimodal Graph Transformer.
## 5 Conclusions
In this paper, we have presented a novel method to integrate structured graph information to guide the Transformers training. Our method can model interactions between different modalities and achieves competitive performance on multimodal reasoning tasks such as VQA and MultiModalQA. Experimental results show that our method outperforms many other methods on the GQA dataset. More importantly, the proposed quasi-attention mechanism is model-agnostic and it is possible to apply it to other Transformer-based methods. We will test our methods on other vision-and-language reasoning tasks and include the comparison with existing graph representation learning methods in our future work.
## 6 Limitations and Potential Risks
The limitations of the proposed Multimodal Graph Transformer include the potential preservation of fairness and bias issues inherent in the pretrained Transformer models, despite the involvement of graph information. Additionally, the integration of graphs may introduce new biases that can further exacerbate the problem. One potential source of bias is the vision-and-language dataset itself, which may favor majority cases and overlook minority
Figure 5: A qualitative comparison from VQA v2. _fresh_ is the ground truth. Predictions from the Multimodal Graph Transformer (ours) are more relevant to the contents of the input image and assign a higher confidence score to the ground truth.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Method & Open questions & Binary questions & Overall accuracy \\ \hline \multirow{3}{*}{GQA} & One-modality Transformer & 47.7 & 78.1 & 62.7 \\ & Multimodal Transformer w/o Graph & 49.9 & 81.0 & 65.4 \\ & Ours & **60.1** & **90.2** & **72.4** \\ \hline \multirow{3}{*}{VQA v2} & One-modality Transformer & 60.5 & 85.4 & 70.1 \\ & Multimodal Transformer w/o Graph & 64.8 & 86.3 & 72.1 \\ \cline{1-1} & Ours & **66.7** & **87.2** & **74.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on the GQA and VQA v2 datasets. The table demonstrates the effectiveness of incorporating graph information into the Transformer architecture through ablation studies performed on GQA and VQA v2. The results of these studies clearly indicate that including graph information leads to an improvement in performance.
cases. Unfortunately, the proposed method is not equipped to address these biases and issues, making further research and consideration crucial when building upon or directly using this method for vision and language tasks.
| Although Transformer models have succeeded in vision and language tasks, they often learn knowledge implicitly from enormous data and cannot utilize structured input data directly. On the other hand, structured learning approaches such as graph neural networks (GNNs), which integrate prior information, can barely compete with Transformer models. In this work, we aim to benefit from both worlds and propose a novel Multimodal Graph Transformer for question answering tasks that require reasoning across multiple modalities. We introduce a graph-involved plug-and-play quasi-attention mechanism that incorporates multimodal graph information, acquired from text and visual data, into the vanilla self-attention as an effective prior. In particular, we construct the text graph, dense region graph, and semantic graph to generate adjacency matrices, and then compose them with input vision and language features to perform downstream reasoning. Such a way of regularizing self-attention with graph information significantly improves the inferring ability and helps align features from different modalities.
2309.11759 | Symbol Detection for Coarsely Quantized OTFS | This paper explicitly models a coarse and noisy quantization in a
communication system empowered by orthogonal time frequency space (OTFS) for
cost and power efficiency. We first point out, with coarse quantization, the
effective channel is imbalanced and thus no longer able to circularly shift the
transmitted symbols along the delay-Doppler domain. Meanwhile, the effective
channel is non-isotropic, which imposes a significant loss to symbol detection
algorithms like the original approximate message passing (AMP). Although the
algorithm of generalized expectation consistent for signal recovery (GEC-SR)
can mitigate this loss, the complexity in computation is prohibitively high,
mainly due to a dramatic increase in the matrix size of OTFS. In this context,
we propose a low-complexity algorithm that incorporates into the GEC-SR a quick
inversion of quasi-banded matrices, reducing the complexity from a cubic order
to a linear order while keeping the performance at the same level. | Junwei He, Haochuan Zhang, Chao Dong, Huimin Zhu | 2023-09-21T03:43:28 | http://arxiv.org/abs/2309.11759v3 | # Symbol Detection for Coarsely Quantized OTFS
###### Abstract
This paper explicitly models a coarse and noisy quantization in a communication system empowered by orthogonal time frequency space (OTFS) for cost and power efficiency. We first point out that, with coarse quantization, the effective channel is imbalanced and thus no longer able to circularly shift the transmitted symbols along the delay-Doppler domain. Meanwhile, the effective channel is non-isotropic, which imposes a significant loss on symbol detection algorithms like the original approximate message passing. Although the algorithm of generalized expectation consistent for signal recovery (GEC-SR) can mitigate this loss, the complexity in computation is prohibitively high, mainly due to a dramatic increase in the matrix size of OTFS. In this context, we propose a low-complexity algorithm that embeds into GEC-SR a quick inversion of the quasi-banded matrices, thus reducing the algorithm's complexity from cubic order to linear order while keeping the performance at almost the same level.
OTFS, coarse quantization, GEC-SR, matrix inversion, low-complexity.
## I Introduction
Interest in estimating signal parameters from quantized data has increased significantly in recent years [1]. Ultra-wideband applications, such as millimeter-wave communications, require high sampling rates, but conventional analog-to-digital converters (ADCs) are expensive and power-hungry. In cost- and power-constrained cases, the use of high-precision ADCs is not feasible, which makes ADCs with coarse quantization a better choice for systems like 6G [2]. A prominent waveform candidate for 6G is _orthogonal time frequency space (OTFS)_ [3, 4], a 2D modulation technique that transforms the information to the delay-Doppler (DD) coordinate. OTFS enjoys excellent robustness in high-speed vehicular scenarios, while orthogonal frequency division multiplexing (OFDM) suffers from disrupted orthogonality among subcarriers due to the high Doppler shift.
Detection of symbols in the delay-Doppler domain is key to OTFS communications. Linear detectors like LMMSE are usually prior-ignorant, i.e., they are unaware of the prior information of transmitted symbols, and therefore not optimal in a general sense. Non-linear detectors like sphere decoding are optimal in detection accuracy but often suffer from an unaffordable computational complexity. An effective and efficient alternative is to use message passing (MP) for the detection of OTFS symbols, which includes: [5] proposed a hybrid message passing detector for fractional OTFS that combines standard MP and approximate MP; [6] adopted Gaussian mixture distributions as the messages; [7] designed a hybrid detection algorithm that combines MP with parallel interference cancellation; [8] detected the signals in both the time and DD domains iteratively using a unitary transformation; [9] developed a message passing algorithm that utilized the sparsity of the effective channel; [10] applied expectation propagation (EP) to the detection and reduced its complexity significantly by exploiting the channel structure; [11] proposed a unitary approximate message passing (UAMP) algorithm, addressing the challenge of many channel paths and fractional Doppler shifts effectively and efficiently; [12] circumvented the matrix inversion of vector approximate message passing (VAMP) by an average approximation. These works, however, considered only the ideal case of infinite-precision ADCs. The influence of coarse quantization is not yet accounted for.
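To make the setting concrete, the following sketch applies a generic low-resolution uniform quantizer to the noisy received samples of a linear channel (the quantizer design, grid size, and constellation are illustrative assumptions, not the papers' exact models):

```python
import numpy as np

def coarse_quantize(z, bits=2, clip=2.0):
    """Uniform mid-rise quantizer applied to real and imaginary parts."""
    step = 2 * clip / 2 ** bits
    q = lambda v: np.clip(np.floor(v / step) * step + step / 2,
                          -clip + step / 2, clip - step / 2)
    return q(z.real) + 1j * q(z.imag)

rng = np.random.default_rng(0)
N = 64                                                      # grid size
H = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK symbols
w = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))   # receiver noise
y = coarse_quantize(H @ x + w)  # observation: a non-linear map of H @ x + w
```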
This paper explicitly models the coarse and noisy quantization for OTFS communications. We find that a major difference between coarse quantization and the infinite-precision case is the following: the effective channel is no longer a multiplication of three matrices, i.e., the postprocessing transform, the multi-path fast-fading channel, and the preprocessing transform; instead, a non-linear mapping enters between the postprocessing and the remaining two, making it impossible to model them as an integrated whole. Ignoring this difference and directly applying the algorithms above is seen to bring about noticeable performance loss. To overcome the limitation, we consider a generalized linear model (GLM) [13] that takes in the noisy quantization, the fast-fading channel, and the preprocessing transform, and validate the performance of two efficient solvers, GAMP [13] and GEC-SR [14]. We find that GEC-SR is much more robust to the change of effective channel; however, the complexity of GEC-SR quickly soars as the matrix size in OTFS squares that of its OFDM counterpart.
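The remedy proposed next rests on a standard fact: a banded linear system can be solved in \(O(Nb^{2})\) time instead of the \(O(N^{3})\) of a general solve. A minimal SciPy comparison (this only illustrates the complexity argument, not the paper's algorithm):

```python
import numpy as np
from scipy.linalg import solve, solve_banded

rng = np.random.default_rng(1)
N, b = 1000, 2                              # matrix size and bandwidth
A = np.diag(rng.normal(size=N) + 5.0)       # well-conditioned diagonal
for k in range(1, b + 1):                   # fill sub-/super-diagonals
    A += np.diag(rng.normal(size=N - k), k) + np.diag(rng.normal(size=N - k), -k)
rhs = rng.normal(size=N)

x_dense = solve(A, rhs)                     # general solver: O(N^3)

# Pack A into the (2b+1, N) diagonal-ordered form used by solve_banded.
ab = np.zeros((2 * b + 1, N))
for k in range(-b, b + 1):
    ab[b - k, max(k, 0):N + min(k, 0)] = np.diag(A, k)
x_banded = solve_banded((b, b), ab, rhs)    # banded solver: O(N b^2)

assert np.allclose(x_dense, x_banded)
```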
In this context, we propose a low-complexity GEC-SR, which utilizes a quick inversion of quasi-banded matrices. The idea of inverting a quasi-banded matrix is not new [10, 15]; however, the channel matrix here is asymmetric (due to the lack of the postprocessing transform), and the matrix to invert is in general not quasi-banded. Interestingly, we find that if we approximate the GEC-SR covariance matrix by a scaled identity matrix, the matrix to invert simply reduces to a quasi-banded one. It is also worth noting that the method of [10] is not applicable to coarse quantization, because [10, Eq. (40)] holds only in the quantization-free case. Finally, we carry out simulations to confirm the effectiveness of the proposed algorithm. To sum up, we contribute in these two aspects:
* We point out that, in the presence of coarse quantization, the effective channel becomes imbalanced, containing only one of the two transform matrices, which makes the OTFS modulation unable to circularly shift the transmitted symbols along the delay-Doppler domain. | This paper explicitly models coarse and noisy quantization in a communication system empowered by orthogonal time frequency space (OTFS), for cost and power efficiency. We first point out that, with coarse quantization, the effective channel becomes imbalanced and can no longer circularly shift the transmitted symbols along the delay-Doppler domain. Furthermore, the effective channel is non-isotropic, which imposes a significant loss on symbol detection algorithms such as the original approximate message passing (AMP). The generalized expectation consistent for signal recovery (GEC-SR) algorithm can mitigate this loss, but its computational complexity is prohibitively high, mainly due to the dramatic increase in the matrix size of OTFS. In this context, this paper incorporates into GEC-SR a quick inversion of quasi-banded matrices.
2309.09116 | A Hierarchical Framework for explaining the Cosmic Ray Spectrum using
Diffusive Shock Acceleration | The hypothesis that the entire cosmic ray spectrum, from $\lesssim1\,{\rm
GeV}$ to $\gtrsim100\,{\rm EeV}$ energy, can be accounted for by diffusive
shock acceleration on increasingly large scales is critically examined.
Specifically, it is conjectured that Galactic cosmic rays, up to $\sim3\,{\rm
PeV}$, are mostly produced by local supernova remnants, from which they escape
upstream. These cosmic rays initiate a powerful magnetocentrifugal wind,
removing disk mass and angular momentum before passing through the Galactic
Wind Termination Shock at a radius $\sim200\,{\rm kpc}$, where they can be
re-accelerated to account for observed cosmic rays up to $\sim30\,{\rm PeV}$.
The cosmic rays transmitted downstream from more powerful termination shocks
associated with other galaxies can be further accelerated at Intergalactic
Accretion Shocks to the highest energies observed. In this interpretation, the
highest rigidity observed particles are protons; the highest energy particles
are heavy nuclei, such as iron. A universal "bootstrap" prescription, coupling
the energy density of the magnetic turbulence to that of the resonant cosmic
rays, is proposed, initially for the highest energy particles escaping far
ahead of the shock front and then scattering, successively, lower energy
particles downstream. Observable implications of this general scheme relate to
the spectrum, composition and sky distribution of Ultra-High-Energy Cosmic
Rays, the extragalactic radio background, the Galactic halo magnetic field and
Pevatrons. | Roger Blandford, Paul Simeon, Noémie Globus, Payel Mukhopadhyay, Enrico Peretti, Kirk S. S. Barrow | 2023-09-17T00:26:21 | http://arxiv.org/abs/2309.09116v1 | # A Hierarchical Framework for explaining the Cosmic Ray Spectrum using Diffusive Shock Acceleration
###### Abstract:
The hypothesis that the entire cosmic ray spectrum, from \(\lesssim 1\,\mathrm{GeV}\) to \(\gtrsim 100\,\mathrm{E}\mathrm{e}\mathrm{V}\) energy, can be accounted for by diffusive shock acceleration on increasingly large scales is critically examined. Specifically, it is conjectured that Galactic cosmic rays, up to \(\sim 3\,\mathrm{Pe}\mathrm{V}\), are mostly produced by local supernova remnants, from which they escape upstream. These cosmic rays initiate a powerful magnetocentrifugal wind, removing disk mass and angular momentum before passing through the Galactic Wind Termination Shock at a radius \(\sim 200\,\mathrm{kpc}\), where they can be re-accelerated to account for observed cosmic rays up to \(\sim 30\,\mathrm{Pe}\mathrm{V}\). The cosmic rays transmitted downstream from more powerful termination shocks associated with other galaxies can be further accelerated at Intergalactic Accretion Shocks to the highest energies observed. In this interpretation, the highest rigidity observed particles are protons; the highest energy particles are heavy nuclei, such as iron. A universal "bootstrap" prescription, coupling the energy density of the magnetic turbulence to that of the resonant cosmic rays, is proposed, initially for the highest energy particles escaping far ahead of the shock front and then scattering, successively, lower energy particles downstream. Observable implications of this general scheme relate to the spectrum, composition and sky distribution of Ultra-High-Energy Cosmic Rays, the extragalactic radio background, the Galactic halo magnetic field and Pevatrons.
## 1 Hierarchical Cosmic Ray Acceleration
Cosmic rays, our first non-electromagnetic astronomical messenger, were discovered over a century ago. It is generally accepted that most GeV-TeV particles are accelerated at SuperNova Remnants, SNR. It is widely agreed that they are produced by Diffusive Shock Acceleration, DSA [e.g., 1], which can account for the energetics and the observed power-law distribution, after allowing for energy-dependent propagation. This spectrum steepens above \(\sim 3\,\mathrm{PeV}\), and several possible sources, including pulsars and their nebulae, the Galactic Center and the Galactic Wind Termination Shock, GWTS [2, and references therein], have been proposed. The highest energy cosmic rays (\(E\gtrsim 1\,\mathrm{EeV}\)) are generally argued to be extragalactic in origin, and proposed sources include relativistic jets from stellar and supermassive black holes, and large-scale, nonrelativistic Intergalactic Accretion Shocks, IAS [3, 4].
In this report, we describe one approach to explaining the entire cosmic ray spectrum using one mechanism -- DSA -- in a variety of locales (Fig. 1). We do this less in the spirit of strong advocacy and more with the wish to exploit a common physical description of DSA to derive observable implications which can falsify the general model. To be specific, we postulate that suprathermal particles from stellar winds are accelerated by expanding supernova shock fronts and that the observed GeV-PeV spectrum combines the highest energy particles, which escape upstream, with the lower energy particles, which are transmitted downstream.
All of these cosmic rays must escape the Galactic disk in \(\lesssim 10\,\mathrm{Myr}\) as a hydromagnetic
Figure 1: Montage displaying major features of the hierarchical framework for explaining cosmic ray acceleration from \(\lesssim 1\,\mathrm{GeV}\) to \(\gtrsim 100\,\mathrm{EeV}\). Non-relativistic cosmic rays in the hot interstellar medium are accelerated by supernova shock waves up to \(\sim\mathrm{PeV}\) energies, with the most energetic particles escaping ahead of the shock front to become most of the observed Galactic cosmic ray distribution. These cosmic rays drive an MHD wind through the Galactic halo eventually passing through a termination shock where more energetic particles are accelerated. Those that escape upstream contribute to the observed \(\gtrsim\mathrm{PeV}\) spectrum; those that are transmitted downstream join outflows from more powerful galaxies to form the intergalactic cosmic ray distribution. Intergalactic cosmic rays can fall into strong intergalactic shock fronts, notably those surrounding rich clusters of galaxies where they can be re-accelerated to EeV energies.
wind carrying off mass, angular momentum and energy from the Galactic disk. This wind passes through a termination shock at the periphery of the (dark matter-dominated) Galaxy at \(\sim 200\,\)kpc, where cosmic rays are further re-accelerated. Those that escape upstream into the halo and the disk contribute to the "shin" of the observed spectrum; those that are transmitted downstream join even more powerful outflows from active, including starburst, galaxies to build up the PeV cosmic ray population in the intergalactic medium, evolving to form a "cosmic web" of IAS, including quasi-spherical shocks around rich clusters, quasi-cylindrical shocks around filaments, and sheets. We may live in a filament or sheet, whose downstream-transmitted cosmic rays, with energies up to \(\sim 3\,\)EeV, may contribute to the spectrum we observe. Finally, we propose that the highest energy cosmic rays -- up to \(\sim 200\,\)EeV -- derive from upstream escape of particles accelerated at the strongest, nearby, intergalactic shocks like the accretion shocks surrounding the Virgo Cluster (17 Mpc) and more distant, stronger shocks surrounding richer clusters. The sources must be relatively nearby because these extreme energy cosmic rays have comparatively short lifetimes in the cosmic microwave background [e.g., 5, and Globus et al., these proceedings].
## 2 Generic Diffusive Shock Acceleration
The central idea behind DSA is that high energy particles are scattered by a spectrum of hydromagnetic waves so that they diffuse with coefficient \(D\) relative to the background plasma moving with velocity \(\mathbf{u}\). Their energies change little so long as \(\mathbf{u}\) changes slowly. However, at a shock front, the speed of the plasma changes abruptly and cosmic rays experience a relative gain, \(O(u/c)\), measured in a frame moving with the gas. A typical cosmic ray crosses the shock \(O(c/u)\) times leading to an energy gain \(O(1)\). This is a statistical (Fermi-like) process, and an exponentially small number of particles will have an exponentially large gain in energy leading to a power-law spectrum. The spectrum of the accelerated particles depends upon the geometry of the flow and the microphysics of the shock. It is important to allow for the divergence/convergence of the flow, which affects the particle acceleration and escape.
A common feature of DSA is that it must be near maximally efficient, at least when the shock is strong, if it is to account for the full spectrum of cosmic rays. This has the immediate implication that the cosmic rays participate in the dynamics of the flow and change both \(\mathbf{u}\) and \(D\). Most discussions of DSA have either assumed or attempted to calculate a form for \(D\). Sometimes these have been inspired by linear or quasilinear growth of plasma instabilities; sometimes, the evolution from a non-accelerating initial state is followed using a Particle-In-Cell, PIC, code. These simulations have taught us much but lack the dynamic range and geometry needed to capture the entire problem. Mostly, they have not demonstrated the high levels of scattering required for maximal acceleration. In this report, we ask what form of \(D\) might account for the observations and then ask under what circumstances it could be sustained.
Let us first set up a formalism appropriate to an idealized model of a quasi-spherical IAS surrounding a rich cluster but adaptable to other shocks. Specifically, we consider a stationary spherical gas inflow with speed \(u(r)\) towards a spherical shock with radius \(r_{\rm shock}\) and speed \(u_{\rm shock}\). The corresponding gas density is \(\rho_{\rm shock}\), and the rate of flow of kinetic energy across the shock is \(L_{\rm shock}=2\pi r_{\rm shock}^{2}\rho_{\rm shock}u_{\rm shock}^{3}\). We consider protons with rigidity \(R\), which we measure in units of a rigidity scale \(R_{0}\) and introduce \(q\equiv\ln(R/R_{0})\). We posit the presence of a spatial diffusion
coefficient \(D(r,q)\). The usual equation for cosmic ray transport ahead of a strong, nonrelativistic shock front describes the evolution of the isotropic part of the particle distribution function in the frame of the gas, \(f(r,R)\), under a combination of convection by the gas and diffusion through it. It is convenient to replace \(f\) with \(N(r,q)=16\pi^{2}r^{2}R^{3}f\), which is the number of cosmic rays per unit \(r,q\). The equation of cosmic ray particle conservation can then be written as
\[\partial_{t}N+\partial_{r}\left[-Nu-D\ (\partial_{r}N-2N/r)\right]+\partial_{q} \left[N\dot{q}\right]=0, \tag{1}\]
where \(\dot{q}\equiv dq/dt\) combines the adiabatic acceleration due to compression of the gas, \(\dot{q}_{\rm ad}=(1/3r^{2})d(r^{2}u)/dr\), and radiative loss \(\dot{q}_{\rm loss}\) due to interaction with the microwave background. We can regard the two expressions within square brackets as components of particle flow vector \({\bf F}=\{F_{r},F_{q}\}\).
Formally, the rate of cosmic ray acceleration is determined at, and in the frame of, the shock front by imposing continuity of the particle flux at a given rigidity. We optimize the acceleration by supposing that the shock is strong with compression ratio 4 and that there is no diffusion downstream of the shock. This implies that \(F_{r}=-(\partial_{q}N+N)/4\) at the shock. For small enough values of \(q\), we suppose that protons in the intergalactic medium are swept into the shock front, where they are subject to DSA. \(F_{r}\) is negative, and the diffusion scale height \(\sim D/u\) is much smaller than \(r\), so the shock is effectively planar. We recover the usual Green's function solution, \(N(1,q)=-4e^{-q}\int_{0}^{q}dq^{\prime}e^{q^{\prime}}F_{r}(1,q^{\prime})\), so that the spectrum \(N(1,q)\) falls off more slowly than \(e^{-q}\).
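For concreteness, this Green's function can be evaluated numerically. The short Python sketch below assumes a hypothetical inflowing flux profile \(F_{r}(1,q)\) (the text does not specify one) and checks that the resulting \(N(1,q)\) indeed falls off more slowly than \(e^{-q}\).

```python
import numpy as np
from scipy.integrate import quad

# Shock-surface spectrum from the Green's function solution quoted above:
#   N(1, q) = -4 e^{-q} * Integral_0^q e^{q'} F_r(1, q') dq'
# F_r < 0 corresponds to particles swept INTO the shock.  The soft
# power-law injection profile below is a hypothetical placeholder.

def F_r(qp, slope=0.5):
    """Inflowing (negative) particle flux at the shock, per unit q."""
    return -np.exp(-slope * qp)

def N_shock(q, slope=0.5):
    integral, _ = quad(lambda qp: np.exp(qp) * F_r(qp, slope), 0.0, q)
    return -4.0 * np.exp(-q) * integral

# The third column, N(1,q) * e^q, grows with q: the spectrum is harder
# than e^{-q}, as stated in the text.
for q in (1.0, 2.0, 4.0):
    print(q, N_shock(q), N_shock(q) * np.exp(q))
```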
We next suppose that there is some intermediate range of \(q\) for which the supply of protons ahead of the shock can be ignored. Particles of lower \(q\) will be accelerated with a spectrum \(N(1,q)\propto e^{-q}\), the scale height will still be \(\ll r_{\rm shock}\), and there will be an absence of particles with this \(q\) far upstream. In this regime, \(F_{r}\sim 0\). However, the scale height will increase with \(q\) and eventually become \(\sim r\) so that the protons can escape with \(F_{r}>0\). It is helpful to ignore the downstream flow and introduce a fictitious surface current along \(q\) at the shock so that inflowing particles at low \(q\) become escaping particles at high \(q\). Note that if the mean free path \(\ell\) satisfies \(ur/c\lesssim\ell\lesssim r\), the particle can escape and still propagate diffusively.
## 3 Bootstrap Process
Efficient acceleration depends upon maintaining a short scattering mean free path. Most analyses to date, both analytical and computational, have focused on the growth of various instabilities close to the subshock. However, efficient acceleration to \(\gtrsim\) EV rigidity at an IAS requires that \(\lesssim\) nG magnetic field in the IGM become \(\gtrsim\)\(\mu\)G field, and we suppose that this grows spatially towards the shock through interaction with particles that escape upstream and impedes their escape. This "bootstrap" [6] process initiates magnetic field growth in a manner that will be idiosyncratic to the conditions and will involve a combination of MHD, resonant and "Bell" instabilities [e.g., 7]. The field will quickly become nonlinear and strongly turbulent with no preferred direction and spanning a range of wavevectors, \(k\). At this point we suppose that the primary interaction involves particles of rigidity \(R\) and Larmor radius \(r_{g}\sim k^{-1}\sim R/B(R)\) (implicitly and in SI units) interacting with waves of spectral energy density, per \(q\), \(W_{{\rm mag}\,q}\sim B(R)^{2}/\mu_{0}\). We then suppose that the mean free path is \(\ell(r,q)\sim r_{g}\). (This is a spectral generalization of the "Bohm" hypothesis.)
Now \(W_{\rm mag\,\,q}\) will be determined by the resonant particles with corresponding spectral energy density \(W_{\rm part\,\,q}=RN(r,q)/4\pi r^{2}\). We next introduce a simple, equipartition ansatz, that \(W_{\rm mag\,\,q}\sim W_{\rm part\,\,q}\), again eschewing constants \(O(1)\). Adopting these relations, we obtain an expression for the nonlinear diffusion coefficient \(D(r,q)=\ell(r,q)c/3\sim(rc/3)(4\pi R/\mu_{0}Nec)^{1/2}\). The particle - wave interaction time will be of order the wave period which is of order the time for wave energy to flow through \(k\)-space and shorter than the flow time, \(\sim(c/u)\ell\).
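As a sketch, the bootstrap diffusion coefficient can be written out step by step. The compact formula quoted above absorbs factors of \(e\) and \(c\) into its implicit unit conventions; the version below instead follows the physical chain of relations (resonant energy density, equipartition field, Larmor radius, Bohm-like mean free path) in SI units, dropping order-unity factors throughout, as in the text.

```python
import numpy as np

C, E, MU0 = 2.998e8, 1.602e-19, 4e-7 * np.pi  # SI constants

def D_bootstrap(r, R_volts, N):
    """Nonlinear diffusion coefficient from the bootstrap ansatz.

    r        radius in metres
    R_volts  rigidity in volts
    N        cosmic rays per unit (r, q) at that location, in 1/m
    """
    W_part = E * R_volts * N / (4.0 * np.pi * r**2)  # resonant energy density, J/m^3
    B = np.sqrt(MU0 * W_part)                        # equipartition ansatz, W_mag ~ W_part
    r_g = R_volts / (B * C)                          # Larmor radius of resonant particles, m
    return r_g * C / 3.0                             # D ~ l c / 3 with l ~ r_g, in m^2/s
```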
There are two relevant limits to the maximum rigidity of the particles that escape upstream, \(R_{\rm max}\). The first, \(R_{\rm diff}\), comes about because the energy density of the resonant waves at \(R_{\rm max}\) at a distance \(\sim r_{\rm shock}\) ahead of the shock front should be at most a modest fraction of the gas kinetic energy density \(\sim\rho u^{2}/2\). Otherwise it and the associated cosmic-ray energy will decelerate the gas and weaken the shock. This sets a limit \(R_{\rm diff}\sim(\mu_{0}L_{\rm shock}u/4\pi c^{2})^{1/2}\). This bound is related to, though ultimately lower than, that given by [8]. Our prescription probably maximizes \(R_{\rm diff}\) at a given shock. A more complete, time-dependent treatment will include mass, momentum and energy conservation at the fluid level. The second limit, \(R_{\rm loss}\), arises from setting the radiative loss time \(|\dot{q}_{\rm loss}|^{-1}\) to the acceleration/flow time \(\sim r_{\rm shock}/u_{\rm shock}\).
Figure 2: (a) Contours of proton number, (per \(r\), \(q\)), \(N(r,q)\), in \(r-q\) space ahead of a strong cluster accretion shock modeled on the Virgo cluster. There is supposed to be a source of intergalactic cosmic rays with \(q<-3\) that sources a shock surface current flowing towards positive \(q\), shown as green arrows at \(r=1\). This surface current falls slowly for \(-3\lesssim q\lesssim 0\) as the upstream cosmic rays have scale-heights smaller than \(r\) and essentially no particles escape upstream. For \(0\lesssim q\lesssim 3\), an increasing fraction of the particles escape while experiencing modest adiabatic acceleration and increasingly strong radiative loss. The arrows show the flow of protons in \(r-q\) space. (b) Spectral proton luminosity, \(dL/dq\) at the Galaxy. The peak of the spectrum of escaping protons in this example has a rigidity \(\sim 7\,{\rm EV}\) and an energy \(\sim 7\,{\rm EeV}\). Heavy nuclei, such as iron, propagate in the same fashion at the same rigidity but are subject to greater losses and, for iron, the peak rigidity is \(\sim 4\,{\rm EV}\). The corresponding energy is \(\sim 100\,{\rm EeV}\). Richer clusters can accelerate heavy nuclei to the highest energies measured, \(\gtrsim 200\,{\rm EeV}\).
## 4 Intergalactic Accretion Shocks
We now apply these ideas to a model of a specific IAS, that associated with the Virgo cluster. In this section, we measure \(u\) in units of \(u_{\rm shock}=1000\,{\rm km\,s^{-1}}\), \(r\) in units of \(r_{\rm shock}=2\,{\rm Mpc}\) and \(\rho\) in units of \(\rho_{\rm shock}=10^{-29}\,{\rm g\,cm^{-3}}\), adopting convenient estimates based upon the observations. This gives \(L_{\rm shock}\sim 2\times 10^{45}\,{\rm erg\,s^{-1}}\) and hence \(R_{\rm diff}\sim 10\,{\rm EV}\). For the interactions of protons with the CMB, the radiative loss rate can be approximated by \(\dot{q}_{\rm rad}=-0.05R^{0.5}-5\times 10^{-5}R^{2.2}\,{\rm Gyr^{-1}}\), for \(2\lesssim R\lesssim 200\) (with \(R\) in EV). This yields a more stringent estimate \(R_{\rm loss}\sim 5\,{\rm EV}\).
A full treatment of this problem will include the gas, cosmic-ray and magnetic contributions to the time-dependent evolution, including the shock. It should also describe the downstream flow, which affects the boundary condition. Here we report on a simpler approach that seeks a stationary solution that matches the observations with plausible magnetic and particle energy densities. We suppose that the gas speed in scaled units is \(u=1.18r^{-0.5}-0.18r\), which has the Galaxy recede from Virgo at \(u(8.5)\sim-1.1\). In addition, the shock is assumed to be strong, with compression ratio 4, and the contribution of the high-energy particles to the upstream pressure and energy density is taken to be subdominant. The domain of the particle distribution is taken to be the rectangle formed by \(r=r_{\rm shock}=1\), \(q=q_{\rm min}=-3\), \(r=r_{\rm Galaxy}=8.5\), \(q=q_{\rm max}=3\). At \(q_{\rm min}\), we impose the test particle solution for \(N\) with \(F_{r}=0\) and normalization such that the proton energy density is below the kinetic energy density. \(N\) vanishes at \(q_{\rm max}\) and there is an outflow boundary condition for \(r=r_{\rm Galaxy}\). As can be seen from Fig. 2, our prescription for \(D(r,q)\) can lead to a spectrum of escaping particles with \(R_{\rm max}\sim 7\,{\rm EV}\), and accelerates a significant luminosity extending up to \(\sim 10\,{\rm EV}\).
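The ingredients of this stationary model are easy to tabulate. The sketch below encodes the adopted wind profile, the adiabatic term \(\dot{q}_{\rm ad}\) from Sec. 2 and the quoted proton loss-rate fit (assuming the rigidity in that fit is measured in EV, consistent with the quoted range), and checks that \(u(1)=1\) and \(u(8.5)\approx-1.1\).

```python
import numpy as np

def u(r):
    """Scaled inflow speed for the Virgo model (u_shock = 1000 km/s,
    r_shock = 2 Mpc): u(1) = 1, and u(8.5) ~ -1.1, the Galaxy receding."""
    return 1.18 * r**-0.5 - 0.18 * r

def qdot_ad(r, dr=1e-6):
    """Adiabatic acceleration (1/3r^2) d(r^2 u)/dr by central differencing;
    positive (compressive) near the shock."""
    r2u = lambda x: x**2 * u(x)
    return (r2u(r + dr) - r2u(r - dr)) / (2.0 * dr) / (3.0 * r**2)

def qdot_rad(R):
    """Proton loss rate on the CMB in Gyr^-1, R assumed in EV,
    valid for 2 <~ R <~ 200 as quoted in the text."""
    return -0.05 * R**0.5 - 5e-5 * R**2.2

print(u(1.0), u(8.5))   # -> 1.0 and about -1.1
print(qdot_ad(1.0))     # -> about +0.4 at the shock
print(qdot_rad(5.0))    # -> about -0.11 Gyr^-1 at R = 5 EV
```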
The proton rigidities achievable under this scenario are insufficient to account for the highest energies reported. However, DSA depends upon rigidity, and heavier nuclides can be accelerated to much higher energy. A proper treatment of this problem mandates a nuclear reaction network. Here, we just illustrate what could be happening by considering an exaggerated injection of iron nuclei which propagate in the turbulence generated by the protons with \(\dot{q}_{\rm rad,Fe}=-0.07R^{2.2}\,{\rm Gyr^{-1}}\). This produces \(R_{\rm max}\sim 2\,{\rm EV}\), slightly less than that associated with protons. The associated energy is \(E_{\rm max}\sim 50\,{\rm EeV}\). More powerful shocks within the cosmic ray horizon can accelerate heavy nuclei to somewhat greater energies, as observed.
While it appears that rich clusters can accelerate the highest energy cosmic rays observed, we need less extreme accelerators to fill in the gap between the UHECR and galaxy-generated intergalactic cosmic rays. For these, we turn to the filament accretion shocks (FAS). The same principles can be used as for cluster shocks, with the important differences that filaments are essentially cylindrical, as opposed to spherical, that they are typically weaker with modest Mach numbers and that we may be much more interested in the cosmic ray population transmitted downstream. This is because the Galaxy may be located inside just such a filament. Radiative loss is generally unimportant here. As the spectrum is much steeper, these cosmic rays should be mostly protons below the ankle in the spectrum.
## 5 Galactic Wind Termination Shocks
Observations of the secondary to primary ratio of Galactic cosmic rays strongly indicate that they escape the Galactic disk in \(\lesssim 10\,{\rm Myr}\). Much of the hot gas from the supernova remnants that create them cannot cool and should likewise escape. The interstellar medium is an open system
and these winds carry off mass, angular momentum and energy and help supply the circumgalactic medium with a population of \(\sim 1-10^{6}\,\)GeV cosmic rays. More powerful galaxies and their active nuclei do likewise. These intergalactic cosmic rays are the dominant input for re-acceleration by IAS.
However, the escape velocity from the Galaxy's dark matter potential is several times the circular velocity, and powering the wind presents a challenge. The best way to meet this interpretative challenge is to posit the presence of an MHD wind. In this case, gas only has to have sufficient specific enthalpy to be levitated to a modest height where it can pass through an Alfvén critical point and magnetic stress can take over to propel the combined hot gas plus cosmic ray fluid to radii \(\sim 200\,\)kpc. At some point, a Galactic wind must pass through a termination shock. Although this is definitely an MHD shock and somewhat different in character, it is expected to be an effective cosmic ray accelerator. Again, the shock is likely to be roughly spherical but the upstream flow is diverging, not converging, giving a negative contribution to \(\dot{q}\). Against this, cosmic rays do not escape the upstream flow and can achieve modest, incremental energy gain from quite different parts of the shock as the mean free path at the highest energies may be an appreciable fraction of the radius. The contribution of the particles transmitted downstream from all galaxies is plausibly consistent with the input cosmic ray spectrum invoked in Sec. 4.
## 6 Supernova Remnants
We take the next inferential step by applying the bootstrap approach to supernova remnant shocks. Specifically, we adopt the same basic prescription for the diffusion coefficient as we used for IAS and assume that most of the observed spectrum is dominated by the cosmic rays that escaped upstream. Of course, the accelerators are expected to be far more diverse with hot stellar wind shocks and supershocks formed by OB associations making significant contributions. This interpretation is quite different from the original motivation for DSA which presumed a single power law transmitted downstream. We now know that the measured spectra are far more structured and this could reflect an idiosyncratic local history. An important part of traditional cosmic ray acceleration schemes is injection. In the DSA context these are the suprathermal particles that are chosen for acceleration to much higher energies. We have implicitly proposed that injection is unimportant in the termination and cluster shocks. The same could be true for supernova remnants if sufficient MeV cosmic rays are produced by stellar winds in the hot interstellar medium, prolonging their ionization loss times.
## 7 Observational Implications
This hierarchical model for explaining most of the 36-octave observed cosmic ray spectrum, adopting one basic acceleration scheme (DSA plus bootstrap), is bold, simple and, probably, quite refutable. Despite it being the final stage of the process, we have highlighted UHECR production by IAS. The common features of DSA present at all stages are a quasi-spherical flow, the emphasis on the particles escaping upstream and the presence of strong MHD turbulence generated ahead of the shock by resonant cosmic rays. In the case of an IAS, however, the prediction is that the maximum energy would not scale with rigidity, because of the limit imposed by the losses in the extragalactic photon backgrounds. The bootstrap conjecture that there is a universal, local proportionality between the
cosmic ray pressure and the energy density of the resonant waves is simple, efficient and easy to use. There is no question that this ansatz oversimplifies the problem but it is easy to imagine a sequence of increasingly ambitious, high dynamic range simulations to explore it further.
The observational prediction is that there should be no signs of the time dependence expected from a transient model (see [5]). The secure dipole anisotropy (6.9 \(\sigma\)) and what becomes of less significant identifications should be consistent with the location of the Virgo cluster and richer clusters within the cosmic ray horizon after taking into account the Galactic magnetic field deflections. An intriguing possibility is that individual shocks may be detectable through their radio synchrotron emission associated with much lower energy electrons and that the totality of all such shocks accounts for the ARCADE 2 spectrum [4, and Simeon et al., these proceedings]. The downstream transmission of cosmic rays, predicted on this basis, might come into conflict with upper limits on \(\gamma\)-ray emission from the outskirts of clusters from indirect searches for dark matter. Perhaps the most prescriptive implication of the Galactic wind stage is the prediction of a magnetic field that is strong and increasingly toroidal with radius (somewhat similar to what has been found in X-ray polarimetric observations of Pulsar Wind Nebulae and _in situ_ measurements of the solar wind). Cosmic-ray propagation analyses for our Galaxy and infrared plus radio polarimetry of other galaxies are likely to confront these predictions. The identification of Galactic "Pevatrons" should help us understand if SNR are as efficient accelerators as suggested here.
This work was supported by a grant from the Simons Foundation (00001470, RB, NG).
| The hypothesis that the entire cosmic ray spectrum (from \(\lesssim 1\) GeV to \(\gtrsim 100\) EeV) can be explained by diffusive shock acceleration on increasingly large scales is critically examined. In particular, Galactic cosmic rays up to \(\sim 3\) PeV are taken to be produced mainly by local supernova remnants, from which they escape upstream. These cosmic rays drive a powerful magnetocentrifugal wind, removing disk mass and angular momentum before passing through the Galactic Wind Termination Shock at a radius of \(\sim 200\) kpc, where they can be re-accelerated up to \(\sim 30\) PeV. Cosmic rays flowing downstream from the more powerful termination shocks of other galaxies may be further accelerated at intergalactic accretion shocks. |
2309.06225 | GRFolres: A code for modified gravity simulations in strong gravity | GRFolres is an open-source code for performing simulations in modified
theories of gravity, based on the publicly available 3+1D numerical relativity
code GRChombo.
Note: Submitted for review in the Journal of Open Source Software; Comments
welcome; The code can be found at https://github.com/GRChombo/GRFolres | Llibert Aresté Saló, Sam E. Brady, Katy Clough, Daniela Doneva, Tamara Evstafyeva, Pau Figueras, Tiago França, Lorenzo Rossi, Shunhui Yao | 2023-09-12T13:42:27 | http://arxiv.org/abs/2309.06225v2 | # GRFolres: A code for modified gravity simulations in strong gravity
###### Abstract
The following brief overview has been prepared as part of the submission of the code to the Journal of Open Source Software. The code itself can be found at [https://github.com/GRChombo/GRFolres](https://github.com/GRChombo/GRFolres)\({}^{1}\).
Footnote 1: Folres (pronounced _fol-res_) is a word meaning covers or linings in the Catalan language. It has a specific application in the tradition of _Castells_ (Human Towers), denoting the second layers of reinforcement above the base _pinya_. We use it here in analogy to our understanding of effective field theories (EFTs) of gravity as an infinite sum of terms organised as a derivative expansion, in which the first one corresponds to GR (with up to 2 derivatives), and the second one to modified theories up to 4 derivatives, which are those that we are able to simulate with GRFolres.
## 1 Summary
Gravitational waves (GWs) are generated by the mergers of dense, compact objects like black holes (BHs) and neutron stars (NSs). This provides an opportunity to study the strong field, highly dynamical regime of Einstein's theory of general relativity (GR) at higher curvature scales than previous observations [1, 2, 3, 4, 5, 6]. It is possible that at such scales modifications to GR may start to manifest. However, in order to detect such modifications, we need to understand what deviations could look like in theories beyond GR, in particular in the merger portion of the signal for near-equal-mass binaries, which are key targets of the LIGO-Virgo-KAGRA network of detectors (and their future 3G successors). Such predictions necessitate the use of numerical relativity (NR), in which the (modified) equations of GR are evolved from an initial configuration several orbits before merger, through the merger period and the subsequent "ringdown", during which the gravitational wave signal can be extracted near the computational boundary.
Current waveforms are tested for consistency with GR by measuring parameterised deviations to the merger, inspiral and ringdown phases [7, 8, 9, 10, 11], and not by comparison to any particular theories. If we obtain predictions for specific models, we can check whether such parameterised deviations are well-motivated and consistent in alternative theories of gravity [1, 12, 13, 14, 15, 16, 17], and quantify our ability to constrain model parameters using GW observations.
There are many ways to modify GR, one of the simplest being to couple an additional scalar degree of freedom to gravity, which may (if certain conditions are satisfied) result in so-called "hairy" stationary black hole solutions; that is, black holes with a stable, non-trivial configuration of the scalar field around them (see [18] for a review). An example of this is the class of Horndeski models [19]. Cubic Horndeski theories have been studied in [20, 21] and an implementation of this is included in GRFolres. Another more general example within the Horndeski models is the four-derivative scalar-tensor theory (4\(\partial\)ST), which is the most general theory with up to fourth powers of the derivatives (but still second order equations of motion). Despite their relative simplicity, many models have lacked well-posed (and thus numerically stable) formulations until recently.
An important breakthrough was made in 2020 by Kovacs and Reall, who showed that Horndeski theories are indeed well-posed in a modified version of the harmonic gauge [22, 23] - a particular coordinate system already used in NR. Subsequently, several specific theories within these classes were probed in their highly dynamical and fully non-linear regimes [24, 25, 26, 27]. The extension of the results of [22, 23] to the alternative "singularity avoiding" coordinates in [28, 29, 30] offers an alternative gauge in which to probe questions of hyperbolicity, and may even offer stability advantages for certain cases such as unequal mass ratios, as studied in [27]. Numerical work on these theories is still in the early stages of development and many technical details on their numerical implementation need to be further investigated. Equally, many scientific questions, concerning our accurate understanding of binary black holes' phenomenology in alternative theories of gravity and their implications for tests of GR, also remain unanswered.
The goal of GRFolres is to meet this need for further research, and to provide a model code to help others develop and test their own implementations. The code is based on the publicly available NR code GRChombo [31, 32], which itself uses the open source Chombo framework [33] for solving partial differential equations (PDEs).
In the following sections, we discuss the key features, motivations, and applications of the code.
## 2 Key features
GRFolres inherits many of the features of GRChombo and Chombo. Here we list the key features.
* The code implements the modified moving puncture gauge that ensures a well-posed evolution in the weak coupling regime, as proposed in [28]. The precise form of the gauge and its parameters can be changed and the standard moving puncture gauge is safely recovered by setting certain parameters to zero.
* The currently available theories in the code are \(4\partial\)ST and cubic Horndeski. The code is templated over the theory (in the same way that GRChombo is templated over a matter class) so that it can easily be changed without major code modifications. The code also provides an implementation of \(4\partial\)ST without backreaction onto the metric (but including the possibility of using the new gauge), to enable comparison with previous works using the decoupling limit approximation.
* The fields are evolved with 4th-order Runge-Kutta time integration and their derivatives are calculated with the same finite-difference stencils used in GRChombo (4th and 6th order are currently available); a generic sketch of this scheme is given after this list.
* GRFolres inherits all the available boundary conditions in GRChombo, namely, extrapolating, Sommerfeld (radiative), reflective, and periodic.
* The current examples use solutions that approximately or trivially solve the modified energy and momentum constraints of the theory. An elliptic solver for more general configurations is under development, using a modified CTTK formalism [34, 35].
* GRFolres has routines for monitoring the constraint violation and calculating the energy densities associated with the different scalar terms in the action, as discussed in [28, 29, 30]. Other diagnostics can be added as required. We also extract data for the tensor and scalar gravitational waveforms.
* Following the structure of GRChombo, the GRFolres code is also written in C++ and uses object oriented programming (OOP) and templating.
* The code inherits GRChombo's hybrid OpenMP/MPI parallelism, with explicit vectorisation of the evolution equations via intrinsics, and is AVX-512 compliant.
* The code inherits the flexible AMR grid structure of Chombo, which provides Berger-Oliger style [36] AMR with block-structured Berger-Rigoutsos grid generation [37]. Depending on the problem, the user may specify the refinement to be triggered by the additional degrees of freedom, i.e. the scalar field, or those of the metric tensor.
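As a concrete illustration of the evolution scheme in the list above, the sketch below applies the same ingredients (fourth-order centred finite differences and classic fourth-order Runge-Kutta) to a toy 1D periodic advection equation. It is a generic method-of-lines demonstration in Python, not GRFolres code, which applies this scheme to its evolution variables on an AMR grid.

```python
import numpy as np

def ddx4(f, dx):
    """4th-order centred first derivative on a periodic grid."""
    return (-np.roll(f, -2) + 8.0 * np.roll(f, -1)
            - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * dx)

def rhs(f, dx, c=1.0):
    """Right-hand side of the advection equation df/dt = -c df/dx."""
    return -c * ddx4(f, dx)

def rk4_step(f, dt, dx):
    """One classic 4th-order Runge-Kutta step."""
    k1 = rhs(f, dx)
    k2 = rhs(f + 0.5 * dt * k1, dx)
    k3 = rhs(f + 0.5 * dt * k2, dx)
    k4 = rhs(f + dt * k3, dx)
    return f + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

x = np.linspace(0.0, 1.0, 128, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian pulse
for _ in range(100):
    f = rk4_step(f, 0.5 * dx, dx)     # CFL-limited time step
```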
## 3 Statement of Need
As far as we are aware there is currently no other publicly available code that implements the \(4\partial\)ST theory of modified gravity or the cubic Horndeski theory in (3+1)-dimensional numerical relativity.
There is at least one private code, based on the PAMR/AMRD and HAD [38, 39] infrastructure, that was used in the first works to successfully implement the modified general harmonic gauge for \(4\partial\)ST [24, 25, 26, 27]. Since this code uses a Generalised Harmonic Coordinates (GHC) formulation, it necessitates excision of the interior of black holes, which can be difficult to implement in practice. As a consequence, many groups in the numerical relativity community have opted to use singularity avoiding coordinates such as the BSSN [40, 41, 42], Z4C [43, 44] or CCZ4 [45, 46] formulations in the puncture gauge [47, 48], which do not require the excision of the interior of black holes from the computational domain. In GRFolres, we use the results of [28, 29, 30] to extend the well-posed formulations of modified gravity to singularity avoiding coordinates. This provides an alternative gauge to the modified GHC one used by other groups. Not only does this provide a valuable comparison to their work, but also eliminates the need for excision.
There are also a number of (3+1)-dimensional codes that implement the equations for the additional scalar degree of freedom in Einstein-scalar-Gauss-Bonnet without backreaction onto the metric tensor, including one implementation using GRChombo [49], which we have integrated into GRFolres to enable comparison between the methods. In particular, Canuda ([https://bitbucket.org/canuda](https://bitbucket.org/canuda)) [50], which uses the Einstein Toolkit ([http://einsteintoolkit.org/](http://einsteintoolkit.org/)), with its related Cactus ([http://cactuscode.org](http://cactuscode.org)) [51, 52] and Kranc ([http://kranccode.org](http://kranccode.org)) [53] infrastructure, was used in [50, 54, 55, 56, 57]. Another implementation is based on the Spectral Einstein Code or SpEC ([http://www.black-holes.org/SpEC.html](http://www.black-holes.org/SpEC.html)) [58], as used in [59]. A neutron star background was considered in [60] with a modification of the SACRA-MPI code [61, 62]. Whilst order-reduced methods like those in [12, 49, 50, 54, 55, 56, 57, 59, 60, 63, 64, 65] provide an estimate of the scalar dynamics and associated energy losses, they may miss information about the fully non-linear impact on the metric and suffer from the accumulation of secular errors over long inspirals.
In spherical symmetry, several codes have been developed that implement Einstein-scalar-Gauss-Bonnet (a subset of the \(4\partial\)ST theory that we include as an example in GRFolres). These include a code using the NRPy framework ([http://astro.phys.wvu.edu/bhathome](http://astro.phys.wvu.edu/bhathome)) [66] in [63], the private code of Ripley & Pretorius in [67, 68, 69, 70, 71], and a modification of the GR1D code [72, 73] for calculating core collapse in Einstein-scalar-Gauss-Bonnet in [74]. There is also the fully nonlinear spherical code developed in [75, 76]. Spherical codes provide a useful testing ground in which coordinate ambiguities can be avoided [67], and a well-posed formulation is easier to obtain, but they lack the generality required to study objects with angular momentum or binary mergers.
## 4 Research projects to date using GRFolres
So far the code has been used to study a range of fundamental physics problems, as listed here.
* The test field case was used in [49] to model the scalar waves produced during the ringdown stage of binary black hole coalescence in Einstein-scalar-Gauss-Bonnet, and quantify the extent to which current and future gravitational wave detectors could observe the spectrum of scalar radiation emitted.
* The regime of validity of effective field theory in collapse and binary evolutions in cubic Horndeski theories was studied in [20, 21]. It was found that the mismatch of the gravitational wave strain can be as large as 10%-13% in the Advanced LIGO mass range for such theories.
* In the work [28], the code was developed and tested, with waveforms for shift-symmetric theories of Einstein-scalar-Gauss-Bonnet gravity produced for equal mass binaries, as illustrated in Fig. 3.
* In the work [29], the studies were extended to binary mergers in theories with spin-induced scalarisation. The clouds formed are dumbbell-like in shape, as illustrated in Fig. 4.
* In the work [30], the dependence of the conditions for hyperbolicity and weak coupling were studied for spin-induced scalarisation, and the critical thresholds found for a number of cases, as illustrated in Fig. 5.
## Acknowledgements
We thank the entire GRChombo\({}^{2}\) collaboration for their support and code development work. PF and KC are supported by an STFC Research Grant ST/X000931/1 (Astronomy at Queen Mary 2023-2026). PF is supported by a Royal Society University Research Fellowship No. URF\(\backslash\)R\(\backslash\)201026, and No. RF\(\backslash\)ERE\(\backslash\)210291. KC is supported by an STFC Ernest Rutherford fellowship, project reference ST/V003240/1. LAS is supported by a QMUL Ph.D. scholarship. DD acknowledges financial support via an Emmy Noether Research Group funded by the German Research Foundation (DFG) under grant no. DO 1771/1-1. LR is supported by a Royal Society Renewal Grant, No. URF\(\backslash\)R\(\backslash\)201026, and a Research Expenses Enhancement Award, No. RF\(\backslash\)ERE\(\backslash\)210291. TE is supported by the Centre for Doctoral Training (CDT) at the University of Cambridge funded through STFC. SY acknowledges the support from China Scholarship Council.
Footnote 2: www.grchombo.org
Development of the code used in this work utilised the ARCHER2 UK National Supercomputing Service\({}^{3}\) under the EPSRC HPC project no. E775, and the Cambridge Service for Data Driven Discovery (CSD3) cluster under Project No. DP128; CSD3 is partially operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility. The DiRAC component of CSD3 is funded by BEIS capital via STFC capital Grants No. ST/P002307/1 and No. ST/R002452/1 and STFC operations Grant No. ST/R00689X/1. DiRAC is part of the National e-Infrastructure\({}^{4}\). Calculations were also performed using the Sulis Tier 2 HPC platform hosted by the Scientific Computing Research Technology Platform at the University of Warwick. Sulis is
Figure 1: Contour plot of network signal-to-noise ratio (SNR) for the scalar ringdown of a binary black hole (BBH) in Einstein-scalar-Gauss-Bonnet gravity at 1 Gpc as observed by the Virgo, Livingston and Hanford network of detectors at design sensitivity. Taken from [49].
funded by EPSRC Grant EP/T022108/1 and the HPC Midlands+ consortium. This research has also utilised Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT. This study is in part financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, project No. BG-RRP-2.004-0008-C01. We acknowledge Discoverer PetaSC and EuroHPC JU for awarding this project access to Discoverer supercomputer resources.
| GRFolres is an open-source code for performing simulations in modified theories of gravity, based on the publicly available 3+1D numerical relativity code GRChombo. Note: submitted for review to the Journal of Open Source Software; comments welcome. The code is available at https://github.com/GRChombo/GRFolres. |
2302.14553 | Angular Momentum Fluctuations | In this paper, we consider angular momentum fluctuations of a Schwarzschild
black hole in thermal equilibrium with radiation which, for the sake of
simplicity is here modeled by a scalar field. Importantly, we do not set the
black hole angular momentum $J$ identically to zero at the outset; we allow it
to have a small value (in the sense that $J/M<<1$) and then study the
conditions for thermodynamical equilibrium; only then take the $J\rightarrow 0$
limit. We calculate the black hole's angular momentum fluctuations which turn
out to have two independent contributions: one that comes from the black hole
itself, with no respect to the radiation, and another one that arises from the
radiation. The result is astonishingly simple: the radiation contribution
depends exclusively on the cut-off proper distance from the horizon (or
equivalently, the width of the brick wall), while the black hole contribution
is proportional to its event horizon area. Accordingly, there are no strictly
static black holes in nature, they randomly rotate in all possible directions.
Since a black hole is nothing but geometry, we are dealing with geometry
fluctuations -- our results are of quantum-gravitational nature (albeit at a
semi-classical level). Interestingly enough, if we apply to the black hole
fluctuations component the (quantum) rules of angular momentum we obtain an
event horizon area quantization rule, albeit with a different spectrum from an
equally spaced area spectrum which is widely accepted in the literature. | Marcelo Schiffer | 2023-02-28T13:18:36 | http://arxiv.org/abs/2302.14553v1 | # Angular Momentum Fluctuations of a Schwarzschild Black Hole
###### Abstract
In this paper we consider angular momentum fluctuations of a Schwarzschild black hole in thermal equilibrium with radiation which, for the sake of simplicity, is here modeled by a scalar field. Importantly, we do not set the black hole angular momentum \(J\) identically to zero at the outset; we allow it to have a small value (in the sense that \(J/M<<1\)), then study the conditions for thermodynamical equilibrium, and only then take the \(J\to 0\) limit. We calculate the black hole's angular momentum fluctuations, which turn out to have two independent contributions: one that comes from the black hole itself, independently of the radiation, and another one which arises from the radiation. The result is astonishingly simple: the radiation contribution depends exclusively on the cut-off proper distance from the horizon (or, equivalently, the width of the brick wall), while the black hole contribution is proportional to its event horizon area. Accordingly, there are no strictly static black holes in nature; they randomly rotate in all possible directions. Since a black hole is nothing but geometry, we are dealing with geometry fluctuations - our results are of quantum-gravitational nature (albeit at a semi-classical level). Interestingly enough, if we apply the (quantum) rules of angular momentum to the black hole component of the fluctuations, we obtain an event horizon area quantization rule, albeit with a spectrum different from the equally spaced area spectrum widely accepted in the literature.
**PACS numbers: 04.70.Dy, 05.40.-a, 05.70.-a, 52.25.Kn**
## I A static black hole in equilibrium with radiation
A rotating black hole of mass \(M\) and angular momentum \(J\) is described by the Kerr geometry. In Boyer-Lindquist coordinates the metric coefficients take the form:
\[g_{tt} = -\left(1-\frac{2Mr}{\Sigma}\right)\,, \tag{1}\] \[g_{\varphi\varphi} = \left(r^{2}+a^{2}+2\frac{Ma^{2}r\sin^{2}\theta}{\Sigma}\right) \sin^{2}\theta\,,\] (2) \[g_{t\varphi} = -\frac{2Mar\sin^{2}\theta}{\Sigma}\,,\] (3) \[g_{\theta\theta} = \Sigma\,,\] (4) \[g_{rr} = \frac{\Sigma}{\Delta}\,, \tag{5}\]
where
\[\Sigma=r^{2}+a^{2}\cos^{2}\theta\,, \tag{6}\] \[\Delta=r^{2}-2Mr+a^{2}\,, \tag{7}\]
with \(a=J/M\). The black hole event horizon is located at
\[r_{+}=M+\sqrt{M^{2}-a^{2}} \tag{8}\]
and the corresponding area is
\[A=8\pi Mr_{+}\,. \tag{9}\]
In this geometry, particles are dragged with angular velocity \(\Omega=-g_{\varphi t}/g_{\varphi\varphi}\). Now consider such a black hole in thermal equilibrium with a bath of scalar particles within a confining vessel of radius \(R>>2M\), observed by an observer rotating with constant angular velocity \(\omega\). The need for a rotating frame will become clear shortly. The
total energy and angular momentum of the system are
\[\mathcal{E} =M+\sum_{m}\int\sqrt{-g}drd\theta d\varphi\int\frac{n(\epsilon,m,r, \theta)\epsilon}{e^{\beta(\epsilon-m\hbar\omega)}-1}d\epsilon\,, \tag{10}\] \[\mathcal{J} =J+\sum_{m}\int\sqrt{-g}drd\theta d\varphi\int\frac{n(\epsilon,m,r,\theta)m\hbar}{e^{\beta(\epsilon-m\hbar\omega)}-1}d\epsilon\,, \tag{11}\]
where \(n(\epsilon,m,r,\theta)\) represents the density of states per unit volume with a fixed azimuthal quantum number \(m\) in the rest frame. The free energy of the radiation is:
\[\beta F=\sum_{m}\int\sqrt{-g}drd\theta d\varphi\int n(\epsilon,m,r,\theta)\ln \left(1-e^{-\beta(\epsilon-\hbar m\omega)}\right)d\epsilon\,. \tag{12}\]
We assume that the total angular momentum \(\mathcal{J}\) vanishes; the black hole's angular momentum fluctuations then result from the absorption and emission of quanta from the radiation. The angular momentum conservation condition (eq. (11)) can be expressed in terms of the free energy as:
\[J-\left(\frac{\partial F}{\partial\omega}\right)=0\,. \tag{13}\]
Similary, the energy conservation condition (eq. (10) )reads [1]:
\[\mathcal{E}=M+\omega J+\left(\frac{\partial(\beta F)}{\partial\beta}\right)\,. \tag{14}\]
The total entropy is given by
\[\mathcal{S}=\frac{2\pi Mr_{+}}{\hbar}+\beta^{2}\left(\frac{\partial F}{ \partial\beta}\right)\,. \tag{15}\]
The first term in the last equation corresponds to the Bekenstein-Hawking entropy. Integrating eq.(12) by parts
\[F=-\sum_{m}\int\int\sqrt{-g}drd\theta d\varphi\frac{\gamma(\epsilon,m,r, \theta)}{e^{\beta(\epsilon-m\hbar\omega)}-1}d\epsilon\,, \tag{16}\]
where \(\gamma(\epsilon,m,r,\theta)\) is the total number of states (per unit volume) for a given energy and azimuthal number \(m\), \(n=d\gamma/d\epsilon\). In the rotating frame, the azimuthal angle is \(\tilde{\varphi}=\varphi-\omega t\) and the metric elements reads
\[\tilde{g}_{tt} =g_{tt}+\omega^{2}g_{\varphi\varphi}+2\omega g_{t\varphi}\,, \tag{17}\] \[\tilde{g}_{t\varphi} =g_{t\varphi}+\omega g_{\varphi\varphi}\,, \tag{18}\]
all other metric coefficients remaining the same as the in the non-rotating frame. The energy measured in this frame is \(\tilde{\epsilon}=\epsilon-m\hbar\omega\).
In what follows, for the sake of completeness, we follow the discussion given by Chang-Young et al. [3]. A massless scalar field satisfies the wave equation:
\[\frac{1}{\sqrt{-\tilde{g}}}\partial_{a}\left(\tilde{g}^{ab}\sqrt{-\tilde{g}} \partial_{b}\Phi\right)-\xi R\Phi=0\,, \tag{19}\]
where \(\xi\) represents the coupling constant. Neglecting back-reaction on the geometry and adopting the semi-classical ansatz \(\Phi=\Phi_{0}e^{i(-\tilde{\epsilon}t+m\tilde{\varphi}+S(r,\theta))/\hbar}\), it follows that
\[\tilde{\epsilon}^{2}\frac{\tilde{g}_{\tilde{\varphi}\tilde{\varphi}}}{D}+2 \frac{\tilde{g}_{\tilde{\varphi}\tilde{\epsilon}}}{D}m-m^{2}\frac{(-\tilde{g}_ {tt})}{D}-\left[\frac{1}{g_{rr}}p_{r}^{2}+\frac{1}{g_{\theta\theta}}p_{\theta }^{2}\right]=0\,, \tag{20}\]
with \(p_{r}=\frac{\partial S}{\partial r}\), \(p_{\theta}=\frac{\partial S}{\partial\theta}\) and \(D=\tilde{g}_{t\tilde{\varphi}}^{2}-\tilde{g}_{tt}\tilde{g}_{\tilde{\varphi}\tilde{\varphi}}\), where we have also inverted the metric. In the semi-classical approximation, the number of states for a fixed value of \(m\) in the rotating frame is the volume in phase space [2]-[4]:
\[\gamma(\tilde{\epsilon},m)=\frac{1}{(2\pi\hbar)^{3}}\int d\tilde{\varphi}\,dp _{\theta}d\theta\,dp_{r}\,dr\,. \tag{21}\]
Performing the immediate integrations over \(p_{r}\) and \(\tilde{\varphi}\), and inserting the value of \(p_{r}(r,\theta)\) obtained from eq. (20)
\[\gamma(\tilde{\epsilon},m)=\frac{1}{(2\pi^{2}\hbar)^{3}}\int d\tilde{\varphi}dp _{\theta}d\theta\ dr\sqrt{g_{rr}}\left[\tilde{g}_{\tilde{\varphi}\tilde{ \varphi}}\tilde{\epsilon}^{2}+2\tilde{g}_{t\tilde{\varphi}}m\tilde{\epsilon}-m ^{2}(-\tilde{g}_{tt})\right]^{1/2}\,. \tag{22}\]
Then, integrating over the classically allowed region it follows that
\[\gamma(\tilde{\epsilon},m)=\frac{1}{16\pi^{2}\hbar^{3}}\int d\theta\,dr\frac{ \sqrt{g_{rr}g_{\theta\theta}}}{D}\left(\tilde{g}_{\tilde{\varphi}\tilde{ \varphi}}\tilde{\epsilon}^{2}+2\tilde{g}_{t\tilde{\varphi}}m\tilde{\epsilon}- m^{2}(-\tilde{g}_{tt})\right)\,. \tag{23}\]
The total number of states for all \(m\) is given by the sum \(\sum_{m}\gamma(\tilde{\epsilon},m)\). Approximating the sum over \(m\) by an integral over the region where the integrand is positive yields
\[\gamma(\tilde{\epsilon})=\frac{1}{12\pi^{2}\hbar^{3}}\int\sqrt{-g}d\tilde{ \varphi}d\theta\ dr\frac{1}{(-\tilde{g}_{tt})^{2}}\tilde{\epsilon}^{3}\,, \tag{24}\]
where \(-g=g_{rr}g_{\theta\theta}D\), \(g\) being the determinant of the Kerr metric.
Inspecting this expression, it is easy to identify the density of states in the rotating frame.
\[\gamma(\tilde{\epsilon},r,\theta)=\frac{1}{12\pi^{2}\hbar^{3}(-\tilde{g}_{tt })^{2}}\tilde{\epsilon}^{3}\,, \tag{25}\]
Inserting this density of states in eq.(16) and performing the Bose-Einstein-like integration, the free energy boils down to a simple result
\[F=-\frac{\pi^{2}}{120\hbar^{3}\beta^{4}}V\,, \tag{26}\]
where we defined
\[V=\int_{r_{+}+h}^{R}dr\int d\theta\frac{\sqrt{-g}}{\tilde{g}_{tt}^{2}}\,, \tag{27}\]
with
\[\tilde{g}_{tt}=g_{tt}+(\omega^{2}-2\omega\Omega)g_{\varphi\varphi}\,. \tag{28}\]
Inspecting eqs. (26), (27), we notice that the relevant (inverse) temperature in the free energy is \(\beta\sqrt{-g_{tt}}\), which is nothing but Tolman's inverse temperature, corresponding to the local temperature measured by the rotating observer [5]-[7]. Following 't Hooft, we introduced a cut-off \(h\) at the horizon (denoted \(\delta\) below) that either represents a brick wall, \(\Phi(r_{+}+h)=0\), or our ignorance of how to properly renormalize the divergences at the horizon. The total energy reads
\[{\cal E}=M+\omega J+\frac{\pi^{2}}{40\hbar^{3}\beta^{4}}V \tag{29}\]
where \(\omega\) is to be regarded as a chemical potential that implements angular momentum conservation (eq.(13)):
\[\frac{30\hbar^{3}\beta^{4}}{\pi^{2}}J+\int_{r_{+}+h}^{R}dr\int d\theta\frac{ \sqrt{-g}}{(-\tilde{g}_{tt})^{3}}g_{t\varphi}+\omega\int_{r_{+}+h}^{R}dr\int d \theta\frac{\sqrt{-g}}{(-\tilde{g}_{tt})^{3}}g_{\varphi\varphi}=0\,. \tag{30}\]
Equivalently, \(\omega\) can be regarded as the angular velocity a rotating observer must have such that the total angular momentum vanishes in their frame.
Finally, the total entropy reads
\[{\cal S}=\frac{4\pi Mr_{+}}{\hbar}+\frac{\pi^{2}}{30\hbar^{3}\beta^{3}}V\,. \tag{31}\]
The first-derivative equilibrium condition \((\frac{\partial{\cal S}}{\partial M})_{J}=0\), together with the energy constraint (eq. (14)), gives the radiation temperature
\[\beta=\frac{4\pi M}{\hbar}\left(\frac{M}{\sqrt{M^{2}-J^{2}/M^{2}}}+1\right)\,. \tag{32}\]
In order to study thermodynamical fluctuations of a Schwarzschild black hole, we need to expand the entropy up to second order in \(J\) (or \(a\)). At the lowest order in \(J\):
\[g_{t\varphi}\approx-\frac{2J\sin^{2}\theta}{r}\,, \tag{33}\]
so the second term in eq. (30) is at least linear in \(J\), and so must also be \(\omega\). Solving this equation at first order in \(J\), we replace the Kerr metric coefficients by the Schwarzschild ones, with the exception of \(g_{t\varphi}\), which takes the above value. Then:
\[\frac{30\hbar^{3}\beta_{0}^{4}}{\pi^{2}}J-2J\int d\theta\sin^{3}\theta\int_{2M+ \delta}^{R}\frac{r^{4}dr}{(r-2M)^{3}}+\omega\int d\theta\sin^{3}\theta\int_{2M +\delta}^{R}\frac{r^{7}dr}{(r-2M)^{3}}=0\,. \tag{34}\]
where \(\beta_{0}=8\pi M/\hbar\). The density of states is highly peaked near the horizon [3], so most of the contribution to the integral comes from the lower integration limit. After some algebra,
\[J=4M^{3}\left[1+3\left(\frac{\delta}{M}\right)+{\cal O}\left(\frac{\delta}{M} \right)^{2}\log\left(\frac{\delta}{M}\right)\right]\omega\,. \tag{35}\]
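The leading-order coefficient in eq. (35) can be checked numerically. Neglecting the \(\hbar\)-dependent first term of eq. (34), which is subleading for a macroscopic hole, eq. (34) gives \(J/\omega=I_{7}/(2I_{4})\) with \(I_{n}=\int_{2M+\delta}^{R}r^{n}(r-2M)^{-3}dr\) (the common angular factor \(\int\sin^{3}\theta\,d\theta=4/3\) cancels). A short quadrature sketch confirms that this ratio approaches \(4M^{3}(1+3\delta/M)\):

```python
import numpy as np
from scipy.integrate import quad

M, R = 1.0, 10.0

def I(n, delta):
    # substitute t = log(r - 2M) to tame the near-horizon peak
    g = lambda t: (2.0 * M + np.exp(t)) ** n * np.exp(-2.0 * t)
    return quad(g, np.log(delta), np.log(R - 2.0 * M))[0]

for delta in (1e-2, 1e-3, 1e-4):
    ratio = I(7, delta) / (2.0 * I(4, delta))
    print(delta, ratio, 4.0 * M**3 * (1.0 + 3.0 * delta / M))  # last two agree
```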
Notice that
\[r_{+} \approx 2M-\frac{a^{2}}{2M}\,, \tag{36}\] \[\beta^{-3} \approx \left(\frac{\hbar}{8\pi M}\right)^{3}\left(1-\frac{3a^{2}}{4M^{2}}\right)\,. \tag{37}\]
Expanding eq. (31) to second order in \(a\) is a bit more laborious. Let \(f(r,\theta)\) represent the integrand in eq. (27); then
\[\int_{r_{+}+\delta}^{R}f(r,\theta)drd\theta=\int_{2M+\delta}^{R}f_{0}(r,\theta )drd\theta+\int_{2M+\delta-a^{2}/2M}^{2M+\delta}f_{0}(r,\theta)drd\theta+\int_ {2M+\delta}^{R}f_{2}(r,\theta)drd\theta\,, \tag{38}\]
where \(f_{0}(r,\theta),f_{2}(r,\theta)\) represent the zeroth and second order expansion in terms of \(a^{2}\). Since
\[\sqrt{-g}=(r^{2}+a^{2}\cos^{2}\theta)\sin\theta\,, \tag{39}\]
and at the relevant order
\[\Omega\approx\frac{2Ma}{r^{3}}\,, \tag{40}\]
then
\[\frac{1}{(\tilde{g}_{tt})^{2}}\approx\frac{r^{2}}{(r-2M)^{2}}-\frac{4Ma^{2} \cos^{2}\theta}{(r-2M)^{3}}+2\omega^{2}\frac{r^{5}\sin^{2}\theta}{(r-2M)^{3}} -\omega\frac{8Mar^{2}\sin^{2}\theta}{(r-2M)^{3}}\,. \tag{41}\]
Finally,
\[V=V_{0}-\frac{a^{2}}{M}\frac{(2M+\delta)^{4}}{\delta^{2}}+\frac{2a^{2}}{3} \int_{2M+\delta}^{R}\frac{r^{2}dr}{(r-2M)^{2}}-\frac{8Ma^{2}}{3}\int_{2M+ \delta}^{R}\frac{r^{2}dr}{(r-2M)^{3}}+\frac{8\omega^{2}}{3}\int_{2M+\delta}^{ R}\frac{r^{7}dr}{(r-2M)^{3}}-\frac{32Ma\omega}{3}\int_{2M+\delta}^{R}\frac{r^{4}dr}{(r-2M)^{3}} \tag{42}\]
where
\[V_{0}=2\int_{2M+\delta}^{R}\frac{r^{4}\sin\theta drd\theta}{(r-2M)^{2}} \tag{43}\]
The density of states is very large near the horizon [3, 4], and we take only the contribution from the lower limit of the integral, where the divergence occurs. Putting all the pieces together (eqs. (31), (36), (37) and (42)), the total entropy boils down to
\[{\cal S}={\cal S}_{0}-\frac{2\pi J^{2}}{\hbar M^{2}}\,-\frac{J^{2}}{480\pi M^ {2}\delta^{2}}\,-\frac{J^{2}}{480\pi M^{3}\delta}\,, \tag{44}\]
where \({\cal S}_{0}\) is the total entropy for \(J=0\). Clearly, the equilibrium condition \((\frac{\partial S}{\partial J})_{M}=0\) is satisfied identically at \(J=0\). Angular momentum fluctuations are given by [14]:
\[\frac{1}{(\Delta J)^{2}}=-\left(\frac{\partial^{2}{\cal S}}{\partial J^{2}} \right)_{J=0} \tag{45}\]
Keeping only the most divergent term and expressing the coordinate distance in terms of the proper distance \(\delta=\Delta^{2}/8M\), we can write
\[\frac{1}{(\Delta J)^{2}}=\frac{1}{(\Delta J)^{2}_{BH}}+\frac{1}{(\Delta J)^{2} _{\mbox{field}}}\,, \tag{46}\]
where these terms represent black hole and radiation fluctuations:
\[(\Delta J)_{BH}=\sqrt{\frac{\hbar}{4\pi}}\,M\,,\qquad(\Delta J)_{field}=\frac{\sqrt{15\pi}}{2}\,\Delta^{2}\,. \tag{47}\]
This is an amazingly simple result. Part of the fluctuations has its origin in the black hole's quantum atmosphere, which is the (quantum) field within a small shell of proper width \(\Delta\) around the horizon. The black hole fluctuations are a bit more mysterious: mathematically, they originate from the Bekenstein-Hawking entropy, and are thus a true property of the black hole. Let us consider only this contribution to the angular momentum fluctuations. Assuming \(\overline{m}=0\), the basic properties of angular momentum give
\[\Delta m^{2}=\overline{m^{2}}=\frac{1}{2l+1}\sum_{m=-l}^{l}m^{2}\approx\frac{l^{2}}{3}\,, \tag{48}\]
where we approximated the sum by an integral. Equating \(\Delta J^{2}_{BH}=\hbar^{2}\Delta m^{2}\), it follows that the event horizon area is quantized:
\[A_{BH}\approx\frac{64\pi^{2}\hbar}{3}l^{2}\,. \tag{49}\]
This result is at odds with the linearly spaced area spectrum favored by most authors [15]-[18]. Nevertheless, it is striking that black hole area quantization results from the quantum rules of angular momentum.
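The algebra behind eq. (49) is short enough to verify symbolically; the sketch below also keeps the exact value of \(\overline{m^{2}}\) before the large-\(l\) approximation.

```python
import sympy as sp

hbar, l = sp.symbols('hbar l', positive=True)
m = sp.symbols('m')

# Exact <m^2> for m uniformly distributed over -l..l (cf. eq. (48)).
avg_m2 = sp.simplify(sp.summation(m**2, (m, -l, l)) / (2 * l + 1))
print(avg_m2)                  # -> l*(l + 1)/3, i.e. ~ l^2/3 for large l

# Equate (Delta J)_BH^2 = hbar M^2 / (4 pi) from eq. (47) with
# hbar^2 l^2 / 3 and insert the Schwarzschild area A = 16 pi M^2.
M2 = sp.symbols('M2', positive=True)
M2_sol = sp.solve(sp.Eq(hbar * M2 / (4 * sp.pi), hbar**2 * l**2 / 3), M2)[0]
print(sp.simplify(16 * sp.pi * M2_sol))   # -> 64*pi**2*hbar*l**2/3, eq. (49)
```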
## II Concluding remarks
Angular momentum fluctuations emerge from a thin shell of Planckian width and from the black hole itself. The former is universal: it does not depend on the black hole mass but only upon the width of this quantum atmosphere; the latter depends on the event horizon area. Being so different, they must have different physical origins. That is to say, a Schwarzschild black hole rotates randomly in all possible directions. The physical meaning of the fluctuations remains elusive. Surprisingly, the rules of angular momentum imply that the event horizon is quantized and, for large mass, the mass spectrum depends (nearly) linearly on the quantum number \(l\), which also relates to the angular momentum fluctuations. Furthermore, assuming that the cut-off parameter is Planckian, we can write \(\Delta^{4}=\mu\hbar^{2}/15\pi^{2}\) for \(\mu\) a numerical value of order one. Accordingly, the (averaged) Cauchy horizon \(\overline{r_{-}}=\overline{J^{2}}/2M^{3}\sim\mu\hbar^{2}/(8\pi(M^{2}+\mu\hbar)M)\) never vanishes. Since a black hole is nothing but geometry, our results should be regarded as being of (semi-classical) quantum gravity nature.
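The Cauchy-horizon estimate quoted above follows directly from eqs. (46) and (47); a short symbolic check, using the Planckian cut-off \(\Delta^{4}=\mu\hbar^{2}/15\pi^{2}\) stated in the text:

```python
import sympy as sp

hbar, M, mu = sp.symbols('hbar M mu', positive=True)
Delta4 = mu * hbar**2 / (15 * sp.pi**2)          # Planckian cut-off, Delta^4

dJ2_BH = hbar * M**2 / (4 * sp.pi)               # (Delta J)_BH^2, eq. (47)
dJ2_field = sp.Rational(15, 4) * sp.pi * Delta4  # (Delta J)_field^2, eq. (47)
dJ2 = 1 / (1 / dJ2_BH + 1 / dJ2_field)           # inverse quadrature, eq. (46)

r_minus = sp.simplify(dJ2 / (2 * M**3))          # averaged Cauchy horizon
print(r_minus)   # -> hbar**2*mu/(8*pi*M*(M**2 + hbar*mu)), as quoted
```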
| In this paper, we consider the angular momentum fluctuations of a Schwarzschild black hole in thermal equilibrium with radiation, which for simplicity is modeled here by a scalar field. Importantly, we do not set the black hole angular momentum \(J\) to zero at the outset; we allow it to have a small value (\(J/M<<1\)), study the conditions for thermodynamical equilibrium, and only then take the \(J\rightarrow 0\) limit. We calculate the black hole's angular momentum fluctuations, which turn out to have two independent contributions: one coming from the black hole itself, independently of the radiation, and another arising from the radiation. The result is strikingly simple: the radiation contribution depends exclusively on the cut-off proper distance from the horizon (or, equivalently, the width of the brick wall), while the black hole contribution is proportional to its event horizon area. |
2309.16036 | Multichannel Voice Trigger Detection Based on
Transform-average-concatenate | Voice triggering (VT) enables users to activate their devices by just
speaking a trigger phrase. A front-end system is typically used to perform
speech enhancement and/or separation, and produces multiple enhanced and/or
separated signals. Since conventional VT systems take only single-channel audio
as input, channel selection is performed. A drawback of this approach is that
unselected channels are discarded, even if the discarded channels could contain
useful information for VT. In this work, we propose multichannel acoustic
models for VT, where the multichannel output from the front-end is fed directly
into a VT model. We adopt a transform-average-concatenate (TAC) block and
modify the TAC block by incorporating the channel from the conventional channel
selection so that the model can attend to a target speaker when multiple
speakers are present. The proposed approach achieves up to 30% reduction in the
false rejection rate compared to the baseline channel selection approach. | Takuya Higuchi, Avamarie Brueggeman, Masood Delfarah, Stephen Shum | 2023-09-27T21:28:50 | http://arxiv.org/abs/2309.16036v2 | # Multichannel Voice Trigger Detection Based on
###### Abstract
Voice triggering (VT) enables users to activate their devices by just speaking a trigger phrase. A front-end system is typically used to perform speech enhancement and/or separation, and produces multiple enhanced and/or separated signals. Since conventional VT systems take only single-channel audio as input, channel selection is performed. A drawback of this approach is that unselected channels are discarded, even if the discarded channels could contain useful information for VT. In this work, we propose multichannel acoustic models for VT, where the multichannel output from the front-end is fed directly into a VT model. We adopt a transform-average-concatenate (TAC) block and modify the TAC block by incorporating the channel from the conventional channel selection so that the model can attend to a target speaker when multiple speakers are present. The proposed approach achieves up to \(30\%\) reduction in the false rejection rate compared to the baseline channel selection approach.
Takuya Higuchi\({}^{1}\), Avamarie Brueggeman\({}^{2}\)*, Masood Delfarah\({}^{1}\), Stephen Shum\({}^{1}\)
\({}^{1}\)Apple, USA
\({}^{2}\)The University of Texas at Dallas, USA
Index Terms: Voice triggering, keyword spotting, multichannel acoustic modeling
## 1 Introduction
Voice trigger detection is an essential task for a voice assistant system, allowing a user to activate the voice assistant by simply speaking a wake word. Noise robustness is an important aspect of successful voice triggering (VT). A front-end speech enhancement and/or separation system is commonly used to improve the noise robustness [1]. Speech separation is especially useful for VT and other downstream tasks when multiple speakers are present in a recording because a typical acoustic model cannot deal with speech mixtures.
However, multiple separated and enhanced signals from the front-end system cannot directly be input to a typical VT system because the VT system takes only a single channel input. A simple solution to address this is channel selection, where one channel is selected from the multiple channels before VT [2]. A downside of this approach is that unselected channels are discarded even though they might contain useful information for VT. For example, if the front-end system wrongly suppresses a part of a target speech and introduces distortions in the selected channel, the suppressed part of the target speech could be contained in the other channels.
We present a novel multichannel acoustic model for VT, where the multiple separated/enhanced channels from the front-end system are directly fed into a VT acoustic model. We adopt a recently-proposed transform-average-concatenate (TAC) block [3] to perform inter-channel processing within the acoustic model. In addition, we combine the selected channel in the TAC block so that the model is informed of the channel of most interest for VT. We conduct experimental evaluations on a far-field VT task, where the proposed multichannel approach outperforms a single channel baseline VT with channel selection by up to \(30\%\) relative in terms of the false rejection rate (FRR).
## 2 Related Work
Multichannel acoustic modeling has been investigated for far-field automatic speech recognition [4]. Sainath et al. proposed using convolutional neural networks (CNNs) on multichannel time domain signals [5, 6] to directly learn both spatial and spectral characteristics from training data. A similar approach has also been used for keyword spotting [7]. More recently, attention based approaches have been proposed for multichannel acoustic modeling [8, 9], where cross-channel attention is performed to learn from inter-channel characteristics.
Although these approaches are end-to-end optimized for the target tasks, the model complexity and compute cost usually increase due to the joint spatial and spectral modeling, or the cross-channel attention operation, which is unsuitable for on-device applications such as VT.
Other types of multichannel approaches have also been proposed for keyword spotting, where multichannel features are concatenated [10] or attention based weighted sum is performed on the multichannel features [11, 12]. Although these operations are simple and computationally light, they may not be enough to model inter-channel characteristics.
Multichannel modeling has also been explored for neural network-based speech enhancement and separation. A TAC block is proposed for simple yet effective multichannel mod
eling for speech enhancement and separation [3, 13, 14]. The TAC block is defined with simple channel-wise transformations, pooling and concatenation operations.
Our proposed multichannel VT modeling is based on the TAC block because it employs simple and light-weight operations for non-linear inter-channel modeling. For the multichannel input, we use the output of the front-end system, i.e., enhanced and separated signals similarly to the prior work [11, 12]. This allows us to exploit the existing front-end system and use potentially more informative signals for VT than the raw multi-microphone signals, while the model performs non-linear inter-channel operations with the TAC block in contrast to [11, 12]. Moreover, we incorporate channel selection to allow the model to focus on the target speaker (See Section 4.2).
## 3 Baseline System
Figure 1 (a) shows the flowchart of a baseline system [2]. A front-end system consists of a speech enhancement module and a speech separation module. The enhancement module produces a single-channel enhanced speech signal, whereas the separation module produces \((N-1)\)-channel output for \(N-1\) separated signals. The separation module is especially useful when observed signals contain multiple speech signals, such as target speaker and TV noise. Then \(N\) signals from both modules are fed into a VT system.
The VT system employs a two-stage approach [15, 16, 17] to save run-time cost on-device. A small VT model (1st pass model) is always-on and takes streaming audio of each channel from the front-end. Once we detect an audio segment with a VT score exceeding a certain threshold, the audio segment is fed to a larger VT model and re-examined to reduce false activations. Since the VT models take only a single-channel audio, channel selection is performed using the 1st pass model [2].
The 1st pass model processes each channel from the front-end system, and produces a VT score independently. Then, the channel with the highest 1st pass VT score is selected and fed into the 2nd pass model that has more complexity and is more accurate than the 1st pass model. This approach has an advantage when multiple speakers are present, because one channel containing a keyword phrase can be selected among multiple separated speech signals. However, a drawback of this approach is that unselected channels are discarded and not used for the 2nd pass model, whereas noise and interference signals in the discarded channels could also be useful for accurate VT in the 2nd stage. In addition, the speech enhancement or separation module could suppress the target speech and introduce distortions when there is no interference speech and/or background noise, which can be ineffective for VT.
## 4 Proposed Multichannel VT Modeling
In this paper, we propose multichannel acoustic models that can take the multichannel output from the front-end system. Figure 1 (b) shows the flowchart of our proposed system. While the 1st pass model still performs VT on each channel separately, the proposed 2nd pass acoustic model takes all the channels. In addition, the selected channel obtained with the conventional channel selection is also fed to the model in order to keep the advantage of the conventional approach on speech mixtures. We adopt and modify a recently-proposed TAC block for combining the multiple channels in a VT acoustic model.
### Transform-average-concatenate (TAC)
Let us consider \(N\) channel signals from the front-end system that performs speech enhancement and separation. Let \(\mathbf{Z}_{i}\in\mathbb{R}^{T\times F}\) denote a time series of an \(F\)-dimensional feature from channel \(i\). We first apply a linear layer and the parametric rectified linear unit (PReLU) activation function [18] to \(\mathbf{z}_{i,t}\):
\[\mathbf{h}_{i,t}=P(\mathbf{z}_{i,t}), \tag{1}\]
where \(\mathbf{z}_{i,t}\) denotes the feature vector at time \(t\) for channel \(i\) and \(P(\cdot)\) denotes the linear transformation followed by the PReLU activation. Then, \(\mathbf{h}_{i,t}\) is averaged across the channels and fed into another linear layer with the PReLU activation function as:
\[\mathbf{h}_{t}^{avg}=Q(\frac{1}{N}\sum_{i}\mathbf{h}_{i,t}). \tag{2}\]
Then \(\mathbf{h}_{t}^{avg}\) is concatenated with \(\mathbf{h}_{i,t}\) and fed into a third linear layer and the PReLU activation function as:
\[\hat{\mathbf{h}}_{i,t}=R([\mathbf{h}_{i,t};\mathbf{h}_{t}^{avg}]). \tag{3}\]
Finally, a residual connection is applied to obtain an output of the TAC block as:
\[\hat{\mathbf{z}}_{i,t}=\mathbf{z}_{i,t}+\hat{\mathbf{h}}_{i,t}. \tag{4}\]
These operations enable learning inter-channel characteristics with simple channel-wise transformations and a pooling operation. Note that all the operations in the TAC block are permutation invariant between the channels by design, for microphone-array-agnostic modeling, which allows us to feed an arbitrary number of separated/enhanced signals into the TAC block.
Figure 1: (a) A conventional single channel voice trigger detection. (b) The proposed multichannel voice trigger detection with channel selection.
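To make the data flow of Eqs. (1)-(4) concrete, the following is a minimal sketch of a TAC block. PyTorch, the (batch, channel, time, feature) tensor layout, and the hidden sizes are our assumptions; the paper specifies only the operations.

```python
import torch
import torch.nn as nn


class TACBlock(nn.Module):
    """Transform-average-concatenate block (Eqs. 1-4).

    Input:  z of shape (batch, N_channels, T, F).
    Output: same shape, with inter-channel information mixed in.
    """

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        # P, Q, R: linear transforms, each followed by PReLU.
        self.P = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.PReLU())
        self.Q = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.PReLU())
        # R maps back to feat_dim so the residual in Eq. (4) is well-defined.
        self.R = nn.Sequential(nn.Linear(2 * hidden_dim, feat_dim), nn.PReLU())

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.P(z)                         # Eq. (1), applied per channel
        h_avg = self.Q(h.mean(dim=1))         # Eq. (2), average over channels
        # Broadcast the channel average back to every channel, then concatenate.
        h_avg = h_avg.unsqueeze(1).expand_as(h)
        h_hat = self.R(torch.cat([h, h_avg], dim=-1))  # Eq. (3)
        return z + h_hat                      # Eq. (4), residual connection


tac = TACBlock(feat_dim=280, hidden_dim=256)
out = tac(torch.randn(2, 4, 100, 280))        # 4 front-end channels, 100 frames
```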
### Modified TAC with selected channel
Although the permutation-invariant operations enable microphone-array-agnostic speech separation in the previous literature [14], this permutation-invariant nature is problematic when one of the channels is of more interest for VT. For example, the front-end system performs speech separation and produces multiple channels which contain either target or interference speech at each channel. A VT model should attend to the target speech to perform VT detection. However, the TAC block processes every channel equally, which confuses the VT model during training and inference when multiple speakers are present.
To address this, we propose exploiting the selected channel obtained with the conventional channel selection approach. Figure 2 shows the proposed block obtained by modifying the conventional TAC block. Let \(\mathbf{z}_{sc,t}\) denote a feature vector of the selected channel. The modified TAC block takes a feature vector \(\mathbf{z}_{i,t}\) for channel \(i(=1,...,N)\) as well as \(\mathbf{z}_{sc,t}\). We first apply eq. (1) to \(\mathbf{z}_{sc,t}\) as with \(\mathbf{z}_{i,t}\):
\[\mathbf{h}_{sc,t}=P(\mathbf{z}_{sc,t}), \tag{5}\]
while the average operation is performed on \(\mathbf{h}_{i,t}(i=1,...,N)\) using eq. (2). Then, \(\mathbf{h}_{sc,t}\) is concatenated with \(\mathbf{h}_{i,t}\) and \(\mathbf{h}_{t}^{avg}\), and fed into a linear layer and the PReLU activation function as:
\[\hat{\mathbf{h}}_{i,t}=R([\mathbf{h}_{i,t};\mathbf{h}_{t}^{avg};\mathbf{h}_{ sc,t}]). \tag{6}\]
This operation distinguishes the selected channel from the other channels and encourages the model to learn from the selected channel. Finally, another linear layer and the PReLU activation function are applied to \(\mathbf{h}_{sc,t}\) to reduce the dimensionality to that of the input before the residual connection:
\[\hat{\mathbf{z}}_{sc,t}=\mathbf{z}_{sc,t}+S(\mathbf{h}_{sc,t}), \tag{7}\]
while \(\hat{\mathbf{z}}_{i,t}\) (\(i=1,...,N\)) is obtained with eq. (4).
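A corresponding sketch of the modified block of Eqs. (5)-(7), under the same assumptions as above; sharing the transform \(P\) between the \(N\) front-end channels and the selected channel is an additional assumption on our part.

```python
import torch
import torch.nn as nn


class ModifiedTACBlock(nn.Module):
    """TAC block with an extra selected-channel input (Eqs. 5-7).

    z:    (batch, N, T, F)  separated/enhanced channels
    z_sc: (batch, T, F)     channel picked by 1st-pass channel selection
    """

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.P = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.PReLU())
        self.Q = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.PReLU())
        # R now takes [h_i ; h_avg ; h_sc], Eq. (6).
        self.R = nn.Sequential(nn.Linear(3 * hidden_dim, feat_dim), nn.PReLU())
        # S projects h_sc back to the input dimension for Eq. (7).
        self.S = nn.Sequential(nn.Linear(hidden_dim, feat_dim), nn.PReLU())

    def forward(self, z, z_sc):
        h = self.P(z)                    # Eq. (1) on each channel
        h_sc = self.P(z_sc)              # Eq. (5) on the selected channel
        h_avg = self.Q(h.mean(dim=1))    # Eq. (2), over the N channels only
        h_avg_b = h_avg.unsqueeze(1).expand_as(h)
        h_sc_b = h_sc.unsqueeze(1).expand_as(h)
        h_hat = self.R(torch.cat([h, h_avg_b, h_sc_b], dim=-1))  # Eq. (6)
        z_hat = z + h_hat                # Eq. (4) per channel
        z_sc_hat = z_sc + self.S(h_sc)   # Eq. (7)
        return z_hat, z_sc_hat
```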
### Acoustic modeling with self-attention layers
The TAC block was combined with self-attention layers in [14], where self-attention is performed for each output channel of the TAC blocks. This approach drastically increases the runtime computation cost because a quadratic self-attention operation is repeated for each channel, which is unsuitable for VT, which should run on-device with low latency. To alleviate this, we apply an average pooling layer before feeding the multichannel output from the TAC block to the self-attention layers for temporal and spectral modeling.
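Reusing the TACBlock sketch from Section 4.1, the pooling-then-attention arrangement might look as follows (sizes illustrative). The point is that the quadratic self-attention runs once on the pooled stream rather than once per channel.

```python
import torch
import torch.nn as nn

# Reuses TACBlock from the sketch above.
tac = TACBlock(feat_dim=280, hidden_dim=256)
layer = nn.TransformerEncoderLayer(
    d_model=280, nhead=4, dim_feedforward=1024, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

z = torch.randn(2, 4, 100, 280)   # (batch, channels, frames, features)
pooled = tac(z).mean(dim=1)       # average pooling over the channel axis
out = encoder(pooled)             # self-attention runs once, not per channel
```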
## 5 Experimental Evaluation
We evaluated the effectiveness of the proposed approach on a far-field VT task. Since some practical use cases (e.g., the presence of playback/interference speakers) are not well-represented in common public datasets, we used our in-house dataset for evaluation. The proposed approach was compared with a conventional single channel VT that used channel selection. It should be noted that a simple concatenation of multichannel features used in [10] did not outperform the single channel baseline in our preliminary experiments.
### Data
For training, we used \(\sim 2.3\) million human-transcribed single channel utterances. Multichannel reverberant signals were simulated by convolving measured room impulse responses (RIRs), which were recorded in various rooms and microphone locations with a six channel microphone-array. In addition, roughly \(20\%\) of utterances were augmented by convolving simulated four-channel RIRs and then adding multichannel non-speech noise signals or multichannel playback. Then we combined these three types of utterances to obtain a simulated multichannel training dataset. Finally, the multichannel signals were fed into the front-end system to obtain one enhanced signal and three separated signals for each utterance.
For evaluation, we used an in-house dataset, where positive samples were collected in controlled conditions from 100 participants. Each participant spoke the keyword phrase to the smart speaker with six microphones. Note that there was a mismatch in microphone arrays used for a part of training data and test data, which was compensated for by the front-end that always produced four channels. The recordings were made in various rooms in four different acoustic conditions: quiet (no playback, no noise), external noise, e.g., from TV or appliances, music playing from the device at medium volume, and music playing at loud volume. 1300 such positive samples were collected. For negative data, we collected 2000 hours of audio by playing podcasts, audiobooks, etc, which did not contain the keyword phrase. The negative samples were also recorded with the smart speaker. The same front-end system was applied to the evaluation dataset to obtain enhanced and separated signals for each sample.
Figure 2: The modified TAC block with the selected channel.
### Settings
For the front-end, we used echo cancellation and dereverberation followed by a mask-based beamformer for speech enhancement and blind source separation. See [2] for more details of the front-end. The speech separation module produced three separated signals, and so we obtained four channel signals in total from the front-end. It should be noted that our proposed model architecture can be used with any front-end that produces a multichannel output.
For the 1st pass VT and channel selection, we used 5x64 fully-connected deep neural networks (DNNs). The 5x64 DNNs predicted a frame-wise posterior for 20 classes: 18 phoneme classes for the keyword, one for silence and one for other speech. Then a hidden Markov model (HMM) decoder produced a VT score and alignment for a trigger phrase based on the posteriors in a streaming fashion. This 1st pass model was run on the four channels separately and produced four VT scores. Then, a trigger segment in the channel with the highest score was input to the larger VT model in the second stage for the baseline.
For the VT models in the second stage, we used a Transformer encoder [19, 20] as an acoustic model. The baseline single channel model consisted of six Transformer encoder blocks, each of which had a multi-head self-attention layer with 256 hidden units and 4 heads, followed by a feed-forward layer with 1024 hidden units. Finally a linear layer transformed the output from the Transformer blocks to logits for 54 phoneme labels and one blank label for a Connectionist Temporal Classification (CTC) loss [21]. A VT score was obtained by computing a decoding score for the wake word. The baseline model used 40-dimensional log-mel filter bank features with \(\pm\) 3 context frames as input.
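A sketch of this baseline model in PyTorch (our framework choice). The input projection from the 280-dimensional features (40 log-mels \(\times\) 7 context frames) to the 256-dimensional model width, and placing the CTC blank last, are our assumptions.

```python
import torch
import torch.nn as nn

class BaselineVTModel(nn.Module):
    """2nd-pass acoustic model: 6 Transformer encoder blocks
    (d_model=256, 4 heads, 1024-unit feed-forward) and a linear head
    for 54 phoneme labels plus one CTC blank (55 logits in total)."""

    def __init__(self, feat_dim: int = 280, num_labels: int = 55):
        super().__init__()
        # Projection from input features to the model width (our assumption).
        self.proj = nn.Linear(feat_dim, 256)
        layer = nn.TransformerEncoderLayer(
            d_model=256, nhead=4, dim_feedforward=1024, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(256, num_labels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim) -> (batch, frames, num_labels)
        return self.head(self.encoder(self.proj(feats)))

model = BaselineVTModel()
log_probs = model(torch.randn(2, 100, 280)).log_softmax(dim=-1)
ctc_loss = nn.CTCLoss(blank=54)   # blank as the last of the 55 labels
```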
For the proposed multichannel model, we simply prepended one original/modified TAC block and an average pooling layer to the baseline model. The modified TAC block had \(3\times 256\) hidden units for \(P\) and \(Q\), and \(280\) units for \(R\) and \(S\). We used the log-mel filter bank features from the four channels as input. Since our training data was not a keyword specific dataset on which the channel selection could be performed, we used the channel from the speech enhancement module as a pseudo selected channel during training. We also compared with a standard TAC block where the modified TAC block with the selected channel was replaced with the TAC that took only the four channels without knowing which one was the selected channel. The numbers of model parameters for the baseline, the proposed models with the standard and modified TAC were \(4.8M\), \(6.1M\) and \(6.5M\), respectively. It should be noted that simply increasing the model size of the baseline single channel model did not improve a VT performance in our preliminary experiments.
All the models were trained with the CTC loss using the Adam optimizer [22]. The learning rate was initially set at \(0.0005\), then gradually decreased by a factor of \(4\) after \(10\)th epoch, until we finished training after \(28\) epochs. We used \(32\) GPUs for each model training and the batch size was set at \(128\) at each GPU.
### Results
Table 1 shows FRRs with a threshold that gives \(0.01\) FA/hr on the overall dataset for each model. In the quiet condition, the proposed multichannel models outperformed the baseline by a large margin. This could be because the speech enhancement and separation were unnecessary in this case and would introduce distortions to the target speech in the selected channel, while the proposed models compensated for the distortions by looking at all four channels (plus the selected one). In the music playback conditions, we observed moderate improvements with the proposed models. This could be because the multichannel models could learn echo residuals more effectively from the multichannel signals, where different front-end processing was applied at each channel. In the noisy condition, the vanilla TAC regressed compared to the baseline. We found that failure cases contained speech interference from TV. This is reasonable because there is no cue for the vanilla TAC to determine the target speaker when multiple speakers are present in the separated signals. By incorporating the selected channel, the proposed approach achieved a similar performance on the noisy condition while outperforming the baseline on the other conditions. The proposed approach with the selected channel achieved \(30\%\), \(13\%\) and \(4.6\%\) relative reductions in FRRs on the quiet, medium and loud volume playback conditions, respectively, and a \(7.5\%\) FRR reduction on the overall dataset compared to the single channel baseline. These results show the effectiveness of the proposed approaches.
## 6 Conclusions
In this paper, we propose multichannel acoustic modeling for VT based on the TAC block. The multichannel acoustic model directly takes multichannel enhanced and separated signals from the front-end and produces a VT score. We further modify the original TAC block by incorporating the selected channel to deal with speech mixtures. The experimental results show that the proposed multichannel model outperforms the single channel baseline in the quiet and playback conditions, and achieves a similar performance in the noisy condition.
## 7 Acknowledgement
We thank Mehrez Souden for his feedback on the paper and the helpful discussions.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
 & quiet & noisy & medium playback & loud playback & overall \\ \hline
baseline & 4.48 & **6.75** & 5.23 & 15.32 & 7.74 \\
proposed & **2.25** & 10.7 & **3.78** & **12.46** & 7.20 \\
proposed (+ selected channel) & 3.12 & 7.12 & 4.55 & 14.62 & **7.16** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: False rejection rates [\(\%\)] for different conditions at an operating point of 1 FA/100 hrs. | Voice triggering (VT) enables users to activate their devices by speaking a trigger phrase. A front-end system typically performs speech enhancement and/or separation, and produces multiple enhanced and/or separated signals. Since conventional VT systems take only single-channel audio as input, channel selection is performed. A drawback of this approach is that unselected channels are discarded even when they contain information useful for VT. In this paper, we propose multichannel acoustic models for VT, in which the multichannel output from the front-end is fed directly into the VT model. We adopt a transform-average-concatenate (TAC) block and modify it by incorporating the channel from the conventional channel selection, so that the model can attend to the target speaker when multiple speakers are present. The proposed approach reduces the false rejection rate by 30% compared to the conventional channel selection approach.
2309.17328 | A Babcock-Leighton dynamo model of the Sun incorporating toroidal flux
loss and the helioseismically-inferred meridional flow | We investigate whether the Babcock-Leighton flux-transport dynamo model
remains in agreement with observations if the meridional flow profile is taken
from helioseismic inversions. Additionally, we investigate the effect of the
loss of toroidal flux through the solar surface. We employ the 2D
flux-transport BL dynamo framework. We use the helioseismically-inferred
meridional flow profile, and include toroidal flux loss in a way that is
consistent with the amount of poloidal flux generated by Joy's law. Our model
does not impose a preference for emergences at low latitudes, but we do require
that the model produces such a preference. We can find solutions in general
agreement with observations, including the equatorward drift of the butterfly
wings and the cycle's 11 year period. The most important free parameters in the
model are the depth to which the radial turbulent pumping extends and the
turbulent diffusivity in the lower half of the convection zone. We find that
the pumping needs to extend to depths of about $0.80R_{\odot}$ and the bulk
turbulent diffusivity needs to be around 10 km$^2$/s or less. We find that the
emergences are restricted to low latitudes without the need to impose such a
preference. The flux-transport BL model, incorporating the helioseismically
inferred meridional flow and toroidal field loss term, is compatible with the
properties of the observed butterfly diagram and with the observed toroidal
loss rate. Reasonably tight constraints are placed on the remaining free
parameters. The pumping needs to be to just below the depth corresponding to
the location where the meridional flow changes direction. Our linear model does
not however reproduce the observed "rush to the poles" of the diffuse surface
radial field resulting from the decay of sunspots -- reproducing this might
require the imposition of a preference for flux to emerge near the equator. | S. Cloutier, R. H. Cameron, L. Gizon | 2023-09-29T15:36:11 | http://arxiv.org/abs/2309.17328v1 | A Babcock-Leighton dynamo model of the Sun incorporating toroidal flux loss and the helioseismically-inferred meridional flow
###### Abstract
Context: Key elements of the Babcock-Leighton model for the solar dynamo are increasingly constrained by observations.
Aims: We investigate whether the Babcock-Leighton flux-transport dynamo model remains in agreement with observations if the meridional flow profile is taken from helioseismic inversions. Additionally, we investigate the effect of the loss of toroidal flux through the solar surface.
Methods: We employ the two-dimensional flux-transport Babcock-Leighton dynamo framework. We use the helioseismically-inferred meridional flow profile, and include toroidal flux loss in a way that is consistent with the amount of poloidal flux generated by Joy's law. Our model does not impose a preference for emergences at low latitudes; we do, however, require that the model produces such a preference.
Results: We can find solutions in general agreement with observations, including the latitudinal migration of the butterfly wings and the cycle's 11 year period. The most important free parameters in the model are the depth to which the radial turbulent pumping extends and the turbulent diffusivity in the lower half of the convection zone. We find that the pumping needs to extend to depths of about \(0.80R_{\odot}\) and the bulk turbulent diffusivity needs to be around \(10\) km\({}^{2}\)/s or less. We find that the emergences are restricted to low latitudes without the need to impose such a preference.
Conclusions: The flux-transport Babcock-Leighton model, incorporating the helioseismically inferred meridional flow and toroidal field loss term, is compatible with the properties of the observed butterfly diagram and with the observed toroidal loss rate. Reasonably tight constraints are placed on the remaining free parameters. The pumping needs to be to just below the depth corresponding to the location where the meridional flow changes direction, and where numerical simulations suggest the convection zone becomes marginally subadiabatic. Our linear model does not however reproduce the observed "rush to the poles" of the diffuse surface radial field resulting from the decay of sunspots - reproducing this might require the imposition of a preference for flux to emerge near the equator.
## 1 Introduction
The solar cycle is driven by a self-excited fluid dynamo which is induced by the interaction between the large-scale magnetic field and flows within the convection zone of the Sun (Ossendrijver, 2003; Charbonneau, 2014). In the first part of the dynamo loop, differential rotation winds up the poloidal magnetic field, generating toroidal field. This is the so-called \(\Omega\)-effect. The \(\Omega\)-effect is both well understood and constrained by observations.
In the second part of this loop, the toroidal field generates new poloidal magnetic field. This new poloidal field has the opposite polarity to the original poloidal field. Each 11-year sunspot (or Schwabe) cycle is half of the 22-year magnetic (or Hale) cycle required to revert to the original polarity. Non-axisymmetric flows and fields play a critical role during this phase of the cycle. The non-axisymmetric processes involved in the second phase, however, are far from being either well understood or constrained.
A major success of helioseismology was the determination of the sub-surface solar rotation profile, which challenged the dynamo-wave paradigm (Gough & Toomre, 1991). This challenge led to the flux-transport dynamo (FTD) model (Wang et al., 1991), where the deep meridional circulation causes the emergence locations of sunspots to drift equatorwards during a solar cycle. Observational and theoretical studies (see eg. Dasi-Espuig et al., 2010; Kitchatinov & Olemskoy, 2011; Cameron & Schussler, 2015) provide strong support for the Babcock-Leighton mechanism (BL - Babcock, 1961; Leighton, 1964, 1969) to be the dominant mechanism in the second part of the dynamo loop, as opposed to the turbulent \(\alpha\)-effect (Parker, 1955; Steenbeck et al., 1966). At the core of the BL mechanism is the role played by the surface field. Sunspots emerge in bipolar magnetic pairs, with an east-west orientation usually in accordance with Hale's law. There is also a statistical tendency, called Joy's law, for the following spots to emerge closer to the poles and the leading spots closer to the equator. Sunspots decay within a few days to months, after which the field is dispersed by small-scale convective motions and transported poleward by the meridional flow. The flux cancellation of leading sunspot fields across the equator allows for the net buildup of a polar field by trailing sunspot fields.
Transport processes are required at the surface in order to transport the radial field from the equator to the poles, and to transport the subsurface toroidal field equatorwards to account for the equatorial migration of the butterfly wings (Sporer's law).
The surface part of the required transport has been established by observations of the surface meridional flow and the success of the surface flux transport model. The helioseismically inferred subsurface meridional flow is a relatively new constraint for these models. The use of the helioseismically-inferred meridional flow profile removes a number of free parameters from Babcock-Leighton type models. This makes a comparison with the observations a tighter test of the model and allows us to better constrain the remaining free parameters.
An additional recent constraint is that toroidal flux is lost through flux emergence, with a timescale estimated to be around 12 years by Cameron & Schussler (2020, hereafter CS20). This paper will investigate if the BL FTD model, using the meridional flow inferred by Gizon et al. (2020, hereafter G20), and including the toroidal flux loss associated with flux emergence, is consistent with observations. To this end we introduce a loss term in the evolution equation for the toroidal field that is consistent with the evolution of the poloidal flux associated with Joy's law.
In the Babcock-Leighton type of model considered in this paper, the turbulent convective motions are not explicitly simulated; instead their effect on the magnetic field is parameterized (eg. Charbonneau 2014). Mean-field theory (Moffatt 1978; Krause & Radler 1980) actually shows that including the effect of turbulence introduces a large number of parameters. In the Babcock-Leighton model only a few are kept, the most important of which include the \(\alpha\)-effect, an increased turbulent diffusion, and downward diamagnetic pumping. Of these, the Babcock-Leighton \(\alpha\)-effect is poorly understood but well constrained by observations, while turbulent pumping and turbulent diffusion are largely unconstrained.
In this paper we will see if the FTD model, with the observed meridional flow and flux loss, is compatible with the Babcock-Leighton model, and what constraints it places on the other processes of the model.
## 2 Model
### Dynamo equations
In mean-field theory, the axisymmetric large-scale magnetic and velocity fields are decomposed into poloidal and toroidal components as:
\[\@vec{B}=\nabla\times[A(r,\theta,t)\vec{\epsilon}_{\phi}]+B(r,\theta,t)\vec{\epsilon}_{\phi}, \tag{1}\]
\[\@vec{u}=\@vec{u}_{m}(r,\theta)+r\sin\theta\,\Omega(r,\theta)\vec{\epsilon}_{ \phi}, \tag{2}\]
where \(A\) is the \(\phi\)-component of the vector potential field and \(B\) is the toroidal component of the large-scale magnetic field, \(\@vec{u}_{m}\) is the meridional circulation, and \(\Omega\) is the differential rotation. For an isotropic turbulent diffusivity that has only a \(r\)-dependence, the kinematic mean-field dynamo equations are:
\[\frac{\partial A}{\partial t} =-\frac{\@vec{u}_{p}}{\varpi}\cdot\nabla(\varpi A)+\eta\left( \nabla^{2}-\frac{1}{\varpi^{2}}\right)A+S, \tag{3}\] \[\frac{\partial B}{\partial t} =-\varpi\@vec{u}_{p}\cdot\nabla\left(\frac{B}{\varpi}\right)+ \eta\left(\nabla^{2}-\frac{1}{\varpi^{2}}\right)B+\frac{1}{\varpi}\frac{ \partial(\varpi B)}{\partial r}\frac{d\eta}{dr}\] (4) \[-B\nabla\cdot\@vec{u}_{p}+\varpi[\nabla\times(A\vec{\epsilon}_{ \phi})]\cdot\nabla\Omega-L,\]
where \(\varpi=r\sin\theta\), \(\@vec{u}_{p}=\@vec{u}_{m}+\@vec{\gamma}\), \(\eta\) is the turbulent diffusivity, \(\@vec{\gamma}\) the turbulent pumping, \(S\) the BL source term, and \(L\) the toroidal field loss term due to flux emergence. The source and loss terms will be discussed in Section 2.4.
### Differential rotation and meridional circulation
The large-scale flows need to be prescribed in kinematic models. For the differential rotation, we use the simple model provided by Belvedere et al. (2000) and shown in the left panel of Figure 1:
\[\Omega(r,\theta)=\sum_{j=0}^{2}\cos(2j\theta)\sum_{i=0}^{4}c_{ij}r^{i}, \tag{5}\]
where the coefficients \(c_{ij}\) can be found in that paper. This fit is an approximation of the helioseismically inferred rotation rate of Schou et al. (1998).
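For readers who want to evaluate Equation 5 numerically, a minimal sketch follows (Python, our choice throughout). The coefficient array is a hypothetical placeholder containing only a uniform ~435 nHz term; the actual \(c_{ij}\) values are tabulated in Belvedere et al. (2000) and are not reproduced here.

```python
import numpy as np

# Hypothetical placeholder: only a uniform ~435 nHz term is set here;
# the real c_ij are tabulated in Belvedere et al. (2000).
c = np.zeros((5, 3))          # c[i, j] multiplies r^i * cos(2 j theta)
c[0, 0] = 435.0

def omega(r, theta):
    """Equation 5: Omega(r, theta) = sum_j cos(2 j theta) sum_i c_ij r^i."""
    i = np.arange(5)
    total = 0.0
    for j in range(3):
        total += np.cos(2 * j * theta) * np.sum(c[:, j] * r ** i)
    return total

print(omega(0.9, np.pi / 2))  # equatorial rate at r = 0.9 Rsun -> 435.0
```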
As mentioned in the introduction, we will use the meridional circulation inferred from observations. In this study, we use the inversions of G20. The authors furnish the meridional flow for cycles 23 and 24. In order to keep the parameter space study manageable, we take the average of the two cycles. In addition, since we are not here interested in the asymmetry between both hemispheres, we also symmetrize the flow across the equator. The right panel of Figure 1 shows the meridional circulation we use in all our models. Note that the flow switches from poleward to equatorward at a radius of about \(0.785R_{\odot}\), which we will call the meridional flow turnover depth \(r_{t}\); we will refer to the region beneath it as the lower or deep convection zone, and the one above as the upper or shallow convection zone.
### Turbulent parameterizations
We choose a turbulent diffusivity profile which is written as a double step as in Munoz-Jaramillo et al. (2011):
\[\begin{split}\eta(r)=\eta_{\rm RZ}+&\frac{\eta_{ \rm CZ}-\eta_{\rm RZ}}{2}\left[1+\mathrm{erf}\left(\frac{r-0.72R_{\odot}}{0.012R_{\odot}}\right)\right]\\ +&\frac{\eta_{R_{\odot}}-\eta_{\rm CZ}-\eta_{\rm RZ} }{2}\left[1+\mathrm{erf}\left(\frac{r-0.95R_{\odot}}{0.01R_{\odot}}\right) \right],\end{split} \tag{6}\]
where \(\eta_{\rm RZ}=0.1\) km\({}^{2}\)/s, \(\eta_{R_{\odot}}=350\) km\({}^{2}\)/s, and \(\eta_{\rm CZ}\) are respectively the radiative core, surface, and bulk values of the turbulent diffusivity. \(\eta_{\rm CZ}\) is a free parameter and \(\eta_{R_{\odot}}\) has been chosen to be consistent with estimates from observations (eg. Komm et al. [10]), the surface flux transport model of Lemerle et al. ([10]), and MLT (eg. Munoz-Jaramillo et al. [11]). The values in the first error function have been chosen so that the drop in diffusivity occurs mostly before the helioseismically determined position of the tachocline (Charbonneau et al. [10]), roughly coinciding with the overshoot region (Christensen-Dalsgaard et al. [11]).
Figure 1: Rotation profile of Belvedere et al. (2000) given by equation 5 (left) and cycle-averaged and symmetrized stream function of the helioseismic meridional flow inversions of G20 (right). For the latter, positive values represent clockwise circulation and negative anticlockwise. The dash-dotted and dotted lines represent the approximate locations of the tachocline at \(0.7R_{\odot}\) and reversal of the meridional flow direction at \(0.8R_{\odot}\), respectively.
For the turbulent pumping, we adopt the profile given by Karak & Cameron (2016, hereafter KC16 - see also the discussion in their section 2):
\[\@vec{\gamma}=-\frac{\gamma_{0}}{2}\left[1+\mathrm{erf}\left(\frac{r-r_{\gamma }}{0.01R_{0}}\right)\right]\hat{\@vec{e}}_{r}, \tag{7}\]
where we take \(r_{\gamma}=r_{t}=0.785R_{\odot}\). This choice will be discussed in Section 4.2.3.
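Both turbulent profiles transcribe directly into code; a sketch assuming NumPy/SciPy and radii in units of \(R_{\odot}\). Note that, exactly as written, Equation 6 gives a surface value of \(\eta_{R_{\odot}}-\eta_{\rm RZ}\approx 349.9\) km\({}^{2}\)/s.

```python
import numpy as np
from scipy.special import erf

# Radii in units of the solar radius; diffusivities in km^2/s.
ETA_RZ, ETA_SURF = 0.1, 350.0

def eta(r, eta_cz):
    """Double-step turbulent diffusivity of Equation 6."""
    step_cz = 0.5 * (1.0 + erf((r - 0.72) / 0.012))
    step_surf = 0.5 * (1.0 + erf((r - 0.95) / 0.01))
    # As written in Eq. 6, the surface limit is ETA_SURF - ETA_RZ = 349.9.
    return (ETA_RZ + (eta_cz - ETA_RZ) * step_cz
            + (ETA_SURF - eta_cz - ETA_RZ) * step_surf)

def gamma(r, gamma0, r_gamma=0.785):
    """Radial turbulent pumping of Equation 7 (negative = downward), m/s."""
    return -0.5 * gamma0 * (1.0 + erf((r - r_gamma) / 0.01))

r = np.linspace(0.65, 1.0, 241)
print(eta(r, eta_cz=10.0).round(1)[[0, 120, 240]])  # core, mid-CZ, surface
```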
### Flux emergence
The emergence of bipolar magnetic regions both removes toroidal flux (see CS20) and creates poloidal flux (because of Joy's law). These two processes are clearly linked as are the respective loss and source terms in Equations 4 and 3.
We take the amount of flux emerging to be proportional to the toroidal flux density
\[b(\theta,t)=\int_{0.7R_{\odot}}^{R_{\odot}}B(r,\theta,t)r\mathrm{d}r, \tag{8}\]
where the integration is over the depth of the convection zone. This prescription corresponds to a dynamo where the toroidal field is not necessarily stored near the tachocline but can be distributed throughout the convection zone (KC16, Zhang & Jiang [23]). It is in part motivated by observations of dynamos in fully convective stars (Wright & Drake [11]), and by cyclic dynamo action in 3D MHD simulations of spherical shells without a tachocline (eg. Brown et al. [11]; Nelson et al. [24], [25]). The flux emergence rate \(R\) can be written in general as a function of latitude:
\[R(\theta,t)=f_{\theta}(\theta)\frac{b(\theta,t)}{\tau_{0}}, \tag{9}\]
where \(\tau_{0}\) is a timescale and \(f_{\theta}(\theta)\) is its latitudinal dependence, which we take to be
\[f_{\theta}(\theta)=\sin\theta, \tag{10}\]
corresponding to an emergence probability that is constant per unit length of the toroidal field lines.
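A discrete sketch of Equations 8-10 (NumPy assumed; the Gaussian test profile is purely illustrative):

```python
import numpy as np

def toroidal_flux_density(B, r):
    """b(theta, t) of Equation 8: integrate B r dr over 0.7-1.0 Rsun.
    B has shape (n_r, n_theta); r has shape (n_r,) in solar radii."""
    return np.trapz(B * r[:, None], r, axis=0)

def emergence_rate(b, theta, tau0):
    """R(theta, t) of Equations 9-10 with f_theta = sin(theta)."""
    return np.sin(theta) * b / tau0

r = np.linspace(0.7, 1.0, 121)
theta = np.linspace(0.0, np.pi, 181)
# Toy toroidal field concentrated near the equator, uniform in radius.
B = np.exp(-((theta - np.pi / 2) ** 2) / 0.1)[None, :] * np.ones_like(r)[:, None]
b = toroidal_flux_density(B, r)
R = emergence_rate(b, theta, tau0=12.5)   # units follow those of B and r
```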
In general, the timescale \(\tau_{0}\) in Equation 9 depends on the dynamics associated with flux emergence. If these dynamics are dominated by the large-scale field, then the buoyant rise time, and hence \(\tau_{0}\), would be expected to depend inversely on the mean-field value of \(B^{2}\) (Kitchatinov & Pipin [10]). If however small-scale magnetic fields remain coherent over timescales longer than the correlation time, then the \(B\) filling factor, which can be far from 1, becomes important. Some previous studies (eg. Schmitt & Schussler [11]; Moss et al. [12], Jennings & Weiss [13]) assume \(\tau\sim B^{-2}\) so that the loss term scales like \(B^{3}\). In this paper we consider the linear case where \(\tau_{0}\) is a constant, which would correspond to a case where the field is composed of filamentary structures with lifetimes longer than the turnover timescale of the turbulence and with local field strengths drawn from some distribution which is independent of flux. We stress that the aim in this paper is to consider a simple linear system. We defer the nonlinear case to future work.
The orientation of the flux emergence is governed by Joy's law, which states that the leading polarity flux emerges on average closer to the equator than the trailing polarity one. We take the form of Joy's law used in Leighton (1969), \(\sin\delta=\frac{1}{2}\cos\theta\), where \(\delta\) is the angle between the solar equator and the line joining the two polarities. Then the rate at which toroidal flux density is lost due to flux emergence is
\[\begin{split}\frac{\partial b}{\partial t}\bigg{|}_{\mathrm{L}}& =-\cos\delta\;R(\theta,t),\\ &=-f_{\theta}(\theta)\cos\delta\frac{b(\theta,t)}{\tau_{0}},\end{split} \tag{11}\]
where the subscript \(L\) indicates the contribution from the loss term. The tilting of a toroidal flux tube as it emerges gives rise to a \(\theta\)-component of the same polarity. The rate at which this \(\theta\)-component of the flux density is lost is
\[\begin{split}\frac{\partial}{\partial t}\int_{0.7R_{\odot}}^{R_{ \odot}}B_{\theta}(r,\theta,t)r\mathrm{d}r\bigg{|}_{\mathrm{S}}&=- \sin\delta\;R(\theta,t),\\ &=-f_{\theta}(\theta)\sin\delta\frac{b(\theta,t)}{\tau_{0}}.\end{split} \tag{12}\]
where the subscript \(S\) indicates the contribution from the source term \(S\). This is what gives rise to the Babcock-Leighton mechanism. From the definition of the poloidal field (Equation 1),
\[B_{\theta}=-\frac{1}{r}\frac{\partial rA}{\partial r}. \tag{13}\]
We choose a depth \(R_{b}\) sufficiently below the base of the convection zone so that the 11-year cyclic component of the field is negligible there. Then multiplying both sides by \(r\) and integrating from the base of the convection zone to the surface, we obtain
\[\begin{split}\int_{R_{b}}^{R_{\odot}}B_{\theta}(r,\theta,t)r \mathrm{d}r=&-(A(R_{\odot},\theta,t)R_{\odot}-A(R_{b},\theta,t)R _{b})\\ =&-A(R_{\odot},\theta,t)R_{\odot}.\end{split} \tag{14}\]
Therefore, in terms of the \(\phi\)-component of the poloidal vector potential \(A\), Equation 12 becomes
\[\frac{\partial A(R_{\odot},\theta,t)}{\partial t}\bigg{|}_{\mathrm{S}}=f_{ \theta}(\theta)\sin\delta\frac{b(\theta,t)/R_{\odot}}{\tau_{0}}. \tag{15}\]
Next, we need to prescribe the radial structure of the source and loss terms. For the source term \(S\), we follow KC16 and assume
\[S(r,\theta,t)=f_{r}^{S}(r)\sin\theta\sin\delta\frac{b(\theta,t)/R_{\odot}}{ \tau_{0}}, \tag{16}\]
where
\[f_{r}^{S}(r)=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{r-r_{S}}{0.01R_{\odot}} \right)\right]. \tag{17}\]
\(r_{S}\) is the depth down to which the source extends. For most of the calculations, we choose \(r_{S}=0.85R_{\odot}\) (as in Munoz-Jaramillo et al. 2011). There are indications that the disconnection of the emerged flux from its subsurface roots should happen much deeper than the usually assumed shallow location of \(0.95R_{\odot}\) (Longcope & Choudhuri [10]). We will nevertheless vary this parameter in order to study its impact on the solutions.
For the loss term \(L\), we first distribute the toroidal flux density loss (equation 11) over radius according to the amount of flux initially present, so that
\[L(r,\theta,t)=f_{r}^{L}(r)\sin\theta\cos\delta\frac{B(r,\theta,t)}{\tau_{0}}, \tag{18}\]
where
\[f_{r}^{L}(r)=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{r-0.70R_{\odot}}{0.01R _{\odot}}\right)\right]. \tag{19}\]
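The source and loss terms of Equations 16-19 then take the following form in code; a sketch in units where \(R_{\odot}=1\), with the toy profile for \(b\) purely illustrative:

```python
import numpy as np
from scipy.special import erf

def source_term(b, r, theta, tau0, r_S=0.85):
    """Babcock-Leighton source S(r, theta) of Equations 16-17,
    with Joy's law tilt sin(delta) = 0.5 cos(theta); R_sun = 1."""
    f_r = 0.5 * (1.0 + erf((r[:, None] - r_S) / 0.01))
    sin_delta = 0.5 * np.cos(theta)
    return f_r * np.sin(theta) * sin_delta * b / tau0

def loss_term(B, r, theta, tau0):
    """Toroidal flux loss L(r, theta) of Equations 18-19."""
    f_r = 0.5 * (1.0 + erf((r[:, None] - 0.70) / 0.01))
    cos_delta = np.sqrt(1.0 - (0.5 * np.cos(theta)) ** 2)
    return f_r * np.sin(theta) * cos_delta * B / tau0

r = np.linspace(0.7, 1.0, 121)
theta = np.linspace(0.0, np.pi, 181)
b = np.sin(theta) ** 12                     # toy toroidal flux density
S = source_term(b, r, theta, tau0=12.5)     # shape (121, 181)
L = loss_term(np.ones((121, 181)), r, theta, tau0=12.5)
```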
In order for the differential form of the source and loss terms to be valid, the emergence timescale \(\tau_{e}\) must be considered infinitesimal with respect to the timescale over which the magnetic configuration of the large-scale field changes appreciably. It follows that a timescale separation must hold:
\[\tau_{e}\ll P. \tag{20}\]
In the case of the Sun, \(\tau_{e}\sim 1\) day, and \(P=11\) years, so the timescale separation is reasonable. This formulation implicitly ignores the effect of the meridional flow on the emergence process. Nevertheless, this formulation is sufficient to study the general features of the solar cycle.
### Numerical procedure
Equations 3 and 4 are nondimensionalized and numerically solved in the meridional plane with \(0\leq\theta\leq\pi\) and \(0.65R_{\odot}\leq r\leq R_{\odot}\). We use a spatial resolution of \(241\times 301\) grid points evenly spaced in radius and colatitude, and a time step of \(5\times 10^{-6}R_{\odot}^{2}/\eta_{r}\), where \(\eta_{r}=10\) km\({}^{2}\)/s. The inner boundary matches to a perfect conductor, so that:
\[A=0\quad\mathrm{and}\quad\frac{\partial(rB)}{\partial r}=0\quad\mathrm{at} \quad r=0.65R_{\odot}, \tag{21}\]
and the outer boundary condition is radial:
\[\frac{\partial(rA)}{\partial r}=0\quad\mathrm{and}\quad B=0\quad\mathrm{at} \quad r=R_{\odot}. \tag{22}\]
The latter is necessary for FTD models to match surface flux transport models (Cameron et al., 2012). A second-order centered finite difference discretization is used for the spatial variables and the solution is advanced in time with the ADI scheme (Press et al., 1986). We use the code initially developed by D. Schmitt in Göttingen (as also used by Cameron et al., 2012).
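As a concrete illustration, the boundary conditions of Equations 21-22 could be imposed on such a grid as follows; this is a sketch using first-order one-sided differences for the Neumann conditions, not the actual implementation of the code used here.

```python
import numpy as np

def apply_boundary_conditions(A, B, r):
    """Discrete forms of Equations 21-22 on arrays of shape (n_r, n_theta),
    with r[0] = 0.65 Rsun and r[-1] = Rsun."""
    A[0, :] = 0.0                        # A = 0 (perfect conductor)
    B[0, :] = r[1] * B[1, :] / r[0]      # d(rB)/dr = 0 at the inner boundary
    A[-1, :] = r[-2] * A[-2, :] / r[-1]  # d(rA)/dr = 0 at the surface
    B[-1, :] = 0.0                       # B = 0 (radial field)
    return A, B

r = np.linspace(0.65, 1.0, 241)
A = np.ones((241, 301))
B = np.ones((241, 301))
A, B = apply_boundary_conditions(A, B, r)
```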
The linearity of equations 3 and 4 allows us to choose \(\tau_{0}\) and \(\gamma_{0}\) such that the dynamo is approximately critical (\(\sigma\leq 5\times 10^{-5}\) per year) with a cycle period of 12 years (within 0.1%), roughly the average period of cycles 23 and 24. This way we reduce our parameter space to only one dimension (\(\eta_{\mathrm{CZ}}\)). Our model being linear also means we can arbitrarily scale \(A\) and \(B\). In order to facilitate comparisons with observations, we scale the fields so that the maximum of the surface radial field is 10 G, which is consistent with the observed polar field strengths at cycle minimum (eg. Hathaway, 2015).
## 3 Observational constraints
### Toroidal flux loss timescale
The general expression for the toroidal flux decay timescale is
\[\tau(t)=-\frac{\Phi(t)}{\mathrm{d}\Phi(t)/\mathrm{d}t}, \tag{23}\]
where \(\Phi\) is understood as the net subsurface toroidal flux in the northern hemisphere:
\[\Phi(t)=\int_{0}^{\pi/2}b(\theta,t)\mathrm{d}\theta, \tag{24}\]
and \(\mathrm{d}\Phi(t)/\mathrm{d}t\) is its decay rate. In order to calculate the latter, we first need to specify the toroidal flux loss mechanism. For the loss \(L\) due to flux emergence, we have
\[\frac{\mathrm{d}\Phi}{\mathrm{d}t}\bigg{|}_{L}=-\int_{0}^{\pi/2}\int_{0.70R_{ \odot}}^{R_{\odot}}L(r,\theta,t)r\mathrm{d}r\mathrm{d}\theta, \tag{25}\]
and its corresponding timescale will be denoted by \(\tau_{L}\). Toroidal flux is also lost due to the explicit diffusion across the solar surface:
\[\frac{\mathrm{d}\Phi}{\mathrm{d}t}\bigg{|}_{\eta}=\eta_{R_{\odot}}\int_{0}^{ \pi/2}\left.\frac{\partial\left(rB\left(r,\theta,t\right)\right)}{\partial r }\right|_{R_{\odot}}\mathrm{d}\theta, \tag{26}\]
with an associated loss timescale of \(\tau_{\eta}\). The observational constraint on the flux loss timescale is given by CS20, and corresponds to \(\tau_{L}\approx 12\) years at solar maximum. Since these timescales vary across a cycle, we will calculate them at cycle maximum (to be defined in Section 3.2).
It is possible to estimate the range of values \(\tau_{0}\) should take. To do so we assume the tilt angle \(\delta\) associated with Joy's law is small, so that \(\cos\delta\sim 1\), and that the toroidal flux density can be approximated by \(b(\theta,t)=b_{0}(t)\sin^{m}\theta\), where \(b_{0}(t)\) is a time-dependent scalar, and \(m\) determines how closely the field is concentrated near the equator. With these approximations, we obtain:
\[1<\frac{\tau_{L}}{\tau_{0}}<\frac{\pi}{2}, \tag{27}\]
where \(\tau_{0}\) is the timescale for flux loss in the model, defined in equation 9, and the limits correspond to \(m=0\) and \(m=\infty\). The model parameter \(\tau_{0}\) should thus be comparable to (not more than a factor of 2 smaller than) the observed toroidal flux loss timescale \(\tau_{L}\).
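The bound of Equation 27 is easy to verify numerically. With \(\cos\delta\sim 1\), \(\tau_{L}/\tau_{0}\) reduces to the ratio of \(\int\sin^{m}\theta\,\mathrm{d}\theta\) to \(\int\sin^{m+1}\theta\,\mathrm{d}\theta\) over a hemisphere; a quick sketch:

```python
import numpy as np

theta = np.linspace(0.0, np.pi / 2, 20001)

def tauL_over_tau0(m):
    """tau_L / tau_0 for b proportional to sin^m(theta), cos(delta) ~ 1."""
    num = np.trapz(np.sin(theta) ** m, theta)
    den = np.trapz(np.sin(theta) ** (m + 1), theta)
    return num / den

print(tauL_over_tau0(0))     # -> 1.5708 ~ pi/2 (flux spread over all latitudes)
print(tauL_over_tau0(200))   # -> ~1.0 (flux concentrated near the equator)
```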
### Polar cap and activity belt flux densities
An important observational constraint is that the maxima of azimuthally averaged polar flux densities should be around the same strength as maximum flux densities in the activity belt. Since the evolution equations we are using are linear, we have nominally set the maximum of the polar field to be 10 G. This implies that the azimuthally averaged radial field in the butterfly wings in the model should also be around 10 G. This is a constraint on the model which we will return to when evaluating whether the model can produce solar-like cycles.
### Cycle phase of polar maxima
An important constraint we will take into account is when the maximum of polar flux occurs. As it is observed to happen quite close to activity minimum, the corresponding phase shift is about \(90^{\circ}\) with respect to cycle maximum. To measure this shift in the simulations, we need a definition for when the cycle maxima and the maxima of polar fields occur.
We begin with the surface radial flux of the polar cap
\[\Phi_{p}(t)\equiv 2\pi R_{\odot}^{2}\int_{60^{\circ}}^{90^{\circ}}B_{r}(R_{ \odot},\lambda,t)\,\mathrm{d}(\sin\lambda), \tag{28}\]
and of the activity belt
\[\Phi_{a}(t)\equiv 2\pi R_{\odot}^{2}\int_{0^{\circ}}^{40^{\circ}}B_{r}(R_{\odot},\lambda,t)\,\mathrm{d}(\sin\lambda), \tag{29}\]
We then define cycle maximum times, \(T_{a}\), as the times when the activity belt flux \(\Phi_{a}\) is maximum. The times of maximum polar flux, \(T_{p}\), are similarly defined. \(T_{a}\) and \(T_{p}\) are both defined from the signed fluxes, and hence each has one maximum per magnetic cycle (of about 24 years).
For each cycle, \(i\), we then calculate the phase shift between the polar maximum times and the activity maximum times \(\Delta\phi\) by
\[\Delta\phi=\pi[T_{p}(i+1)-T_{a}(i)]/P-\pi, \tag{30}\]
where \(P\) is the (activity) cycle period.
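A worked example of Equation 30 for a single pair of maxima (a simplification of the per-cycle bookkeeping):

```python
import numpy as np

def phase_shift_deg(T_p_next, T_a, period):
    """Equation 30 for one cycle pair, in degrees."""
    return np.degrees(np.pi * (T_p_next - T_a) / period - np.pi)

# A polar maximum 1.5 activity periods after cycle maximum gives the
# observed ~90 degree shift (polar fields peaking near activity minimum).
print(phase_shift_deg(T_p_next=18.0, T_a=0.0, period=12.0))  # -> 90.0
```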
## 4 Results
We first present our reference model including the Babcock-Leighton loss term in Section 4.1, then examine parameter sensitivity (Sections 4.2 to 4.5), and finally gauge the importance of the dynamo wave (Section 4.6).
### Reference model
Our reference model has a bulk diffusivity of \(\eta_{\mathrm{CZ}}=10\) km\({}^{2}\)/s, which places the simulation in the advection-dominated regime (an explanation of why such a low diffusivity is required in our setup is given in Section 4.2.2). Values of \(\tau_{0}=12.5\) years and \(\gamma_{0}=43.8\) m/s were required to achieve a critical 12-year (activity) cycle period dynamo. The reference model is presented in Figure 2. It can be seen that we are able to produce a reasonably solar-like butterfly diagram when using the helioseismically-inferred meridional flow of G20.
#### 4.1.1 Comparison with observational constraints
This model has an emergence loss timescale of \(\tau_{L}=17.2\) years, somewhat longer than the 12 years inferred by CS20. The diffusive loss timescale, on the other hand, is \(\tau_{\eta}=96.8\) years. The estimate of CS20 is based on all toroidal flux escaping through the photosphere. The combined timescale on which flux is lost through the photosphere in the model can be estimated as \(1/(1/\tau_{L}+1/\tau_{\eta})=14.6\) years, and hence is close to the inferred 12-year timescale. The value of \(\Delta\phi\) from the model, at \(134^{\circ}\), is noticeably larger than the observed value. The maximum value of the surface radial field in the butterfly wings is around 5 G, so about half of the maximum polar field strength (the observed average polar field is similar to the average in the butterfly wings, see e.g. the butterfly diagrams in Hathaway 2015). Considering the polar field strength is somewhat uncertain, our results are not inconsistent with observations. Our simulations do not have the problem of very large polar fields typical of FTD models (Charbonneau 2020).
The net toroidal flux \(\Phi\) (lower panel of Figure 2) shows that it reaches a maximum value of about \(5\times 10^{23}\) Mx close to cycle maximum, which is in rather good agreement with the estimates of Cameron & Schussler (2015) for cycles 22 and 23.
An important result of our model is that it achieves confinement of emergences to the low observed latitudes without the need for an emergence probability decreasing faster than \(\sin\theta\) with latitude. This can be seen in the upper panel of Figure 2, where we see that the toroidal flux density is mainly strong near the equator. The left panel of Figure 3 shows the toroidal field is mainly stored deep in the convection zone. This confinement of the toroidal field to deep in the convection zone is a consequence of the imposed radial pumping. The confinement to near the equator is then due to the meridional flow which advects the material from high latitudes towards the equator. The combination of radial pumping and equatorward meridional flow in the lower half of the convection zone leads to a stagnation point near the equator in the lower half of the convection zone where the field builds up until it is removed through emergence (also see Cameron & Schussler 2017; Jiang et al. 2013).
Our simulated butterfly diagram differs from the observed one in that it lacks a distinct "rush to the poles" of the trailing diffuse field of the decayed sunspots.
### Parameter dependence
Our model has five free parameters: the source and loss term timescale \(\tau_{0}\) and the depth where sunspots are anchored \(r_{S}\), the turbulent pumping amplitude \(\gamma_{0}\) and the depth it reaches down to \(r_{\gamma}\), and the turbulent diffusivity in the bulk of the convection zone \(\eta_{CZ}\). In this section we will first provide a qualitative description of what different choices of the parameters produce.
Importantly, the results and constraints we find are under the assumption that \(f_{\theta}(\theta)=\sin\theta\), i.e. that there is no imposed preference for emergences to occur at low latitudes. We also performed simulations with \(f_{\theta}(\theta)=\sin^{12}\theta\) (as in KC16), and as expected we are able to find critical dynamo solutions which match the observations for a much wider range of parameters.
#### 4.2.1 Influence of the source depth
We here investigate how the choice of \(r_{S}\) affects the solutions. We used the same value of \(\eta_{\mathrm{CZ}}\) as in the reference case, and varied \(r_{S}\). The values of \(\tau_{0}\) and \(\gamma_{0}\) were then chosen so that the growth rate is zero and the cycle period is 12 years. We found that the solutions with flux loss are not very dependent on \(r_{S}\) for \(r_{S}\) extending from just above \(0.78R_{\odot}\) to the surface. This is because the solutions with flux loss require strong pumping, which rapidly stretches the poloidal field so that it extends radially down to the depth at which the pumping stops. This makes the model insensitive to the initial depth of the poloidal source term (at least in the region of parameter space near the reference case).
#### 4.2.2 The bulk diffusivity
We only find critical 12-year periodic solutions when the bulk diffusivity is of the order of 10 km\({}^{2}\)/s. This is a consequence of the strong radial shear of the equatorward component of the helioseismically-inferred meridional flow \(u_{\theta}\) in the lower third of the convection zone. The strong radial shear leads to toroidal flux at different depths being advected in latitude at very different rates. This implies that flux originally concentrated at one latitude over a range of depths will be quickly spread out in latitude.
To understand the role of the radial shear of the latitudinal flow, we can imagine toroidal field initially at one latitude but spread out in radius from \(r=0.766R_{\odot}\) to \(0.785R_{\odot}\). These depths were chosen so that the meridional flow will vary from almost 1 m/s equatorwards to almost 0 m/s. Over 5 years this will spread the flux over a latitudinal band of 157 Mm. This spreading out would be similar to a diffusivity of \((157)^{2}/5\) Mm\({}^{2}\)/year
= 156 km\({}^{2}\)/s. In the context of Babcock-Leighton FTD models, this is a large value. As a comparison, the 1D model of Cameron & Schussler (2017), which also assumes \(f_{\theta}(\theta)=\sin\theta\), requires the latitudinal diffusivity in the bulk of the convection zone to be lower than about 100 km\({}^{2}\)/s. This effective diffusivity is, however, in agreement with the estimate of \(150-450\) km\({}^{2}\)/s of Cameron & Schussler (2016) based on the properties of the declining phase of the solar cycle.
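The arithmetic can be checked in a few lines:

```python
# Quick check of the effective-diffusivity estimate quoted above.
spread_mm2_per_year = 157.0 ** 2 / 5.0           # (157 Mm)^2 over 5 years
km2_per_s = spread_mm2_per_year * 1e6 / 3.156e7  # 1 Mm^2 = 1e6 km^2; 1 yr ~ 3.156e7 s
print(round(km2_per_s, 1))                       # -> 156.2 km^2/s
```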
The essential point of the above is that, unless the toroidal field is confined to a narrow range of depths, the radial shear in the latitudinal component of the meridional velocity quickly spreads the toroidal field out in latitude. Consequently, if there is no imposed preference for emerging near the equator then the butterfly diagram ceases to be solar-like. The requirement for confinement in latitude is what imposes the constraint that \(\eta_{\rm CZ}\approx 10\) km\({}^{2}\)/s. We consider this fixed for the rest of this paper. We also comment that if the radial shear in the differential rotation were weaker, then this constraint would be much weaker.
#### 4.2.3 Turbulent magnetic pumping
With our chosen \(f_{\theta}(\theta)=\sin\theta\) (Eq. 10), we find growing dynamo solutions only for values of \(r_{\gamma}\) not far away from \(0.785R_{\odot}\). This depth is where the helioseismically-inferred meridional flow profile changes direction from poleward to equatorward, and roughly where numerical simulations suggest the convection zone might be weakly subadiabatic (Hotta, 2017). A slightly broader range of pumping depths can be achieved when the loss term is not included, but shallower depths make the butterfly wings broader.
Our conclusion from this is that the pumping depth is fairly tightly constrained if the appearance of spots to low latitudes is only caused by the equatorward meridional flow leading to a build up at low latitudes. The depth of the pumping is poorly constrained if the preference for low latitude emergence is imposed.
Figure 3: Meridional cuts of the North hemisphere toroidal field (left column) and poloidal field (as \(r\sin\theta A\), right column) of the reference model for specific times indicated by the vertical dotted lines in Figure 2. The dotted lines are located at radii of 0.95, 0.85, and \(0.80R_{\odot}\), the approximate locations of the bottom of the near-surface shear layer, \(r_{S}\), and \(r_{\gamma}\) respectively.
Figure 2: Time-latitude diagrams of the toroidal flux density \(b\) (top), surface radial field \(B_{r}(R_{\odot})\) (middle), and the net toroidal flux in the northern hemisphere \(\Phi\) (bottom) for our reference model. The vertical dotted lines indicate the times where the snapshots of Figure 3 were taken.
Observations do not provide estimates for the amplitude of turbulent pumping at depth, and so it is interesting to compare our results with those from global MHD simulations. Shimada et al. (2022) find that the outer half of the convection zone has a turbulent diffusivity of \(\sim 10\) km\({}^{2}\)/s, similar to the values we needed in our model without an imposed preference for emergence at low latitudes; they also find that \(\gamma_{r}\) peaks at \(\sim 10\) m/s, while Simard et al. (2016) and Warnecke et al. (2018) find amplitudes of the order of \(1-2\) m/s or half the root-mean-square velocity. The latter is of the order of \(40\) m/s according to mixing-length (Vitense 1953; Bohm-Vitense 1958) estimates. It thus appears the pumping velocities required in the reference case are too large by a factor of \(2\) to \(3\). We defer a discussion of this to Sections 4.4 and 4.5.
### Role of the toroidal flux loss term \(L\)
In order to gauge the importance of the emergence loss term, in this subsection we switch off the term by setting \(L=0\) in Equation 4. We first use the same parameters as in the reference case. The resulting cycle period is shorter at \(11.3\) years. However, the solution is rapidly growing, with a growth rate of about \(66\%\) per cycle. This growth is not unexpected, as emergences are no longer able to remove the subsurface toroidal flux and it must now be removed either through its "unwinding" by the new cycle flux, or by diffusive cancellation across the equator. Because emergences no longer deplete the subsurface toroidal flux, more poloidal field is generated so that the polar fields are reversed much faster, explaining the shorter period.
We also investigated the \(12\) year period critical solutions when \(L=0\). Doing so required \(\tau_{0}=9.3\) years (as against the reference case where \(\tau_{0}=12.5\) years), and a turbulent pumping \(\gamma_{0}=6.41\) m/s. The time-latitude diagrams and meridional cuts for this case are shown in Figures 4 and 5.
Figure 4: Time-latitude diagrams of the toroidal flux density \(b\) (top), surface radial field \(B_{r}(R_{\odot})\) (middle), and the net toroidal flux in the northern hemisphere \(\Phi\) (bottom) for the case without the flux loss associated with flux emergence (and with a period of \(12\) years and zero growth rate). The vertical dotted lines indicate the times where the snapshots of Figure 5 were taken.
Figure 5: Meridional cuts of the North hemisphere toroidal field (left column) and poloidal field (as \(r\sin\theta A\), right column) of the case without the flux loss associated with flux emergence (and with a period of \(12\) years and zero growth rate) for specific times indicated by the vertical dotted lines in Figure 4. The dotted lines are located at radii of \(0.95\), \(0.85\), and \(0.80R_{\odot}\), the approximate locations of the bottom of the near-surface shear layer, \(r_{S}\), and \(r_{\gamma}\) respectively.
The surface diffusion loss timescale is reduced to \(\tau_{\eta}=25.6\) years, only a factor of two larger than the observed value of the toroidal flux loss timescale. This is due to the lowered pumping amplitude, which makes the diffusion of the toroidal field through the surface less difficult than in the case with strong pumping. Even with such low pumping, the poloidal field near the surface is still almost radial (Figure 5).
Looking at the butterfly diagram, the most apparent difference is the large decrease of the polar field strengths compared to the fields in the butterfly wings. The maximum value of the latter goes up from about 5 to 7.5 G, which is relatively close to the observed value of around 10 G. In this case, we find the phase difference between the polar field maxima and active region maxima is \(101^{\circ}\), similar to what is observed.
Note that the pumping amplitude of \(\gamma_{0}=6.41\) m/s in this model is in much better agreement with estimates from global MHD simulations (cf. Section 4.2.3). This is because models without the loss term achieve shorter periods much more easily. In principle, then, very large pumping velocities are not necessary to obtain a functioning dynamo for this class of models.
### Sensitivity to the meridional flow and differential rotation
Here we investigate how sensitive the simulations are to the meridional flow and differential rotation. We do this by considering the inferred meridional flow for cycles 23 and 24 separately, and a differential rotation profile which differs significantly from the one of the reference model at high latitudes. As is also the case for the reference solution, we do not impose a preference for emergences at low latitudes (if we impose a preference for emergences at low latitudes, then the parameter space where the model has similar properties to the observations becomes much larger). As all solutions mentioned in this section have qualitatively the same butterfly diagrams as the reference case they are not shown.
First, we consider individually the symmetrized (across the equator) meridional flow profiles of cycles 23 and 24. Using the meridional flow from cycle 23, a 12-year periodic critical dynamo requires \(\gamma_{0}=16.3\) m/s. This is a substantial reduction from the reference case. Increasing the period to 13.3 years and keeping the criticality requirement led to pumping speeds of \(\gamma_{0}=10\) m/s. Using the meridional flow from cycle 24, we were unable to find critical solutions with periods shorter than 12 years. A critical solution with \(\gamma_{0}=10\) m/s required a period of 16.6 years.
Clearly the model, where emergence is not restricted to low latitudes, is very sensitive to the meridional flow. This is because emergences at high latitudes are inefficient at getting flux across the equator, which is what eventually reverses the polar fields. The observational constraint, that the cycle period is similar to the timescale for which toroidal field is lost through the surface due to flux emergence, implies that the cycle period involves a balance between the flux transport to low latitudes and the loss through emergence.
In this context, both mean-field theory (e.g. Kichatinov 1991; Kitchatinov & Nepomnyashchikh 2016) and global numerical models (e.g. Shimada et al. 2022, and references therein) indicate that equatorward latitudinal turbulent pumping could also be substantial. In BL-FTD modelling this would correspond to an increase in the meridional return flow, and would lead to a reduction in the strength of the required radial pumping.
Second, we consider the sensitivity to differential rotation. Again we consider critical 12-year periodic solutions, using the average meridional flow profile used in the reference case. We now use the differential rotation profile of Larson & Schou (2018), which differs from that of the reference case at high latitudes (rather than the analytic fit of Belvedere et al. (2000) often used in dynamo studies and used in the reference case). The required parameter values are \(\tau_{0}=11.8\) yrs and \(\gamma_{0}=28.5\) m/s. \(\tau_{0}\) is again of the order of the cycle period and the 12-year estimate for the toroidal flux loss timescale of Cameron & Schüssler (2020). But the pumping velocity is much more reasonable.
### Sensitivity to assumption that growth rate is zero and period is 12 years
We have thus far concentrated on the kinematic regime with zero growth rates. The Sun is certainly in a statistically saturated state. The kinematic case with zero growth rate is relevant if the system is weakly nonlinear. Whether or not this is the case for the Sun remains open (for arguments in favour see van Saders et al. 2016; Metcalfe et al. 2016; Kitchatinov & Nepomnyashchikh 2017). In the strongly nonlinear case, the period will be substantially affected by the choice of the nonlinearity, and the growth rate in the linear regime is no longer a constraint. The observations are thus less constraining in the strongly nonlinear case. For this reason, we have focused on the weakly nonlinear case and have looked for zero-growth-rate solutions to the linear problem. The addition of a weak nonlinearity will slightly modify both the growth rate (in the saturated state it will be zero) and the period. Hence in this section we consider the sensitivity of the growth rate and period to \(\tau_{0}\) and \(\gamma_{0}\).
Figure 6 shows the cycle period and the growth of the toroidal flux per cycle as a function of the timescale parameter \(\tau_{0}\). As in KC16, we observe that increasing the source term amplitude \(\tau_{0}^{-1}\) causes the growth rate to increase until the cycle period becomes too short for the meridional flow to transport the field (see Section 4.1 of KC16). Eventually, the dynamo shuts down completely. Growing solutions can nonetheless be reached by further increasing the source term amplitude \(\tau_{0}^{-1}\). However, the resulting cycle periods are very short (\(\lesssim 3\) years) and the dynamo is now driven by a dynamo wave propagating equatorwards in the high-latitude tachocline.
The effect of the pumping amplitude on the growth rate and cycle period is shown in Figure 7. The growth rate is very sensitive to the pumping amplitude at lower values, where the operating threshold is not yet met, as flux emergence then quickly removes the toroidal flux at high latitudes.
Figure 6: Percentage per-cycle growth of the toroidal flux (solid line, left axis) and cycle period (dashed line, right axis) as a function of the timescale parameter \(\tau_{0}\).
But the effect of pumping saturates as its amplitude increases: at some point the transport of the poloidal field to the lower convection zone becomes essentially instantaneous. Note that we have concentrated on solutions near the bifurcation point where dynamo action switches on. This very likely makes the model more sensitive to the different parameters than would be the case if we were considering a non-linear, saturated dynamo.
### Role of the dynamo wave
To investigate what role the subsurface meridional return flow is playing, we apply the same procedure as KC16 to our reference model, namely we switch off the equatorward component of the meridional flow (see Section 3.2 of KC16 for a discussion). Figure 8 shows the resulting magnetic field butterfly diagram. For the reference parameters this mode is decaying (and so is not a dynamo) with fields almost entirely located above \(45^{\circ}\). Since there is no equatorward component of the meridional flow, the equatorward migration of the field is due to the negative radial rotation shear in the high-latitude tachocline, and the direction of propagation is in accordance with the Parker-Yoshimura sign rule. We hence, not surprisingly, conclude that the subsurface meridional flow is essential in this model.
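For reference, the Parker–Yoshimura rule invoked here can be stated compactly in its standard mean-field form (this equation is not written out in the text above; \(\alpha\) denotes the poloidal source coefficient):

\[\mathbf{s}=-\,\alpha\,\hat{\boldsymbol{\phi}}\times\nabla\Omega,\]

so that in the northern hemisphere a positive \(\alpha\) combined with the negative radial shear \(\partial\Omega/\partial r<0\) of the high-latitude tachocline yields propagation along \(+\hat{\boldsymbol{\theta}}\), i.e. equatorward, as found for this mode.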
## 5 Conclusion
Using the helioseismically-inferred meridional flow of G20, we have shown that the Babcock-Leighton FTD model remains generally consistent with observations. We have also shown that the long-standing problem of the latitudinal distribution of sunspots can be solved if turbulent pumping reaches depths just under \(0.80R_{\odot}\), but not much deeper, where the meridional flow's direction switches from poleward to equatorward. High turbulent pumping velocities are necessary to essentially store the toroidal flux under this location, in agreement with the results of G20 (see also Parker 1987). There, the meridional flow, in conjunction with the \(\Omega\)-effect through the latitudinal shear present in the bulk of the convection zone (which is maximal at mid-latitudes), causes an accumulation of toroidal flux at equatorial latitudes. Turbulent pumping effectively short-circuits the meridional circulation, preventing significant generation of toroidal field at high latitudes. No additional restriction of emergences to low latitudes is required.
Our model using the helioseismically inferred meridional flow, and including the observed toroidal flux loss associated with flux emergence in a way that is consistent with the Babcock-Leighton source term, is able to reproduce the observed properties of the solar cycle, including the latitudinal migration of the sunspot wings and the approximately 11 year period. Our reference model predicts a toroidal flux loss timescale of 14.8 years at cycle maximum, compared to the estimate of 12 years of CS20.
###### Acknowledgements.
The authors wish to thank the anonymous referee for comments that helped improve the overall quality of this paper. SC is a member of the International Max Planck Research School for Solar System Science at the University of Göttingen. The authors acknowledge partial support from ERC Synergy grant WHOLE SUN 810218.
| Weは、Babcock-Leightonの流体輸送ダイナモモデルと観測との整合性を検証します。メディニアルフロープロファイルは、太陽の熱振動反復に由来するものです。また、太陽表面の渦巻状磁場流失の影響を調査します。私たちは2次元流体輸送のBLダイナモフレームワークを使用します。私たちは、太陽の熱振動反復から推定されたメディニアルフロープロファイルを使用し、Joyの法則によって生成されるポリオイド磁場量に一致するように渦巻状磁場の流失を含みます。このモデルには、低緯度でのエマージェンスの出現を強制する偏りは含まれていません。しかし、モデルは、低緯度でのエマージェンスの出現を要求します。モデルの解は、一般に観測と一致し、チョウの翼の軸方向の漂流と周期の11年間の |
2309.08208 | HM-Conformer: A Conformer-based audio deepfake detection system with
hierarchical pooling and multi-level classification token aggregation methods | Audio deepfake detection (ADD) is the task of detecting spoofing attacks
generated by text-to-speech or voice conversion systems. Spoofing evidence,
which helps to distinguish between spoofed and bona-fide utterances, might
exist either locally or globally in the input features. To capture these, the
Conformer, which consists of Transformers and CNN, possesses a suitable
structure. However, since the Conformer was designed for sequence-to-sequence
tasks, its direct application to ADD tasks may be sub-optimal. To tackle this
limitation, we propose HM-Conformer by adopting two components: (1)
Hierarchical pooling method progressively reducing the sequence length to
eliminate duplicated information (2) Multi-level classification token
aggregation method utilizing classification tokens to gather information from
different blocks. Owing to these components, HM-Conformer can efficiently
detect spoofing evidence by processing various sequence lengths and aggregating
them. In experimental results on the ASVspoof 2021 Deepfake dataset,
HM-Conformer achieved a 15.71% EER, showing competitive performance compared to
recent systems. | Hyun-seo Shin, Jungwoo Heo, Ju-ho Kim, Chan-yeong Lim, Wonbin Kim, Ha-Jin Yu | 2023-09-15T07:18:30 | http://arxiv.org/abs/2309.08208v1 | HM-CONFORMER: A CONFORMER-BASED AUDIO DEEPFAKE DETECTION SYSTEM WITH HIERARCHICAL POOLING AND MULTI-LEVEL CLASSIFICATION TOKEN AGGREGATION METHODS
###### Abstract
Audio deepfake detection (ADD) is the task of detecting spoofing attacks generated by text-to-speech or voice conversion systems. Spoofing evidence, which helps to distinguish between spoofed and bona-fide utterances, might exist either locally or globally in the input features. To capture these, the Conformer, which consists of Transformers and CNN, possesses a suitable structure. However, since the Conformer was designed for sequence-to-sequence tasks, its direct application to ADD tasks may be sub-optimal. To tackle this limitation, we propose HM-Conformer by adopting two components: (1) Hierarchical pooling method progressively reducing the sequence length to eliminate duplicated information (2) Multi-level classification token aggregation method utilizing classification tokens to gather information from different blocks. Owing to these components, HM-Conformer can efficiently detect spoofing evidence by processing various sequence lengths and aggregating them. In experimental results on the ASVspoof 2021 Deepfake dataset, HM-Conformer achieved a 15.71% EER, showing competitive performance compared to recent systems.
Hyun-seo Shin\({}^{*}\), Jungwoo Heo\({}^{*}\), Ju-ho Kim, Chan-yeong Lim, Wonbin Kim, and Ha-Jin Yu\({}^{\dagger}\)

School of Computer Science, University of Seoul
Footnote †: This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00263037, Robust deepfake audio detection development against adversarial attacks)
Audio deepfake detection, Anti-spoofing, Conformer, Hierarchical pooling, Multi-level classification token aggregation
## 1 Introduction
Recently, speech generation technologies such as voice conversion (VC) and text-to-speech synthesis (TTS) have become so sophisticated that their outputs cannot be distinguished from those of humans, and they are evolving dramatically [1, 2]. Due to the risk of abusing these technologies in deepfake crimes or spoofing attacks, audio deepfake detection (ADD) has recently become a notable research area. In this flow, continuous efforts are being made to develop various countermeasure (CM) systems against spoofing. Among these efforts, CM systems based on deep neural networks (DNNs) have proven to be particularly effective, demonstrating outstanding performance [3, 4, 5].
Over the last decade, CM systems that extract local features at the frame level and subsequently derive global features by aggregating them to the utterance level have revealed remarkable performance in the ADD task. A prime example of this is LCNN-LSTM [3], which processes input features with a light convolution neural network (CNN) and then reprocesses the extracted information with long short-term memory layers; this model has established itself as a benchmark in spoofing challenges [6]. More recently, models such as AASIST have emerged, leveraging a graph neural network as a back-end module to interpret the local features extracted by the CNN, exhibiting superior results [4]. Furthermore, SE-Rawformer offers innovative approaches to CM systems by processing the CNN output with Transformers that operate on the entire sequence length of the temporal axis [5].
Liu et al. [5] argue that evidence of voice spoofing may exist at both the local and global levels, with examples including unnatural emphasis and intonation (at the local level) and excessive smoothing (at the global level). This assumption aligns well with the goal of CM systems that extract local features by employing convolutional layers and refine these features at the global level. From this perspective, we consider that the Conformer architecture [7], which combines the Transformer specialized in extracting global features and the CNN specialized in exploring local features, would be well-suited for capturing evidence of voice spoofing. Moreover, differing from traditional methods that explore local features first and then global features, the Conformer simultaneously explores local-global features by fusing the Transformer encoder with the convolution module. The architecture of Conformer can provide a standard for the importance of local information by considering the entire information.
In this paper, we propose HM-Conformer by modifying the Conformer structure through hierarchical pooling and multi-level classification token aggregation (MCA) methods. The vanilla Conformer framework tends to carry duplicated information between frame-level features [8], since it was designed for automatic speech recognition, which requires frame-level output (i.e., a many-to-many task). On the other hand, in many-to-one scenarios such as classification, we hypothesize that conveying compact features is more advantageous than delivering overlapping features. To reduce duplicated information, HM-Conformer applies a hierarchical pooling method which adds downsampling layers between Conformer blocks. Furthermore, the strategy of utilizing information from various layers is widely recognized for enhancing the performance of classification tasks, including the ADD task [9, 10, 11]. To this end, we devised the MCA method to aggregate task-related information from various encoder blocks. The MCA method employs CLS tokens [12], which are effective in extracting information from a sequence of tokens, at each set of blocks to extract information from multiple encoders. Subsequently, the processed CLS tokens are individually trained through different classifiers and loss functions to extract task-related features.
HM-Conformer trained using the ASVspoof 2019 logical access training dataset achieved remarkable performance on the ASVspoof 2021 Deepfake detection task [6]. It achieved an equal error rate (EER) of 15.71%, outperforming recent frameworks that did not employ ensemble techniques.
## 2 Conformer
Conformer [7] is an architecture proposed in the automatic speech recognition domain and has achieved superior performance compared to Transformer and CNN-based models [13, 14]. We attribute this performance to the fact that the Conformer adopts the advantages of both Transformer and CNNs by having a structure with a convolution module inserted within the Transformer encoder. The Transformer is effective in modeling long-range global features by self-attention mechanism, while CNN is specialized for processing local features. By fusing the two structures, the Conformer is able to capture both global and local features, which is suitable for detecting spoofing evidence scattered at a various range of scales.
As shown in Fig. 1 (a), Conformer consists of a convolutional sub-sampling and linear layer to tokenize the input, and Conformer blocks to process the tokens. The convolution sub-sampling consists of \(n\) 2D-convolution layers with a stride of 2 to adjust the sequence length of the input. Following this, the vectors with flattened channel and frequency axes are processed into tokens \(h_{0}\in\mathbb{R}^{\frac{T}{2^{n}}\times d}\) through the linear layer. The Conformer block has a structure with two feed-forward modules that wrap around a multi-head self-attention (MHSA) and a convolution module in the center. If the output of the \(i\)-th Conformer block is defined as \(h_{i}\), then the Conformer block can be represented by the following Equations:
\[\widetilde{h}_{i-1}=h_{i-1}+\frac{1}{2}FFN\left(h_{i-1}\right), \tag{1}\]
\[h^{\prime}_{i-1}=\widetilde{h}_{i-1}+MHSA\left(\widetilde{h}_{i-1}\right), \tag{2}\]
\[h^{\prime\prime}_{i-1}=h^{\prime}_{i-1}+Conv\left(h^{\prime}_{i-1}\right), \tag{3}\]
\[h_{i}=LayerNorm\left(h^{\prime\prime}_{i-1}+\frac{1}{2}FFN\left(h^{\prime \prime}_{i-1}\right)\right), \tag{4}\]
where \(FFN\), \(MHSA\), and \(Conv\) refer to the feed-forward module, the multi-head self-attention module, and the convolution module, respectively. Due to MHSA and the convolution module, the Conformer block can process global and local features separately at each layer. Note that, unlike vanilla Conformer, we introduce global pooling using the SeqPooling [15] method to generate the final embedding \(e\in\mathbb{R}^{1\times d}\) from the output \(h_{6}\in\mathbb{R}^{\frac{T}{2^{n}}\times d}\) to perform the ADD task. Finally, as shown in Equation (5), the final embedding \(e\) is input into the classifier \(C\) to output a single scalar value \(Score\), and this structure is used as a baseline.
\[Score=C(e)=\sigma\left(eW_{1}^{c}+b^{c}\right)W_{2}^{c}, \tag{5}\]
where \(W_{1}^{c}\in\mathbb{R}^{d\times d/2}\), \(W_{2}^{c}\in\mathbb{R}^{d/2\times 1}\), and \(b^{c}\) denote trainable parameters, and \(\sigma\) is \(Swish\)[16] activation function.
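For concreteness, the block equations (1)–(4) and the classifier of Equation (5) can be sketched in PyTorch as below. This is a minimal sketch: the hidden size, head count, and kernel size are placeholders, and the relative positional encoding and dropout of the original Conformer [7] are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvModule(nn.Module):
    """Conformer convolution module: pointwise conv + GLU -> depthwise conv
    -> BatchNorm -> Swish -> pointwise conv."""
    def __init__(self, d: int, kernel: int = 31):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.pw1 = nn.Conv1d(d, 2 * d, 1)
        self.dw = nn.Conv1d(d, d, kernel, padding=kernel // 2, groups=d)
        self.bn = nn.BatchNorm1d(d)
        self.pw2 = nn.Conv1d(d, d, 1)

    def forward(self, x):                        # x: (B, T, d)
        y = self.norm(x).transpose(1, 2)          # (B, d, T)
        y = F.glu(self.pw1(y), dim=1)
        y = self.pw2(F.silu(self.bn(self.dw(y))))
        return y.transpose(1, 2)

class ConformerBlock(nn.Module):
    """Macaron-style block implementing Eqs. (1)-(4)."""
    def __init__(self, d: int = 256, heads: int = 4):
        super().__init__()
        make_ffn = lambda: nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                         nn.SiLU(), nn.Linear(4 * d, d))
        self.ffn1, self.ffn2 = make_ffn(), make_ffn()
        self.norm_att = nn.LayerNorm(d)
        self.mhsa = nn.MultiheadAttention(d, heads, batch_first=True)
        self.conv = ConvModule(d)
        self.norm_out = nn.LayerNorm(d)

    def forward(self, h):                         # h: (B, T, d)
        h = h + 0.5 * self.ffn1(h)                           # Eq. (1)
        a = self.norm_att(h)
        h = h + self.mhsa(a, a, a, need_weights=False)[0]    # Eq. (2)
        h = h + self.conv(h)                                 # Eq. (3)
        return self.norm_out(h + 0.5 * self.ffn2(h))         # Eq. (4)

class Classifier(nn.Module):
    """Eq. (5): Score = Swish(e W1 + b) W2."""
    def __init__(self, d: int):
        super().__init__()
        self.l1 = nn.Linear(d, d // 2)
        self.l2 = nn.Linear(d // 2, 1, bias=False)

    def forward(self, e):
        return self.l2(F.silu(self.l1(e)))
```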
## 3 Proposed System: HM-Conformer
In this study, we propose HM-Conformer for the ADD task, which integrates two proposed methods into the Conformer structure: (1) the hierarchical pooling method and (2) the MCA method. The following two subsections and Fig. 1 (b), (c) describe the two proposed methods.
### Hierarchical pooling method
To improve the CM system using the Conformer, we noted that the ADD task is a binary classification task, whereas the Conformer was designed for sequence-to-sequence (seq-to-seq) tasks. In order to reduce the gap between the two tasks, we paid attention to research results indicating that tokens within Transformer-based structures tend to become more similar to each other as they progress through the encoders [8]. Based on this observation, it has been argued in the image processing field that conveying compact information is more advantageous than providing duplicated information in many-to-one tasks such as classification [17]. Inspired by this argument, we propose to apply a hierarchical pooling method that gradually downsamples the output of Conformer blocks to extract compact information for the ADD task. By decreasing the sequence length of the tokens, a Conformer block can propagate more condensed features to the subsequent blocks. In addition, the hierarchical pooling method offers one more advantage: it reduces computational costs.
Fig. 1 (b) illustrates the process of the hierarchical pooling method. First, the tokens \(h_{0}\), output from the convolution sub-sampling and linear layers, are passed to the Conformer block. Then, the outputs of the Conformer block \(h_{i}\) (\(i\in\{2,4\}\)) are processed through the pooling layer according to Equation (6).
\[\hat{h}_{i}=pooling\left(h_{i};\gamma\right), \tag{6}\]
Figure 1: Overview of the Conformer-based architectures. (a) depicts the overall architecture of the Conformer structure and the details of its blocks. (b) and (c) illustrate the architecture with the hierarchical pooling method and the proposed HM-Conformer with hierarchical pooling and MCA methods.
where \(\gamma\) denotes the downsampling rate, which is fixed at 2 in this paper; if \(h_{i}\in\mathbb{R}^{T^{\prime}\times d}\), then \(\hat{h}_{i}\in\mathbb{R}^{\frac{T^{\prime}}{\gamma}\times d}\).
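A sketch of the resulting encoder wiring, reusing the `ConformerBlock` above, is shown below. It assumes six Conformer blocks grouped into three stages with pooling after blocks 2 and 4 — a grouping read off Fig. 1 (b), not stated in the text — and uses max pooling, which Section 5.2 finds to work best:

```python
class HMEncoder(nn.Module):
    """Six Conformer blocks in three stages, with max pooling (gamma = 2)
    applied after blocks 2 and 4, following Fig. 1 (b)."""
    def __init__(self, d: int = 256):
        super().__init__()
        self.blocks = nn.ModuleList(ConformerBlock(d) for _ in range(6))
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)

    def forward(self, h):                         # h: (B, T, d)
        for i, block in enumerate(self.blocks, start=1):
            h = block(h)
            if i in (2, 4):                       # Eq. (6) with gamma = 2
                h = self.pool(h.transpose(1, 2)).transpose(1, 2)
        return h
```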
### Multi-level classification token aggregation method
The approach of aggregating and processing outputs from various layers, known as multi-level feature aggregation, has been shown to enhance the performance of classification tasks [9, 18, 19]. From this perspective, utilizing features extracted from multiple Conformer blocks may be beneficial for the ADD task. However, the lower layers of Transformer-based models are observed to process less task-relevant information [20], making the direct use of outputs from Conformer blocks potentially inefficient. Taking these characteristics into account, we propose the MCA method, which extracts task-relevant features from multiple blocks.
MCA is a method that extracts task-related information by training CLS tokens, a widely used feature extraction technique in Transformer-based models for classification tasks, through auxiliary losses. CLS tokens are learnable vectors inserted at the beginning of a token sequence that serve as an aggregated representation of the sequence through MHSA modules [12]. The aggregated representation can then be utilized for classification tasks. Given this consideration, the MCA method adds CLS tokens to the input sequence, and each set of Conformer blocks (called a stage) has its own classifier and loss function. Therefore, the lower blocks can be trained with stronger task-relevant supervision. Furthermore, the MCA method can aggregate more discriminative features by adapting CLS tokens for each stage; when applied with the hierarchical pooling method, it becomes feasible to gather even more discriminative features from token sequences with diverse time scales.
Fig. 1 (c) depicts HM-Conformer, where we applied the hierarchical pooling and MCA methods. Algorithm 1 shows the process of HM-Conformer. First, three CLS tokens, which are randomly initialized vectors (pink dotted boxes in Fig. 1 (c)), are added to the input token sequence \(h_{0}\) as presented in Equation (7).
\[h_{0}=\{cls_{1},cls_{2},cls_{3},t_{1},...,t_{T/2^{n}}\}, \tag{7}\]
where \(cls_{i}\in\mathbb{R}^{d}\) and \(t_{i}\in\mathbb{R}^{d}\) denote the CLS tokens for each stage and the tokens from the input, respectively. These tokens are processed by the Conformer blocks, enabling the CLS tokens to aggregate information from other tokens. The refined CLS tokens are separated before entering the pooling layer and then used, together with a global-level token (the lowest green box), to produce the final embedding. Each of the CLS tokens, the global-level token, and the final embedding is transformed into an embedding \(e_{k}\) through a classifier, as shown in Algorithm 1. Subsequently, each embedding is processed through its own OC-Softmax [21] loss function \(L_{oc}\) to calculate the final loss \(L\):
\[Score_{k}=C_{k}(e_{k})=\sigma\left(e_{k}W_{1}^{k}+b^{k}\right)W_{2}^{k}, \tag{8}\]
\[L_{oc_{k}}=\frac{1}{N}\sum_{j=1}^{N}\log\left(1+e^{\alpha\left(m_{y_{j}}-\hat{w}_{k}Score_{k}\right)\left(-1\right)^{y_{j}}}\right), \tag{9}\]
\[L=\sum_{k=1}^{5}w_{k}L_{oc_{k}}\left(Score_{k}\right), \tag{10}\]
where \(\hat{w}_{k}\), \(\alpha\), and \(m_{y_{j}}\) denote a trainable scalar parameter, a scale factor, and the margins, respectively. \(y_{j}\) denotes the label, with \(y_{j}=0\) for bona-fide and \(y_{j}=1\) for spoofed, and the hyper-parameter \(w_{k}\) denotes the weight of \(L_{oc_{k}}\). Note that during inference, only the \(Score_{5}\) processed from the final embedding \(e_{5}\) is used.
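A sketch of Equations (9) and (10), assuming scalar per-utterance scores; `w_hat` plays the role of the trainable \(\hat{w}_{k}\), and the default weight ratio follows the best setting found later in Table 2:

```python
import torch

def oc_softmax_loss(score, y, w_hat, alpha=20.0, m0=0.9, m1=0.2):
    """Eq. (9). score: (B,) scalar classifier outputs; y: (B,) labels with
    0 = bona-fide, 1 = spoofed; w_hat: trainable scalar parameter.
    alpha, m0, m1 follow the values given in Section 4.2."""
    m = torch.where(y == 0, torch.full_like(score, m0),
                    torch.full_like(score, m1))
    sign = 1.0 - 2.0 * y.float()                  # (-1)**y
    return torch.log1p(torch.exp(alpha * (m - w_hat * score) * sign)).mean()

def mca_loss(scores, labels, w_hats, weights=(4.0, 3.0, 2.0, 1.0, 1.0)):
    """Eq. (10): weighted sum over the five classifier outputs; the
    default 4:3:2:1:1 ratio is the best setting in Table 2."""
    return sum(w * oc_softmax_loss(s, labels, wh)
               for w, s, wh in zip(weights, scores, w_hats))
```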
## 4 Experimental Setup
### Datasets and evaluation metric
We used the training and development partitions of the ASVspoof 2019 logical access task datasets for model training. This training dataset consists of 5,128 bona-fide and 45,096 spoofed utterances generated by six spoofing attack algorithms. Model evaluation was performed on the evaluation partition of the ASVspoof 2021 deepfake (DF) task dataset. The DF evaluation dataset comprises 611,829 samples of bona-fide and spoofed utterances, with around 100 combinations of spoofing systems and codec conditions. We compared the performance of the models based on the Equal Error Rate (EER), the official metric for the ASVspoof 2021 challenge.
### Implementation details
We employed the OC-Softmax [21] loss function with hyper-parameters \(\alpha=20\), \(m_{0}=0.9\), and \(m_{1}=0.2\). To augment training data, we employed media codec augmentation, speed perturbation set to 0.9 and 1.1, SpecAugment [22] for randomly masking 0 to 20 frequency axes, and colored additive noise augmentation with signal-to-noise ratios randomly selected between 10 and 40.
| **Frameworks** | **EER (%)** |
| --- | --- |
| LFCC-LCNN (Wang et al., 2021 [3]) | 23.48 |
| SE-Rawformer* (Liu et al., 2023 [5]) | 21.65 |
| AASIST (Jung et al., 2022 [4]) | 20.04 |
| SFR-CNN (Yamagishi et al., 2021 [6]) | 19.22 |
| Conformer (baseline) | 18.91 |
| **HM-Conformer** | **15.71** |

Table 1: Comparison of EER (%) performance of the ASVspoof 2021 DF task evaluation in various frameworks. (*: our implementation)
For input features, we used 400 frames of 120-dimensional linear frequency cepstral coefficients (LFCCs), encompassing a window length of 20 ms, a hop size of 10 ms, a 512-point FFT, a linearly spaced triangle filter bank of 40 channels, and delta and delta-delta coefficients. We utilized a batch size of 240 and the Adam [23] optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). Readers can access our codes at GitHub2.
Footnote 2: [https://github.com/talkingnow/HM-Conformer](https://github.com/talkingnow/HM-Conformer)
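The described 120-dimensional LFCC front end can be approximated with torchaudio as below; this is a sketch, and the authors' exact filter-bank and normalization settings may differ:

```python
import torch
import torchaudio

# 40 static LFCCs with a 20 ms window, 10 ms hop, and 512-point FFT at 16 kHz,
# over a 40-channel linear filter bank, plus delta and delta-delta -> 120 dims.
lfcc = torchaudio.transforms.LFCC(
    sample_rate=16000, n_filter=40, n_lfcc=40,
    speckwargs={"n_fft": 512, "win_length": 320, "hop_length": 160})

wave = torch.randn(1, 16000 * 4)                  # placeholder 4 s waveform
c = lfcc(wave)                                    # (1, 40, frames)
d1 = torchaudio.functional.compute_deltas(c)
d2 = torchaudio.functional.compute_deltas(d1)
feat = torch.cat([c, d1, d2], dim=1)              # (1, 120, frames)
```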
## 5 Results
### Comparison with recent CM systems
Table 1 shows the comparison of the performance of HM-Conformer with recently proposed single CM frameworks on the ASVspoof 2021 DF evaluation. The baseline framework (Conformer) achieves an EER of 18.91%, better than the other frameworks, thereby validating its potential in ADD. The proposed HM-Conformer achieved an EER of 15.71%, representing an approximately 16% relative improvement over the baseline. These results indicate that we successfully adapted the vanilla Conformer framework for the ADD task using our proposed hierarchical pooling and MCA methods.
### Validation of the hierarchical pooling method
To validate the effectiveness of the hierarchical pooling method, we measured the performance variation when applying different types of pooling layers, as displayed in Fig. 2. In our experiments, all employed pooling strategies yielded superior performance compared to the baseline. These results demonstrate that conveying condensed features is reasonable for addressing the ADD task. Meanwhile, there is one further notable observation in Fig. 2. In previous studies on the ADD task, pooling mechanisms that select more significant representations from extracted features, such as max-pooling, are often employed in building CM systems and show outstanding performance [3, 4]. Consistent with prior works, max and top-k pooling delivered better performance than the other pooling techniques in our experiments.
### Effectiveness of MCA method
Table 2 shows the results of the Conformer with the MCA method under various settings of the \(w\) ratio. Experiment #1 shows the EER of the baseline, which is the Conformer framework with max pooling as described in subsection 5.2. Compared to the baseline, all experiments with MCA achieved superior performance, as depicted in #2\(\sim\)#6. Based on these results, we conclude that our proposed MCA method can improve the Conformer's performance in the ADD task by transmitting appropriate information for detecting spoofing evidence. We also found that increasing the loss weight for the lower layers resulted in better performance than vice versa (#2, #3 vs #5, #6). In the end, by applying MCA with a \(w\) ratio of 4:3:2:1:1 to the baseline, we attained the best-performing HM-Conformer, which shows an EER of 15.71%.
### Ablation study of MCA method
In Table 3, we performed a token ablation experiment on the best HM-Conformer to verify that the information from all the different stages is useful. We observed that experiments #1\(\sim\)#6, each excluding some elements, performed worse than experiment #7, which used all elements. These results suggest that all CLS tokens and the global-level token carry information regarding spoofing evidence to the final embedding, and that diverse discriminative information is significant for performance improvement.
## 6 Conclusion
In this study, we propose HM-Conformer, a spoofing CM system built by modifying the Conformer structure through the hierarchical pooling and multi-level classification token aggregation methods. The hierarchical pooling method, which can narrow the gap between seq-to-seq tasks and classification tasks, extracts compressed information suitable for the ADD task by reducing the sequence length across Conformer blocks. The MCA method enables the model to discern spoofing evidence from diverse sequence lengths at varying time-compression levels. We verified that these two methods enhance the Conformer, resulting in competitive performance in the ADD task when compared to modern frameworks.
| **No.** | \(w_{1}:w_{2}:w_{3}:w_{4}:w_{5}\) | **EER (%)** |
| --- | --- | --- |
| #1 | – (baseline, no MCA) | 17.84 |
| #2 | 1 : 1 : 1 : 1 : 6 | 17.07 |
| #3 | 1 : 1 : 2 : 3 : 4 | 17.03 |
| #4 | 1 : 1 : 1 : 1 : 1 | 16.06 |
| **#5** | **4 : 3 : 2 : 1 : 1** | **15.71** |
| #6 | 6 : 1 : 1 : 1 : 1 | 15.72 |

Table 2: Comparison of EER performance when changing the weights \(w_{k}\) of the loss function.
| **No.** | \(e_{1}\) | \(e_{2}\) | \(e_{3}\) | \(e_{4}\) | **EER (%)** |
| --- | --- | --- | --- | --- | --- |
| #1 | ✓ | | | ✓ | 17.41 |
| #2 | | ✓ | | ✓ | 16.59 |
| #3 | | | ✓ | ✓ | 17.38 |
| #4 | ✓ | | ✓ | ✓ | 16.92 |
| #5 | | ✓ | ✓ | ✓ | 17.74 |
| #6 | ✓ | ✓ | ✓ | | 16.71 |
| #7 | ✓ | ✓ | ✓ | ✓ | **15.71** |

Table 3: Comparison of EER performance for the ablation study using sub-sets of \(e_{k}\). \(e_{1}\), \(e_{2}\), and \(e_{3}\) denote the CLS token from the \(k\)-th stage, and \(e_{4}\) denotes the global-level token. (Ratio of loss weights 4:3:2:1)
Figure 2: Comparison of EER (%) performance between various pooling layers applied to hierarchical pooling method. Convolution pooling was performed using a convolution layer with a kernel size seven and a stride four. G-pooling adopted the method of Gao et al, [24]. | 音声 deepfake 検出 (ADD) は、音声合成または音声変換システムによる偽造攻撃を検出するタスクです。偽造証拠は、偽造と本物との区別を助ける入力特徴において、局所的またはグローバルに存在します。これらの特徴を捉えるために、Conformer は Transformer と CNN を含む構造を持っています。しかし、Conformer はシーケンスからシーケンスへのタスクに設計されており、ADD のタスクへの直接的な適用は最適化されていない可能性があります。この制限に対処するために、私たちは HM-Conformer を提案しました。これは、以下の2つのコンポーネントを組み合わせて構成されています。 (1) 階層的なプール方法によって、シーケンスの長さを徐々に短くすることで、重複情報を排除する (2) 階層的な分類トークン集約方法によって、異なるブロックからの情報を得るために分類トークンを利用する。これらのコンポーネント |
2309.10447 | Toward Unified Controllable Text Generation via Regular Expression
Instruction | Controllable text generation is a fundamental aspect of natural language
generation, with numerous methods proposed for different constraint types.
However, these approaches often require significant architectural or decoding
modifications, making them challenging to apply to additional constraints or
resolve different constraint combinations. To address this, our paper
introduces Regular Expression Instruction (REI), which utilizes an
instruction-based mechanism to fully exploit regular expressions' advantages to
uniformly model diverse constraints. Specifically, our REI supports all popular
fine-grained controllable generation constraints, i.e., lexical, positional,
and length, as well as their complex combinations, via regular expression-style
instructions. Our method only requires fine-tuning on medium-scale language
models or few-shot, in-context learning on large language models, and requires
no further adjustment when applied to various constraint combinations.
Experiments demonstrate that our straightforward approach yields high success
rates and adaptability to various constraints while maintaining competitiveness
in automatic metrics and outperforming most previous baselines. | Xin Zheng, Hongyu Lin, Xianpei Han, Le Sun | 2023-09-19T09:05:14 | http://arxiv.org/abs/2309.10447v2 | # Toward Unified Controllable Text Generation via Regular Expression Instruction
###### Abstract
Controllable text generation is a fundamental aspect of natural language generation, with numerous methods proposed for different constraint types. However, these approaches often require significant architectural or decoding modifications, making them challenging to apply to additional constraints or resolve different constraint combinations. To address this, our paper introduces Regular Expression Instruction (REI), which utilizes an instruction-based mechanism to fully exploit regular expressions' advantages to uniformly model diverse constraints. Specifically, our REI supports all popular fine-grained controllable generation constraints, i.e., lexical, positional, and length, as well as their complex combinations, via regular expression-style instructions. Our method only requires fine-tuning on medium-scale language models or few-shot, in-context learning on large language models, and requires no further adjustment when applied to various constraint combinations. Experiments demonstrate that our straightforward approach yields high success rates and adaptability to various constraints while maintaining competitiveness in automatic metrics and outperforming most previous baselines. 1
Footnote 1: Our code and data are available at [https://github.com/MrZhengXin/CTG-Regex-Instruction](https://github.com/MrZhengXin/CTG-Regex-Instruction).
## 1 Introduction
Generating texts according to human requirements has long been a critical challenge in natural language generation Ziegler et al. (2019); Ouyang et al. (2022). With the emergence of large language models, many tasks in natural language processing can be unified and converted into the formation of _controllable generation_ Prabhumoye et al. (2020). For example, text classification Apte et al. (1994), cloze test Devlin et al. (2019), and multiple-choice question answering Lai et al. (2017) tasks constrain the output text to be exactly one of the given options; abductive reasoning Bhagavatula et al. (2020) specifies that the position of the output text is between the previous and future contexts; the summarization task Luhn (1957) limits the length of the output; machine translation Bar-Hillel (1960) demands the use of the vocabulary of the target language for text generation.
[Table 1: Example inputs with control expressions. Only a fragment survives extraction: the “Lexicon & length constraint” case, whose _Input_ row begins “<expression> <mask_0> …”; the remainder of the table is garbled.]
For controllable text generation, typical fine-grained control tasks include lexicon Lin et al. (2020), generating position Shen et al. (2020), and length Carlsson et al. (2022). Recently, various approaches have been proposed to satisfy these constraints, which can be categorized into three different paradigms: retraining or refactoring the model Keskar et al. (2019); Zhang et al. (2020); He (2021); Chan et al. (2021); tuning on given data Lester et al. (2021); Stiennon et al. (2020); manually designed post-processing Qin et al. (2020); Meng et al. (2022); Lu et al. (2021, 2022); Wang et al. (2021).
Despite the reasonable performance, current methods for transformer-based language models mainly focus on certain constraints and may not be easily transferred to others, let alone to combinations of constraints. For example, Non-Residual Prompting Carlsson et al. (2022) and A*esque Decoding Lu et al. (2022) only consider lexical and length constraints and cannot arbitrarily specify the position at which the generated text shall occur; on the other hand, COLD Qin et al. (2022) can generate text given past and future contexts, but may not add a word-inclusion constraint nor restrict the output length. Moreover, these controlling methods assume that we have access to the probability distribution or even the gradient of the model; in the case of large language models, where we can only obtain output tokens via an API, these methods may not be available, and thus black-box controlling techniques need further exploration.
To address the above challenges, we propose instruction-based Regular Expression Instruction (REI) for universal fine-grained controllable generation. Table 1 presents a few examples. Our instruction design is inspired by the regular expression, which can easily describe mainstream constraints and their combinations. Following Rosenbaum et al. (2022), we use a markup language to construct the expression, hoping that the model can better distinguish between meta-data (instructions) and data (actual words). We use two popular paradigms, language model fine-tuning and large language model few-shot learning, to teach the model to understand the input constraint expression.
Our method has several advantages. First, our constraint expression supports all typical fine-grained controlling tasks and is powerful enough to describe composite control specifications. Second, our method can be adapted to various scenarios, such as summarization with length constraint, terminology-constrained machine translation, and alternative-ending story infilling. Third, our method is easy to implement and highly transferable to other models, since it requires only fine-tuning on medium-size models and no further modification on large language models, and it does not need access to probability distributions or gradients.
Experiments demonstrate that current state-of-the-art language models can understand our controlling language, achieving high success rates while maintaining high automatic evaluation scores and surpassing most strong previous baselines under various constraints. We hope our work can shed light on future work.
## 2 Method
### Instruction Design
The controlling language REI follows the style of the regular expression due to its expressiveness. It is also easy to evaluate whether the input expression instruction matches the generated text or not. Following Rosenbaum et al. (2022), an HTML-like markup language is used, which helps the model learn that the labels are meaningful meta-data instructions rather than plain symbols, especially when using large language model in-context learning with limited examples and no parameter update. These markup labels also avoid the usage of the escape character.
REI contains several special labels, as shown in Table 1. <expression> and </expression> mark the beginning and the end of the expression and can be put anywhere in the input text, assuming we only generate according to one expression at a time. <mask_i> is equivalent to the regular expression ".*" and similar to the mask token in BART Lewis et al. (2020) and T5 Raffel et al. (2022): at its position the model shall generate zero or more tokens. <options> and </options> are equivalent to the parentheses "(" and ")" in the regular expression; the model shall choose one expression among the group. To make the recognition easier, we use <choice_i> and </choice_i> to wrap each choice. The regular expression notation of length counts at the character level, but in practice we want to control the output word length. Therefore, we use the <length=> label to denote the constraint on the output word count.
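As an illustration, a hypothetical helper (not the authors' released code) that translates an REI control expression into a Python regular expression, and checks an output's validity, might look like this:

```python
import re

def rei_to_regex(instruction: str) -> str:
    """Translate an REI control expression into a Python regular expression.
    Illustrative sketch only; the <length=n> label is handled separately
    by counting words."""
    body = re.search(r"<expression>(.*?)</expression>", instruction,
                     re.S).group(1)
    body = re.sub(r"<length=\d+>", "", body)
    parts = re.split(r"(<mask_\d+>|</?options>|</?choice_\d+>)", body)
    out = []
    for p in parts:
        if re.fullmatch(r"<mask_\d+>", p):
            out.append(".*")                      # <mask_i>  ->  ".*"
        elif p == "<options>":
            out.append("(")                       # group of alternatives
        elif p == "</options>":
            out.append(")")
        elif re.fullmatch(r"</choice_\d+>", p):
            out.append("|")                       # separator between choices
        elif p and not re.fullmatch(r"<choice_\d+>", p):
            out.append(re.escape(p))              # literal text
        # opening <choice_i> labels contribute nothing themselves
    return "".join(out).replace("|)", ")")        # drop trailing separator

def is_valid(instruction: str, output: str) -> bool:
    ok = re.fullmatch(rei_to_regex(instruction), output, re.S) is not None
    n = re.search(r"<length=(\d+)>", instruction)
    return ok and (n is None or len(output.split()) == int(n.group(1)))
```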
We avoid the shortcoming of the T5 Raffel et al. (2022) span-corruption schema, where the model only generates discontinuous spans rather than full natural sentences Lester et al. (2021). On the other hand, we also overcome the redundancy of the BART denoising schema He (2021), where the whole input is generated again, since we only generate the realized expression. Moreover, beyond fill-in-the-blank, we introduce choice-making, which further enriches the expressiveness of our controlling language.
### Training
**Fine-tuning.** We could automatically construct the training data from the corpus and conduct self-supervised learning. Alternatively, we could also directly convert the input of existing supervised datasets into the form of our controlling language, and use them to fine-tune state-of-the-art models such as FLAN-T5 Chung et al. (2022). The input format is shown in Table 1(a).
We include \(\alpha\)NLG Bhagavatula et al. (2020) and CommonGen Lin et al. (2020), two English controllable generation datasets with position and lexicon constraints. In \(\alpha\)NLG, given the past observation \(O_{1}\) and the future observation \(O_{2}\), the goal is to generate a hypothesis \(h\) that could follow \(O_{1}\) and trigger \(O_{2}\). The regular expression of the constraint is ".*" since no lexicon constraint is required. In CommonGen, given a set of \(k\) concepts \(C=\{c_{0},c_{1},...,c_{k-1}\}\), the output text shall include those concepts and at the same time be consistent with common sense. While in the original setting the appearance order of the concepts and their word sense changes are not provided and the model shall make these decisions, here in our controlling language the exact words and order must be given. Otherwise, we cannot construct the corresponding expression. So, we preprocess the original instances and recover the order and word sense of the concepts from the reference text. To help the model generate the concepts sequentially and track how many concepts it has already used, we append the serial number label \((i)\) to every concept \(c_{i}\) on both the input and output sides and remove the labels from the output once generation is completed. The regular expression of the constraint is ".*\(c_{0}\).*\(c_{1}\)....*\(c_{k-1}\).*".
We also leverage these two datasets to teach the model to control the output length by simply adding the length label with the ground truth length. To better track how many words the model itself has already generated, we append the length number label \(\_i\) to every word \(w_{i}\); for example, the sentence "Stephen knocked over a vase while drunk." becomes "Stephen_0 knocked_1 over_2 a_3 vase_4 while_5 drunk_6". Similarly, we remove the length number labels after completion.
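A minimal sketch of the two labeling schemes just described; the exact surface form of the serial-number label \((i)\) is an assumption:

```python
def add_word_indices(text: str) -> str:
    """'Stephen knocked over a vase while drunk.' ->
    'Stephen_0 knocked_1 over_2 a_3 vase_4 while_5 drunk._6'
    (trailing-punctuation handling is simplified vs. the paper's example)."""
    return " ".join(f"{w}_{i}" for i, w in enumerate(text.split()))

def strip_word_indices(text: str) -> str:
    """Remove the length-number labels after generation is completed."""
    return " ".join(w.rsplit("_", 1)[0] for w in text.split())

def add_concept_indices(concepts):
    """['dog', 'run'] -> ['dog (0)', 'run (1)']; the exact rendering of
    the serial-number label (i) is assumed for illustration."""
    return [f"{c} ({i})" for i, c in enumerate(concepts)]
```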
Finally, we need to teach the model the choosing grammar. We use the \(\alpha\)NLI Bhagavatula et al. (2020) dataset, the task of which is to determine whether \(H_{1}\) or \(H_{2}\) is the more plausible hypothesis given the past and future observations \(O_{1}\) and \(O_{2}\); the constraint regular expression is "\((H_{1}|H_{2})\)".
[Table: task-specific input formats with control expressions (referenced in the text as Table 1(a)), with columns “Task” and “Input with Control Expression”; the first row (\(\alpha\)NLG) begins “\(O_{1}\) <expression> …”. The remainder of the table was lost in extraction.]
**In-context Learning.** For large language models like GPT-3.5 Brown et al. (2020), where access is typically provided via an API, we may not be able to apply many traditional controllable generation techniques. However, we can leverage their ability of in-context learning to conduct fine-grained constrained generation. More specifically, we leverage the ability to discover and imitate repeated patterns Madaan and Yazdanbakhsh (2022); Min et al. (2022), which is desirable in our case since, unlike other natural language understanding tasks, the specific fine-grained constraint is a well-defined, simple pattern that is easily discoverable and imitable.
Given the input with a control expression, we can select \(k\) instances with the same expression structure as the instruction prompt and send them to the large language model together with the input. Naturally, when evaluating the test set, we can select examples from the training set or validation set, or from other instances of the test set when the former are not available. Consistently, we use the same input and output format described before, which saves extra effort on prompt engineering. In addition, we simply use the popular json format " {"input": [INPUT], "output": [OUTPUT]} " for each demonstrating instance, and naturally separate them with blank lines. By using json, we can further avoid the need for the escape character if the input text happens to contain meta-data-like strings such as "Input" or newlines.
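A sketch of the prompt construction under these conventions; the blank-line separator is our reading of the garbled source:

```python
import json

def build_prompt(demos, query):
    """Serialize k demonstrations plus the open-ended query as json objects,
    separated by blank lines; the model completes the final "output" field."""
    lines = [json.dumps({"input": x, "output": y}) for x, y in demos]
    lines.append('{"input": ' + json.dumps(query) + ', "output": "')
    return "\n\n".join(lines)
```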
### Inference
We use rejection sampling to generate output text that matches the control expression. Verifying the output is simple, since we can convert the control expression into a regular expression and check the validity. Additionally, if the expression contains a length constraint label, we count and compare the number of words in the output text. We try at most \(k\) times to avoid an infinite loop and to save costs if we use a large language model API. When using a medium- or small-size language model, to increase the generation quality, we can perform beam search first and see if it can generate a valid result on the first try.
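Putting the pieces together, the inference loop can be sketched as follows, where `sample_fn` and `beam_fn` are placeholder wrappers around the model and `is_valid` is the checker sketched in Section 2.1:

```python
def constrained_generate(instruction, sample_fn, k=8, beam_fn=None):
    """Rejection sampling as described above: optionally one beam-search
    attempt first, then up to k sampled attempts."""
    if beam_fn is not None:                       # one high-quality attempt first
        out = beam_fn(instruction)
        if is_valid(instruction, out):
            return out
    for _ in range(k):                            # then sample until valid
        out = sample_fn(instruction)
        if is_valid(instruction, out):
            return out
    return None                                   # budget exhausted
```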
### Recursive Decoding
Different choices might affect the generated text. For example, consider the case "\(S_{1}S_{2}S_{3}\).\(*\)(\(E_{1}\)\(|\)\(E_{2}\))", which gives the first three sentences and two alternative endings; the goal is to choose the correct ending while infilling the fourth sentence at the same time, a combination not included in our fine-tuning data. Instead of directly jumping to the answer with possibly insufficient computation, we could also let the model "think step by step" Kojima et al. (2022). We can solve each choice expression first, then compare the complete choices "(\(S_{4}E_{1}\)\(|\)\(S_{4}^{\prime}E_{2}\))". The generalized decoding procedure is presented in Algorithm 1, which assumes that the options are independent of each other and greedily solves them from left to right. We leave the evaluation of expressions with multiple consecutive options Lu et al. (2022) for future work.
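A rough sketch of this idea (Algorithm 1 itself is not reproduced in this excerpt); `infill` and `choose` are placeholder model calls, and with several options groups this naive recursion enumerates branches rather than strictly following the greedy left-to-right procedure:

```python
import re

def recursive_decode(expr, infill, choose):
    """Expand each <options> group by realizing every choice in full, then
    let the model pick among the completed alternatives."""
    m = re.search(r"<options>(.*?)</options>", expr, re.S)
    if m is None:
        return infill(expr)                       # only masks remain
    choices = re.findall(r"<choice_\d+>(.*?)</choice_\d+>", m.group(1), re.S)
    # e.g. realize "(S4 E1 | S4' E2)" by solving each branch separately
    branches = [expr[:m.start()] + c + expr[m.end():] for c in choices]
    return choose([recursive_decode(b, infill, choose) for b in branches])
```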
## 3 Experiment
### Setup
We conduct experiments on 2 Nvidia A100 GPUs, with about 10 total GPU hours locally. For the medium-size language model, we use FLAN-T5-xl Chung et al. (2022) with Apache 2.0 license, which has 3B parameters and is fine-tuned on many natural language understanding and generation tasks. We use the Huggingface Transformers library Wolf et al. (2020) with Apache-2.0 license for fine-tuning and evaluation. We trained the model for 3 epochs, with a batch size of 16 and a learning rate of 3e-5. We set the beam size to 4 for beam search and p to 0.95 for top-p sampling. We generate at most \(k=512\) samples if we do not obtain any valid outcome.
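A minimal fine-tuning sketch with the stated hyper-parameters, assuming `train_ds` is a tokenized dataset of REI-formatted input–output pairs (the output directory is a placeholder):

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

args = Seq2SeqTrainingArguments(
    output_dir="rei-flan-t5-xl",                  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=3e-5,
)
# train_ds: assumed pre-tokenized REI-formatted training pairs
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_ds,
                         tokenizer=tok)
trainer.train()
```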
For the large language model, we use the GPT-3 Brown et al. (2020) text-davinci-003 version via the OpenAI API; the 175B model is calibrated with Reinforcement Learning from Human Feedback Stiennon et al. (2020). We feed 8 in-domain examples as the prompt, set the temperature to 0.7, and retry at most \(k=8\) times if the result is not valid. All results are from a single run.
### Lexicon Constraint
#### 3.2.1 Lexicon Constraint Only
**Setting.** We evaluate our method on the devset of CommonGen Lin et al. (2020), as the reference text of the test set is not publicly available. As mentioned in Section 2.2, we feed the model with the oracle concept order and word sense. For automatic metrics, we use BLEU-4 Papineni et al. (2002), CIDEr Vedantam et al. (2015), SPICE Anderson et al. (2016), and Coverage (Cov.), which is the average ratio of input concepts that are present in the lemmatized outputs.
**Results.** We compare the performance of our method with other baselines, including the fine-tuning methods BART (Lin et al., 2020) and T5-Large (Lin et al., 2020), the auxiliary guiding model method NADO (Meng et al., 2022), the prompting method NRP (Carlsson et al., 2022), and 8-shot pure natural language instruction (NLI) on GPT-3.5, as shown in Table 2(a).
Given only 8 examples with a clear connection between input and output, GPT-3.5 still shows competitive performance in terms of automatic text metrics, and achieves high concept coverage, surpassing all the previous baselines. Compared with natural language instruction, the success rate is very close. With more supervised data to modify the model's parameters, FLAN-T5-xl performs significantly better than GPT-3.5 and the other previous baselines on all metrics and successfully satisfies all lexicon constraints.
#### 3.2.2 Lexicon & Length Constraint
As described in Section 2.2, we slightly modify the dev set of CommonGen to introduce the additional length constraint and evaluate GPT-3.5 and FLAN-T5. For the metric, we replace Coverage (Cov.) with Success Rate (SuR.), the average percentage of outputs that match the input expression. In this composite task, the performance of GPT-3.5 degrades dramatically and it struggles to generate valid output, indicating that multi-concept inclusion and length control at the same time is challenging, especially for few-shot in-context learning. Yet REI still outperforms NLI in terms of success rate, and the "high" n-gram metrics might also indicate poor instruction-following ability on challenging fine-grained constraints, which is consistent with the finding of Zhou et al. (2023). FLAN-T5 shows only a minor drop in performance and still maintains a high success rate, since it has been trained on this composite constraint.
### Position Constraint
#### 3.3.1 Position Constraint Only
**Setting.** We evaluate our method on the test set of \(\alpha\)NLG (Bhagavatula et al., 2020). The automatic metrics include BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004) and BERTScore (Zhang et al., 2020). We do not report Success Rate since it is always 100%.
**Results.** As presented in Table 4(a), we compare our method with two unsupervised baselines, DeLorean (Qin et al., 2020) and COLD (Qin et al., 2022), the non-autoregressive Diffusion-LM (Li et al., 2022), two fine-tuned methods, 11B T5 (Khashabi et al., 2021) and 20B UL2 (Tay et al., 2022), and 8-shot NLI on GPT-3.5.
With few-shot learning, GPT-3.5 outperforms the two unsupervised baselines and Diffusion-LM, demonstrating its strong in-context learning ability given only a few infilling examples. Since this is a relatively simple constraint, the performance of REI and NLI is very close. With our careful instruction prompt design and adequate fine-tuning, 3B FLAN-T5 shows stronger performance than 11B T5, and remains competitive with 20B UL2.
Table 3: Results on the dev set of CommonGen. The best models are **bold** within each metric. (Table body not recovered from the extracted source.)
#### 3.3.2 Position & Length Constraint
As mentioned in Section 2.2, we slightly modify the \(\alpha\)NLG test set to add the length constraint. We replace the BERTScore metric with Success Rate (SuR.). Table 4(b) shows the results. GPT-3.5 manages to satisfy both position and length constraints, showing a relatively high success rate, while under NLI it performs badly. With full-scale supervised learning, FLAN-T5 robustly generates valid output on the test set 100% of the time. Also, in terms of automatic metrics, the output of both models does not degrade dramatically.
#### 3.3.3 Position & Lexicon Constraint
We can also modify the \(\alpha\)NLG test set to add a lexicon constraint, setting the keyword to be the first verb in the reference text. The input format is shown in Table 1(b), and Table 4(c) shows the results. GPT-3.5 still generates valid output nearly all of the time, and the automatic metrics improve compared with the results without the lexicon constraint, since additional gold words are provided and the verb constraint limits the vast scope of the possible hypothesis space. Also, REI is slightly better than NLI. For FLAN-T5, although it has been trained on the position constraint and the lexicon constraint separately, it has not seen the combination, yet it still demonstrates strong performance.
#### 3.3.4 Position & Lexicon & Length Constraint
We can further combine all conditions together, adding both length and lexicon constraints on the test set of \(\alpha\)NLG. The input format is presented in Table 1(b), and Table 4(d) shows the results. Compositional constraints challenge few-shot GPT-3.5, as it is more difficult to generate output that matches all three requirements, and the success rate drops slightly. Interestingly, NLI yields a very low success rate. But fully-trained FLAN-T5 exhibits robust transfer ability: although the three simultaneous constraints are not included in the training data, FLAN-T5 still manages to achieve close to 100% success rate.
#### 3.3.5 Position Constraint & Alternative Endings
On the test set of the Story Cloze Test Mostafazadeh et al. (2016), which is to choose between the right ending and the wrong one given the four-sentence context, we additionally mask the fourth sentence and require the model to infill the missing sentence while determining the correct ending. The input format is shown in Table 1(b), and the results are shown in Table 6. We replace the Success Rate (SuR.) metric with Accuracy (Acc.), since choosing either ending is valid. For GPT-3.5, we directly construct prompting examples with the initial input and final output, and surprisingly find that GPT-3.5 handles the composite constraint quite well, choosing the right ending with reasonable accuracy. Also, REI comes close to NLI in performance. For FLAN-T5-xl, we use the recursive decoding (Section 2.4), and it shows moderate performance, with lower accuracy but higher BLEU/ROUGE compared with GPT-3.5.
### Summarization with Length Constraint
REI can also easily support abstractive summarization with a desired length Kikuchi et al. (2016); Fan et al. (2018), as long as the base model has been trained on the summarization task, which is the case for our chosen models FLAN-T5 Chung et al. (2022) and GPT-3.5 Ouyang et al. (2022). We evaluate on the test set of the English headline generation dataset Gigaword (Graff et al., 2003), chosen for its short input and output length.
Table 4: Results on the test set of \(\alpha\)NLG. (Table body not recovered from the extracted source.)
Also, Gigaword is not included in the training set of FLAN-T5 or GPT-3.5. The input format is written in Table 2(b). We use ROUGE-L Lin (2004) and Success Rate (SuR.) as metrics.
We compare our method with two unsupervised, unconstrained baselines, SEQ Baziotis et al. (2019) and TED Yang et al. (2020); the results are shown in Table 7. Both GPT-3.5 and FLAN-T5 exceed the two baselines in ROUGE-L score, showing relatively good text quality. Since the summarization task constrains the semantics of the output more than a pure lexicon constraint (CommonGen) or position constraint (\(\alpha\)NLG), satisfying the length constraint may be more difficult: GPT-3.5 shows a relatively lower success rate, and NLI has the worst success rate. Nevertheless, FLAN-T5 still achieves a 100% success rate. Notice that with limited REI training tasks, the model can still generalize to new tasks in the specified format, demonstrating robust transfer ability under supervised learning.
### Terminology-Constrained Machine Translation
We can also apply REI to machine translation with terminology constraints Dinu et al. (2019), which ensures the given terminologies \(T=(t_{0},t_{1},...)\) are used in the translation. We only test GPT-3.5 here, due to its superior multi-language understanding, whereas the majority of FLAN-T5's output language during pre-training, multi-task learning, and fine-tuning is English. We evaluate on the test sets of Wiktionary and IATE Dinu et al. (2019), two English-German translation datasets, using BLEU-4 Papineni et al. (2002) and Terminology Coverage (Term) as metrics.
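Terminology Coverage reduces to a surface-match count; a minimal sketch follows (the benchmark's official evaluation may apply extra normalization, such as casing or morphology handling, beyond what is shown here).

```python
def term_coverage(term_sets, translations):
    """Percentage of required target-side terms that appear in the translations."""
    total, hits = 0, 0
    for terms, hypothesis in zip(term_sets, translations):
        hyp_lower = hypothesis.lower()
        for term in terms:  # target-language terms required for this sentence
            total += 1
            hits += int(term.lower() in hyp_lower)
    return 100.0 * hits / total
```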
We compare our method with several strong baselines, including Constraint decoding Dinu et al. (2019), Train-by-replace Dinu et al. (2019), RePP Sun et al. (2022), TADA Ailem et al. (2021), EDITOR Xu and Carpuat (2021), Levenshtein Transformer Susanto et al. (2020), and 8-shot NLI on GPT-3.5; the results are shown in Table 5. Due to its vast parameter count, GPT-3.5 outperforms all other baselines in terms of BLEU score. GPT-3.5 also achieves a near-100% terminology coverage rate, close to the existing upper limit. Finally, REI has slightly higher term coverage than NLI.
### Qualitative Results
Table 8 shows samples for lexicon & length constraints (Section 3.2.2), position & lexicon & length constraints (Section 3.3.4), position constraint with alternative endings (Section 3.3.5), summarization with length constraint (Section 3.4), and translation with terminology constraint (Section 3.5). Both FLAN-T5 and GPT-3.5 generate valid and fluent sentences. GPT-3.5 also uses more vivid or human-like words such as "antihistamines" or the abbreviation "FIA", probably due to its large model size and training corpus.
| **Method** | **BLEU** | **ROUGE** | **Acc.** |
| --- | --- | --- | --- |
| NLI+GPT-3.5, 8-shot | 3.83 | **21.27** | **88.99** |
| REI+GPT-3.5, 8-shot | 3.77 | 20.56 | 88.72 |
| REI+FLAN-T5-xl | **3.87** | 20.9 | 84.61 |

Table 6: Results on the Story Cloze Test with positional constraint.
| **Method** | **Wiktionary Term%** | **Wiktionary BLEU** | **IATE Term%** | **IATE BLEU** |
| --- | --- | --- | --- | --- |
| Constraint decoding (Dinu et al., 2019) | 99.50 | 25.80 | 82.00 | 25.30 |
| Train-by-replace (Dinu et al., 2019) | 93.40 | 26.30 | 94.50 | 26.00 |
| RePP (Sun et al., 2022) | 93.67 | 30.52 | 95.41 | 29.38 |
| TADA (Ailem et al., 2021) | 96.84 | 26.73 | 98.02 | 27.11 |
| EDITOR (Xu and Carpuat, 2021) | 99.8 | 29.30 | **100.0** | 28.90 |
| Levenshtein Transformer (Susanto et al., 2020) | **100.0** | 31.20 | **100.0** | 30.13 |
| NLI+GPT-3.5, 8-shot | 99.03 | **37.62** | 98.07 | 32.22 |
| REI+GPT-3.5, 8-shot | 99.52 | 34.88 | 99.45 | **35.25** |

Table 5: Results on Wiktionary and IATE.
## 4 Related Work
**Tasks of Controllable Text Generation.** Controllable text generation refers to tasks that generate text according to controlling signals (Prabhumoye et al., 2020). Typically, the output can be constrained at three levels, from coarse to fine (Zhang et al., 2022): semantic, structural, and lexical. At the semantic level, the signals include topic (Tang et al., 2019), sentiment (Logeswaran et al., 2018), format (Li et al., 2020), toxicity (Krause et al., 2021) and other abstract attributes. At the structural level, the constraints include key-value data tables (Novikova et al., 2017), syntax trees, and parts-of-speech (Li et al., 2022). At the lexical level, the controlling elements include keywords (Lin et al., 2020), generating position (Shen et al., 2020) and length (Carlsson et al., 2022).
**Methods of Controllable Text Generation.** Current approaches for controllable text generation can be summarized into three main categories (Zhang et al., 2022): retraining or refactoring the model, e.g. CTRL (Keskar et al., 2019), POINTER (Zhang et al., 2020), CMDP (Chan et al., 2021), Constrained BART (He, 2021), CoCon (Chan et al., 2021), PlanGen (Su et al., 2021) and InstructCTG (Zhou et al., 2023); tuning on given data, including model fine-tuning, Prompt Tuning (Lester et al., 2021) and RL fine-tuning (Stiennon et al., 2020); and post-processing, which either designs a specific decoding strategy, e.g. Constrained Beam Search (Anderson et al., 2017), DeLorean (Qin et al., 2020), COLD (Qin et al., 2022), NeuroLogic (Lu et al., 2021), or uses an auxiliary guiding model, e.g. PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2021), FUDGE (Yang and Klein, 2021), CTRLsum (He et al., 2022), Plug-and-Play Content Planning (Liu et al., 2022), NADO (Meng et al., 2022), and MACSum (Zhang et al., 2023).
## 5 Conclusion
We proposed Regular Expression Instruction (REI), a novel instruction-based method that unifies fine-grained lexical-level constrained text generation. Our method is highly adaptable, fitting either language model fine-tuning or large language model in-context learning. Our controlling language can also easily be applied to related tasks, including story completion with infilling, summarization with length constraint, and machine translation with terminology constraint. Experiments show that our method has a high success rate and outperforms most of the previous strong baselines, demonstrating its effectiveness despite its simplicity. We leave the evaluation and improvement of more complex constraints for future work.
Table 8: Qualitative samples of constrained generation (the extracted table is garbled beyond the first input). The one recoverable fragment is a CommonGen + length input of the form `<expression> <mask_0> dance(0) <mask_1> performed(1) <mask_2> stage(2) <mask_3> wearing(3) <mask_4> costumes(4) <mask_5> len=... </expression>`.
### Limitations
Our proposed Regular Expression Instruction is serialized and cannot describe a set of keyword constraints whose order of appearance is arbitrary, only a list of keywords in a fixed order. Future work is needed to overcome this limit, either by approximating the word order or by repeated random sampling. Also, to obtain valid results we use rejection sampling, which may require many repeated trials, reducing efficiency and speed. More efficient mechanisms with fewer retries are worth investigating. Additionally, given the current trend of instruction following, more sophisticated zero-shot prompts are worth investigating.
### Ethics Statement
This work involves no sensitive data and uses several publicly available datasets. This work discusses controllable text generation, which aims for better usage of black-box language models and may help reduce problematic biases. We note that the method proposed in this work could be used to generate disinformation or harmful content directly via the controlling language, but such malicious usage can be mitigated by filtering out improper control input and stopping harmful content generation.
| 可制御テキスト生成は、自然言語生成における基盤的な側面であり、様々な制約タイプに対して多数の方法が提案されています。しかし、これらのアプローチはしばしば、アーキテクチャ的またはデコード変更を必要とするため、追加の制約や異なる制約組み合わせに対処するのに難しいです。この問題に対処するために、本論文では、正規表現指示(REI)を導入しました。REIは、正規表現の利点を最大限に活用し、様々な制約を統一的にモデルするよう、指示ベースのメカニズムを利用しています。特に、REIは、lexical、positional、およびlengthなどの、細分化された可制御生成制約をサポートし、これらの複雑な組み合わせもサポートしています。本方法は、中規模言語モデルの微調整や、大規模言語モデルにおける in-context 学習による少数訓練のみで実行できます。また、様々な制約組み合わせに対しては、調整を必要と |
2309.10321 | Markov Chain Monte Carlo for Bayesian Parametric Galaxy Modeling in LSST | We apply Markov Chain Monte Carlo (MCMC) to the problem of parametric galaxy
modeling, estimating posterior distributions of galaxy properties such as
ellipticity and brightness for more than 100,000 images of galaxies taken from
DC2, a simulated telescope survey resembling the upcoming Rubin Observatory
Legacy Survey of Space and Time (LSST). We use a physically informed prior and
apply selection corrections to the likelihood. The resulting posterior samples
enable rigorous probabilistic inference of galaxy model parameters and their
uncertainties. These posteriors are one key ingredient in a fully probabilistic
description of galaxy catalogs, which can ultimately enable a refined Bayesian
estimate of cosmological parameters. We systematically examine the reliability
of the posterior mean as a point estimator of galaxy parameters, and of the
posterior width as a measure of uncertainty, under some common modeling
approximations. We implement the probabilistic modeling and MCMC inference
using the JIF (Joint Image Framework) tool, which we make freely available
online. | James J. Buchanan, Michael D. Schneider, Kerianne Pruett, Robert E. Armstrong | 2023-09-19T05:09:11 | http://arxiv.org/abs/2309.10321v1 | # Markov Chain Monte Carlo for Bayesian Parametric Galaxy Modeling in LSST
###### Abstract
We apply Markov Chain Monte Carlo (MCMC) to the problem of parametric galaxy modeling, estimating posterior distributions of galaxy properties such as ellipticity and brightness for more than 100,000 images of galaxies taken from DC2, a simulated telescope survey resembling the upcoming Rubin Observatory Legacy Survey of Space and Time (LSST). We use a physically informed prior and apply selection corrections to the likelihood. The resulting posterior samples enable rigorous probabilistic inference of galaxy model parameters and their uncertainties. These posteriors are one key ingredient in a fully probabilistic description of galaxy catalogs, which can ultimately enable a refined Bayesian estimate of cosmological parameters. We systematically examine the reliability of the posterior mean as a point estimator of galaxy parameters, and of the posterior width as a measure of uncertainty, under some common modeling approximations. We implement the probabilistic modeling and MCMC inference using the JIF (Joint Image Framework) tool, which we make freely available online.
James J. Buchanan, Michael D. Schneider, Kerianne Pruett, Robert E. Armstrong
## 1 Introduction
Because gravitational lensing depends directly on the overall distribution of matter in a given patch of space, it gives a window into the overall structure and evolution of the universe as a whole (Kilbinger, 2015), enabling constraints on e.g. dark energy (LSST Dark Energy Science Collaboration, 2018). Much of the time the effect of lensing is too subtle to be observed in individual galaxies. Rather, so-called "weak lensing" is statistically inferred by analyzing the correlated pattern of measured shapes of multiple galaxies. Increasing the number of well-measured galaxy shapes is generally expected to improve the statistical strength of weak lensing inferences, as long as systematic errors can be controlled (Mandelbaum, 2018).
The Vera C. Rubin Observatory, under construction, is projected to begin the 10 year Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) in 2024. The LSST will observe an unprecedented number of galaxies throughout a wide and deep volume of space. In order to take complete advantage of this dataset for cosmological inference, we are faced with correspondingly unprecedented demands on the mitigation and characterization of systematic uncertainties in galaxy shape measurements. Standard maximum likelihood estimators of galaxy shapes suffer from numerous biases from sources such as noise (Refregier et al., 2012), pixelation, point-spread function (PSF) distortions (Simon and Schneider, 2017), and potentially centroid estimation errors (Tessore, 2017). In addition to its own irreducible contributions to uncertainty, noise bias interacts with and amplifies the effects of model bias, the inability of a given galaxy model to exactly fit the truth (Kacprzak et al., 2014). These sources of bias must be calibrated away using estimator-specific methods (e.g. Tessore, 2017), which may still leave behind systematic uncertainties that are not always well understood. In any case, a single point estimate of any value, such as galaxy ellipticity, even when accompanied by a confidence interval, fails to reflect all possible information embedded in one's limited data set.
In contrast, a Bayesian forward modeling approach need not be similarly subject to the biases noted above--the noise level, PSF, and pixelization effects, plus many other effects on image rendering, can in principle be forward-modeled and thus naturally accounted for without a separate calibration step. Galaxy shape uncertainties can be described in a Bayesian sense by selecting a parametric family of galaxy light profiles, asserting a prior probability distribution on the profile parameters, and then finding the posterior probability distribution over these parameters for any specific | |
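The JIF sampler itself is not shown in this excerpt; purely as a generic illustration of the MCMC posterior sampling being described, a minimal random-walk Metropolis sketch over a galaxy-model parameter vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, log_like, log_prior):
    """Log-posterior = log-prior + log-likelihood, with -inf outside the prior support."""
    lp = log_prior(theta)
    return -np.inf if not np.isfinite(lp) else lp + log_like(theta)

def metropolis(log_like, log_prior, theta0, n_steps=10_000, step=0.05):
    """Random-walk Metropolis over galaxy model parameters (toy sketch, not the JIF sampler)."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta, log_like, log_prior)
    chain = [theta]
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop, log_like, log_prior)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with probability min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```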
2309.16988 | A slime mold inspired local adaptive mechanism for flow networks | In the realm of biological flow networks, the ability to dynamically adjust
to varying demands is paramount. Drawing inspiration from the remarkable
adaptability of Physarum polycephalum, we present a novel physical mechanism
tailored to optimize flow networks. Central to our approach is the principle
that each network component -- specifically, the tubes -- harnesses locally
available information to collectively minimize a global cost function. Our
findings underscore the scalability of this mechanism, making it feasible for
larger, more complex networks. We construct a comprehensive phase diagram,
pinpointing the specific network parameters under which successful adaptation,
or tuning, is realized. There exists a phase boundary in the phase diagram,
revealing a distinct satisfiability-unsatisfiability (SAT-UNSAT) phase
transition delineating successful and unsuccessful adaptation. | Vidyesh Rao Anisetti, Ananth Kandala, J. M. Schwarz | 2023-09-29T05:16:19 | http://arxiv.org/abs/2309.16988v2 | # A slime mold inspired local adaptive mechanism for flow networks
###### Abstract
In the realm of biological flow networks, the ability to dynamically adjust to varying demands is paramount. Drawing inspiration from the remarkable adaptability of Physarum polycephalum, we present a novel physical mechanism tailored to optimize flow networks. Central to our approach is the principle that each network component--specifically, the tubes-- harnesses locally available information to collectively minimize a global cost function. Our findings underscore the scalability of this mechanism, making it feasible for larger, more complex networks. We construct a comprehensive phase diagram, pinpointing the specific network parameters under which successful adaptation, or tuning, is realized. There exists a phase boundary in the phase diagram, revealing a distinct satisfiability-unsatisfiability (SAT-UNSAT) phase transition delineating successful and unsuccessful adaptation.
## I Introduction
Biological circulatory systems, despite their intricacy, exhibit remarkable adaptability. They adeptly modify various attributes, such as vessel diameter, wall thickness, and the count of micro-vessels, to cater to the evolving metabolic needs of tissues [1; 2]. This adaptability can be perceived as the outcome of an optimization mechanism, in which a global cost function is optimized. Intriguingly, this optimization is not typically directed by a central authority but emerges from local interactions between different parts of the system. For example, prior research illustrates how vascular flow efficiency [3] can be enhanced by altering attributes, e.g. tube thickness, based on local data such as the flow within a tube.
Given the circulatory system example, as well as others [4; 5], a fundamental inquiry centers on how local interaction rules give rise to adaptive behaviors, and how these behaviors subsequently manifest as optimization algorithms in biological systems [6]. In this manuscript, we introduce a straightforward physical mechanism that potentially allows biological systems to implement such optimization strategies. Moreover, limitations on such optimization algorithms do indeed exist. For instance, a given optimization algorithm may not be feasible over a large swath of parameter space. Here, we explore the limitations on our physical mechanism for adaptation in the context of what is known as a Satisfiability/SAT-Unsatisfiability/UNSAT phase transition [7; 8; 9]. A similar transition was found in the study of limits on the multifunctionality of tunable flow and mechanical networks [10].
The specific question we are interested in is: How does one tune the node pressures of a flow network, by altering its tube thickness, via a physical mechanism that uses only local information? To answer this question, we take inspiration from the adaptive behaviour observed in Physarum polycephalum, or slime mold. This organism, in spite of not having a brain or nervous system, is capable of optimizing its network structure to determine the most efficient path between food sources [4; 11]. Prior research [12] indicates that slime mold possibly utilizes a form of chemical signaling for this purpose. Upon encountering food, it releases a signaling chemical at the location, which is then propagated throughout the organism via advection. This chemical, in turn, induces an expansion in the tubes through which it is flowing. As a result, the tubes that connect food sources through the shortest path undergo more pronounced expansion compared to those on longer routes, leading to an efficient connection between food sources. This behavior exemplifies how biological systems can harness physical processes to optimize for a specific task such as finding food.
Our system is a flow network - a set of nodes connected by tubes. Fluid flows in and out of the network through some of those nodes. This flow creates pressure values at each node, depending upon the architecture of the network and the conductance of the pipes. The task here is to modify the conductance of these pipes such that the pressure values at some 'output' nodes match the desired pressures.
The first thing that we observe is that the pressure at a node depends on the resistance to flow downstream. Therefore, to alter the pressure at a node, we must change the conductance of the pipes downstream (Fig. 1). Consider an output node where we want the pressure to decrease. To do so, a chemical is released at that node and carried along by the fluid flow. This chemical interacts with the tube such that, while flowing through it, it increases the conductance of the pipe by making it thicker. This increase in conductance decreases the resistance to flow, which in turn decreases the pressure at the output node. Similarly, when we wish to increase the output node pressure, we must release a different kind of chemical which decreases the conductance of the pipes by making them thinner. Through this mechanism, the entire network undergoes adjustments. Each tube, relying on the locally available information--namely, the chemical concentration within it--fine-tunes the pressures at the output nodes to align with the target values. This localized adjustment is facilitated by our method, where the discrepancy at the output nodes is conveyed into the network through a chemical signal, subsequently influencing the network structure.
In what follows, we have assessed the performance of our tuning mechanism across a range of network sizes. The scaling relations observed suggest that our tuning approach remains effective even as the network size increases. Notably, our results suggest a SAT-UNSAT phase transition [13] with a discontinuous jump in the fraction of networks that can be successfully adapted at the transition point along with universal scaling features.
## II The Tuning Process
### The System
We create networks consisting of N nodes and M edges. We first create a Barabasi-Albert network with connection parameter 1 using the Python package NetworkX. This creates a minimally connected network with N nodes and N-1 edges. To create a network with M edges we then add M-(N-1) additional unique edges.
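A minimal NetworkX sketch of this construction; the conductance initialization follows the range quoted later in the Results section.

```python
import random
import networkx as nx

def make_network(N, M, seed=0):
    """Barabasi-Albert tree (N nodes, N-1 edges) plus M-(N-1) extra unique edges."""
    rng = random.Random(seed)
    G = nx.barabasi_albert_graph(N, 1, seed=seed)  # minimally connected: N-1 edges
    while G.number_of_edges() < M:
        u, v = rng.sample(range(N), 2)
        if not G.has_edge(u, v):
            G.add_edge(u, v)
    # random initial conductances, sampled uniformly from [1e-5, 1] as in Section III
    for u, v in G.edges:
        G[u][v]["weight"] = rng.uniform(1e-5, 1.0)
    return G
```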
We select \(j\) boundary nodes, denoted as \(q_{1},q_{2},...,q_{j}\), and apply an external potential \(\mathbf{u}=[u(q_{1}),u(q_{2}),...,u(q_{j})]^{T}\). The resulting response potentials at the remaining nodes, termed interior nodes, are calculated by solving the discrete Laplace equation using the Schur complement method [14]. From these interior nodes, we identify \(k\) output nodes \(p_{1},p_{2},...,p_{k}\). The potentials at these nodes are represented as \(v(p_{1}),v(p_{2}),...,v(p_{k})\). The objective is to adjust this network by altering the conductance of its pipes, aiming to align the output potential vector \(\mathbf{v}=[v(p_{1}),v(p_{2}),...,v(p_{k})]^{T}\) with a target output potential vector \(\mathbf{v_{des}}=[v_{des}(p_{1}),v_{des}(p_{2}),v_{des}(p_{3}),..,v_{des}(p_{k})]^{T}\). For context, the target output potential vector \(\mathbf{v_{des}}\) is derived by introducing a perturbation to the potential at each node, with the perturbation value sourced from a uniform distribution spanning \([-\Delta,+\Delta]\).
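A compact sketch of the interior solve: partitioning the weighted graph Laplacian into boundary (B) and interior (I) blocks, the interior response satisfies \(L_{II}v_{I}=-L_{IB}u_{B}\), which is the reduction underlying the Schur complement method.

```python
import numpy as np
import networkx as nx

def interior_potentials(G, boundary, u_boundary):
    """Solve the discrete Laplace equation for the interior-node potentials."""
    nodes = list(G.nodes)
    idx = {n: i for i, n in enumerate(nodes)}
    # weighted Laplacian, with edge conductances stored as "weight"
    L = nx.laplacian_matrix(G, nodelist=nodes, weight="weight").toarray()
    B = [idx[n] for n in boundary]
    I = [i for i in range(len(nodes)) if i not in set(B)]
    # L_II v_I = -L_IB u_B
    v_I = np.linalg.solve(L[np.ix_(I, I)], -L[np.ix_(I, B)] @ np.asarray(u_boundary, float))
    return dict(zip([nodes[i] for i in I], v_I))
```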
### Implementation of the Tuning Process :
1. The input potential vector \(\mathbf{u}\) is applied at boundary nodes. A supervisor checks the output potentials \(\mathbf{v}\) at output nodes and compares them to the vector of desired output potentials \(\mathbf{v_{des}}\).
2. There are two kinds of chemicals, \(s_{+}\) and \(s_{-}\). \(s_{+}\) increases the conductance of the pipe when it is passing through it, and vice versa for \(s_{-}\). We assume that the output nodes release a chemical whose amount is proportional to the difference between the present output potentials and the desired output potentials. If at \(t=0\), for some output node \(a\in O\), \(v(a)\neq v_{des}(a)\), then \[v(a)>v_{des}(a)\Rightarrow s_{+}(a)=\alpha(v(a)-v_{des}(a))\] (1) \[v(a)<v_{des}(a)\Rightarrow s_{-}(a)=\alpha(v_{des}(a)-v(a))\] (2) where \(\alpha\) is the factor that controls the chemical response given by the node to the difference in potentials. Moreover, \(s_{+}(a)\) denotes the amount of chemical (e.g. the number of molecules) at node "a", and \(\mathbf{s}_{+}(t)\) denotes the array of chemical amounts at each node at time \(t\).
3. This chemical is carried by the current in the network. Therefore, in the next time step the chemical flows to the neighbouring nodes of \(a\) that are
Figure 1: _Schematic for an adaptive flow mechanism._ [Arrow 1] To increase the pressure at node A, a specialized chemical (depicted in red) is introduced. This chemical is advectively transported by the fluid flow within the network. The fluid flow direction is indicated by the grey arrows. [Arrow 2] As this chemical traverses the tubes, it interacts with the tube's structure, causing it to constrict and thereby increasing the flow resistance. This results in an increase in pressure at node A. [Arrows 3 & 4] Conversely, to decrease the pressure at node A, a different chemical is released that dilates the tubes. This dilation reduces flow resistance, leading to a decrease in pressure at node A.
downstream of \(a\) [15]. We denote all such downstream neighbours of \(a\) by \(\mathcal{D}(a)\). Then for all \(b\in\mathcal{D}(a)\): \[s_{+}(b,t+1)=s_{+}(a,t)\times\frac{i(b,a)}{\sum_{x\in\mathcal{D}(a)}i(x,a)}\] (3) \[+\left(\text{incoming chemical from other nodes}\right)\] (4) where \(i(x,a)\) represents the current from \(a\) to \(x\). This is how the entire arrays \(\mathbf{s}_{+}\) and \(\mathbf{s}_{-}\) are modified at each time step. Note that all the chemical initially present at \(a\) flows downstream after one time step.
4. Using the above equation, an N \(\times\) N array \(\hat{S}_{+}\) is generated, where each entry \(i,j\) denotes the amount of chemical passing through the pipe \(\{i,j\}\) at step \(t\to t+1\). Let \(\hat{W}\) denote the conductance matrix of the graph, where each entry \(\{i,j\}\) denotes the conductance of that pipe. Then \[\hat{W}(t+1)=\hat{W}(t)+\beta(\hat{S}_{+}-\hat{S}_{-}),\] (5) where \(\beta\) controls the response of the pipe to the passing chemical.
5. The new potentials are calculated on the interior vertices using \(\hat{W}(t+1)\). Again, the supervisor checks if \(v(a)=v_{des}(a)\). The chemical takes some time to reach the boundary nodes, where it drains out of the network [16]. The total change in potential due to the chemical released at the output nodes is therefore observed only after some amount of time, so we introduce a time delay \(\tau\) before releasing the chemical once again [17].
6. This process is repeated iteratively.
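Putting steps 2-5 together, the following is a minimal sketch of a single iteration of the chemical advection and the conductance update of Eq. 5. The release step of Eqs. 1-2 and the drainage at boundary nodes are omitted, and clipping the conductances at zero is our own assumption rather than something the text specifies.

```python
import numpy as np

def tuning_step(W, volts, s_plus, s_minus, beta):
    """One advection step for both chemicals plus the conductance update (Eq. 5).

    W is the symmetric conductance matrix, volts the node potentials, and
    s_plus / s_minus arrays of chemical amounts per node.
    """
    N = len(volts)
    I = W * (volts[:, None] - volts[None, :])  # I[a, b]: current from a to b
    S_plus, S_minus = np.zeros_like(W), np.zeros_like(W)
    new_plus, new_minus = np.zeros(N), np.zeros(N)
    for a in range(N):
        out = np.clip(I[a], 0.0, None)  # chemical flows only to downstream neighbours
        total = out.sum()
        if total <= 0:
            continue
        frac = out / total              # split in proportion to the outgoing currents
        S_plus[a] += s_plus[a] * frac   # chemical carried through each pipe {a, b}
        S_minus[a] += s_minus[a] * frac
        new_plus += s_plus[a] * frac    # chemical arriving at each downstream node
        new_minus += s_minus[a] * frac
    W_new = W + beta * ((S_plus + S_plus.T) - (S_minus + S_minus.T))
    return np.clip(W_new, 0.0, None), new_plus, new_minus
```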
## III Results
We implemented the aforementioned procedure on a network comprising 50 nodes and 118 edges, as depicted in Fig. 2. The conductance values for the flow network were uniformly sampled from the range \([10^{-5},1]\). External pressures were applied to the input nodes (highlighted in green), with each node's external pressure being uniformly sampled from \([-10,10]\). The objective was to adjust the resulting pressure at the output nodes (indicated in red) to align with specified target values. These target pressures were determined by setting \(\Delta=1\). Chemicals were introduced at these nodes at intervals of \(\tau=5\) units, influencing the conductance of downstream pipes. Successful tuning was evident as the error, represented as \(||\mathbf{v}-\mathbf{v_{des}}||\), decreased significantly, and the pressure at the output nodes converged to the desired values.
Fig. 3 presents the \(P_{SAT}\) values across varying parameters: number of edges (E), output nodes (M), total nodes (N), with a fixed \(\Delta=0.1\). (While simulations were conducted for N = 50, 100, 150, 200, and 250, Fig. 3 specifically displays results for N = 50, 150, and 250.) The other training parameters remain consistent with those used in Fig. 2. We define \(P_{SAT}\) as the proportion of networks that achieve successful tuning. A tuning process is deemed successful if the ratio of final error to initial error is less than \(10^{-2}\). This fraction was determined for each pixel through 100 training repetitions. A notable surge in \(P_{SAT}\) is observed, transitioning from a 'hard phase'--where tuning is ineffective--to an 'easy phase' where it predominantly succeeds. This phenomenon is further elaborated as a SAT-UNSAT phase transition in Fig. 4. The relevant tuning parameter in such transitions is the clause density \(\alpha\) [18, 13], which represents the ratio of clauses to variables. In our context, it is the ratio of the number of nodes to tune to the number of edges (\(M/E\)). The right column of Fig. 3 illustrates the decline of \(P_{SAT}\) with increasing \(\alpha\). In these plots, curves for constant \(E\) are shown. Increasing \(\alpha\)--achieved by increasing \(M\) while maintaining \(E\)--corresponds to increasing problem hardness. We fit these curves to a sigmoid-like function given by:
\[f(x,a,b)=\frac{1}{2}\left(1-\frac{e^{b\cdot(x-a)}-e^{-b\cdot(x-a)}}{e^{b\cdot( x-a)}+e^{-b\cdot(x-a)}}\right) \tag{6}\]
From these fits, we deduce the critical point \(\alpha_{c}\) and the transition width \(w\). \(\alpha_{c}\) is the \(\alpha\) value at which \(P_{SAT}=0.5\), and \(w\) represents the horizontal span between \(P_{SAT}=0.25\) and \(P_{SAT}=0.75\).
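Since \((e^{y}-e^{-y})/(e^{y}+e^{-y})=\tanh y\), Eq. 6 is simply \(f(x,a,b)=\frac{1}{2}(1-\tanh(b(x-a)))\), so both \(\alpha_{c}\) and \(w\) follow in closed form from the fitted parameters. A short sketch of the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):
    # Eq. (6); note (e^y - e^-y)/(e^y + e^-y) = tanh(y)
    return 0.5 * (1.0 - np.tanh(b * (x - a)))

def fit_transition(alpha, p_sat):
    """Fit P_SAT(alpha) to Eq. (6); return the critical point and transition width."""
    (a, b), _ = curve_fit(f, alpha, p_sat, p0=[np.median(alpha), 10.0])
    alpha_c = a                        # f(alpha_c, a, b) = 0.5
    w = 2.0 * np.arctanh(0.5) / b      # span between P_SAT = 0.75 and P_SAT = 0.25
    return alpha_c, w
```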
Figure 2: _The tuning process._ [a,b] The network undergoes modifications due to the tuning process. The color bar illustrates the conductance values both pre- and post-training. [c] A plot of error versus time. The error plateaus since tuning a particular node '\(a\)' is stopped when \(|v(a)-v_{des}(a)|<10^{-6}\). [d] Pressures at the output nodes converge to their target values, represented by similarly colored translucent horizontal lines.
By analyzing how \(\alpha_{c}\) and \(w\) vary with \(E\), we deduce the critical number of nodes \(M_{c}=\alpha_{c}E\) that can be successfully tuned for varying system sizes, specifically for N=50, 100, 150, 200, and 250. The width around this critical value, in terms of the number of nodes for varying system sizes, is given by \(\Delta M_{c}=wE\) (refer to Fig. 4). We observe that the critical number of nodes \(\alpha_{c}E\) scales as a power law with respect to \(E\) with an exponent greater than 1 (\(M_{c}\sim E^{1.03}\)). This indicates that our tuning process is scalable and will remain effective for larger system sizes. To demonstrate that this is a SAT-UNSAT phase transition and not just a crossover, we present two supporting arguments: 1) We observe that \(wE\) scales with \(E\) with an exponent less than 1 (\(wE\sim E^{0.76}\)). This implies that \(w\) scales with \(E\) with a negative exponent, indicating that in the thermodynamic limit as \(E\rightarrow\infty\), the transition width \(w\) vanishes. 2) We note that this transition is universal, as all the transition plots in Fig. 3 (d-f) (and for sizes 100 and 150) collapse onto a universal function upon rescaling by \(\alpha\rightarrow(\alpha-\alpha_{c})/w\).
## IV Discussion
Inspired by Physarum polycephalum, we introduce a physical mechanism to tune flow networks, where each network component uses locally available information to optimize a global cost function. We show that this mechanism is scalable to larger networks and present a phase diagram demonstrating for which values of the network parameters successful tuning is observed. Additionally, we show that this optimization strategy exhibits a SAT-UNSAT phase transition.
In previous work, we employed a similar concept to train physical networks [19], where the gradient information was encoded in a chemical signal diffused throughout the network. While that approach is equivalent to executing gradient descent, its biological plausibility is questionable due to the slow diffusion-based transport of chemical signals. In contrast, our current methodology involves advective transport of these signals, so that adaptation occurs over faster time scales. However, with this approach we do not anticipate a gradient descent of the mean-squared error (MSE), as the chemical signals only modify the conductances downstream.
Given the parallels with particle jamming, which
Figure 3: _Adaptation phase diagram._ (a-c) Depictions of \(P_{SAT}\) for networks comprising 50, 150, and 250 nodes, respectively. The color bar indicates \(P_{SAT}\) values. The x-axis represents the number of output nodes (M), while the y-axis denotes the number of edges (E). (d-f) Illustrations of the decline in \(P_{SAT}\) in relation to an increase in \(\alpha=M/E\). Each curve represents a horizontal slice of the 2D phase diagram (displayed on the left) at a constant \(E\) value. The data is fitted to a sigmoid-like curve as outlined in Eq. 6.
Figure 4: _Universal scaling near the transition._ (a) Illustrates the scaling of the critical number of nodes that can be tuned with the number of edges \(E\) for different system sizes. (b) Depicts the scaling of the width around this critical number of nodes with respect to the number of edges \(E\) for varying system sizes. (c) Shows that the transition plots collapse onto one another upon rescaling for \(N=50,100,\ldots,250\).
encompasses a SAT-UNSAT transition [20], we propose that an analogous tuning mechanism can be established for hard-sphere systems. This has the potential to lay the groundwork for methods to embed memory [21] within these systems, as well as learning, given recent work reframing the learning process as feedback-based aging in a glassy landscape [22]. Moreover, the potential applications of our findings extend further. One immediate and practical application lies in the domain of urban water supply systems. As cities grow and evolve, their water supply networks face the challenge of dynamically adjusting to meet varying demands. The traditional methods of managing these pressures might not be efficient or responsive enough to cater to rapid changes in demand. Our proposed mechanism offers a potential solution to this challenge, enabling water supply systems to self-optimize and adapt in real time. Furthermore, the principles elucidated in our study could inspire innovative methodologies for other human-made transport systems. For instance, power grids, which require a delicate balance of supply and demand across vast networks, could benefit from a similar approach. By integrating mechanisms that allow for localized adjustments based on real-time feedback, these systems could become more resilient, efficient, and adaptive.
The authors acknowledge Benjamin Scellier and Karen Alim for helpful discussions. JMS acknowledges financial support from NSF-DMR-2204312.
| 生物系流路ネットワークにおいて、需要に応じた動的な調整能力は極めて重要です。Physarum polycephalum の驚くべき適応力からインスピレーションを得て、このネットワークを最適化する新たな物理的機構を提案します。私たちのアプローチの核心は、各ネットワークコンポーネント -- 特に管 -- が、局所的に利用可能な情報を使用して、グローバルコスト関数を最小限に抑えることによって、集団的に機能することです。私たちの研究結果により、この機構のスケーラビリティが強調されており、より複雑なネットワークに適用可能であることが示唆されます。この機構を構築するために、包括的な相図を作成し、成功した適応、またはチューニングを実現するネットワークのパラメータを特定します。相図には、成功と失敗の適応に関連する相変動を示す境界線があり、それが成功と失敗の適応を区別するSAT-UNSAT相変動を示しています。 |
2309.00132 | When a complementarity in the neutrino and the quark mixing meets a
parameter symmetry and its implications to the unitarity | We present a complementarity that addresses relationships among the
parameters in the neutrino and the quark mixing matrix, use it to estimate the
size of the uncertainty among the elements in the matrix and address its
implications to the unitarity of the quark mixing matrix and Wolfenstein
parameterization and the tension in the first row. First, we describe how a
complementarity with a phase being introduced as an extra parameter can be held
in the nine independent schemes of parameterizing the matrix introducing a
discrete parameter symmetry within a certain size of uncertainty and how it can
be related to a combination of sine functions. With that, for the first time,
we describe a method that we can use to constrain the size of the uncertainty
associated with the parameters, not the central values, complementing that
among the diagonal elements in the neutrino mixing matrix. Then we do the same
for the quark sector and discuss its implication in the relation to the size of
the uncertainty among the elements. Seeing that our estimation is larger than
that was reported by running the global fit in the quark sector, our result
could be an indication that we may need to be cautious when addressing the
tension in the first row of the matrix in the quark sector and when running
global fit to constrain the size of the uncertainty, where Wolfenstein
parameterization, one that is not unitarity guaranteed, is used, as opposed to
the combination of the three rotational matrix. Given that the size of the
uncertainty for the individual diagonal element in the second and the third
row, our result also could be an indication that we may need to wait until the
size of uncertainty for the second and the third row goes down further before
addressing the tension. It could be an opening of considering the possibility
of a mixing between the neutrino and the quark sector too. | Jae Jun Kim | 2023-08-31T20:48:49 | http://arxiv.org/abs/2309.00132v3 | # When a complementarity in the neutrino mixing meets a parameter symmetry and its implications
###### Abstract
We present a complementarity that complements relationships among the elements in the neutrino mixing matrix and address its physical implications. First we show how a complementarity with a phase being introduced as an extra parameter can be held in the nine independent schemes of parameterizing the matrix introducing a discrete parameter symmetry and a combination of sine functions, a part of Jarlskog invariant, within a certain size of uncertainty. Then, for the first time, we show that we can use the uncertainty associated with the complementarity as an empirical constraint complementing that among the diagonal elements in the neutrino mixing matrix. We discuss its physical implication in relation to the size of the uncertainty among the elements in the end.
## 1 Introduction
Are there complementarities that can complement the relationships among the elements in the neutrino mixing matrix? Furthermore, can a relationship that depends on how we parameterize the mixing lead us to one that is independent of the parameterization? If so, are there physical implications associated with the complementarity? We initiated our study to seek answers to these questions.
Ever since we empirically confirmed that neutrinos do oscillate [7; 8], physicists have been striving to understand the nature of neutrinos further. Proposing and studying lepton-quark [1] and self complementarities [4] in the neutrino sector have been part of the efforts to uncover hidden layers of the laws of nature. We continue that effort in this study. We present a version of a self complementarity that can be considered an empirical constraint when building a mixing matrix model, and investigate its possible origin in a combination of sine functions, from which a relationship among the diagonal elements in the unitary mixing matrix can be constrained further. Our goal is not only to study a complementarity, which depends on how we parameterize the matrix, but to move further and address its relationship to the neutrino mixing matrix, which is invariant no matter how we parameterize it. We use this relationship to address physical implications in the end.
We start with a complementarity studied previously. One of the most common complementarities can be written as,
\[\theta_{12}+\theta_{13}\sim\theta_{23} \tag{1}\]
, where the sizes of the three mixing angles are related to each other. Such relations have also been studied under a few different versions of flavor symmetry models [2; 9; 14].
A challenge with Equation 1 is that it does not hold when the matrix is parameterized differently [4]. For instance, when we perform the rotations in different orders, we end up with different analytical expressions, as illustrated by the parameterization schemes listed below. In that case, a complementarity such as Equation 1 holds not in all the schemes but only in a few.
This hindered us from going further downstream to generalize the relationship and come up with something that stays invariant.
Based on the results shown in [4], it is not too difficult to see that having only one or two \(\theta\)s as parameters in the complementarity does not help much when the goal is a relation that holds across the parameterization schemes. In other words, no combination of three or fewer \(\theta\)s as parameters led us to a pattern, a complementarity, that holds in the nine independent schemes. For that reason, as an ansatz, we introduce \(\delta\) as an extra parameter and propose a revised version of the complementarity, \(SC\), as
\[SC\equiv\theta_{12}+\theta_{13}+\theta_{23}+\delta_{13}\sim C \tag{2}\]
, where \(C\) is a constant, and it may be expressed modulo \(180^{\circ}\), which happens to work out under the discrete parameter symmetry. Equation 2 was briefly alluded to in [5], where complementarities were studied to calculate the size of the mixing between the active and the sterile neutrino sectors, but it has not been tested further since then.
Equation 2 has advantages over Equation 1. First, it treats the parameters in a more democratic manner. It does not need to, but doing so gives us an opportunity to address the complementarity under a symmetry that holds independently of which \(\theta\) is taken as a parameter. Second, we can introduce a modulus when expressing the complementarity, which will be addressed later on. In this study, we test the complementarity by introducing a discrete parameter symmetry, relate it to the size of the elements in the neutrino mixing matrix, and address its physical implications. In particular, we focus on using the uncertainty associated with the complementarity to estimate the size of the uncertainty in the unitary mixing matrix.
Note that our study is not meant to justify why such a complementarity works out under some flavor mass model, but to identify one that can be held under different orders of parameterization, calculate the size of the associated uncertainty, and then use it to constrain the uncertainty of the elements in the mixing matrix.
## 2 Complementarity in different schemes of parameterizing the neutrino mixing matrix
As described in [4], the neutrino mixing matrix, as a combination of three rotations in \(\theta_{12}\), \(\theta_{23}\) and \(\theta_{13}\), can be expressed in nine different ways. In the standard scheme [9], it can be written as,
\[PS1:U_{23}(\theta_{23})U_{13}(\theta_{13},\delta)U_{12}(\theta_{12})\]
. In addition to the standard scheme, the matrix can be written in eight other schemes by reordering the rotations:
\[\begin{split} PS2:&\ U_{12}(\theta_{13})U_{23}(\theta_{23},\delta)U_{12}^{-}(\theta_{12})\\ PS3:&\ U_{23}(\theta_{23})U_{12}(\theta_{12},\delta)U_{23}^{-}(\theta_{13})\\ PS4:&\ U_{23}(\theta_{23})U_{12}(\theta_{12},\delta)U_{13}^{-}(\theta_{13})\\ PS5:&\ U_{13}(\theta_{13})U_{23}(\theta_{23},\delta)U_{12}^{-}(\theta_{12})\\ &\ ...\\ PS9:&\ U_{13}(\theta_{13})U_{12}(\theta_{12},\delta)U_{23}(\theta_{23})\end{split}\]
, where \(U^{-}\) stands for an inverse matrix and the parameters are written in the same manner. Given the unitarity of the matrix, once we have the sizes of the \(\theta\)s and \(\delta\)s in any one of the schemes, including the standard scheme [8], we can calculate the sizes of the parameters expressed in the other schemes.
Note that we do constrain the size of \(\theta\) to be in the physical region, \(0^{\circ}{<}\theta{<}90^{\circ}\). However, there could be four possible values of \(\delta\) with the same value of the sine function. Taking the sign associated with the Jarlskog invariant in the standard scheme,
\[J=\sin\theta_{12}\cos\theta_{12}\,\sin\theta_{13}\cos^{2}\theta_{13}\,\sin\theta_{23}\cos\theta_{23}\,\sin\delta_{13} \tag{3}\]
, as a way to avoid the ambiguity associated with the size of \(\delta\) (this eliminates two of the four choices, given that \(J<0\)), and then manually testing the two remaining choices by entering \(\delta\) back into the mixing matrix, we calculate the sizes of all the parameters in all nine parameterization schemes.
Taking the measured value of the parameters in the standard scheme [6],
\[\theta_{12}=33.8^{\circ},\theta_{23}=48.3^{\circ},\theta_{13}=8.6^{\circ}, \delta_{13}\sim 280.0^{\circ} \tag{4}\]
, for an inverted hierarchy, the result goes as,
\[\begin{array}{cccccc} PS: & \theta_{12} & \theta_{23} & \theta_{13} & \delta_{13} & \text{Sum}\\ 1: & 33.82 & 48.30 & 8.61 & 280.00 & 370.73\\ 2: & 32.92 & 48.87 & 11.46 & 273.42 & 366.73\\ 3: & 34.77 & 45.87 & 15.21 & 281.83 & 377.70\\ 4: & 33.38 & 49.22 & 10.32 & 278.50 & 371.44\\ 5: & 36.05 & 47.58 & 12.82 & 268.86 & 383.32\\ 6: & 25.79 & 43.86 & 24.16 & 330.25 & 424.08\\ & \ldots & & & & \end{array} \tag{5}\]
So, at least for the measured sizes of the mixing angles, the complementarity calculated in the different schemes agrees to within \(\sim 10^{\circ}\), but only for the first five schemes. In other words, it is hard at this point to realize a relationship that can be written as a function of the elements in the unitary mixing matrix. We need one that holds in all nine schemes so that we can use the complementarity to address a relationship among the elements in the unitary matrix.
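The standard-scheme entries above are easy to cross-check numerically; a short sketch evaluating \(SC\) (Equation 2) and \(J\) (Equation 3) at the measured central values of Equation 4:

```python
import numpy as np

deg = np.pi / 180.0
t12, t23, t13, d13 = 33.8, 48.3, 8.6, 280.0   # central values of Equation 4

SC = t12 + t23 + t13 + d13                    # Equation 2, in degrees
print(f"SC = {SC:.2f} deg")                   # 370.70, matching the first row above

J = (np.sin(t12 * deg) * np.cos(t12 * deg)
     * np.sin(t13 * deg) * np.cos(t13 * deg) ** 2
     * np.sin(t23 * deg) * np.cos(t23 * deg)
     * np.sin(d13 * deg))
print(f"J = {J:.4f}")                         # about -0.033, i.e. J < 0
```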
As a resolution, we introduce a discrete parameter symmetry [3]. We take the complementarity as a part of a combination of sine functions and do some approximation.
We start with \(S\), a combination of sine functions,
\[S\equiv\sin\theta_{12}\,\sin\theta_{23}\,\sin\theta_{13}\,\sin\delta_{13} \tag{6}\]
, which happens to be a part of \(J\). One reason for considering this expression to embrace our complementarity as a part of it is that it is invariant under translations of the parameters under the parameter symmetry. It does not have to be the expression in Equation 6, of which the complementarity could be a part, but we do need one that stays invariant under translations of a parameter, either by changing its sign or by shifting it by \(180^{\circ}\). For instance, when we change the signs associated with \(\theta_{13}\) and \(\delta_{13}\), the expression remains the same.
When we expand the sine function in the expression and take the first two leading terms and add them together, we end up with,
\[S\sim\frac{F}{A}\cdot[\;B+\theta_{12}+\theta_{13}+\theta_{23}+\delta_{13}+.. \;]\times sc+hc \tag{7}\]
, where \(A\) and \(B\) are numerical coefficients, \(sc\) is a sign conjugate of the expression in the bracket, and \(hc\) denotes higher-order terms. \(F\) can be written as,
\[F=\theta_{12}\cdot\theta_{13}\cdot\theta_{23}\cdot\delta_{13} \tag{8}\]
. For convenience when calculating the size of the uncertainty later, we define \(AC\), the term in the bracket, as
\[AC=[\;B+\theta_{12}+\theta_{13}+\theta_{23}+\delta_{13}+..\;]\times sc \tag{9}\]
. The next higher-order terms in \(AC\), while keeping the complementarity, are of the order of
\[hc\sim\frac{1}{40}\;\cdot\;\theta^{3}\sim 5\% \tag{10}\]
of the linear-order term, \(\theta\), before doing the full expansion, even when \(\theta\sim 90^{\circ}\). For that reason, when estimating the size of the uncertainty for \(AC\) is the main goal, we may not include the higher-order terms. However, we may need to include them for other studies later.
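As a quick arithmetic cross-check of this estimate (our own), the ratio of the cubic term to the linear one is
\[\frac{hc}{\theta}\sim\frac{\theta^{2}}{40}\approx\frac{(\pi/2)^{2}}{40}\approx 0.06\quad\text{for }\theta=90^{\circ},\]
which is indeed at the quoted few-percent level.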
Equations 7 and 9 are where we see the complementarity of Equation 2 as a part of the expression. What we can do with the complementarity is to apply a translation, which changes signs among the parameters, under a discrete parameter symmetry [3], as a way to realize the complementarity being held for all the schemes.
In other words, we want to use Equation 2 as a part of a more general expression that stays invariant under a symmetry, such as a discrete parameter symmetry, and that can be related to some elements in the unitary matrix. If \(S\) can be expressed as a function of the elements, and the components in \(S\) can be constrained further based on what we have as a complementarity, we can use the complementarity to constrain the size of the uncertainty associated with elements in \(U\). This is doable because the combination of sine functions stays invariant under a translation of \(\theta\) when it is accompanied by that of \(\delta\)[3], modulo \(180^{\circ}\). Changing the signs of parameters in \(SC\) in Equation 2 does not change the overall value of \(S\) in Equation 6.
With that, for the schemes where the complementarity does not hold, the bottom four in Equation 2, we apply the translation to some parameters. The symmetry is in essence about changing the sign of a \(\theta\) accompanied by that of a \(\delta\), when we do not consider the exchange of mass terms [3]. We end up with,
\[\begin{array}{cccccc} PS: & \theta_{12} & \theta_{23} & \theta_{13} & \delta_{13} & Sum\\ 1: & 33.82 & 48.30 & 8.61 & 280.00 & 370.73\\ 2: & 32.92 & 48.87 & 11.46 & 273.42 & 366.73\\ 3: & 34.77 & 45.87 & 15.21 & 281.83 & 377.70\\ 4: & 33.38 & 49.22 & 10.32 & 278.50 & 371.44\\ 5: & 36.05 & 47.58 & 12.82 & 268.86 & 383.32\\ 6: & [25.79] & 43.86 & [24.16] & 209.75 & 203.68\\ 7: & 56.95 & 61.72 & 48.96 & 204.72 & 372.37\\ 8: & 45.26 & [39.21] & 31.89 & 337.12 & 374.97\\ 9: & 23.39 & 53.54 & [26.49] & 328.75 & 379.20 \end{array} \tag{11}\]
, where the numbers in brackets indicate parameters translated under the symmetry. For instance, in \(PS6\), instead of adding \(\theta_{12}\) as a part of the expression, we subtract it.
Under the discrete parameter symmetry, the complementarity, as a part of \(S\), the combination of the sine functions in the Jarlskog invariant, can be held within \(\sim 10^{\circ}\) to first order. In short, as long as we do the mixing with three \(\theta\)s and one \(\delta\), the values of the individual parameters can change, but the sum, \(SC\), stays within that size. We use that to address a relationship about \(U\), the elements in the unitary mixing matrix.
Note that the point is not to show that the complementarity is exact, but to take the size of its uncertainty and use it in estimating that of other quantities, such as the elements in \(U\).
## 3 Constraining the size of the uncertainty associated with a few elements in the unitary mixing matrix
Coming back to Equation 2, we are aware that it cannot hold exactly, nor can it be directly expressed as a function of some elements in the neutrino mixing matrix, since it varies depending on how we parameterize the matrix.
However, the combination of sine functions, Equation 6, can be. There we take advantage of Equation 2 being a part of it, and that is the essence of our study.
Interestingly, it was shown how the expression can be written differently depending on the order of the rotations [20]. For the standard scheme, \(U_{123}\), which is \(PS1\) in our case and where we do the rotations in the order 1, 2 and 3, \(S\) can be written as,
\[S_{1}=J\cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{3}} \tag{12}\]
, where \(J\) is the Jarlskog invariant and \(U_{1}\) and \(U_{3}\) are two of the three diagonal elements in the unitary mixing matrix. The elements in the unitary mixing matrix can be written as,
\[U=\begin{array}{ccc}U_{1}:0.8214&NDE&NDE\\ NDE&U_{2}:0.5453&NDE\\ NDE&NDE&U_{3}:0.6577\end{array} \tag{13}\]
, where \(U\) is the neutrino mixing matrix, the numerical sizes of the diagonal elements are written out, and \(NDE\) stands for the non-diagonal elements of the matrix.
For the remaining five ways of parameterizing the matrix, where the rotations take place in different permutations, it is a matter of expressing \(S\) using a different set of elements.
Depending on the rotations, six different permutations are possible, and there \(S\) can be written as,
\[S_{2}=J\cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{2}} \tag{14}\]
, and the rest can be done in the same manner,
\[S_{3}=J\cdot\frac{1}{U_{2}}\cdot\frac{1}{U_{3}}\;,\;S_{4}=J\cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{2}} \tag{15}\] \[S_{5}=J\cdot\frac{1}{U_{2}}\cdot\frac{1}{U_{3}}\;,\;S_{6}=J\cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{3}} \tag{16}\]
. Because Equation 6 can vary depending on the order of the rotations, we cannot say that the expressions for the different schemes need to be the same. In other words, \(S\) can have a different size.
However, as shown in Equations 14, 15 and 16, they can be represented by two out of the three diagonal elements in the unitary mixing matrix together with \(J\). There, taking the ratio of two \(S\)s, we can use the complementarity studied above to reduce or constrain the size of the uncertainty related to the elements in the unitary matrix. That is the focus of our study, and it is parameterization-independent, since all the elements are expressed as functions of \(U\).
We show the case of the ratio of \(S_{1}\) and \(S_{2}\); the rest can be done in a similar manner. It reduces to the ratio of the diagonal elements in the matrix as,
\[R=\frac{S_{1}}{S_{2}}=\frac{U_{2}}{U_{3}} \tag{17}\]
, and we can do the same for other cases. We can use Equation 2 as an empirical constraint for the size of the uncertainty associated with \(U_{2}\) and \(U_{3}\). The relative uncertainty can be expressed as,
\[\Delta^{2}=\sum\Delta^{2}X\cdot\frac{1}{X^{2}} \tag{18}\]
, where \(\Delta^{2}\) represents the square of the size of the relative uncertainty, \(\Delta X\) represents the uncertainty associated with \(X\), and \(X\) represents the components in \(S\). The \(X\) in our case are,
\[X=\theta_{12},\theta_{13},\theta_{23},\delta_{13},AC \tag{19}\]
. In Equation 19, the first four components are a part of \(F\) in Equation 8, and the last one is the complementarity in the expansion of \(S\), which is \(AC\).
Taking the size of the uncertainty for \(\theta\)s and \(\delta\),
\[\Delta\theta_{12}\sim 1^{\circ},\Delta\theta_{23}\sim 1^{\circ},\Delta\theta_{13}\sim 0.1^{\circ},\Delta\delta_{13}\sim 30^{\circ} \tag{20}\]
, at the \(1\sigma\) confidence level, and that for,
\[\Delta SC\sim 10^{\circ} \tag{21}\]
, which is based on our study of the complementarity as shown in Equation 2, we calculate the size of the relative uncertainty for one of the two \(S\)s to be,
\[\Delta_{1}^{2}\sim\frac{1}{35^{2}}+\frac{1}{45^{2}}+\frac{1}{100^{2}}+\frac{1}{7^{2}}+\frac{1}{20^{2}} \tag{22}\]
. Then we end up with the size of the relative uncertainty for one of the \(S\)s as,
\[\Delta S\cdot\frac{1}{S}\sim 0.156\sim 16\% \tag{23}\]
. However, when the size is calculated for \(S_{1}\) as in Equation 22, the components in the uncertainty calculation for \(S_{2}\) can be reduced further, due to the variation in the size of \(SC\) being constrained as in Equation 11. It does not need to be the same as \(\Delta_{1}\), given the result of Equation 21.
In other words, for \(S_{2}\) in Equation 18, we have a different, smaller size for some components in the calculation of the overall relative uncertainty. For instance, the component for \(\delta_{13}\), due to the complementarity that we studied, can be constrained further, at least within the scenario of ordering the rotations of the matrix differently.
\[\Delta\delta_{13}\sim\Delta SC+\Delta\theta_{12}+\Delta\theta_{23}+\Delta\theta_{13}\sim 10^{\circ} \tag{24}\]
. So the relative uncertainty for \(\delta_{13}\), which is the dominant uncertainty in Equation 18, can be reduced by a factor of \(\sim 3\) for one of the two \(S\)s. This is doable since the calculation for \(S_{1}\) already addressed the size in the general case.
With that, the size of the relative uncertainty for \(S_{2}\), computed as in Equation 18, can be,
\[\Delta_{2}^{2}\sim\frac{1}{35^{2}}+\frac{1}{45^{2}}+\frac{1}{100^{2}}+\frac{1}{18^{2}} \tag{25}\]
, where the fourth component, which is for \(\delta_{13}\), is reduced due to Equation 24, and the last one, which is for \(SC\), cancels out at first order. Then the relative uncertainty for \(S_{2}\) in Equation 14 is,
\[\Delta S\cdot\frac{1}{S}\sim 0.067\sim 7\% \tag{26}\]
. Taking that into account, we calculate the size of the total relative uncertainty for the ratio of any combination of \(U\)s to be of the order of,
\[\Delta R\cdot\frac{1}{R}\sim 17\% \tag{27}\]
, as opposed to \(\sim 23\%\) when the complementarity in Equation 2 is not taken into account, or even larger when the complementarity is not considered at all. The point here is that the uncertainty associated with the ratio of two elements in the unitary matrix is constrained, as long as we have expressed the elements of the mixing matrix as a function of three \(\theta\)s and one \(\delta\). Such an approach can be taken for the quark sector too. Identifying the relation between what we have in this study and that in the quark sector can certainly be one of our future studies [18].
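As a minimal numerical cross-check of the propagation above (our own arithmetic, plugging the component sizes of Equations 22 and 25 into Equation 18):

```python
import numpy as np

# Relative-uncertainty components quoted in Eqs. (22) and (25)
delta1 = np.sqrt(np.sum(1.0 / np.array([35.0, 45.0, 100.0, 7.0, 20.0]) ** 2))
delta2 = np.sqrt(np.sum(1.0 / np.array([35.0, 45.0, 100.0, 18.0]) ** 2))
deltaR = np.hypot(delta1, delta2)  # quadrature sum for the ratio R = S1/S2

print(f"Delta1/S1 ~ {delta1:.3f}")  # ~0.156, i.e. ~16%
print(f"Delta2/S2 ~ {delta2:.3f}")  # ~0.067, i.e. ~7%
print(f"DeltaR/R  ~ {deltaR:.2f}")  # ~0.17, i.e. ~17%

# Without the complementarity constraint, both S's carry the general ~16%:
print(f"unconstrained: ~{np.sqrt(2) * delta1:.2f}")  # ~0.22, close to the quoted ~23%
```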
## 4 Discussion
In short, by introducing \(\delta\) as an extra parameter in the self complementarity, we can come up with a relationship among the diagonal elements in the unitary neutrino mixing matrix and use it to constrain the uncertainty associated with the ratio of the diagonal elements. A common complementarity such as Equation 1 can depend on how we parameterize the mixing matrix, simply by ordering the rotations differently. However, a revised version such as Equation 2 can be taken into account within an analytical expression such as Equation 6, from which a relationship among the elements of the unitary matrix can be written; we can then utilize the complementarity as an empirical constraint for the associated relative uncertainty, as described in this study.
Our study indicates that we may need to be cautious when reporting the size of the uncertainty among the diagonal elements. For instance, given the measured uncertainty for \(U_{1}\), we may use it to further constrain that for \(U_{2}\) or \(U_{3}\), independent of how we parameterize the matrix. The sizes of the uncertainties reported in [6] are based on what has been measured in the standard scheme of parameterizing the matrix, together with the unitarity of the matrix. For \(U_{1}\), \(\delta\) is not one of its components in the standard scheme, but it can be when the matrix is parameterized by ordering the rotations differently, as in Equation 2. The same holds for the other diagonal elements. With all that taken into account, the smallness of the uncertainty for \(U_{1}\) in the standard scheme needs to be taken into account when addressing those of the other two diagonal elements.
As for our future study, we may take a revised version of the lepton-quark complementarity [1] and come up with a relationship among the elements of the unitary mixing matrices in the quark and the neutrino sectors [18]. Since the uncertainties associated with the elements are smaller in the quark sector, such a study may lead us to constrain the sizes of the elements in the neutrino mixing matrix even further, or give us some idea that we did not think of before. In addition, we may initiate a computational study of the revised version of the complementarity to see how large the variation is when the sizes of the mixing parameters vary. It is also of interest to apply the method of this study to the other models described in [16, 17].
The author truly thanks his family for all their support. The author also thanks those who provided many thoughtful comments a while ago.
We describe a complementarity that addresses relationships among the parameters in the neutrino and quark mixing matrices and estimate the uncertainties among the matrix elements. We also show the implications of those uncertainties for the unitarity of the quark mixing matrix, the Wolfenstein parameterization, and the first-row tension. First, we describe a complementarity with a phase introduced as an extra parameter, and show that it holds within a certain uncertainty under a discrete parameter symmetry among the nine independent ways of parameterizing the matrix. By relating that method to a combination of sine functions, we describe for the first time a way to constrain not the central values but the uncertainties among the diagonal elements of the neutrino mixing matrix. The method is then also applied to the quark sector |
2308.16479 | Kerr black hole shadows cast by extraordinary light rays with Weyl
corrections | We investigate the equation of motion for photons with Weyl corrections in a
Kerr black hole spacetime in a small coupling case. Our results show that Weyl
corrections yield phenomena of birefringence. The light rays propagating in the
spacetime are separated into the ordinary rays and the extraordinary rays, and
the propagation of the latter depends on the corrections. We probe the effects
of Weyl corrections on the Kerr black hole shadows casted by the extraordinary
rays and find that such corrections result in a weak stretching or squeezing in
the vertical direction for the black hole shadows. Finally, we also study the
change of the length of the Near-Horizon Extremal Kerr line (NHEK line) with
Weyl corrections. These features could help us to understand the
electrodynamics with Weyl corrections from black hole shadows. | Songbai Chen, Jiliang Jing | 2023-08-31T06:19:57 | http://arxiv.org/abs/2308.16479v2 | # Kerr black hole shadow casted by the extraordinary light rays with Weyl corrections
###### Abstract
We have investigated the equations of motion for photons with Weyl corrections in a Kerr black hole spacetime in the small coupling case. Our results show that Weyl corrections yield phenomena of birefringence. The light rays propagating in the spacetime are separated into the ordinary rays and the extraordinary rays, and the propagation of the latter depends on the corrections. We probe the effects of Weyl corrections on the Kerr black hole shadows cast by the extraordinary rays and find that such corrections result in a weak stretching or squeezing in the vertical direction for the black hole shadows. Finally, we also study the change of the length of the Near-Horizon Extremal Kerr line (NHEK line) with Weyl corrections. These features could help us to understand the electrodynamics with Weyl corrections from black hole shadows.
pacs: 04.70.Dy, 95.30.Sf, 97.60.Lf
## I Introduction
The images of the supermassive black holes M87* [1; 2; 3; 4] and Sgr A* [5; 6] open a new era of testing gravity in strong field regimes. The brightness and polarization patterns of the surrounding emission region in black hole images carry a wealth of information about the electromagnetic emissions [7; 8; 9; 10; 11; 12; 13], which provides a powerful way to probe the electromagnetic interaction, the matter distribution and the accretion process in the vicinity of black holes. In general, black hole images depend on the parameters of the background black hole, the dynamical properties of the photon itself and the interactions between the photon and other fields.
Most current studies on black hole images are based on the standard Einstein-Maxwell theory, in which only a quadratic term of the Maxwell tensor is directly related to the electromagnetic field [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Recently, black hole images have been studied in some interesting extensions of Einstein-Maxwell theory containing more interactions of the electromagnetic field. These extra interactions modify the Maxwell equations and change the paths of photons moving in spacetimes, which inevitably affects the sizes and shapes of black hole shadows. In refs.[32; 33; 34], it is shown that the quantum electrodynamic corrections from the Euler-Heisenberg effective Lagrangian yield a birefringence phenomenon, so an observer sees different shadow sizes of a single black hole for different polarization lights. Moreover, a phenomenological coupling between a photon and a generic vector field has also been introduced to study black hole shadows [35]. It is found that the black hole shadow in the edge-on view has different appearances for different frequencies of the observed light. The effects of the axion-photon interaction on polarization patterns in black hole images have also been used to constrain axion-like dark matter around black holes [36].
It is well known that the Weyl tensor is an important tensor in general relativity, since it describes a type of gravitational distortion in the spacetime. Thus, the electrodynamics with Weyl corrections has been used to study optical properties in curved spacetimes. In this modified electrodynamics, the interaction between the Maxwell field and the Weyl tensor has a simple form. Especially, these Weyl corrections could emerge naturally in quantum electrodynamics with the photon effective action that originates from one-loop vacuum polarization [37]. Although the Weyl corrections first appear as an effective description of quantum effects, extended theoretical models without the small coupling constant limit have been investigated for many physical motivations [38; 39; 40; 41; 42]. Recently, we researched the effects of Weyl corrections on the shadow of a static phantom black hole [43] and found that there are double shadows for a single black hole, because the natural lights can be separated into two kinds of linearly polarized light beams along different propagation paths. We also probe
the effects of Weyl corrections on the black hole image and its polarization distribution for a Schwarzschild black hole surrounded by an equatorial thin accretion disk [44]. The effects of Weyl corrections on strong gravitational lensing have been studied in the Schwarzschild black hole spacetime [45] and in the Kerr black hole spacetime [46]. However, in ref.[46], we considered only a simple case where photons are limited to the equatorial plane. The main reason is that in a rotating black hole spacetime the solutions describing the polarized photon motions are difficult to find in the general case. Thus, how Weyl corrections affect the shadow of a rotating black hole is still an open issue. In this paper, we make use of the small coupling approximation, which is physically justified, and obtain two solutions for the polarized photon motions. One of them corresponds to the ordinary rays, with the same propagation paths as in the case without Weyl corrections. The other is for the extraordinary rays, whose propagation paths depend on the corrections. With this special property of the extraordinary light rays, we can study the effects of Weyl corrections on the shadow of a rotating black hole.
The paper is organized as follows: In Sec.II, we briefly introduce the equations of motion for photons coupled to the Weyl tensor in the Kerr black hole spacetime and present two solutions for the polarized photon motions in the small coupling approximation. In Sec.III, we numerically present the Kerr black hole shadow cast by the extraordinary light rays with Weyl corrections and probe their effects on the shadow. Finally, we end the paper with a summary.
## II Equation of motion for photons coupled to Weyl tensor in a Kerr black hole spacetime
We first briefly review the equations of motion for photons interacting with the Weyl tensor in a Kerr black hole spacetime in the geometric optics approximation [47; 48; 49; 50; 37]. In the curved spacetime, the action containing the electromagnetic field with Weyl corrections can be expressed as
\[S=\int d^{4}x\sqrt{-g}\bigg{[}\frac{R}{16\pi G}-\frac{1}{4}\bigg{(}F_{\mu\nu}F ^{\mu\nu}-4\alpha C^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}\bigg{)}\bigg{]}, \tag{1}\]
where \(C_{\mu\nu\rho\sigma}\) is the Weyl tensor with the form
\[C_{\mu\nu\rho\sigma}=R_{\mu\nu\rho\sigma}-(g_{\mu[\rho}R_{\sigma]\nu}-g_{\nu[ \rho}R_{\sigma]\mu})+\frac{1}{3}Rg_{\mu[\rho}g_{\sigma]\nu}. \tag{2}\]
Here \(g_{\mu\nu}\) is the metric of the background spacetime and the brackets around indices denote the antisymmetric part. \(R_{\mu\nu\rho\sigma}\), \(R_{\mu\nu}\) and \(R\) are the Riemannian curvature tensor, Ricci tensor and Ricci scalar, respectively. \(F_{\mu\nu}\) is the usual electromagnetic tensor and the coupling constant \(\alpha\) has a dimension of length-squared. The coupling
term with the Weyl tensor modifies the Maxwell equation as
\[\nabla_{\mu}\bigg{(}F^{\mu\nu}-4\alpha C^{\mu\nu\rho\sigma}F_{\rho \sigma}\bigg{)}=0. \tag{3}\]
From the corrected Maxwell equation (3), we can get the equations of motion for the coupled photons by resorting to the geometric optics approximation, where the wavelengths of photons \(\lambda\) are assumed to be much smaller than a typical curvature scale, but larger than the electron Compton wavelength [47; 48; 49; 50; 37]. In this way, the electromagnetic field strength in Eq.(3) can be simplified as
\[F_{\mu\nu}=f_{\mu\nu}e^{i\theta}, \tag{4}\]
where \(f_{\mu\nu}\) is a slowly varying amplitude and \(\theta\) is a rapidly varying phase. This implies that the derivative term \(f_{\mu\nu;\lambda}\) is not dominant, so it can be neglected. The wave vector \(k_{\mu}=\partial_{\mu}\theta\) can be regarded as the usual photon momentum in quantum theories. Combining with the Bianchi identity
\[D_{\lambda}F_{\mu\nu}+D_{\mu}F_{\nu\lambda}+D_{\nu}F_{\lambda\mu}=0, \tag{5}\]
one can find that the form of the amplitude \(f_{\mu\nu}\) must be
\[f_{\mu\nu}=k_{\mu}a_{\nu}-k_{\nu}a_{\mu}, \tag{6}\]
where \(a_{\mu}\) is the polarization vector satisfying the condition \(k_{\mu}a^{\mu}=0\). Substituting Eqs.(4) and (6) into Eq. (3), we can obtain the equations of motion for photons with Weyl corrections as follows
\[k_{\mu}k^{\mu}a^{\nu}+8\alpha C^{\mu\nu\rho\sigma}k_{\sigma}k_{ \mu}a_{\rho}=0. \tag{7}\]
Thus, the Weyl corrections change the propagation of coupled photons in the background spacetime.
For a Kerr black hole, its metric has a form in the standard Boyer-Lindquist coordinates
\[ds^{2} = -\rho^{2}\frac{\Delta}{\Sigma^{2}}dt^{2}+\frac{\rho^{2}}{\Delta} dr^{2}+\rho^{2}d\theta^{2}+\frac{\Sigma^{2}}{\rho^{2}}\sin^{2}\theta(d\phi- \omega dt)^{2}, \tag{8}\]
with
\[\omega = \frac{2aMr}{\Sigma^{2}},\hskip 72.27pt\rho^{2}=r^{2}+a^{2}\cos^{2 }\theta,\] \[\Delta = r^{2}-2Mr+a^{2},\hskip 72.27pt\Sigma^{2}=(r^{2}+a^{2})^{2}-a^{2} \sin^{2}\theta\Delta. \tag{9}\]
Here the parameters \(M\) and \(a\) denote the mass and the spin parameter of the black hole, respectively. To build a local set of orthonormal frames in the Kerr black hole spacetime, one can resort to the vierbein fields defined by
\[g_{\mu\nu}=\eta_{ab}e^{a}_{\mu}e^{b}_{\nu}, \tag{10}\]
where \(\eta_{ab}\) is the Minkowski metric. The vierbein fields \(e^{a}_{\mu}\) have a form
\[e^{a}_{\mu}=\left(\begin{array}{cccc}\rho\frac{\sqrt{\Delta}}{\Sigma}&0&0&-\frac{\omega\Sigma}{\rho}\sin\theta\\ 0&\frac{\rho}{\sqrt{\Delta}}&0&0\\ 0&0&\rho&0\\ 0&0&0&\frac{\Sigma}{\rho}\sin\theta\end{array}\right), \tag{11}\]
and their inverse \(e^{\mu}_{a}\) are
\[e^{\mu}_{a}=\left(\begin{array}{cccc}\frac{\Sigma}{\rho\sqrt{\Delta}}&0&0&0 \\ 0&\frac{\sqrt{\Delta}}{\rho}&0&0\\ 0&0&\frac{1}{\rho}&0\\ \frac{\omega\Sigma}{\rho\sqrt{\Delta}}&0&0&\frac{\rho}{\Sigma\sin\theta}\end{array} \right). \tag{12}\]
With the notation for the antisymmetric combination of vierbeins [37; 47]
\[U^{ab}_{\mu\nu}=e^{a}_{\mu}e^{b}_{\nu}-e^{a}_{\nu}e^{b}_{\mu}, \tag{13}\]
the complete Weyl tensor in the Kerr black hole spacetime can be expressed in a simple form [47]
\[C_{\mu\nu\rho\sigma} = 2{\cal A}\bigg{(}U^{01}_{\mu\nu}U^{01}_{\rho\sigma}-U^{23}_{\mu \nu}U^{23}_{\rho\sigma}-U^{03}_{\mu\nu}U^{03}_{\rho\sigma}+U^{12}_{\mu\nu}U^{ 12}_{\rho\sigma}\bigg{)}+2{\cal B}\bigg{(}U^{02}_{\mu\nu}U^{02}_{\rho\sigma}-U ^{13}_{\mu\nu}U^{13}_{\rho\sigma}-U^{03}_{\mu\nu}U^{03}_{\rho\sigma}+U^{12}_{ \mu\nu}U^{12}_{\rho\sigma}\bigg{)} \tag{14}\] \[+ {\cal C}\bigg{(}U^{01}_{\mu\nu}U^{23}_{\rho\sigma}+U^{23}_{\mu \nu}U^{01}_{\rho\sigma}-U^{03}_{\mu\nu}U^{12}_{\rho\sigma}-U^{12}_{\mu\nu}U^{ 03}_{\rho\sigma}\bigg{)}+{\cal D}\bigg{(}-U^{02}_{\mu\nu}U^{13}_{\rho\sigma}- U^{13}_{\mu\nu}U^{02}_{\rho\sigma}-U^{03}_{\mu\nu}U^{12}_{\rho\sigma}-U^{12}_{\mu\nu}U^{ 03}_{\rho\sigma}\bigg{)}\] \[+ {\cal E}\bigg{(}U^{01}_{\mu\nu}U^{02}_{\rho\sigma}+U^{02}_{\mu \nu}U^{01}_{\rho\sigma}+U^{13}_{\mu\nu}U^{23}_{\rho\sigma}+U^{23}_{\mu\nu}U^{ 13}_{\rho\sigma}\bigg{)}+{\cal F}\bigg{(}U^{01}_{\mu\nu}U^{13}_{\rho\sigma}+U^{ 13}_{\mu\nu}U^{01}_{\rho\sigma}-U^{02}_{\mu\nu}U^{23}_{\rho\sigma}-U^{23}_{\mu \nu}U^{02}_{\rho\sigma}\bigg{)},\]
with
\[{\cal A} = \frac{Mr}{\rho^{6}\Sigma^{2}}(r^{2}-3a^{2}\cos^{2}\theta)[2(r^{2 }+a^{2})^{2}+a^{2}\Delta\sin^{2}\theta],\] \[{\cal B} = -\frac{Mr}{\rho^{6}\Sigma^{2}}(r^{2}-3a^{2}\cos^{2}\theta)[(r^{2 }+a^{2})^{2}+2a^{2}\Delta\sin^{2}\theta],\] \[{\cal C} = -\frac{aM\cos\theta}{\rho^{6}\Sigma^{2}}(3r^{2}-a^{2}\cos^{2} \theta)[2(r^{2}+a^{2})^{2}+a^{2}\Delta\sin^{2}\theta],\] \[{\cal D} = \frac{aM\cos\theta}{\rho^{6}\Sigma^{2}}(3r^{2}-a^{2}\cos^{2} \theta)[(r^{2}+a^{2})^{2}+2a^{2}\Delta\sin^{2}\theta],\] \[{\cal E} = -\frac{3a^{2}M\sqrt{\Delta}\cos\theta}{\rho^{6}\Sigma^{2}}(3r^{2 }-a^{2}\cos^{2}\theta)(r^{2}+a^{2})\sin\theta,\] \[{\cal F} = \frac{3aMr\sqrt{\Delta}}{\rho^{6}\Sigma^{2}}(r^{2}-3a^{2}\cos^{2} \theta)(r^{2}+a^{2})\sin\theta. \tag{15}\]
To obtain the equations of motion for coupled photons in a Kerr black hole spacetime, one can introduce three linear combinations of momentum components as in [37; 47]
\[l_{\nu}=k^{\mu}U^{01}_{\mu\nu},\hskip 28.452756ptn_{\nu}=k^{\mu}U^{02}_{\mu\nu}, \hskip 28.452756ptr_{\nu}=k^{\mu}U^{03}_{\mu\nu}, \tag{16}\]
together with the dependent combinations
\[p_{\nu}=k^{\mu}U^{12}_{\mu\nu}=\frac{1}{e^{0}_{t}k^{t}}\bigg{[}e^{1 }_{r}k^{r}n_{\nu}-e^{2}_{\theta}k^{\theta}l_{\nu}\bigg{]},\] \[m_{\nu}=k^{\mu}U^{23}_{\mu\nu}=\frac{1}{e^{0}_{t}k^{t}}\bigg{[}e^ {2}_{\theta}k^{\theta}r_{\nu}-(e^{3}_{t}k^{t}+e^{3}_{\phi}k^{\phi})n_{\nu} \bigg{]},\] \[q_{\nu}=k^{\mu}U^{13}_{\mu\nu}=\frac{1}{e^{0}_{t}k^{t}}\bigg{[}e^ {1}_{r}k^{r}r_{\nu}-(e^{3}_{t}k^{t}+e^{3}_{\phi}k^{\phi})l_{\nu}\bigg{]}. \tag{17}\]
These polarisation vectors are orthogonal to the wave vector \(k_{\nu}\). Contracting equation (7) with the vectors \(l_{\nu}\), \(n_{\nu}\), \(r_{\nu}\), respectively, and making use of the relationship (17), the equations of motion of photons with Weyl corrections (7) can be further simplified into a set of equations for the three independent polarisation components \(a\cdot l\), \(a\cdot n\), and \(a\cdot r\),
\[\left(\begin{array}{cc}K_{11}&K_{12}&K_{13}\\ K_{21}&K_{22}&K_{23}\\ K_{31}&K_{32}&K_{33}\end{array}\right)\left(\begin{array}{c}a\cdot l\\ a\cdot n\\ a\cdot r\end{array}\right)=0. \tag{18}\]
The coefficients \(K_{ij}\) are very complicated and we do not list them here (for details on the coefficients \(K_{ij}\), please refer to Eqs.(20)-(22) in [46]).
The necessary and sufficient condition for Eq.(18) to have non-zero solutions is that the determinant of its coefficient matrix is zero, i.e., \(|K|=0\). However, in the Kerr black hole spacetime, it is difficult to find a solution satisfying \(|K|=0\) in the general case, due to the complicated coefficients \(K_{ij}\). Here, we limit ourselves to the case where the coupling parameter \(\alpha\) is very small, which is physically justified. Retaining only the first-order term and ignoring higher-order terms, the determinant \(|K|\) can be expanded as a linear form in \(\alpha\)
\[|K|=\mathcal{K}^{2}[\mathcal{K}-8\alpha(\mathcal{C}+\mathcal{D})e^{1}_{r}k^{r}(e^{3}_{t}k^{t}+e^{3}_{\phi}k^{\phi})]+\mathcal{O}(\alpha^{2}), \tag{19}\]
with
\[\mathcal{K}=[-(e^{0}_{t})^{2}+(e^{3}_{t})^{2}]k^{t}k^{t}+(e^{1}_{r})^{2}k^{r}k ^{r}+(e^{2}_{\theta})^{2}k^{\theta}k^{\theta}+2e^{3}_{\phi}e^{3}_{t}k^{t}k^{ \phi}+(e^{3}_{\phi})^{2}k^{\phi}k^{\phi}. \tag{20}\]
Thus, in this small-\(\alpha\) approximation, there exist two non-zero solutions of Eq.(18), i.e.,
\[[-(e^{0}_{t})^{2}+(e^{3}_{t})^{2}]k^{t}k^{t}+(e^{1}_{r})^{2}k^{r} k^{r}+(e^{2}_{\theta})^{2}k^{\theta}k^{\theta}+2e^{3}_{\phi}e^{3}_{t}k^{t}k^{ \phi}+(e^{3}_{\phi})^{2}k^{\phi}k^{\phi}=0, \tag{21}\] \[[-(e^{0}_{t})^{2}+(e^{3}_{t})^{2}]k^{t}k^{t}+(e^{1}_{r})^{2}k^{r} k^{r}+(e^{2}_{\theta})^{2}k^{\theta}k^{\theta}+2e^{3}_{\phi}e^{3}_{t}k^{t}k^{ \phi}+(e^{3}_{\phi})^{2}k^{\phi}k^{\phi}-8\alpha(\mathcal{C}+\mathcal{D})\] \[\times e^{1}_{r}k^{r}(e^{3}_{t}k^{t}+e^{3}_{\phi}k^{\phi})=0. \tag{22}\]
The first solution (21) is independent of the coupling parameter \(\alpha\), whose behavior is similar to those of ordinary rays propagating in the anisotropic crystals. The second solution (22) depends on the coupling parameter
\(\alpha\), and behaves like the extraordinary rays in crystal optics. This means that Weyl corrections yield a phenomenon of birefringence in the spacetime. In other words, Weyl corrections split light rays in the Kerr spacetime into the ordinary rays and the extraordinary rays. In the following section, we will further study the shadow cast by the extraordinary rays (22) and probe the corresponding effects of the Weyl corrections.
Actually, the presence of the Weyl correction terms in the light cone condition (22) indicates that the propagation paths of the extraordinary light rays are non-geodesic in a Kerr spacetime. However, these photons can be regarded as moving along null geodesics in the spacetime described by an effective metric \(\gamma_{\mu\nu}\), i.e., \(\gamma^{\mu\nu}k_{\mu}k_{\nu}=0\)[51]. The effective metric for the coupled photons propagating along the extraordinary rays in a Kerr black hole spacetime can be expressed as
\[ds^{2}=\gamma_{tt}dt^{2}+\gamma_{rr}dr^{2}+\gamma_{\theta\theta}d \theta^{2}+\gamma_{\phi\phi}d\phi^{2}+2\gamma_{t\phi}dtd\phi+2\gamma_{tr}dtdr+ 2\gamma_{r\phi}drd\phi, \tag{23}\]
with
\[\gamma_{tt}=-\bigg{(}1-\frac{2Mr}{\rho^{2}}\bigg{)},\qquad\qquad \gamma_{rr}=\frac{\rho^{2}}{\Delta},\qquad\qquad\gamma_{\theta\theta}=\rho^{2},\] \[\gamma_{\phi\phi}=\frac{\Sigma^{2}\sin^{2}\theta}{\rho^{2}}, \qquad\qquad\gamma_{t\phi}=-\frac{2Mar\sin^{2}\theta}{\rho^{2}},\] \[\gamma_{tr}=-\frac{8\alpha M^{2}a^{2}r\sin\theta\cos\theta(3r^{2} -a^{2}\cos^{2}\theta)}{\rho^{6}\Sigma\sqrt{\Delta}},\] \[\gamma_{r\phi}=\frac{4\alpha Ma\Sigma\sin\theta\cos\theta(3r^{2} -a^{2}\cos^{2}\theta)}{\rho^{6}\sqrt{\Delta}}. \tag{24}\]
When \(a=0\), we find that the effective metric reduces to that of the usual Schwarzschild black hole and does not depend on the coupling parameter \(\alpha\). This means that, in the first-order linear approximation of the determinant \(|K|\) with respect to \(\alpha\), the extraordinary ray travels along the same path as the ordinary light ray, so there is no birefringence phenomenon in this case. Moreover, we also note that the propagation of rays limited to the equatorial plane is independent of the coupling, since the coupling-dependent metric functions \(\gamma_{tr}\) and \(\gamma_{r\phi}\) vanish at \(\theta=\frac{\pi}{2}\); this means the photon rings do not depend on the coupling in such a small-coupling case. A similar situation also appears when the light rays propagate along the rotation axis (\(\theta=0,\ \pi\)) of the black hole. Thus, in this small coupling approximation, the Weyl corrections do not yield a birefringence phenomenon if the light rays propagate along the rotation axis or in the equatorial plane of a rotating black hole.
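The vanishing of the off-diagonal corrections in these limits can be verified symbolically; the following is a minimal sympy sketch (our own check, transcribing the two coupling-dependent metric functions from Eq. (24)):

```python
import sympy as sp

r, a, M, alpha, theta = sp.symbols('r a M alpha theta', positive=True)
rho2 = r**2 + a**2 * sp.cos(theta)**2
Delta = r**2 - 2*M*r + a**2
Sigma = sp.sqrt((r**2 + a**2)**2 - a**2 * sp.sin(theta)**2 * Delta)

# gamma_tr and gamma_rphi as given in Eq. (24)
g_tr = -8*alpha*M**2*a**2*r*sp.sin(theta)*sp.cos(theta)*(3*r**2 - a**2*sp.cos(theta)**2) \
       / (rho2**3 * Sigma * sp.sqrt(Delta))
g_rphi = 4*alpha*M*a*Sigma*sp.sin(theta)*sp.cos(theta)*(3*r**2 - a**2*sp.cos(theta)**2) \
         / (rho2**3 * sp.sqrt(Delta))

# Both corrections vanish in the equatorial plane and in the non-rotating limit
print(g_tr.subs(theta, sp.pi/2), g_rphi.subs(theta, sp.pi/2))  # -> 0 0
print(g_tr.subs(a, 0), g_rphi.subs(a, 0))                      # -> 0 0
```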
The Hamiltonian of coupled photons moving along null geodesics in the effective spacetime (23) can be expressed as
\[H(x,p)=\frac{1}{2}\gamma^{\mu\nu}(x)p_{\mu}p_{\nu}=0. \tag{25}\]
Since the metric functions in (24) are independent of the coordinates \(t\) and \(\phi\), there exist two conserved quantities, i.e., the photon's energy \(E_{0}\) and the \(z\)-component of its angular momentum \(L_{z0}\). However, due to the existence of the \(drdt\) and \(drd\phi\) terms, the forms of \(E_{0}\) and \(L_{z0}\) are modified as
\[E_{0}=-p_{t}=-\gamma_{tt}\dot{t}-\gamma_{tr}\dot{r}-\gamma_{t\phi}\dot{\phi}, \quad L_{z0}=p_{\phi}=\gamma_{t\phi}\dot{t}+\gamma_{r\phi}\dot{r}+\gamma_{\phi \phi}\dot{\phi}. \tag{26}\]
With these two conserved quantities, one can obtain the equations of null geodesics
\[\dot{t} = \frac{\gamma_{\phi\phi}E_{0}+\gamma_{t\phi}L_{z0}+(\gamma_{tr}\gamma_{\phi\phi}-\gamma_{t\phi}\gamma_{r\phi})\dot{r}}{\gamma_{t\phi}^{2}-\gamma_{tt}\gamma_{\phi\phi}}, \tag{27}\] \[\dot{\phi} = \frac{\gamma_{t\phi}E_{0}+\gamma_{tt}L_{z0}+(\gamma_{tr}\gamma_{t\phi}-\gamma_{tt}\gamma_{r\phi})\dot{r}}{\gamma_{tt}\gamma_{\phi\phi}-\gamma_{t\phi}^{2}}, \tag{28}\] \[\ddot{r} = \frac{\gamma_{t\phi}^{2}-\gamma_{tt}\gamma_{\phi\phi}}{\gamma_{rr}(\gamma_{t\phi}^{2}-\gamma_{tt}\gamma_{\phi\phi})+\gamma_{r\phi}^{2}\gamma_{tt}+\gamma_{tr}^{2}\gamma_{\phi\phi}-2\gamma_{tr}\gamma_{t\phi}\gamma_{r\phi}}\bigg{[}\frac{1}{2}\bigg{(}\gamma_{tt,r}\dot{t}^{2}-\gamma_{rr,r}\dot{r}^{2}+\gamma_{\theta\theta,r}\dot{\theta}^{2}+\gamma_{\phi\phi,r}\dot{\phi}^{2}+2\gamma_{t\phi,r}\dot{t}\dot{\phi}-2\gamma_{tr,\theta}\dot{t}\dot{\theta}-2\gamma_{\theta\theta,\theta}\dot{r}\dot{\theta}-2\gamma_{r\phi,\theta}\dot{\theta}\dot{\phi}\bigg{)}+\frac{\gamma_{t\phi}\gamma_{r\phi}-\gamma_{tr}\gamma_{\phi\phi}}{\gamma_{t\phi}^{2}-\gamma_{tt}\gamma_{\phi\phi}}\bigg{(}\gamma_{tr,r}\dot{r}^{2}+\gamma_{tt,r}\dot{t}\dot{r}+\gamma_{tt,\theta}\dot{t}\dot{\theta}+\gamma_{tr,\theta}\dot{r}\dot{\theta}+\gamma_{t\phi,r}\dot{r}\dot{\phi}+\gamma_{t\phi,\theta}\dot{\theta}\dot{\phi}\bigg{)}+\frac{\gamma_{tr}\gamma_{t\phi}-\gamma_{tt}\gamma_{r\phi}}{\gamma_{t\phi}^{2}-\gamma_{tt}\gamma_{\phi\phi}}\bigg{(}\gamma_{r\phi,r}\dot{r}^{2}+\gamma_{r\phi,\theta}\dot{r}\dot{\theta}+\gamma_{t\phi,r}\dot{t}\dot{r}+\gamma_{t\phi,\theta}\dot{t}\dot{\theta}+\gamma_{\phi\phi,r}\dot{r}\dot{\phi}+\gamma_{\phi\phi,\theta}\dot{\theta}\dot{\phi}\bigg{)}\bigg{]}, \tag{29}\] \[\ddot{\theta} = \frac{1}{2\gamma_{\theta\theta}}(\gamma_{tt,\theta}\dot{t}^{2}+2\gamma_{tr,\theta}\dot{t}\dot{r}+\gamma_{rr,\theta}\dot{r}^{2}-2\gamma_{\theta\theta,r}\dot{r}\dot{\theta}-\gamma_{\theta\theta,\theta}\dot{\theta}^{2}+\gamma_{\phi\phi,\theta}\dot{\phi}^{2}+2\gamma_{t\phi,\theta}\dot{t}\dot{\phi}+2\gamma_{r\phi,\theta}\dot{r}\dot{\phi}). \tag{30}\]
The presence of the metric functions \(\gamma_{tr}\) and \(\gamma_{r\phi}\) means that the quantities \(\dot{t}\) and \(\dot{\phi}\) depend on \(\dot{r}\), so the motion of photons has some behaviors that differ from the non-coupling case. Thus, it is expected that the shadow of a Kerr black hole cast by the extraordinary rays with Weyl corrections should possess some new behaviors.
## III Shadow of Kerr black hole cast by the extraordinary rays with Weyl corrections
Since the null equations (27)-(30) are not variable-separable, we have to resort to the "backward ray-tracing" method [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] to simulate numerically the shadow of the Kerr black hole cast by the extraordinary rays with Weyl corrections. In this method, the light rays are assumed to evolve from the observer backward in time, and the position of each pixel in the final image is obtained by solving numerically the nonlinear differential equations (27)-(30). The shadow of the black hole in the observer's sky is determined by the pixels related to the light rays falling into the black hole. For the effective spacetime (23), the local basis of the observer \(\{e_{\hat{t}},e_{\hat{r}},e_{\hat{\theta}},e_{\hat{\phi}}\}\) can be expanded in the coordinate basis of the background spacetime \(\{\partial_{t},\partial_{r},\partial_{\theta},\partial_{\phi}\}\)
\[e_{\hat{\mu}}=e^{\nu}_{\hat{\mu}}\partial_{\nu}, \tag{31}\]
where the transformation matrix \(e^{\nu}_{\hat{\mu}}\) satisfies \(\gamma_{\mu\nu}e^{\mu}_{\hat{\alpha}}e^{\nu}_{\hat{\beta}}=\eta_{\hat{\alpha}\hat{\beta}}\), and \(\eta_{\hat{\alpha}\hat{\beta}}\) is the usual Minkowski metric. Since the effective spacetime (23) is asymptotically flat, one can conveniently choose a decomposition
\[e^{\nu}_{\hat{\mu}}=\left(\begin{array}{cccc}\zeta&\varepsilon&0&\gamma\\ 0&A^{r}&0&\varpi\\ 0&0&A^{\theta}&0\\ 0&0&0&A^{\phi}\end{array}\right), \tag{32}\]
where \(\zeta\), \(\varepsilon\), \(\gamma\), \(\varpi\), \(A^{r}\), \(A^{\theta}\), and \(A^{\phi}\) are real coefficients. Actually, the decomposition (32) is connected with a reference frame with zero axial angular momentum in relation to spatial infinity [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. Making use of the Minkowski normalization
\[e_{\hat{\mu}}e^{\hat{\nu}}=\delta^{\hat{\nu}}_{\hat{\mu}}, \tag{33}\]
we have
\[A^{r}=\sqrt{\frac{\gamma_{\phi\phi}}{\gamma_{rr}\gamma_{\phi\phi }-\gamma_{r\phi}^{2}}},\qquad A^{\theta}=\frac{1}{\sqrt{g_{\theta\theta}}}, \qquad A^{\phi}=\frac{1}{\sqrt{g_{\phi\phi}}},\qquad\zeta=\sqrt{\frac{\gamma_ {rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2}}{\cal H}},\] \[\varepsilon=\frac{\gamma_{t\phi}\gamma_{r\phi}-\gamma_{tr}\gamma_ {\phi\phi}}{\sqrt{(\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2})\cal H}}, \qquad\gamma=\frac{\gamma_{tr}\gamma_{r\phi}-\gamma_{t\phi}\gamma_{rr}}{ \sqrt{(\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2})\cal H}},\qquad\varpi=- \frac{\gamma_{r\phi}}{\sqrt{\gamma_{\phi\phi}(\gamma_{rr}\gamma_{\phi\phi}- \gamma_{r\phi}^{2})}}, \tag{34}\]
where
\[{\cal H}\equiv\gamma_{rr}(\gamma_{t\phi}^{2}-\gamma_{tt}\gamma_{\phi\phi})+ \gamma_{r\phi}^{2}\gamma_{tt}+\gamma_{tr}^{2}\gamma_{\phi\phi}-2\gamma_{tr} \gamma_{t\phi}\gamma_{r\phi}. \tag{35}\]
From Eq.(31), the locally measured four-momentum \(p^{\hat{\mu}}\) of a photon can be expressed as
\[p^{\hat{t}}=-p_{\hat{t}}=-e^{\nu}_{\hat{t}}p_{\nu},\qquad\qquad\qquad p^{\hat {i}}=p_{i}=e^{\nu}_{i}p_{\nu}. \tag{36}\]
Thus, the locally measured four-momentum \(p^{\hat{\mu}}\) in the effective spacetime (23) can be further written as
\[p^{\hat{t}} = \zeta E_{0}-\gamma L_{z0}-\varepsilon p_{r},\qquad\qquad p^{\hat{\phi}}=\frac{1}{\sqrt{g_{\phi\phi}}}p_{\phi},\] \[p^{\hat{\theta}} = \frac{1}{\sqrt{g_{\theta\theta}}}p_{\theta},\qquad\qquad\qquad p^{\hat{r}}=\sqrt{\frac{\gamma_{\phi\phi}}{\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2}}}\bigg{(}p_{r}-\frac{\gamma_{r\phi}}{\gamma_{\phi\phi}}p_{\phi}\bigg{)}. \tag{37}\]
As in refs.[14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25], we can obtain the celestial coordinates for the pixel corresponding to a light ray in the effective spacetime (23) as
\[x = -r_{obs}\frac{p^{\hat{\phi}}}{p^{\hat{r}}}=-r_{obs}\frac{\sqrt{\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2}}\,(\gamma_{t\phi}\dot{t}+\gamma_{r\phi}\dot{r}+\gamma_{\phi\phi}\dot{\phi})}{(\gamma_{tr}\gamma_{\phi\phi}-\gamma_{r\phi}\gamma_{t\phi})\dot{t}+(\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2})\dot{r}},\] \[y = r_{obs}\frac{p^{\hat{\theta}}}{p^{\hat{r}}}=r_{obs}\frac{\sqrt{\gamma_{\theta\theta}\gamma_{\phi\phi}(\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2})}\,\dot{\theta}}{(\gamma_{tr}\gamma_{\phi\phi}-\gamma_{r\phi}\gamma_{t\phi})\dot{t}+(\gamma_{rr}\gamma_{\phi\phi}-\gamma_{r\phi}^{2})\dot{r}}, \tag{38}\]
where \(r_{obs},\theta_{obs}\) are respectively the radial coordinate and polar angle of the observer.
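To illustrate how Eq. (38) maps a backward-traced ray onto a pixel, the following sketch (our own; the dictionary keys are hypothetical) evaluates the celestial coordinates from the effective-metric components and the coordinate velocities at the observer:

```python
import numpy as np

def celestial_xy(r_obs, gam, tdot, rdot, thdot, phidot):
    """Pixel coordinates (x, y) of Eq. (38); 'gam' holds the effective-metric
    components of Eq. (24) evaluated at the observer's position."""
    det2 = gam['rr'] * gam['phiphi'] - gam['rphi'] ** 2
    p_phi = gam['tphi'] * tdot + gam['rphi'] * rdot + gam['phiphi'] * phidot
    denom = (gam['tr'] * gam['phiphi'] - gam['rphi'] * gam['tphi']) * tdot + det2 * rdot
    x = -r_obs * np.sqrt(det2) * p_phi / denom
    y = r_obs * np.sqrt(gam['thth'] * gam['phiphi'] * det2) * thdot / denom
    return x, y
```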
In Fig.1, we present the shadows of a Kerr black hole cast by the extraordinary rays with Weyl corrections when the observer lies in the equatorial plane. As in refs.[14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], the total celestial sphere is divided into four quadrants painted in different colors (green, blue, red, and yellow). The grids of longitude and latitude lines are marked with adjacent brown lines separated by \(10^{\circ}\). The distributions of the color regions and the shapes of the longitude and latitude lines in the figures reflect the bending of light in the strong field regime near the black hole. The white ring in each figure, which is determined by light from a reference spot lying on the line between the black hole and the observer, provides a direct demonstration of the Einstein ring. The black parts denote the black hole shadows. Here, we probe the effects of Weyl corrections in electrodynamics on the shadow for a rapidly rotating black hole with \(a=0.998\). The reason for selecting a black hole with high spin parameter is that the corrections to the effective metric (23) for the extraordinary rays in the Weyl-corrected electrodynamics depend on the product of the parameters \(\alpha\) and \(a\), which makes the corrections negligible in the slowly rotating case. From Fig.1, we find that a positive coupling parameter \(\alpha\) results in a weak stretching in the vertical direction (i.e., \(y\)-axis direction) for the part of the black hole shadow with \(x<0\), but a weak squeezing along the \(y\)-axis direction for the part with \(x>0\). The effects of a negative \(\alpha\) on the black hole shadow are diametrically opposed. These properties are also further shown in Fig. 2.
The well-known "NHEK line" is a vertical line segment on the edge of the shadow of a rapidly rotating Kerr black hole. It is the brightest line in the image of a rapidly rotating Kerr black hole, and its length is regarded as carrying some characteristic information about the black hole [52; 53; 54; 55; 56]. For the selected spin parameter \(a=0.998\), the NHEK line lies at the position \(x=-1.99977\). In Fig.3, we present the change of the length of the NHEK line with the Weyl coupling parameter \(\alpha\). A negative \(\alpha\) decreases the length of the NHEK line. Especially, the length of the NHEK line becomes zero at \(\alpha=-0.3\), which means the black hole shadow
Figure 1: Shadows of a Kerr black hole cast by the extraordinary rays for different coupling parameters \(\alpha\) with fixed \(a=0.998\). Here we set the mass parameter \(M=1\), \(r_{obs}=30M\) and \(\theta_{obs}=\pi/2\). The figures from left to right correspond to \(\alpha=-0.3\), \(0\) and \(0.3\), respectively.
has a cusp shape in this case. For positive \(\alpha\), we find that the length of the NHEK line first increases with \(\alpha\), but with further increasing \(\alpha\) it becomes, almost everywhere, a decreasing function of \(\alpha\). Finally, we find no self-similar fractal structure in the black hole shadow. This may be explained by the fact that, in the small coupling case, the Weyl corrections are too weak to break the Kolmogorov-Arnold-Moser tori composed of photon orbits in phase space. Therefore, there is no chaotic motion of the coupled photons, although their equations of motion (27)-(30) are generally not variable-separable.
## IV Summary
We have investigated the motion of photons coupled to the Weyl tensor in a Kerr black hole spacetime in the small coupling approximation. Our results show that Weyl corrections yield a phenomenon of birefringence in the spacetime, where the propagation of the ordinary rays is independent of the coupling parameter \(\alpha\), while the propagation of the extraordinary rays depends on the Weyl corrections. The effects of Weyl corrections on the extraordinary rays depend on the product of the parameters \(\alpha\) and \(a\). This leads to the absence of any
Figure 3: Length of the NHEK line in the black hole shadow cast by the extraordinary rays due to Weyl corrections. Here we set \(M=1\), \(a=0.998\), \(r_{obs}=30M\) and \(\theta_{obs}=\pi/2\).
Figure 2: Boundaries of the Kerr black hole shadows cast by the extraordinary rays arising from the coupling between the photon and the Weyl tensor. Here we set \(M=1\), \(a=0.998\), \(r_{obs}=30M\) and \(\theta_{obs}=\pi/2\).
birefringence phenomenon in the non-rotating case, because each extraordinary ray travels along the same path as each ordinary one. Moreover, the birefringence phenomena disappear when the propagation of the rays is limited to the equatorial plane or to the rotation axis of the black hole. We also probe the effects of Weyl corrections on the black hole shadow cast by the extraordinary rays in the rapidly rotating case. It is shown that a positive coupling parameter \(\alpha\) results in a weak stretching (or squeezing) in the vertical direction (i.e., \(y\)-axis direction) for the part of the black hole shadow with \(x<0\) (or \(x>0\)). The effects of a negative \(\alpha\) on the black hole shadow are diametrically opposed. We also probe the change of the length of the NHEK line with the Weyl coupling parameter \(\alpha\). A negative \(\alpha\) decreases the length of the NHEK line, while a positive one first increases its length but eventually decreases it. Finally, we find no self-similar fractal structure in the black hole shadow in this small coupling approximation.
## V Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants No. 12275078, 11875026, 12035005, and 2020YFC2201400.
| |
2302.14235 | Ntuple Wizard: An Application to Access Large-Scale Open Data from LHCb | Making the large data sets collected at the Large Hadron Collider (LHC)
accessible to the world is a considerable challenge because of both the
complexity and the volume of data. This paper presents the Ntuple Wizard, an
application that leverages the existing computing infrastructure available to
the LHCb collaboration in order to enable third-party users to request specific
data. An intuitive web interface allows the discovery of accessible data sets
and guides the user through the process of specifying a configuration-based
request. The application allows for fine-grained control of the level of access
granted to the public. | Christine A. Aidala, Christopher Burr, Marco Cattaneo, Dillon S. Fitzgerald, Adam Morris, Sebastian Neubert, Donijor Tropmann | 2023-02-28T01:34:54 | http://arxiv.org/abs/2302.14235v2 | # Ntuple Wizard: an application to access large-scale open data from LHCb
###### Abstract
Making the large data sets collected at the Large Hadron Collider (LHC) accessible to the world is a considerable challenge because of both the complexity and the volume of data. This paper presents the Ntuple Wizard, an application that leverages the existing computing infrastructure available to the LHCb collaboration in order to enable third-party users to request specific data. An intuitive web interface allows the discovery of accessible data sets and guides the user through the process of specifying a configuration-based request. The application allows for fine-grained control of the level of access granted to the public.
**Keywords:** Open Data, Open Access, LHCb, LHC, CERN, HEP, Outreach
## 1 Introduction
In an increasingly diverse research landscape, management and curation of public data are becoming critical components of transdisciplinary science. Keys to the realization of an open research ecosystem that adds scientific value have been identified in the FAIR principles of scientific data management and stewardship [1]. Making data Findable, Accessible, Interoperable, and Reusable, however, requires a considerable amount of tooling and infrastructure.
A common problem, which is acute for data in high-energy physics but increasingly an issue in other fields as well, is the sheer size of data sets stored in custom file formats. For large-scale experimental facilities, such as the LHC at the European Organization for Nuclear Research (CERN), the data sets are so large that even access by the directly involved scientists has to be centrally managed. As an example, the LHCb data collected in the years 2011-12, corresponding to \(\sim 3\) fb\({}^{-1}\) of proton-proton collisions, amount to a volume of 900 TB. This volume only refers to the already preprocessed data available to members of the collaboration and scheduled for release to the public. For the purpose of processing these data, extensive computing infrastructure has been set up by the countries participating in this type of research [2]. Replicating such an infrastructure to allow the public to handle the data would not
only require dedicated expert knowledge, but it would also duplicate existing facilities. On the other hand, any individual research conducted on a typical LHC data set will often only make use of a tiny portion of the full data, filtered and selected according to the requirements of the respective research question. It is therefore natural to provide the public with FAIR access to those highly selective subsamples.
In the following, an application is presented that exposes a data query service to allow the public to request sub-samples of data collected and published by the LHCb experiment. The samples are delivered as ROOT Ntuples [3], a data format that requires no special LHCb-specific software to read and for which converters to other standard file formats exist. We call the application the Ntuple Wizard.
The application interface guides users with basic knowledge in particle physics through the process of discovering the available data and formulating a useful query. The queries can be processed by the existing data production infrastructure, and results will be delivered through the CERN Open Data Portal [4]. By splitting the data request into the construction of a data query and subsequent processing of the query on the internal infrastructure, the LHCb collaboration retains fine-grained control over access to the data. Crucially, this system protects the compute infrastructure from attacks by malicious code injection.
### Accessible open data
In 2020, the LHC experiments at CERN adopted a new Open Data Policy [5], the scope of which expanded in 2022 to an Open Science Policy [6]. These documents define the commitments of CERN to make the data collected at the LHC, at several levels of complexity, publicly available [7]:
_Level 1_ Published results -- this can include tables and figures but also preprocessed Ntuples or binned and unbinned fit likelihood functions.
_Level 2_ Outreach and education -- usually in the form of highly preprocessed Ntuples.
_Level 3_ Reconstructed data -- these data have been preprocessed to derive physics objects, such as charged particle candidates, photons, or particle jets. Reconstructed data may or may not be corrected for detector effects, such as efficiency and resolution.
_Level 4_ Raw data -- the basic quantities recorded by the experimental instruments.
Both Level 1 and 2 data are considered to be highly processed, abstracted, and manageable using commonly available computers. Level 4 raw data will not be made available due to practical reasons concerning data size but also detector-specific information needed for the interpretation of these data. This leaves Level 3 data as the most versatile and basic data set which will be publicly accessible.
All LHC experiments have long and intricate data reconstruction pipelines, which yield several intermediate output data formats. During a pipeline, the raw data are converted to physical objects such as charged particle trajectories, jets, and vertices. Furthermore, the raw data are classified and filtered to obtain samples enriched in interesting signatures.
Figure 1 shows an overview of the data processing pipeline in LHCb as it has been used during LHC data-taking Runs 1 and 2 (2011-18). The various steps of the pipeline are outlined further in the LHCb computing technical design reports [8, 9]. Level 3 data have been defined as the output of the _stripping_ step. The stripping consists of a large number of selection algorithms called _lines_, which are designed to filter the data and sort events into several collections, which are called _streams_. Streams are defined according to common physics signatures and aim to collect selections with significant overlaps into a common set of files, to reduce duplication of the data. The LHCb data organization is discussed in more detail in Appendix A, including a list of streams available in Runs 1 and 2.
The stripping selections are based on the concept of physics _candidates_. A candidate refers to a set of data matching a particular physics signature. In most cases, this signature will be a particular particle decay, for example \(B^{+}\to\bar{D}^{0}\pi^{+}\) with the subsequent decay \(\bar{D}^{0}\to K^{+}\pi^{-}\), where \(B,D,K,\) and \(\pi\) mesons are the lightest hadrons containing \(b,c,s,\) and \(u/d\) quarks respectively. Such cascading decays are represented as tree-like data structures, where the nodes represent (intermediate) particles and the edges indicate a parent-child relationship in the
decay. These data structures are referred to as _decay trees_. The root particle of the decay tree (the \(B^{+}\) in our example) is called its _head_. Stripping selections attempt to find sets of physics objects in the reconstructed LHCb data, which match the desired decay tree and any additional criteria that might be applied to distinguish the intended signal process from background. Typical selection criteria include kinematic variables, vertex and track reconstruction qualities, and particle identification variables. Some stripping lines rely on multivariate classifiers to combine several observables into a single powerful selection criterion. The output of this procedure is collections of decay candidates specified by their particular decay trees in a custom LHCb-specific data format.
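To make the tree structure concrete, here is a minimal sketch (our own; LHCb selection code expresses such trees as textual decay descriptors, but the nested form below is only an illustration):

```python
# The B+ -> D~0 pi+ example, with D~0 -> K+ pi-, as a nested (particle, children) tree
decay_tree = ("B+", [
    ("D~0", [("K+", []), ("pi-", [])]),  # intermediate particle with its children
    ("pi+", []),
])

def head(tree):
    """The root particle of a decay tree is called its head."""
    return tree[0]

print(head(decay_tree))  # -> B+
```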
It is important to note that candidates are distinct from the concept of _events_ in the LHCb data processing. An event is defined during the data acquisition and refers to a particular time window in which collisions can occur. Several proton-proton collisions can happen during this time window, and in principle, it can happen that several candidates for a particular decay are identified for a single collision. In such cases, relevant quantities (related to vertex reconstruction and flight distances) can be computed for every primary vertex (e.g. collision point) in the event.
In order to convert these data into a framework-independent format, a useful concept is the aforementioned Ntuple. The idea of a Ntuple is simple: each candidate is described by a tuple of variables, i.e. physical observables of interest measured on the particular candidate, or referring to the global event in which the candidate was found. A data set consists of \(N\) such tuples, much like a simple CSV file. Ntuples are saved in ROOT files [3] and only basic data types are allowed for the variables. As a small complication, in some instances the variables can actually be arrays of basic data types. In such cases, the Ntuple Wizard provides the necessary documentation for the interpretation.
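Because the delivered files are plain ROOT Ntuples, they can be read without any LHCb software, for example with the open-source uproot package; a minimal sketch follows (file, tree and branch names are hypothetical):

```python
import uproot

# Open a delivered Ntuple and read a few candidate-level branches
with uproot.open("B2D0Pi_ntuple.root") as f:
    tree = f["DecayTree"]                        # one tuple (row) per candidate
    arrays = tree.arrays(["Bp_M", "Bp_PT"], library="np")

print(arrays["Bp_M"][:5])  # e.g. reconstructed B+ masses of the first candidates
```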
### Principle of Ntuple creation and the Ntuple Wizard
Both the stripping as well as the Ntuple-making step in Fig. 1 are handled by DaVinci [8, 9, 10], an LHCb application for event selection and data analysis using the Gaudi framework [8, 9, 11]. DaVinci is configured via Python scripts and used to process entire data sets with batch processing. Both the Python configuration as well as the batch production system are intentionally hidden from users of the Ntuple Wizard for security reasons.
The DaVinci application provides access to a number of algorithms that can be combined in sequence for event selection and processing. In order to produce a Ntuple, the user has to specify which variables should appear in the output data. This Ntuple configuration is handled by an algorithm named _DecayTreeTuple_, in which variables are registered through the use of so-called _TupleTools_ and _LoKi functors_. A large collection of those tools and functors is available for the user to choose from. In general, a TupleTool will add a set of variables to the Ntuple, while a LoKi functor usually computes a single number. The _LoKi::Hybrid::TupleTool_ can be used to write the output of functors into the tuple. Functors can be combined with standard arithmetic and
Figure 1: LHCb data flow in Runs 1 and 2. The output of the stripping step will be made public through the CERN Open Data Portal [4].
logic operations, providing a flexible and powerful system to compute derived quantities. A list of important available tools is presented in Appendix B.
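To give a flavour of the configuration that the Wizard ultimately produces, a DecayTreeTuple setup in DaVinci conventionally looks like the following sketch (indicative only, and runnable only inside a DaVinci environment; the tuple name, input location and functor choices are hypothetical):

```python
from Configurables import DecayTreeTuple
from DecayTreeTuple.Configuration import *  # enables addTupleTool()

dtt = DecayTreeTuple("TupleB2D0Pi")
dtt.Inputs = ["/Event/Bhadron/Phys/B2D0PiExampleLine/Particles"]  # hypothetical line
dtt.Decay = "[B+ -> ^(D~0 -> ^K+ ^pi-) ^pi+]CC"  # '^' marks particles to be tupled

# TupleTools each add a whole group of variables per marked particle
dtt.ToolList += ["TupleToolKinematic", "TupleToolGeometry", "TupleToolPid"]

# LoKi functors write single derived quantities via LoKi::Hybrid::TupleTool
loki = dtt.addTupleTool("LoKi::Hybrid::TupleTool/LoKiVars")
loki.Variables = {"ETA": "ETA", "DOCAMAX": "DOCAMAX"}
```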
Figure 2 shows an overview of the Ntuple Wizard architecture, the core functionality of which is the configuration of DaVinci. The metadata and documentation describing the available data, pre-selections, as well as available selection operations are generated from the original provenance traces of the data and the stripping selection code. The web interface presents this metadata and documentation to the user in a pedagogical way that facilitates data discovery and formulation of the query. The query to the data has two principal parts: Data set discovery and Ntuple configuration. First, the application allows the user to select from the available predefined stripping selections, data-taking periods, and conditions. In the second step, the user defines what quantities should be computed and written to the output Ntuple. Standard tools for the computation of typical quantities, such as kinematic variables, particle identification (PID) variables, etc., are available. The query formulated by the user is stored in a set of configuration files. These files can be converted into a Python configuration compatible with the internal LHCb Analysis Productions system [12]. This conversion and the final submission of the query to the compute infrastructure are handled through an LHCb Analysis Productions manager.
### Security considerations
Accepting arbitrary external code to run on the LHCb computing resources has obvious unacceptable security risks. Therefore, the Ntuple Wizard is designed to generate the configuration in a pure data-structure format. As shown in Figure 2, the configuration of the query is captured in YAML files, which can be downloaded and submitted to an LHCb Analysis Productions manager for further processing.
## 2 Metadata and documentation acquisition
In order to facilitate the core functionality of the Ntuple Wizard -- namely data set discovery (Sec. 4) and algorithm configuration (Sec. 5) -- metadata and documentation from several sources are required. In particular, the application needs to know what types of decays can be queried and what tools are available to compute derived quantities of interest about these candidates.
Since these metadata are unchanging, and providing direct access to the various sources requires authentication and introduces more points of failure, the metadata are collated and served as static files over HTTP. No additional access to the LHCb code or database is needed by the Ntuple Wizard once it has been deployed.
The sources of metadata can be grouped into two coarse categories: the LHCb software stack and the LHCb database. Metadata are acquired from the LHCb software stack in two ways. The first is from the Gaudi Python interface; particularly under the DaVinci application environment. Metadata about the configuration interface of each TupleTool are extracted from DaVinci. Details of the stripping lines, including the chain of selection algorithms that define them, are extracted from the DaVinci versions used in the corresponding stripping campaigns.
The process of building decay candidates in a stripping line often involves a combination of many algorithms from the LHCb selection framework, which combine particles, impose selection requirements, perform PID substitution, and build final-state particle candidates from trajectories of charged particles and calorimeter clusters. The algorithms can be related to each other through their input and output locations. The full list of decays (including all sub-decays) must be inferred by traversing the 'dependency tree' of the selection algorithms. This is performed using custom code during metadata acquisition.
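As an illustration, the following is a minimal sketch of such a traversal; the `producers` mapping and the record fields are hypothetical stand-ins for the actual internal metadata, not the LHCb code itself.

```python
def decays_for(location, producers, seen=None):
    """Collect the decay (and sub-decay) descriptors reachable from an output
    location by walking the input/output links of the selection algorithms.
    `producers` maps an output location to the algorithm record filling it."""
    seen = set() if seen is None else seen
    if location in seen:                     # guard against revisiting locations
        return []
    seen.add(location)
    algo = producers.get(location)
    if algo is None:                         # a terminal input, e.g. charged tracks
        return []
    decays = list(algo.get("decays", []))    # descriptors built by this algorithm
    for inp in algo.get("inputs", []):       # recurse into the dependency tree
        decays += decays_for(inp, producers, seen)
    return decays
```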
The second, more indirect way is from the LHCb Doxygen pages, which themselves are generated from the source code of the LHCb software stack. The latest Doxygen pages for Run 1 or Run 2 DaVinci versions are used to extract the documentation for each TupleTool and LoKi functor. A campaign to improve the Doxygen documentation at its source was undertaken during the development of the Ntuple Wizard.
The LHCb database provides metadata about the centrally managed data sets, which is necessary to configure the Ntupling productions as explained above. In order not to duplicate effort,
a common code base is employed to extract metadata from the LHCb database for both the Ntuple Wizard and the CERN Open Data Portal.
## 3 User interface
The user interface consists of a sequence of dialogues that guide the user through the configuration steps. It is designed as a client-side dynamic web page that reads the metadata acquired at deployment time and served as static files (see Sec. 2).
Since users of LHCb open data do not, in general, have access to the same support network of experienced users and developers enjoyed by LHCb collaboration members, a key design element of the Wizard is to provide the necessary documentation for a novice user to complete each step of the configuration process.
The existing documentation of DaVinci [8, 9, 10] is fragmented across several sources (Twiki [13], the Starterkit [14], Doxygen [15] and the source code itself), so where possible, the Wizard pulls text from each of these disparate sources and renders it in the relevant context within the user interface.
There are two main steps to formulate a query using the Ntuple Wizard: Dataset discovery and Ntuple configuration. These steps are explained in the following.
## 4 Dataset discovery and production configuration
The available data contain a wide range of stripping selections, data-taking periods, and running conditions. The **Production configuration** dialogue of the Ntuple Wizard guides the user through the selection of the desired subsets. The interface allows the selection of several decays to be processed simultaneously as part of one query. For each decay, a separate Ntuple will be produced.
### Discovering available candidate decays
In the **Decay search** dialogue, the Ntuple Wizard presents a list of all decays selected by the stripping, accompanied by decay descriptors in LoKi and LaTeX formats, information about which stripping lines build them, as well as 'tags' that can be used to filter different types of decays. Decays are searchable through various filters, including the identity or properties of the parent particle and decay products, whether the candidates are built by a specific stripping line, and the aforementioned tags. An example of the decay search is shown in Figure 3. The selected candidate of interest is highlighted in blue, and the collection was narrowed down from the list of all possible decays by using the filters and tags at the top of the page. The 'none of' option of the tags drop-down menu is chosen by default, indicating that decays with the displayed tags are hidden from the list of selectable decays. The tags 'charge-violating' and 'undefined-unstable', corresponding to decays that violate charge conservation and decays that contain unstable particles without defined decays, respectively, are hidden by default. If the user wishes to instead isolate decays that meet the criteria of a given tag, a different option can be selected from the 'tags' drop-down menu. It is possible to select several decays for further processing at this stage.

Figure 2: Architecture of the Ntuple Wizard.
### Stripping line and data set selection
Once a decay is selected by the user, all corresponding stripping lines and data sets from the various running periods are listed, and the desired combination(s) can be selected. The case can arise where the same stripping line shows up in multiple stripping versions within the same dataset (stream, running year, and magnet polarity). These are rendered as separate options in the dataset selection drop-down menu of the Ntuple Wizard. For a given decay, it is recommended to choose only one dataset for each magnet polarity within a given running year, and to use the most recent stripping version in the case of duplicates. The data organization of LHCb is elaborated on in Appendix A, including a table of running years, as well as corresponding collision energies and stripping versions.
Links to documentation about each stripping line including selection algorithms that went into building the decay candidates are displayed to the user to guide them in choosing the most suitable stripping line and data stream for their physics interest. Figure 4 shows an example of the production configuration page, where an available stripping line and data set have been chosen from lists of all possibilities corresponding to the selected decay channel. The blue question mark button contains links to the aforementioned stripping documentation. At this point, the query is specified up to deciding what information to write into the Ntuple.
## 5 Ntuple configuration
The **DecayTreeTuple configuration** dialogue is designed to guide the user through customization of the quantities written to the Ntuple for the selected candidates. For each decay, a separate DecayTreeTuple has to be configured. Care should be taken to name the Ntuples appropriately. The Ntuple Wizard requires a unique name for each Ntuple.
Selected decay trees are visually represented as graphs, where each physics object (e.g. particle) is represented by a node as shown in the screenshots in Figure 5. The user can interact with this graph by selecting one or multiple nodes at a time and determining which TupleTools will be added to each node, which in turn determines which quantities are saved to the Ntuple. A list is rendered on screen depending on the selected node(s), each element of which corresponds to a selected TupleTool, with buttons for configuring and removing the tool. The TupleTool configuration interface includes links to relevant documentation about the tool, including lists of quantities written by the tool where available. Each node in the graph comes with the standard set of TupleTools for LHCb analyses, but more will often be needed depending on the particular physics interests of the user. Furthermore, any added tool will come with the standard configuration, which can be further modified if the user desires. A custom set of standard LoKi variables and functions of these variables can also be saved to the Ntuple for each node, using the _LoKi::Hybrid::TupleTool_. Appendix B contains a brief description of the standard set of TupleTools included with each node on the graph, as well as other useful TupleTools for physics analysis. Figure 5 shows an example of the configurable graph corresponding to the selected candidate shown in Figures 3 and 4, as well as a list of TupleTools corresponding to the entire decay candidate (top), and particular nodes selected on the graph (bottom). It can be seen from the figure that nodes can also be selected through the categories shown below the graph and that TupleTools can be added, removed, or configured for each node or grouping of nodes.
Figure 6 shows an example of the user interface for configuring TupleTools, with the particular example showing _TupleToolTISTOS_, which saves trigger information to the Ntuple. It can be seen at the bottom how relevant information is provided.
### Configuration output
Figure 7 shows an example of the output YAML file used to configure the DecayTreeTuple algorithm, populated via the configurations captured in Figs. 5 - 6; the tools, groups and branches keys specify which TupleTools, and therefore which information, will be saved to the Ntuple. The top-level key tools contains a list of TupleTool configurations, from which the parsing functions create and configure TupleTool algorithms attached to the DecayTreeTuple itself, which will thus write either particle-level information about the decay or event-level information, depending on the class of the TupleTool. The keys branches and groups themselves contain lists of dictionaries whose keys specify particles and have their own tools lists which are used similarly to attach TupleTool algorithms to the specified particle(s) in the decay tree. Note that groups differs from branches in that it specifies multiple particles to be looped over and have identically configured TupleTool algorithms attached.
Figure 3: Example of the decay candidate search function of the Ntuple Wizard.

Figure 4: Example of the data set selection and production configuration step of the Ntuple Wizard.
Figure 5: Example of an interactive graph used to configure DecayTreeTuple, with selected TupleTools displayed for both the entire candidate (top) and selected nodes (bottom).
Figure 6: Example of the configuration interface of a TupleTool within the Ntuple Wizard, (in particular, _TupleToolTISTOS_ for saving trigger information), including links to relevant documentation at the bottom of the modal.
Figure 7: Output of Btree.yaml, the data file used to configure the DecayTreeTuple algorithm.
```yaml
branches:
  D_0:
    particle: D^0
    tools: []
  Kplus:
    particle: K+
    tools: []
  piminus:
    particle: pi-
    tools: []
  piplus:
    particle: pi+
    tools: []
groups:
  Kplus,piminus:
    particles:
      - K+
      - pi-
    tools:
      - TupleToolTISTOS:
          ExtraName: ''
          Verbose: false
          MaxPV: 100
          VerboseL0: false
          VerboseHlt1: false
          VerboseHlt2: false
          VerboseStripping: false
          FillL0: true
          FillHlt1: true
          FillHlt2: true
          FillStripping: false
          TriggerList: []
          Hlt1TriggerTisTosName: Hlt1TriggerTisTos
          Hlt2TriggerTisTosName: Hlt2TriggerTisTos
          L0TriggerTisTosName: L0TriggerTisTos
          PIDList: []
          TopParticleOnly: false
          Hlt1Phys: >-
            Hlt1(?!ODIN)(?!L0)(?!Lumi)(?!Tell1)(?!MB)(?!NZS)(?!Velo)(?!BeamGas)(?!Incident).*Decision
          Hlt2Phys: >-
            Hlt2(?!Forward)(?!DebugEvent)(?!Express)(?!Lumi)(?!Transparent)(?!PassThrough).*Decision
          TIS: true
          TOS: true
          TUS: false
          TPS: false
name: DecayTreeTuple/Btree
```
### Future developments
It is planned to extend the current functionality of the Ntuple Wizard by including the ability to create custom candidates from standard collections of LHCb particles. Another important planned addition is the ability to configure custom jet reconstruction. Ideally, support will be included for the full set of algorithms available in DaVinci for data analysis and event/candidate selection as resources allow.
As of Run 3, which started in 2022, the majority of the filtering and preselection of the data will be done in real time within the LHCb high-level trigger (HLT). In this architecture, the data will be fully reconstructed online and the final preselection algorithms will run in the HLT. Offline preselections will be feasible for a subset of the events. In both cases the output will have the same level of abstraction as the output of the stripping, allowing for a relatively simple adaptation of the Ntuple Wizard once the Run 3 data are made public.
## 6 Request submission and execution
Once the candidate(s) of interest, data set(s), and information to be saved in the Ntuple(s) are specified, and a name and email address have been provided for the production, a ZIP format file containing all relevant output files for the data query can be downloaded (as shown in Figure 8) and submitted to an LHCb Analysis Productions manager.
Requests for Ntuple creation are handled using the Analysis Productions package. The files describing a new request are committed to a repository hosted on the CERN GitLab [16], and a merge request is created once they are ready for review. The Continuous Integration feature of GitLab is used to submit test productions to LHCbDIRAC [8, 9], which automatically processes a small fraction of the data when the remote repository is updated.
Once the request is submitted, it is handled by the LHCbDIRAC production system. A production defines how a dataset is to be processed, and LHCbDIRAC will launch and manage computing jobs until the dataset is fully processed. Productions are defined in 'steps' that specify which application to run and which configuration files to read, and may be chained together such that the output of the first step is the input to the second, etc. The info.yaml file produced by the Ntuple Wizard defines one production per dataset, each consisting of a single DaVinci step.
Within the production jobs, DaVinci is configured by functions defined in an external Python module according to the YAML files produced by the Ntuple Wizard. The data structure configured in Section 5 and displayed in Figure 7 is traversed, and the configurable properties of the DecayTreeTuple algorithm are assigned the corresponding values.
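As an illustration, here is a minimal sketch of how such a traversal could look; this is not the actual LHCb production code, and `dtt` together with its `add_tool` and `branch` helpers are hypothetical stand-ins for the internal configuration API.

```python
import yaml

def configure_decay_tree_tuple(dtt, config_path):
    """Copy the Wizard's YAML settings onto a DecayTreeTuple-like object.
    `dtt`, `dtt.add_tool`, and `dtt.branch` are hypothetical stand-ins for
    the internal LHCb configuration helpers."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    def attach_tools(target, tool_list):
        # Each list entry maps one TupleTool name to its configurable properties.
        for entry in tool_list or []:
            for tool_name, properties in entry.items():
                tool = target.add_tool(tool_name)       # hypothetical helper
                for prop, value in (properties or {}).items():
                    setattr(tool, prop, value)          # assign the property value

    attach_tools(dtt, cfg.get("tools"))                 # tools on the tuple itself
    for branch in cfg.get("branches", {}).values():     # per-particle tools
        attach_tools(dtt.branch(branch["particle"]), branch.get("tools"))
    for group in cfg.get("groups", {}).values():        # one config, many particles
        for particle in group["particles"]:
            attach_tools(dtt.branch(particle), group.get("tools"))
```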
After the Analysis Production jobs are complete, the produced Ntuples will be delivered to the CERN Open Data Portal for retrieval.
## 7 Summary
Providing public access to the large data sets at the LHC is a significant technical challenge, but it is becoming increasingly important for the longevity of high-energy physics in order to optimize acquired knowledge from the collected data. The volume and complexity of the data collected at LHCb make providing direct access to reconstructed (Level 3) data suitable for physics research difficult, motivating the design of the Ntuple Wizard, where users can submit queries to obtain skimmed data samples (Ntuples) of the reconstructed data suitable for their physics interests. The Ntuple Wizard is a web-based application that intuitively guides the user through specifying a query, from discovering a data set from a physics candidate (e.g. decay) of interest, to configuring the information to be saved in the output Ntuple. The output of the Ntuple Wizard is a pure data structure (YAML) format, which is to be submitted to an LHCb Analysis Productions manager so it can be parsed internally to provide the necessary Python scripts needed to configure the DaVinci application. The Ntuples will ultimately be delivered to the CERN Open Data Portal for retrieval.
## Appendix A LHCb Data Organization
Table 1 shows a list of running years, including corresponding collision energies and stripping versions available in the Ntuple Wizard.
LHCb data streams come in two formats, Data Summary Tape (DST) files, which contain the full event information, and micro Data Summary Tape (MDST) files, which only contain the physics objects directly matched in at least one stripping selection. In MDST streams, the rest of the information in the events, apart from a few global event properties, is discarded. Table 2 shows a list of streams from Run 1 and Run 2, with the DST vs MDST format indicated in the stream name.
## Appendix B List of useful TupleTools
### Default TupleTools
A set of TupleTools is included by default for all Ntuple configuration files produced by the Ntuple Wizard. Tools can be removed by the user if desired, but are standard tools used in LHCb analyses, and are recommended to keep for physics analyses. Given the flexible data structure of the Ntuple, it is easy to produce a reduced data structure with a subset of variables at a later stage in data processing, while still maintaining the full set of variables in the original Ntuple.
* _TupleToolANNPID_ -- A tool used to add artificial neural network particle identification information about the physics candidate to the Ntuple.
* _TupleToolEventInfo_ -- A tool used to add event and run information to the Ntuple.
Table 1: Table of running years, including collision energy (\(\sqrt{s}\)) and relevant stripping versions available in the Ntuple Wizard.

| Running Year | \(\sqrt{s}\) (TeV) | Stripping Versions |
| --- | --- | --- |
| **Run 1** | | |
| 2011 | 7 | s21r1, s21r1p1, s21r1p2 |
| 2012 | 8 | s21, s21r0p1, s21r0p2 |
| **Run 2** | | |
| 2015 | 13 | s24r2 |
| 2016 | 13 | s28r2, s28r2p1 |
| 2017 | 13 | s29r2, s29r2p1, s29r2p2 |
| 2018 | 13 | s34, s34r0p1, s34r0p2 |

Table 2: Table of data streams from Runs 1 and 2 available through the Ntuple Wizard.

| Stream |
| --- |
| BHADRON.MDST |
| BHADRONCOMPLETEEVENT.DST |
| CALIBRATION.DST |
| CHARM.MDST |
| CHARMCOMPLETEEVENT.DST |
| CHARMTOBESWUM.DST |
| DIMUON.DST |
| EW.DST |
| LEPTONIC.MDST |
| MINBIAS.DST |
| PID.MDST |
| RADIATIVE.DST |
| SEMILEPTONIC.DST |

Figure 8: Example of downloading the output files of the Ntuple Wizard after the query is fully specified.
* _TupleToolGeometry_ -- A tool used to add information about physics candidate geometry and event geometry to the Ntuple.
* _TupleToolKinematic_ -- A tool used to add kinematic information about the physics candidate to the Ntuple.
* _TupleToolPid_ -- A tool used to add particle identification information about the physics candidate to the Ntuple, with more information than _TupleToolANNPID_, including which PID detector subsystems were used in the probability calculations.
### Other useful TupleTools
* _TupleToolTISTOS_ -- A tool that saves the trigger TIS/TOS (Trigger independent of Signal/Trigger on Signal) decisions for each particle to the Ntuple.
* _LoKi::Hybrid::TupleTool_ -- A tool that allows the user to add LoKi variables, or expressions of these variables known as LoKi functors, to the Ntuple.
## Acknowledgements
We thank our colleagues at LHCb for providing the necessary data and software, and within the Data Processing & Analysis (DPA) project for their incredibly valuable discussions. We additionally would like to thank Jose Marco Arias for systematic testing of the web interface. All authors acknowledge support from CERN. In addition, C.A.A. and D.S.F. acknowledge support from the National Science Foundation under Award Number 2012926, and A.M. and S.N. acknowledge support from the DFG Grant NE 2185/1-1.
| Making the large data sets collected at the Large Hadron Collider (LHC) accessible to the world is challenging because of both the complexity and the volume of the data. This paper proposes the Ntuple Wizard, an application that leverages the existing computing resources used by the LHCb collaboration to enable third-party users to request specific data. An intuitive web interface guides users through the process of discovering the accessible data sets and specifying a configuration-based request. The application offers fine-grained control over the level of access granted to the public. |
2309.16750 | Memory in Plain Sight: Surveying the Uncanny Resemblances of Associative
Memories and Diffusion Models | The generative process of Diffusion Models (DMs) has recently set
state-of-the-art on many AI generation benchmarks. Though the generative
process is traditionally understood as an "iterative denoiser", there is no
universally accepted language to describe it. We introduce a novel perspective
to describe DMs using the mathematical language of memory retrieval from the
field of energy-based Associative Memories (AMs), making efforts to keep our
presentation approachable to newcomers to both of these fields. Unifying these
two fields provides insight that DMs can be seen as a particular kind of AM
where Lyapunov stability guarantees are bypassed by intelligently engineering
the dynamics (i.e., the noise and step size schedules) of the denoising
process. Finally, we present a growing body of evidence that records DMs
exhibiting empirical behavior we would expect from AMs, and conclude by
discussing research opportunities that are revealed by understanding DMs as a
form of energy-based memory. | Benjamin Hoover, Hendrik Strobelt, Dmitry Krotov, Judy Hoffman, Zsolt Kira, Duen Horng Chau | 2023-09-28T17:57:09 | http://arxiv.org/abs/2309.16750v2 | # Memory in Plain Sight
###### Abstract
Diffusion Models (DMs) have recently set state-of-the-art on many generation benchmarks. However, there are myriad ways to describe them mathematically, which makes it difficult to develop a simple understanding of how they work. In this survey, we provide a concise overview of DMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs) which exposes a mathematical connection to the highly related yet often overlooked class of energy-based models, called Associative Memories (AMs). Energy-based AMs are a theoretical framework that behave much like denoising DMs, but they enable us to _directly compute_ a Lyapunov energy function on which we can perform gradient descent to denoise data. We then summarize the 40-year history of energy-based AMs, beginning with the original Hopfield Network, and discuss new research directions for AMs and DMs that are revealed by characterizing the extent of their similarities and differences.
## 1 Introduction
Diffusion Models [1; 2; 3; 4] (DMs) have rapidly become the most performant class of generative models on images [5; 6; 7; 8; 9]. Recent examples of well known DMs include Imagen [7] (backed by Google), DALLE-2 [6] (backed by OpenAI), Midjourney [10] (an independent research lab), and Stable Diffusion [8] (backed by StabilityAI). The strong adoption of DMs by large companies, startups, and academia continues to lower the computational barriers to these models and to push their capabilities to domains like videos [11], molecules [12], audio [13; 14; 15], and even text [16].
On each of these domains, a trained DM will use a neural network to iteratively denoise a noisy image (or other data point) until there is no noise remaining. Each denoising step aims to increase the probability (specifically, the _log-probability_) that the noisy data looks like a sample from the real data distribution. An example log-probability landscape with two peaks of "real-looking data" (located at the top right and bottom left of the landscape) is shown in the left half of Figure 1, where each denoising step pushes the initial noisy image (blue dot) towards regions of higher log-probability.
The overall goal of a DM is to repeatedly denoise an image until no noise remains. That is:
**Goal of Diffusion Models**: _Given a corrupted representation of some data, recreate the original uncorrupted data._
However, this is not how diffusion processes behave in non-equilibrium thermodynamics [1]. Consider an example of diffusion in nature, where a droplet of red food dye is dropped into a glass of hot water. The millions of tiny red particles in the droplet rapidly "diffuse" through the water away from their initial concentration until the resultant water is a uniform pale pink. Diffusion is thus an inherently _information-destroying_ process in forward time, one which maximizes the entropy of a distribution of particles.
If diffusion is information-destroying, then training a neural network to _reverse_ this process must be information-adding. This is the essence of DMs as proposed by [1, 2]. The _forward process_ of DMs iteratively adds noise to some image (or other data), whereas the _reverse process_ is trained to remove the noise introduced at each step of the forward process in an attempt to recreate the original image. The forward process on its own is unparameterized and uninteresting, only requiring that the added noise intelligently explores all important regions of the log-probability landscape; the computational power of Diffusion Models lies entirely in its reverse process.
### Diffusion's Unseen Connection to Associative Memories
Associative Memory (AM) is a theory for the computational operation of brains that originated in the field of psychology in the late 19th century [17]. We are all familiar with this phenomenon. Consider walking into a kitchen and smelling a freshly baked apple pie. The smell could elicit strong feelings of nostalgia and festivities at grandma's, which surfaces memories of the names and faces of family gathered for the holidays. The smell (query) retrieved a set of feelings, names, and faces (values) from your brain (the associative memory).
Formally, AMs are dynamical systems that are concerned with the storage and retrieval of data (a.k.a. signals) called _memories_ [18]. These memories live at local minima of an energy landscape that also includes _all possible corruptions of those memories_. From the Boltzmann distribution (Eq. 1), we know that energy can be understood as a negative and unscaled log-probability, where local peaks in the original probability distribution correspond exactly to local minima in the energy. An example energy landscape with two memories representing "real-looking data" (valleys located at the top right and bottom left of the landscape) is shown on the right side of Figure 1. Memories are retrieved (thus reconstructing our initial signal) by descending the energy according to the equation in the center of Figure 1.

Figure 1: Comparing the emphases of Diffusion Models and Associative Memories tasked with learning the same energy (negative log-probability) landscape, represented with both contours and gradient arrows. Diffusion Models (left) train a score function (depicted as orange arrows) to model the gradient of the energy. The noisy starting signal (depicted as a blue circle) becomes less corrupted by following these gradients in the reverse denoising process. Associative Memories (right) instead learn a smooth energy function, depicted as contours. The "memory retrieval dynamics" is the process by which a fixed point is retrieved by following the energy gradient from the initial signal. _This process is mathematically equivalent to the objective of the reverse denoising process of Diffusion Models_. Memory retrieval dynamics always converge to fixed points (there are two in the figure, the top right and lower left) where the energy is at a local minimum. This guarantee does not exist for Diffusion Models.
The goal of Associative Memories as stated above is identical to that of DMs:
**Goal of Associative Memories**: _Given a corrupted representation of some data, recreate the original uncorrupted data._
Yet, though Diffusion Models (or score-based models in general) have been related to Markovian VAEs [19, 4, 1], Normalizing Flows [20], Neural ODEs [21], and Energy Based Models [2, 22], an explicit connection to Associative Memories has not been acknowledged. Such a connection would contribute to a growing body of literature that seeks to use modern AI techniques to unravel the mysteries of memory and cognition [23, 24, 25, 26, 27, 28, 29].
The AMs discussed in this paper are compelling architectures to study for more reasons than their original biological inspiration. All AMs define a _Lyapunov function_ [18] (i.e., an energy function with guaranteed stable equilibrium points), a characteristic of many physical dynamical systems that is strikingly absent in the formulation of DMs. This feature allows AMs to be well behaved for any input and for all time. It also allows us to formalize the _memory capacity_ of a given architecture: i.e., how many stable equilibrium points (memories) can we compress into a given choice of the architecture? This leads to architectures that are incredibly parameter efficient. For example, the entire energy landscape depicted in Figure 1 is simulated using an AM with four parameters: a single matrix containing the locations \(\mathbf{x}^{\star}\in\mathbb{R}^{2}\) of the two fixed points. We discuss the memory capacity of AMs in § 4.3.
### Our Contributions
This survey explores a heretofore unseen connection between Diffusion Models and Associative Memories (specifically, Hopfield Networks and their derivatives), straddling a unique overlap between traditional and modern AI research that is rarely bridged. Through this survey we aim to:
* **Raise awareness** about AMs by exploring their uncanny mathematical similarities to DMs. AMs are a theoretical framework that behave much like DMs, but they enable us to design architectures that _directly compute_ a negative log-probability (energy) on which we can perform gradient descent to denoise data (retrieve memories). We isolate the fundamental differences between DMs and AMs (e.g., AM architectures satisfy Lyapunov stability criteria that DMs do not) and provide evidence for how these differences are mitigated through the design and usage of DMs.
* **Provide an approachable overview** of both DMs and AMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs). The dynamical equations of each model allow us to describe both processes using terminology related to particle motion taken from undergraduate physics courses.
* **Propose future research directions** for both DMs and AMs that is enabled by acknowledging their uncanny resemblance. We additionally identify similarities that AMs have to other modern architectures like Transformers [30, 31, 32]. We believe these similarities provide evidence that the field of AI is converging to models that strongly resemble AMs, escalating the urgency to understand Associative Memories as an eminent paradigm for computation in AI.
### A Language of Particles, Energies, and Forces
Different notations are used to describe the dynamical processes of DMs and AMs. We have tried to unify notation throughout this survey, describing the time-evolving state \(\mathbf{x}^{t}\) of our "data point" using the convention that denoising/memory retrieval happens in forward-time \(t\), and preferring to "minimize the energy" rather than "maximize the log-probability". This choice of language comes from the literature of AMs, allowing us to describe the reconstruction/retrieval dynamics using analogies taken from physics.
**Particle**: A _data point_ (e.g., an image from some data set). In DMs, we choose to describe a particle by its time-varying _position_\(\mathbf{x}^{t}\). In the case of colored images, the position is a high dimensional tensor of shape \(\mathbb{R}^{C\times H\times W}\) (\(C\)-number of channels, \(H\)-height, \(W\)-width).
**Energy**: The _distribution_ of our data that we model, exactly equal to the negative log-likelihood of a data point. This scalar-valued function must be defined for every possible position \(\mathbf{x}\) of our particle and be bounded from below. The update equations of both DMs and AMs seek to minimize this energy, though only AMs compute the energy itself.
**Force**: In physics, force is defined as the negative gradient of the energy: \(\mathbf{F}(\mathbf{x})\triangleq-\nabla_{\mathbf{x}}E(\mathbf{x})\). In data science, _force_ is equal to the _score_ discussed in SS 3.1 (the gradient of the log-likelihood). The force is a tensor of the same shape as our particle's position \(\mathbf{x}\) pointing in the direction of steepest energy descent.
Thus, data can be understood as particles described by their _positions_ rolling around a hilly _energy landscape_ according to some observed _force_. A _memory_ is a local minimum (valley) on the energy landscape into which the particle will eventually settle. We find that this language improves intuition for both DMs and AMs.
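Because the force is just the negative gradient of a scalar energy, it can be computed mechanically from any differentiable energy function. The following minimal sketch illustrates this with automatic differentiation (PyTorch is our own choice of convenience here, not something prescribed by either literature).

```python
import torch

def force(energy_fn, x):
    """Compute the force F(x) = -grad_x E(x) of a scalar energy via autodiff."""
    x = x.detach().requires_grad_(True)
    E = energy_fn(x)                        # scalar energy at the particle's position
    (grad_E,) = torch.autograd.grad(E, x)   # gradient of the energy landscape
    return -grad_E                          # the force points downhill in energy

# Example: a quadratic bowl whose single memory sits at the origin.
F = force(lambda x: 0.5 * (x ** 2).sum(), torch.tensor([1.0, -2.0]))  # tensor([-1., 2.])
```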
### Related Surveys
The popularity of DMs has resulted in surveys that focus on the methods and diverse applications of DMs [33; 2; 34] alongside tutorial-style guides that seek to gently explain them [2; 35; 36]. In all cases, these surveys/guides make no connection to AMs. For an exhaustive reference on DM literature, [37] has collected a (still growing) list of \(>\)\(600\) diffusion papers.
Other surveys cover AMs and their applications [38; 39; 40; 41]. However, we are aware of only two efforts to identify AM-like behavior in modern networks [42; 43], and these efforts serve more to acknowledge high-level similarities between recurrent networks and AMs. Certainly, no existing surveys have explored the depth of similarity between DMs and AMs.
## 2 Mathematical Notations
In this survey we deviate from notations typically used for DMs and AMs. To minimize visual clutter, we prefer tensor notation over Einstein notation, representing scalars in non-bolded font (e.g., energies \(E\) or time \(t\)), and tensors (e.g., vectors and matrices) in bold (e.g., states \(\mathbf{x}\) or weights \(\mathbf{W}\)). These distinctions also apply to scalar-valued and tensor-valued functions (e.g., energy \(E(\cdot)\) vs. activations \(\mathbf{g}(\cdot)\)). A collection of learnable parameters is expressed through the generic variable \(\mathbf{\theta}\). Gradients are expressed using "nabla" notation of a scalar valued function, where \(\nabla_{\mathbf{x}}(\cdot)\) will be a tensor of the same shape as vector \(\mathbf{x}\). \(\mathbf{g}^{\intercal}\) represents the transpose, whereas \(\mathbf{g}^{T}\) represents the tensor \(\mathbf{g}\) occurring at time \(T\). See Appendix A for an overview of all notation used in this survey.
## 3 Diffusion Models
### Diffusion Models are Score-Based Models
As generative models, Diffusion Models seek to synthesize new data by sampling from some learned probability distribution \(p\) of our training data. Other generative models, like Variational Auto-Encoders (VAEs) [44; 45; 46] and Normalizing Flows [47; 48; 49], use their parameters \(\mathbf{\theta}\) to directly model the data's likelihood or probability density function (p.d.f.) \(p_{\mathbf{\theta}}\). Mathematically, any p.d.f. can be expressed as the following equation:
\[p_{\mathbf{\theta}}(\mathbf{x})=\frac{e^{-E_{\mathbf{\theta}}(\mathbf{x})}}{Z_{\mathbf{ \theta}}} \tag{1}\]
where \(Z_{\mathbf{\theta}}\) is a normalizing constant, known as the partition function, which enforces that \(\int p_{\mathbf{\theta}}(\mathbf{x})d\mathbf{x}=1\), and \(E_{\mathbf{\theta}}(\mathbf{x})\) is the **energy function**, also known as an "unnormalized probability function". Instead of modeling the p.d.f. itself, DMs use their parameters to model the **score function** \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x})\) of the distribution [21; 50; 51], and as such are considered a class of _score-based models_. The score
function is defined as the gradient of the log-likelihood itself, or equivalently, the negative gradient of the energy function (as the normalizing constant \(Z_{\mathbf{\theta}}\) does not depend on \(\mathbf{x}\)) [2]:
\[\mathbf{F}_{\mathbf{\theta}}(\mathbf{x})=\nabla_{\mathbf{x}}\log p_{\mathbf{\theta}}(\mathbf{x})=-\nabla_{\mathbf{x}}E_{\mathbf{\theta}}(\mathbf{x})-\nabla_{\mathbf{x}}\log Z_{\mathbf{\theta}}=-\nabla_{\mathbf{x}}E_{\mathbf{\theta}}(\mathbf{x}) \tag{2}\]
Figure 1 depicts the score function as vectors pointing to peaks in the log probability (equivalently, local minima in the energy function). In practice, we often think of the score function as predicting the noise we need to remove from \(\mathbf{x}^{t}\), where adding the estimated score \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{t})\) in Eq. 3 is the same as removing the predicted noise. Thus, given a neural network trained to predict the score, the process of generating data using discrete score-based models can be construed as an iterative procedure that follows the approximate score function (energy gradient) for some fixed number of steps \(T\). The final state \(\mathbf{x}^{T}\) is declared to be a local peak (minimum) of the log-likelihood (energy) and should now look like a sample drawn from the original distribution \(p\).
\[\mathbf{x}^{t+1}=\mathbf{x}^{t}+\alpha\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{t }),\ \ \ \ \ t=0,...,T-1 \tag{3}\]
where \(\alpha(t)\in\mathbb{R}\) is a step size in the direction \(\mathbf{F}_{\mathbf{\theta}}\). Note that Eq. 3 is described using the convention that _time progresses forward when reconstructing the data_. However, the literature around DMs describes _time in the reconstruction process as going backwards_, denoting \(\mathbf{x}^{T}\) to refer to the sample drawn from pure noise and \(\mathbf{x}^{0}\) to refer to the final reconstructed sample drawn from \(p\) [1; 2; 4; 19; 21]. Eq. 4 rewrites Eq. 3 using the variable \(s\triangleq T-t\) to represent the reverse-time convention used in most DM papers (shown in Figure 2).
\[\mathbf{x}^{s-1}=\mathbf{x}^{s}+\alpha\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{s }),\ \ \ \ \ s=T,...,1 \tag{4}\]
Using score-based models is then conceptually very simple: they seek to maximize (minimize) the log-likelihood (energy) of an initial sample by following the score \(\mathbf{F}_{\mathbf{\theta}}\) (energy gradient). In practice, many of the functions above are additionally conditioned on the time \(s\); i.e., the p.d.f. can be expressed as \(p_{\mathbf{\theta}}(\mathbf{x};s)\), the score function can be expressed as \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};s)\), and the step size \(\alpha\) can be expressed as \(\alpha(s)\). We are also free to condition the score function however we desire; e.g., controllable image generation with DMs expresses the tokens \(\mathbf{C}\) of language prompts as a conditioning variable on the score function \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};t,\mathbf{C})\) [8; 10; 52].
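To make the iterative procedure of Eq. 3 concrete, here is a minimal sketch of discrete score-following generation; `score_fn` is our stand-in for a pretrained score network \(\mathbf{F}_{\mathbf{\theta}}\), and the fixed step size is an illustrative simplification.

```python
import numpy as np

def generate(score_fn, shape, T=1000, alpha=1e-2, rng=None):
    """Follow the learned score for T steps (Eq. 3): x <- x + alpha * F(x, t)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(shape)       # start from pure noise
    for t in range(T):
        x = x + alpha * score_fn(x, t)   # one step up the log-likelihood landscape
    return x                             # declared to lie near a local energy minimum
```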
### Diffusion Models Cleverly Train the Score
Though simple in concept, DMs require several tricks to train. How do you train the "score" of a dataset? Several techniques to train the score-based models have been proposed, with the most popular being the technique of _denoising score-matching_[53; 2; 54; 55; 56; 4]. To train with denoising score-matching, samples \(\mathbf{x}\) from our original data distribution \(p\) are repeatedly perturbed with small amounts of noise \(\mathbf{\eta}(s)\). We then train our score-based model \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{s+1})\) to remove the noise added at the previous step. If the added noise is small enough, we can guarantee that our optimal score function (parameterized by optimal weights \(\mathbf{\theta}^{*}\)) approximates the score of the true data distribution: i.e., \(\mathbf{F}_{\mathbf{\theta}^{*}}(\mathbf{x})\approx\nabla_{\mathbf{x}}\log p( \mathbf{x})\) and \(\mathbf{F}_{\mathbf{\theta}^{*}}(\mathbf{x}^{s+1})\approx-\mathbf{\eta}(s)\). This process of noisifying a signal is known as the _forward process_, which we also call the _corruption process_.
The original DM proposed by [1] trained their model using a forward process consisting of \(T=1000\) noise-adding steps. Unfortunately, to sample from DMs each step in the forward process must be paired with a step in the _reconstruction_ or _reverse process_, which likewise required \(T=1000\) steps/applications of the score network \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{s})\). [2] improved the computational efficiency by introducing several techniques into the forward process, including a form of annealed Langevin dynamics where larger amounts of noise are added the further one is from the original data distribution, controlled by a variance scheduler \(\sigma(s)\in\mathbb{R}\). They then introduce Noise Conditional Score Networks (NCSNs) to condition the score network \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};\sigma(s))\) on the amount of noise \(\sigma(s)\) to remove. Because \(\sigma(s)\) has a 1:1 correspondence to a particular time step \(s\), NCSNs can also be written as \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};s)\).
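A minimal sketch of the resulting denoising score-matching objective follows (written with PyTorch as an assumed framework); consistent with the convention above, the network output on a perturbed sample is pushed toward the negative added noise \(-\mathbf{\eta}(s)\).

```python
import torch

def dsm_loss(score_net, x, sigma_schedule):
    """One denoising score-matching loss on a clean batch x.
    `sigma_schedule` is a 1-D tensor of noise levels; the network is trained
    so that F_theta(x + eta, s) ~ -eta for the noise level sigma(s)."""
    s = torch.randint(len(sigma_schedule), (x.shape[0],))     # random level per sample
    sigma = sigma_schedule[s].view(-1, *([1] * (x.dim() - 1)))
    eta = sigma * torch.randn_like(x)                         # forward-process noise
    pred = score_net(x + eta, s)                              # noise-conditional score
    return ((pred + eta) ** 2).mean()                         # push prediction to -eta
```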
### Diffusion Models are Continuous Neural ODEs
The original DM [1] and NCSN [2] relied on a fixed number of discrete steps in the forward and reverse process and could not work without Langevin dynamics (stochastic descent down the energy
function during reconstruction). [21] introduced a _probability flow Ordinary Differential Equation_ (PF-ODE) formulation for DMs to unify and extend previous approaches, which [34, 57, 58] claim represents the culmination of DM theory for several reasons.
1. PF-ODEs show that Langevin dynamics are optional to making DMs work, a phenomenon also observed by [59]. Deterministic forward and reverse processes perform just as well as those formulated stochastically.
2. PF-ODEs operate in continuous time and do not require a fixed number of discrete time-steps or noise scales. This makes it possible to use black-box ODE solvers during the reverse-process enabling faster generation than previously possible.
3. PF-ODEs enable exact log-likelihood calculations, drawing connections to normalizing flows [47, 48] and neural ODEs [60]. This perspective proffers DMs as a new technique to solve inverse problems.
4. The latent encodings for each step of PF-ODEs are both meaningful and uniquely identifiable. Given a sufficiently performant model trained on sufficient data, we can interfere with latent representations to perform tasks the model was not originally trained to perform (e.g., inpainting, outpainting, interpolation, temperature scaling, etc.)
The importance of PF-ODEs to the modern understanding of DMs cannot be overstated and warrants sufficient background. Consider the standard form of a generic ODE under constrained time \(s\):
\[\frac{d\mathbf{x}}{ds}=\boldsymbol{\mu}(\mathbf{x};s),\hskip 14.226378pts\in[T,0] \tag{5}\]
where \(\boldsymbol{\mu}(\mathbf{x};s)\) is an arbitrary _drift function_ that represents some deterministic change in position of particle \(\mathbf{x}^{s}\) at time \(s\). Diffusion models need to further corrupt the input \(\mathbf{x}\), so we add to Eq. 5 an infinitesimal amount of noise \(\frac{d\mathbf{w}}{ds}\) scaled by some real-valued _diffusion coefficient_ \(\sigma(s)\). The forward process of PF-ODEs is now called an Itô Stochastic Differential Equation (SDE) [21].
\[\frac{d\mathbf{x}}{ds}=\boldsymbol{\mu}(\mathbf{x};s)+\sigma(s)\frac{d\mathbf{ w}}{ds} \tag{6}\]
[58] argues that the equation above can be further simplified without any loss in performance by assuming a drift function that is identically zero, \(\boldsymbol{\mu}(\mathbf{x};s)=0\), a convention adopted by [57] to set SOTA one-step generation with DMs. This convention simplifies Eq. 6 to make the forward process depend only on a noise scale and the infinitesimal random noise shown in Eq. 7.
\[\frac{d\mathbf{x}}{ds}=\sigma(s)\frac{d\mathbf{w}}{ds}. \tag{7}\]
The reverse process now depends only on the noise scale \(\sigma\) and the score \(\mathbf{F}_{\boldsymbol{\theta}}\)[21, 57]
\[\frac{d\mathbf{x}}{ds}=-\frac{1}{2}\sigma^{2}(s)\mathbf{F}_{\boldsymbol{ \theta}}(\mathbf{x};s)\,. \tag{8}\]
We have written the strange "reverse time" convention of DMs using time variable \(s\triangleq T-t\). Eq. 9 rewrites Eq. 8 using forward time \(t\) and collects the noise scale into a real-valued time-variable \(\tau(t)\triangleq\frac{2}{\sigma^{2}(t)}\) to control the rate of change.
\[\tau(t)\frac{d\mathbf{x}}{dt}=\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x};t )\,,\hskip 14.226378ptt\in[0,T] \tag{9}\]
PF-ODEs unify previous theories of DMs by simplifying the assumptions needed to make DMs work. At the same time, the continuous dynamics of PF-ODEs exposes a strong mathematical connection to Associative Memories that was difficult to see before.
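To make the reverse process of Eq. 8 concrete, here is a minimal fixed-step Euler sketch; `score_fn` and the noise schedule `sigma` are assumed inputs, and in practice a black-box (often adaptive) ODE solver would typically replace the hand-rolled loop.

```python
import numpy as np

def pf_ode_sample(score_fn, sigma, shape, T=1.0, n_steps=100, rng=None):
    """Euler integration of the reverse-time PF-ODE (Eq. 8),
        dx/ds = -(1/2) * sigma(s)^2 * F(x; s),
    stepping s from T down to 0. `score_fn(x, s)` approximates the score."""
    rng = np.random.default_rng() if rng is None else rng
    ds = T / n_steps
    x = sigma(T) * rng.standard_normal(shape)     # sample the noise prior at s = T
    for i in range(n_steps):
        s = T - i * ds
        dx_ds = -0.5 * sigma(s) ** 2 * score_fn(x, s)
        x = x - ds * dx_ds                        # step backward in s, toward s = 0
    return x
```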
## 4 Associative Memories
An Associative Memory (AM) is a dynamical system concerned with the storage and retrieval of signals (data). In AMs all signals exist on an _energy landscape_; in general, the more corrupted a signal the higher its energy value. Uncorrupted signals or _memories_ live at local minima of the system's energy or _Lyapunov_ function. The process of _memory retrieval_ is the dynamic process of descending the energy to a fixed point or _memory_ as shown in Figure 1, a phenomenon that is guaranteed to occur by constraints placed on the architecture (see § 4.1).
The following subsections describing AMs can get quite technical and rely on modeling techniques used regularly in physics (e.g., Lagrangian functions, Lyapunov functions, Legendre transforms [18, 61]) but that are foreign to experts in modern AI. We thus deem it important to summarize the core points of AMs before diving into them and their history.
AMs are a class of neural architectures that defines an energy function \(E_{\boldsymbol{\theta}}(\mathbf{x})\in\mathbb{R}\), where \(\boldsymbol{\theta}\) is a set of learnable parameters and \(\mathbf{x}\) represents a data point. In other words, an AM is designed to compute a scalar energy on every possible input signal \(\mathbf{x}\). Given some initial input \(\mathbf{x}^{0}\) at time \(t=0\) (this could be any kind of signal, including pure noise), we want to minimize the energy (by descending the gradient) to a fixed point that represents a memory learned from the original data distribution.
\[\tau\frac{d\mathbf{x}}{dt}=-\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}( \mathbf{x})\,,\hskip 14.226378ptt>0 \tag{10}\]
Here, \(\tau\) is a time constant that governs how quickly the dynamics evolve. This equation can of course be discretized as
\[\mathbf{x}^{t+1}=\mathbf{x}^{t}-\frac{dt}{\tau}\nabla_{\mathbf{x}}E_{ \boldsymbol{\theta}}(\mathbf{x}^{t}) \tag{11}\]
and treated as a neural network that is recurrent through time. The energy of AMs is constructed in such a way that the dynamics will eventually converge to a fixed point \(\mathbf{x}^{*}\) at some \(t=T\). That is,
Figure 2: Diffusion Models through the lens of PF-ODEs, adapted from [2]. The forward (corrupting) process takes a true data point at time \(s=0\) and corrupts the data into pure noise at time \(s=T\) using an SDE. The reverse (reconstructing) process undoes the corruption using the score of the distribution. Both equations are shown without drift terms as in Eq. 7 and Eq. 8.
\[\frac{d\mathbf{x}^{\star}}{dt} =0\quad\mathrm{and}\] \[\mathbf{x}^{\star} =\mathbf{x}^{\star}-\frac{dt}{\tau}\nabla_{\mathbf{x}}E_{\mathbf{ \theta}}(\mathbf{x}^{\star})\qquad\forall t>T\,.\]
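As a concrete illustration of the discrete retrieval dynamics in Eq. 11, the following minimal sketch descends a toy energy landscape; the particular log-sum-exp energy is an illustrative choice of ours (anticipating the exponential Dense Associative Memory of § 4.3), not the only admissible one.

```python
import numpy as np

def retrieve(memories, x0, beta=4.0, dt_over_tau=0.5, n_steps=100):
    """Gradient descent (Eq. 11) on the example energy
        E(x) = -(1/beta) * log sum_mu exp(-(beta/2) * ||x - xi_mu||^2),
    whose gradient is x - sum_mu p_mu * xi_mu with p = softmax of the exponents."""
    x = x0.copy()
    for _ in range(n_steps):
        logits = -0.5 * beta * ((x - memories) ** 2).sum(axis=1)
        p = np.exp(logits - logits.max())
        p /= p.sum()                           # softmax weights over the memories
        grad_E = x - p @ memories              # gradient of the energy at x
        x = x - dt_over_tau * grad_E           # one discrete descent step
    return x

memories = np.array([[1.0, 1.0], [-1.0, -1.0]])        # two stored fixed points
x_star = retrieve(memories, x0=np.array([0.6, 0.9]))   # settles near [1.0, 1.0]
```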
### The TinyBrain Sandbox for Building Associative Memories
AMs are constrained to use only neural architectures that are themselves a Lyapunov function: that is, these architectures must compute a scalar energy and be fixed point attractors. In this section we discuss the architectural constraints needed to ensure a Lyapunov function on AMs, following the abstractions of [62].
AMs were originally inspired by neurons and synapses within the brain [64]. As such, it is useful to consider a simple model we call the TinyBrain model1 as depicted in Figure 3. The TinyBrain model consists of two dynamic variables called _neuron layers_ connected to each other by a single synaptic weight matrix \(\mathbf{W}\) (relationship between variables). Each neuron layer (dynamic variable) can be described by an _internal state_\(\mathbf{x}\in\mathbb{R}^{N}\) (which one can think of as the _membrane voltage_ for each of the \(N\) neurons in that layer), and their _axonal state_\(\mathbf{g}\in\mathbb{R}^{N}\) (which is analogous to the _firing rate_ of each neuron). We call the axonal state the _activations_ and they are uniquely constrained to be the gradient of a scalar, convex Lagrangian function \(\mathcal{L}:\mathbb{R}^{N}\mapsto\mathbb{R}\) we choose for that layer; that is, \(\mathbf{g}=\nabla_{\mathbf{x}}\mathcal{L}\).
Footnote 1: Our use of the term “brain” does not claim resemblance to real brains or brain-like behavior; instead, the term encapsulates a useful thought experiment around the usage of the terms “neurons” and “synapses”.
The _Legendre Transform_ of the Lagrangian defines the energy \(E_{\nu}^{\mathrm{layer}}(\mathbf{g}_{\nu},\mathbf{x}_{\nu})\in\mathbb{R}\) of a neuron layer \(\nu\), shown in Eq. 12[18, 61, 62]. If all neuron-layers of a system have a Lagrangian, a synaptic energy can be easily found such that the entire system has a defined energy.
\[E_{\nu}^{\mathrm{layer}}=\mathbf{g}_{\nu}^{\intercal}\mathbf{x}_{\nu}- \mathcal{L}_{\nu}(\mathbf{x}_{\nu}) \tag{12}\]
Figure 3: The TinyBrain sandbox for understanding Associative Memories is a fully connected bipartite graph (structurally similar to the Restricted Boltzmann Machine [63]). Both feature neurons and memory neurons have states \(\mathbf{x}_{f}\) and \(\mathbf{x}_{m}\) respectively that evolve in time; these states have corresponding activations \(\mathbf{g}_{f}\) and \(\mathbf{g}_{m}\). The energy of the synapse is minimized when the memory activations \(\mathbf{g}_{m}\) perfectly align with the feature activations \(\mathbf{g}_{f}\) according to the learned parameters \(\mathbf{W}\). The energy for the entire system is defined by the energies of each individual component, and the internal states of each neuron layer evolve by descending the global energy's gradient.
For historical reasons, we call one neuron layer in our TinyBrain the "feature neurons" (fully described by the internal state \(\mathbf{x}_{f}\) and Lagrangian \(\mathcal{L}_{f}\) of the features) and the other the "memory neurons" (fully described by the internal state \(\mathbf{x}_{m}\) and Lagrangian \(\mathcal{L}_{m}\) of the memories). These layers are connected to each other via a synaptic weight matrix \(\mathbf{W}\in\mathbb{R}^{N_{m}\times N_{f}}\), creating a synaptic energy \(E^{\mathrm{synapse}}(\mathbf{g}_{f},\mathbf{g}_{m};\mathbf{W})\in\mathbb{R}\) defined as
\[E^{\mathrm{synapse}}=-\mathbf{g}_{m}^{\intercal}\mathbf{W}\mathbf{g}_{f}\,. \tag{13}\]
We now have defined all the _component energies_ necessary to write the total energy \(E^{\mathrm{system}}\) for TinyBrain as in Eq. 14. Note that Eq. 14 is a simple summation of the energies of each component of the TinyBrain: two layer energies and one synaptic energy. This perspective of modular energies for understanding AMs was originally proposed by [62].
\[E^{\mathrm{system}}=E_{f}^{\mathrm{layer}}+E_{m}^{\mathrm{layer}}+E^{ \mathrm{synapse}} \tag{14}\]
The hidden states of our neurons \(\mathbf{x}_{f}\) and \(\mathbf{x}_{m}\) evolve in time according to the general update rule:
\[\begin{cases}\tau_{f}\frac{d\mathbf{x}_{f}}{dt}&=-\nabla_{\mathbf{g}_{f}}E^{ \mathrm{system}}\\ \tau_{m}\frac{d\mathbf{x}_{m}}{dt}&=-\nabla_{\mathbf{g}_{m}}E^{ \mathrm{system}}\,.\end{cases} \tag{15}\]
Eq. 15 reduces to the manually derived update equations for the feature and memory layers presented in Eq. 16[18].
\[\begin{cases}\tau_{f}\frac{d\mathbf{x}_{f}}{dt}&=\mathbf{W}^{\intercal} \mathbf{g}_{m}-\mathbf{x}_{f}\\ \tau_{m}\frac{d\mathbf{x}_{m}}{dt}&=\mathbf{W}\mathbf{g}_{f}-\mathbf{x}_{m} \end{cases} \tag{16}\]
where \(\mathbf{W}^{\intercal}\mathbf{g}_{m}\) and \(\mathbf{W}\mathbf{g}_{f}\) represent the _input currents_ to feature neurons and memory neurons respectively, and \(-\mathbf{x}_{f}\) and \(-\mathbf{x}_{m}\) represent exponential decay (this implies that the internal state \(\mathbf{x}\) will exponentially decay to \(\mathbf{0}\) in the absence of input current). Note that the synaptic matrix \(\mathbf{W}\) plays a symmetric role: the same weights that modulate the current from feature neurons \(\rightarrow\) memory neurons are used to modulate the current from memory neurons \(\rightarrow\) feature neurons. This "symmetric weight constraint" present in AMs does not exist in real neurons and synapses [18, 65].
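To see these dynamics in action, here is a minimal sketch that Euler-integrates Eq. 16 and evaluates the system energy of Eqs. 12-14; the sigmoid/linear Lagrangian choices are illustrative ones (they are the same choices used for the Classical Hopfield Network below), and for a sufficiently small step the energy decreases along the trajectory.

```python
import numpy as np

def tiny_brain_step(xf, xm, W, tau_f=1.0, tau_m=1.0, dt=0.05):
    """One Euler step of the TinyBrain dynamics (Eq. 16) with sigmoid feature
    activations and linear memory activations."""
    gf = 1.0 / (1.0 + np.exp(-xf))   # g_f = grad of L_f(x) = sum log(1 + e^x)
    gm = xm                          # g_m = grad of L_m(x) = 0.5 ||x||^2
    new_xf = xf + (dt / tau_f) * (W.T @ gm - xf)
    new_xm = xm + (dt / tau_m) * (W @ gf - xm)
    return new_xf, new_xm

def system_energy(xf, xm, W):
    """Total energy (Eq. 14): two layer energies (Eq. 12) plus the synapse (Eq. 13)."""
    gf = 1.0 / (1.0 + np.exp(-xf))
    E_f = gf @ xf - np.logaddexp(0.0, xf).sum()   # feature layer energy
    E_m = 0.5 * (xm @ xm)                         # linear memory layer energy
    return E_f + E_m - xm @ (W @ gf)              # plus the synaptic energy

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))                   # 8 memory and 4 feature neurons
xf, xm = rng.standard_normal(4), rng.standard_normal(8)
for _ in range(200):
    xf, xm = tiny_brain_step(xf, xm, W)           # system_energy(...) decreases
```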
Krotov and Hopfield identified [18] that this toy model is mathematically equivalent to the original Hopfield Network [64, 66] under certain choices of the activation function \(\mathbf{g}\); hence, it is through the lens of the abstraction presented in this section that we explore the long history of AMs, beginning with the famous Hopfield Network.
### Hopfield Networks are the First Energy-Based Associative Memory
John Hopfield formalized the dynamic retrieval process of Associative Memory in the 80s [64, 66] in an energy-based model that became famously known as the _Hopfield Network_ (HN). Unlike previous models for associative memory (see SS 4.5), HNs performed the task of memory retrieval through gradient descent of an energy function. A HN resembles in form a single-layer McCulloch & Pitts perceptron [67], with input feature neurons \(\mathbf{x}_{f}\) and a single weight matrix; however, unlike perceptron-based models, the HN operated continuously in time, repeatedly updating the inputs \(\mathbf{x}_{f}^{t}\) over time to minimize some energy.
Hopfield described the AM energy dynamics for both binary [64] and graded (continuous) [66] neurons. Like the TinyBrain model, a continuous HN consists of \(N_{f}\) feature neurons connected via a synaptic weight matrix \(\mathbf{W}\in\mathbb{R}^{N_{m}\times N_{f}}\) to \(N_{m}\) memory neurons. Feature neurons \(\mathbf{x}_{f}\) have corresponding activations \(\mathbf{g}_{f}=\mathrm{sigmoid}(\mathbf{x}_{f})\), whereas memory neurons \(\mathbf{x}_{m}\) have a linear activation function \(\mathbf{g}_{m}=\mathbf{x}_{m}\). We call this configuration of TinyBrain the _Classical Hopfield Network_ (CHN).
Hopfield never considered the time evolution of the memory neurons in his model, instead assuming that \(\mathbf{x}_{m}^{t}=\mathbf{W}\mathbf{g}_{f}^{t}\) at any point in time (i.e., that the state of the memory neurons was instantaneously defined by the activations of the feature neurons). This simplification of the two-layer TinyBrain did not change the fixed points of the AM dynamics and allowed him to consider the time-evolution for only the feature neuron layer. The simplified energy function and update rules for the CHN are shown in Eq. 17 and Eq. 18 respectively, where \(\mathcal{L}_{f}(\mathbf{x}_{f})\triangleq\sum_{N_{f}}\log\left(1+\exp(\mathbf{x}_{f})\right)\) is the Lagrangian of the sigmoid activation function and \((\cdot)^{2}\) operates elementwise.
\[E^{\mathrm{system}}(\mathbf{g}_{f},\mathbf{x}_{f}) =-\frac{1}{2}\sum_{N_{m}}\left(\mathbf{W}\mathbf{g}_{f}\right)^{ 2}+\mathbf{g}_{f}^{\intercal}\mathbf{x}_{f}-\mathcal{L}_{f}(\mathbf{x}_{f}) \tag{17}\] \[\tau\frac{d\mathbf{x}_{f}}{dt} =\mathbf{W}^{\intercal}(\mathbf{W}\mathbf{g}_{f})-\mathbf{x}_{f}\,. \tag{18}\]
A full derivation is included in [18].
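A minimal sketch of the resulting feature-only dynamics of Eq. 18, with the memory layer folded out via \(\mathbf{x}_{m}=\mathbf{W}\mathbf{g}_{f}\):

```python
import numpy as np

def chn_step(xf, W, tau=1.0, dt=0.05):
    """One Euler step of Eq. 18: tau * dx_f/dt = W^T (W g_f) - x_f."""
    gf = 1.0 / (1.0 + np.exp(-xf))                 # sigmoid feature activations
    return xf + (dt / tau) * (W.T @ (W @ gf) - xf)
```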
### Solving the Small Memory Capacity of Hopfield Networks
The CHN unfortunately suffered from a tiny memory capacity that scaled linearly according to the number of input features \(N_{f}\); specifically, the maximum storage capacity of the classical Hopfield Network was discovered to be \(\sim\)\(0.14N_{f}\) by [68; 64; 69]. Consider the problem of building an associative memory on the \(60\mathrm{k}\) images in MNIST [70], where each image can be represented as a binary vector with \(N_{f}=784\) features. If the patterns were random, one could only reliably store a maximum of \(0.14(784)\approx 110\) images using the classical Hopfield paradigm, **no matter how many memories \(N_{m}\) you add to the synaptic matrix \(\mathbf{W}\)**. Given that MNIST images have strong correlations between their pixel intensities, this bound is even lower.
A breakthrough in the capacity of Hopfield Networks was proposed by Krotov & Hopfield over 30 years later [67]. Their new network, called the Dense Associative Memory (DAM), enabled the energy dynamics of the CHN to store a super-linear number of memories. The core idea was to use rapidly growing non-linear activation functions \(\mathbf{g}_{m}(\cdot)\) on the memory neurons. For instance, choosing \(\mathbf{g}_{m}(\cdot)\) to be higher orders of (optionally rectified) polynomials allowed much greater memory storage capacity than the CHN. Extending this idea to exponential functions \(\mathbf{g}_{m}=\exp(\mathbf{x}_{m})\) can even lead to exponential storage capacity [71]. The intuition is that the "spikier" the activation function \(\mathbf{g}_{m}\) (i.e., the faster it grows in the region around \(\mathbf{x}_{m}\)), the more memories the network can store and retrieve.
Recall that a CHN could reliably store a maximum of \(110\) MNIST images. Using the exponential DAM, one can increase the number of stored memories up to \(N_{m}\sim\exp(784/2)\), assuming no correlations, _and still reliably retrieve each individual memory_. This marked difference has led to DAMs being branded as the "Modern Hopfield Network" (MHN) [31] and opened the door for a new frontier in AMs [43].
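The effect of a rapidly growing \(\mathbf{g}_{m}\) is easy to demonstrate. Below is a hedged sketch of a single retrieval step of an exponential DAM, written in the normalized (softmax) form later popularized by [31]; the inverse temperature \(\beta\) and the toy pattern sizes are our own assumptions for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def dam_update(W, x, beta=8.0):
    """One retrieval step of an exponential DAM: the 'spiky' separation
    function exp(beta * W x), once normalized, concentrates on the
    best-matching stored pattern (a row of W)."""
    return W.T @ softmax(beta * (W @ x))

rng = np.random.default_rng(1)
N_m, N_f = 500, 64                               # far more memories than 0.14 * N_f
W = rng.choice([-1.0, 1.0], size=(N_m, N_f)) / np.sqrt(N_f)
query = W[7] + 0.3 * rng.normal(size=N_f) / np.sqrt(N_f)  # corrupted memory 7
print(np.argmax(W @ dam_update(W, query)))       # expected: 7
```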
### Adding Hierarchy to Shallow Associative Memories
The CHN and DAM have another fundamental limitation: like the TinyBrain, both only have a single weight matrix \(\mathbf{W}\) and thus cannot learn hierarchical abstractions that simplify the task of memorizing complex signals. This is not a limitation of Deep Learning networks today, which can be seen as a stack of distinct learnable functions ("layers") each processing the output of a previous layer. These architectures are inherently _hierarchical_; that is, deeper layers operate on higher levels of abstraction output by previous layers. A classic example of this occurs in deep Convolutional Neural Networks (CNNs), where neurons in earlier layers (i.e., layers closer to the image) detect simple shapes, and those in deeper layers respond to more complex objects (see Figure 4).
However, Eq. 12 makes it easy to constrain the energy of an AM. Why can't these systems be extended beyond the TinyBrain in § 4.1 to include more interacting layers? This is the realization of [61, 62], who generalize the theoretical abstraction of AMs in § 4.1 to connect arbitrary numbers of neuron layers via arbitrary synaptic relationships that can resemble the convolutional, pooling, or even attention operations in modern architectures. This version of AMs is known as a Hierarchical Associative Memory (HAM).
The HAM has given AMs a theoretical maturity and flexibility to rival the representational capacity of any existing neural architecture while guaranteeing stable attractor points. However, the HAM is still a very young architecture, having been proposed in 2021, and has yet to establish itself as a viable alternative to traditional Deep Learning architectures in practice.
### Other Models for Associative Memory
The term "associative memory" has become a catch-all term for many different types of Content Addressable Memories (CAMs), including models like Sparse Distributed Memories [73, 74],Memory Networks [75], Key-Value Memories [76, 77], Hopfield Networks [64, 66], and Boltzmann Machines [78, 79, 63]. Even the popular attention mechanism in Transformers [30] is itself a differentiable form of associative memory [31] where tokens act as queries to retrieve values stored by the other tokens. In this paper we have considered a particular class of AMs that refers to systems with a defined Lyapunov function - that is, a CAM must have both a tractable energy function and guaranteed stable states to be considered in this survey. The paradigmatic examples of this class of AMs is of course the Hopfield Network (HN) [64, 66] and its modern counterparts [67, 61] which have been discussed in earlier sections.
## 5 An Uncanny Resemblance between Associative Memories and Diffusion Models
Associative Memories are different from Diffusion Models in that they are not primarily understood as generative models. This said, the memory retrieval process can easily be construed as a "generation process" that samples from some original distribution. This process can be visualized as follows. We first pick a point at random on our energy landscape by initializing a data point \(\mathbf{x}^{0}\) to random noise. We then descend the energy (as in Eq. 10) until the system settles in the nearest local minimum \(\mathbf{x}^{\star}\): this retrieved memory is our generated sample. If desired, small amounts of noise can be added to each update step (a process called _Langevin dynamics_), which can help improve the diversity of generations and jar the dynamics out of undesirable local minima (a technique that is used regularly during generation in DMs [57, 3, 4, 1]). This realization makes it possible to directly compare AMs to DMs.
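The retrieval-as-generation picture can be sketched in a few lines of numpy. The two-well energy below is a purely illustrative stand-in for a learned landscape, and the step size and noise scale are arbitrary choices for the demo.

```python
import numpy as np

def energy(x):
    # Illustrative 1-D landscape whose minima ("memories") sit near x = -2, +2.
    return (x ** 2 - 4.0) ** 2 / 16.0

def grad_energy(x, eps=1e-5):
    return (energy(x + eps) - energy(x - eps)) / (2 * eps)

def retrieve(x0, alpha=0.05, noise=0.0, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x = x - alpha * grad_energy(x)                  # energy descent (Eq. 10)
        x += noise * np.sqrt(2 * alpha) * rng.normal()  # optional Langevin term
    return x

x0 = 3.0 * np.random.default_rng(42).normal()  # random point on the landscape
print(retrieve(x0))                            # settles near -2 or +2: a "memory"
print(retrieve(x0, noise=0.3))                 # noise can jar the dynamics around
```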
| | **Diffusion** | **Associative Memory** |
| --- | --- | --- |
| Parameterizes the... | Score function \(\mathbf{F}_{\boldsymbol{\theta}}\) | Energy function \(E_{\boldsymbol{\theta}}\) |
| Continuous Update | \(\tau\frac{d\mathbf{x}}{dt}=\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x})\) | \(\tau\frac{d\mathbf{x}}{dt}=-\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x})\) |
| Discrete Update | \(\mathbf{x}^{t+1}=\mathbf{x}^{t}+\alpha\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x}^{t})\) | \(\mathbf{x}^{t+1}=\mathbf{x}^{t}-\alpha\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x}^{t})\) |
| Valid Time Domain | \(t\in[0,T]\) | \(t\geq 0\) |
| Fixed Point Attractor? | No\({}^{*}\) | Yes |
| Tractable Energy? | No\({}^{*}\) | Yes |
| Undoes Corruption of... | Noise it was trained on\({}^{*}\) | Any kind |

Table 1: Summarizing the similarities and differences between Diffusion Models and Associative Memory. Fields marked with a \({}^{*}\) indicate caveats. See § 5.1 and § 5.2 for details.
Figure 4: An explanation of hierarchical representations as learned in images, adapted from [72]. The top row shows the filters learned by the second layer in a convolutional deep belief network, whereas the bottom row shows the filters of the third layer. The deeper filters are able to recognize more complex objects and are thus considered more “abstract”.
We tabulate the similarities and differences between DMs and AMs in Table 1, providing additional details in § 5.2.
### Characterizing the Similarities
There are several reasons to be skeptical of significant overlap between generative models like DMs and AMs. As presented, DMs only compute gradients of energies and not energies themselves; thus they have no Lyapunov function guaranteeing stability and cannot operate continuously in time. However, it is clear that both DMs and AMs have very similar goals as shown in Eq. 9 and Eq. 10. We summarize the similarities between these two data modeling approaches as follows:
* **Both model the energy.** DMs learn a parameterized score function \(\mathbf{F}_{\boldsymbol{\theta}}\) to approximate the gradient of some true energy function \(E\) at every point \(\mathbf{x}\) in the energy landscape such that \(\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x})\approx-\nabla_{\mathbf{x}}E( \mathbf{x})\). In AMs, this energy function is explicit and is defined by using architectures that directly model the energy \(E_{\boldsymbol{\theta}}\) such that \(E_{\boldsymbol{\theta}}(\mathbf{x})\approx E(\mathbf{x})\).
* **Both generate samples by descending the predicted gradient of the energy.** A DM will directly output the estimated score \(\mathbf{F}_{\boldsymbol{\theta}}\approx-\nabla_{\mathbf{x}}E(\mathbf{x})\), whereas an AM will directly output a smooth energy \(E_{\boldsymbol{\theta}}(\mathbf{x})\) on which the gradient \(-\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x})\) can be directly calculated and descended. The discrete (continuous) update rules of the two are identical up to a step size \(\alpha\) (time constant \(\tau\)).
* **Both converge to a solution that lies in the neighborhood of a local energy minimum.** In DMs, this behavior is a consequence of the manner in which it is trained: the final output \(\mathbf{x}^{T}\) exists in a region such that a small perturbation in any direction would increase its energy. In AMs, this statement is a requirement of the Lyapunov function; if the dynamics progress for a sufficiently long time, we are guaranteed to reach a true fixed point \(\mathbf{x}^{\star}\) that lies at a local energy minimum.
### Reconciling the Differences
DMs and AMs are certainly not equivalent methods. However, we have discovered evidence that the theoretical differences between DMs and AMs are not so significant in practice. Below, we present rebuttals to potential objections (which we call "misconceptions") to the claim that Diffusion Models simulate Associative Memories.
**Misconception 1: Diffusion models are not fixed point attractors.** Though the dynamics of DMs have no theoretical guarantees of fixed point attractors, we notice that the design of DMs seems to intelligently engineer the behavior of fixed-point attractors like AMs without constraining the architecture to represent a Lyapunov function. We identify two fundamental tricks used by DMs that help approximate stable dynamics:
**Trick 1**: DMs explicitly halt their reconstruction process at time \(t=T\) (i.e., requiring \(\mathbf{x}^{\star}=\mathbf{x}^{T}\)) and are thus only defined for time \(t\in[0,T]\). \(\mathbf{x}^{T}\) then represents a _contrived fixed point_ because no further operations change it. We can say that \(\mathbf{x}^{t\neq T}\) corresponds to a data point with some corruption and \(\mathbf{x}^{T}\) corresponds to a _memory_ in the language of AMs.
**Trick 2**: We know that \(\mathbf{x}^{T}\) approximates a local energy minimum because of the _noise annealing_ trick introduced by [2] and used by all subsequent DMs. In the corruption process, points in the data distribution are perturbed with gradually increasing amounts of noise, implying that smaller noise is added at earlier points in time. This leads to a robust approximation of the true energy gradient localized around each training point, where the original data point lies at the minimum.
We additionally see evidence of DMs storing "memories" that are actual training points. [80] showed that one can retrieve training data almost exactly from publicly available DMs by descending an energy conditioned on prompts from the training dataset. It seems that this behavior is particularly evident for images considered outliers and for images that appear many times. Viewing DMs as AMs, this behavior is not surprising, as the whole function of AMs is to retrieve data (or close approximations to the data) that has been seen before.
Tricks 1 & 2 also mean that a DM is inescapably bound to a knowledge of the current time \(t\). The time \(t\) defines not only how much noise the model should expect to remove from a given signal in a single step, but also how much total noise it should expect to remove from that signal. Given a signal with an unknown quantity of noise, a user must either make a "best guess" for the time \(t\) corresponding to this amount of noise, or restart the dynamics at time \(t=0\), which will cause the model to make large jumps around the energy landscape and likely land it in a different energy minimum. Currently, AMs have no such dependence between corruption levels and time \(t\) and can run continuously in time.
**Misconception 2: Diffusion models can only undo Gaussian noise.** In order for a DM to behave like an AM, it must be able to undo any kind of corruption (e.g., inpainting, blur, pixelation, etc.), not just the white or Gaussian noise associated with Brownian motion as originally formulated in [1, 2]. [81, 82] showed that the performance of DMs can actually improve when considering other types of noisy corruption in the forward process. Moreover, it seems that DMs can learn to reverse any kind of corrupting process. [59] empirically show that DMs can be trained to invert arbitrary image corruptions and generate samples just as well as those trained to invert Gaussian noise. Because DMs can be trained to undo any kind of corruption, they can exhibit behavior identical to that of Associative Memory, which focuses on the "fixed points" of the energy landscape rather than the kind of denoising/de-corrupting steps required to get there.
**Misconception 3: Unconstrained Diffusion Models work with any architecture.** One advantage of DMs over AMs is that they are "unconstrained" and can use any neural network architecture to approximate the score function; in other words, the architecture is not required to be the gradient of an actual scalar energy. The only requirement is that the neural network chosen to approximate the score must be _isomorphic_ such that the function's output is the same shape as its input (i.e., \(\mathbf{F}_{\boldsymbol{\theta}}:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\)). However, not all isomorphic architectures are created equal and only select architectures are used in practice. Both standard feedforward networks [51] and vanilla Transformers have struggled to generate quality samples using diffusion [83, 84]. Most applications of DMs use some modification of the U-Net architecture [85] originally used by [4], though the original DM paper [1] used shallow MLPs, and recent work [83] has shown how to engineer vision Transformers [86] to achieve a similar reconstruction quality as U-Nets on images.
**Misconception 4: Diffusion Models with explicit energy don't work.** Though DMs characterize an energy landscape by modeling its gradient everywhere, they do not inherently have a concept of the energy value itself. However, [21] showed that one can actually compute an exact energy for DMs using the instantaneous change of variables formula from [87], with the caveat that this equation is expensive to compute; in practice, estimates of the energy are preferred over direct calculation [20].
Another approach for enforcing an energy on DMs is to choose an architecture that parameterizes an actual energy function, whose gradient is the score function. Ref. [51] researched exactly this, exploring whether a score function constrained to be the true gradient of a parameterized energy function, built from a generic learnable function \(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t)\) as in Eq. 19, is able to attain sample quality similar to that of unconstrained networks.
\[E_{\boldsymbol{\theta}}(\mathbf{x};t)=\frac{1}{2\sigma(t)}||\mathbf{x}- \mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t)||^{2} \tag{19}\]
The score \(\mathbf{F}_{\boldsymbol{\theta}}\) of this energy can be written by computing the analytical gradient
\[\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x};t)=\frac{1}{\sigma(t)}(\mathbf{x }-\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t))\nabla_{\mathbf{x}}\mathbf{f} _{\boldsymbol{\theta}}(\mathbf{x};t)-\frac{1}{\sigma(t)}(\mathbf{x}-\mathbf{f }_{\boldsymbol{\theta}}(\mathbf{x};t))\,. \tag{20}\]
Ref. [51] notes that the second term of the equation is the standard DM, while the first term involving \(\nabla_{\mathbf{x}}\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t)\) is new and helps guarantee that \(\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x};t)\) is a conservative vector field. They also showed that constraining the score function to be the gradient of the energy in Eq. 19 does not hurt the generation performance on the CIFAR dataset [88] and provides hope that AMs with constrained energy can match the performance of unconstrained DMs.
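As a sanity check on Eqs. 19 and 20, the sketch below stands in for \(\mathbf{f}_{\boldsymbol{\theta}}\) with a fixed linear toy map (so its Jacobian is known in closed form) and verifies numerically that the analytical score equals the negative gradient of the energy; none of this reproduces the actual networks of [51].

```python
import numpy as np

sigma = 0.5                              # stand-in for sigma(t) at one fixed t
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # toy linear "denoiser": f(x) = A x

def energy(x):
    """Eq. 19: E(x) = ||x - f(x)||^2 / (2 sigma)."""
    r = x - A @ x
    return r @ r / (2 * sigma)

def score(x):
    """Eq. 20: (1/sigma) grad_f^T (x - f(x)) - (1/sigma) (x - f(x))."""
    r = x - A @ x
    return (A.T @ r) / sigma - r / sigma  # grad_x f = A for the linear toy map

x = np.array([1.0, -2.0])
eps = 1e-6
num_grad = np.array([(energy(x + eps * e) - energy(x - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(np.allclose(score(x), -num_grad))   # True: the score is a conservative field
```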
## 6 Conclusions & Open Challenges
Diffusion Models and Associative Memories have remarkable similarities when presented using a unified mathematical notation: both aim to minimize some energy (maximize some log-probability) by following its gradient (score). The final solution of both approaches represents some sort of _memory_ that lies in a local minimum (maximum) of the energy (log-probability). However, these approaches are certainly not identical, as is evidenced by different validity constraints on architecture choices and time domains. The training philosophy behind each approach is also different: Diffusion Models assume that the energy function is intractable and fixate on training the gradient of the energy (using known perturbations of training data as the objective), while AMs focus on learning the fixed points of a tractable energy.
### Directions for Associative Memories
Associative Memories have not gained nearly the traction of Diffusion Models in AI applications. Many researchers in the field focus on the TinyBrain architecture, trying to improve its theoretical memory capacity [89, 90] or apply related models to modern problems [91]. Other researchers are integrating the memory-retrieval capabilities of AMs into existing feed-forward architectures [31, 92, 93]; in doing so they discard the idea of global optimization on the energy. In part, these other research directions exist because no pure AM had shown impressive performance on large data until [32] introduced the Energy Transformer, though even this AM does not yet show significant performance gains over traditional methods.
The empirical success of Diffusion Models across many domains should provide hope that modern AM architectures [61, 62] can achieve performance parity on similar tasks. Constrained Diffusion Models show no worse performance than unconstrained models [51], and the introduction of HAMs allows AMs to be built that resemble U-Nets [62]. Even the training pipeline of a Diffusion Model can be mapped directly to an AM paradigm if we condition the energy function and optimization procedure on the time: from a starting image \(\mathbf{x}^{*}\) in the training set, repeatedly add varying amounts of random noise and train the score function \(-\nabla_{\mathbf{x}}E_{\theta}(\mathbf{x}^{t};t)\) to predict the added noise. Once trained, Diffusion Models have even shown strong steerability: the ability for a user to modify a trained Diffusion Model to complete tasks it was not trained to perform [1, 94, 95, 96, 52, 97, 98, 99, 100]. We can expect similar levels of controllability from a fully trained AM.
### Directions for Diffusion Models
Researchers working on Diffusion Models should find the theoretical framework of the Lyapunov function from AMs compelling, as it defines systems around the fixed-point attractors that seem to already exist in Diffusion Models. Perhaps the score function \(\mathbf{F}_{\theta}\) learned by Diffusion Models has already learned to behave similarly? If the score function is shown to be a conservative vector field as in [51], perhaps the constraint of finite time in Diffusion Models is unnecessary and Diffusion Models can behave well in continuous time \(t>T\). Viewing Diffusion Models as fundamentally fixed-point attractors like AMs, it also becomes possible to theoretically characterize their memory capacity. Finally, recent research has focused on optimizing the sampling speed of Diffusion Models by improving the scheduling step in the update rule [57, 101, 102, 103]. By viewing this sampling procedure as ordinary gradient descent (as is the case in AMs), smart gradient optimization techniques already used in Deep Learning like Adam [104] and L-BFGS [105] become available.
### Beyond Diffusion: Transformers Resemble Associative Memories
Diffusion Models are not the only method in modern Deep Learning to show similarities to AMs. In fact, recent literature shows increasing connections between the operations of AMs and deep architectures, e.g. feed-forward networks [93], convolutional neural networks [61], Transformer architecture [31], and optimization processes of Ordinary Differential Equations (ODEs) [106; 107; 108; 109]. In 2020 [31] discovered that the attention mechanism of Transformers strongly resembled a single-step update of a Dense Associative Memory [67] under the \(\operatorname{softmax}(\cdot)\) activation function, which exhibits similar properties to the power and \(\exp(\cdot)\) functions studied in [67; 71]. However, it is incorrect to call their contrived energy function an AM as it is integrated as a conventional feed-forward layer in the standard Transformer block and applied only using a single gradient descent step. Of particular interest to the scope of this paper is the following question: if Transformers have strong resemblances to ODEs and the attention operation is similar to a single step update of a DAM, what are the differences between an AM with desirable attractor dynamics and a Transformer?
A recent work by [32] explores this question, deriving a new "Energy Transformer" block that strongly resembles the conventional Transformer block, but whose fundamental operation outputs an energy. This energy satisfies all the desired properties of an AM, which allows us to interpret the forward pass through a stack of Transformer blocks as gradient descent down the block's energy.
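The resemblance noted by [31] can be made concrete in a few lines: with the value matrix tied to the keys, a single softmax attention readout is arithmetically the same operation as one update of an exponential DAM whose stored memories are the key vectors. The toy shapes below are assumptions for the demo.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

rng = np.random.default_rng(0)
d, n_tokens = 16, 10
K = rng.normal(size=(n_tokens, d))    # keys: the "stored memories" (rows of W)
q = K[4] + 0.1 * rng.normal(size=d)   # query: a corrupted memory
beta = 1.0 / np.sqrt(d)               # attention's 1/sqrt(d) plays the role of beta

attn_out = softmax(beta * (K @ q)) @ K   # attention readout with V tied to K
dam_out = K.T @ softmax(beta * (K @ q))  # one DAM update in softmax form
print(np.allclose(attn_out, dam_out))     # True: one computation, two framings
```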
### Scaling Laws from the Perspective of Associative Memory
The performance of Transformers on language is famously characterized by the "scaling laws", which claim that a model's performance will improve as a power-law with model size, dataset size, and the amount of compute used for training [110]. We expect similar behaviors to hold for Diffusion Models, though a similar study has not yet been conducted. However, the "scaling laws" are empirical only, and there is little theory to justify why a model's performance would continue to grow with its size. AMs offer one possible theory by characterizing large-model performance as a question of _memory capacity_ (see § 4.3). In the world of AMs, this scaling behavior makes intuitive sense: more parameters means more possible memories that can be stored; more data means more meaningful local minima in the energy; and more compute means the model can descend further down the energy, making it easier to distinguish between nearby fixed points (alternatively, more compute can imply that longer training allows the model to distribute the fixed points in the energy more efficiently, allowing for greater memory capacity). These hypotheses for understanding the scaling laws come from intuitively understanding large models as AMs, but this is still an open research question.
Both Transformers and Diffusion Models are ubiquitous choices for foundation models [111] in Deep Learning today, and both strongly resemble Associative Memories. We believe that the trajectory of AI research would benefit by interpreting the problem of unsupervised learning on large, unstructured data from the perspective of Associative Memory.
### Closing Remarks
Very few researchers will observe the rapid advances of AI today and notice a trend towards the dynamical processes of Associative Memories first established by John Hopfield in the 1980s. However, many of the theoretical guarantees of Associative Memories are captured in the design of increasingly popular Diffusion Models that have proven themselves fixtures for many applications of generative modeling. This survey represents a first step towards a more comprehensive understanding of the connections between Diffusion Models and Associative Memories. We hope that our work inspires further research into these exciting fields and that it helps to foster a new generation of AI systems that are capable of unlocking the secrets of memory and perception.
| Diffusionモデル(DMs)の生成過程は、近年、多くのAI生成ベンチマークで最先端となっています。生成プロセスは伝統的に「iterative denoiser」と理解されてきましたが、その説明には普遍的に受け入れられている言語はありません。私たちは、エネルギーベースの関連記憶(AM)の分野におけるメモリ検索の数学的な言語を用いて、DMsを説明する新しい視点を与えています。これにより、この2つの分野に慣れ親しんだ人にも、この説明が分かりやすいものになります。これらの2つの分野を統一することで、DMsは特定のAMとして捉えられることがわかります。その理由は、Lyapunov安定性の保証は、ノイズとステップサイズスケジュールの巧妙なエンジニアリングによって、Denoisingのプロセスのパターンに影響を与えています。最後に、DMsがAMの期待される実証的な行動を示しているという証拠が増加しており、DMsをエネルギーベースの記憶の一種として理解し |
2304.08592 | Improving Scene Text Recognition for Character-Level Long-Tailed
Distribution | Despite the recent remarkable improvements in scene text recognition (STR),
the majority of the studies focused mainly on the English language, which only
includes few number of characters. However, STR models show a large performance
degradation on languages with a numerous number of characters (e.g., Chinese
and Korean), especially on characters that rarely appear due to the long-tailed
distribution of characters in such languages. To address such an issue, we
conducted an empirical analysis using synthetic datasets with different
character-level distributions (e.g., balanced and long-tailed distributions).
While increasing a substantial number of tail classes without considering the
context helps the model to correctly recognize characters individually,
training with such a synthetic dataset interferes the model with learning the
contextual information (i.e., relation among characters), which is also
important for predicting the whole word. Based on this motivation, we propose a
novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1)
context-aware expert learns the contextual representation trained with a
long-tailed dataset composed of common words used in everyday life and 2)
context-free expert focuses on correctly predicting individual characters by
utilizing a dataset with a balanced number of characters. By training two
experts to focus on learning contextual and visual representations,
respectively, we propose a novel confidence ensemble method to compensate the
limitation of each expert. Through the experiments, we demonstrate that
CAFE-Net improves the STR performance on languages containing numerous number
of characters. Moreover, we show that CAFE-Net is easily applicable to various
STR models. | Sunghyun Park, Sunghyo Chung, Jungsoo Lee, Jaegul Choo | 2023-03-31T06:11:33 | http://arxiv.org/abs/2304.08592v1 | # Improving Scene Text Recognition for Character-Level Long-Tailed Distribution
###### Abstract
Despite the recent remarkable improvements in scene text recognition (STR), the majority of studies have focused mainly on the English language, which includes only a small number of characters. However, STR models show a large performance degradation on languages with a large number of characters (e.g., Chinese and Korean), especially on characters that rarely appear due to the long-tailed distribution of characters in such languages. To address this issue, we conducted an empirical analysis using synthetic datasets with different character-level distributions (e.g., balanced and long-tailed distributions). While substantially increasing the number of tail classes without considering the context helps the model to correctly recognize characters individually, training with such a synthetic dataset interferes with the model's learning of contextual information (i.e., relations among characters), which is also important for predicting the whole word. Based on this motivation, we propose a novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1) a context-aware expert that learns the contextual representation, trained with a long-tailed dataset composed of common words used in everyday life, and 2) a context-free expert that focuses on correctly predicting individual characters, utilizing a dataset with a balanced number of characters. By training the two experts to focus on learning contextual and visual representations, respectively, we propose a novel confidence ensemble method to compensate for the limitations of each expert. Through the experiments, we demonstrate that CAFE-Net improves STR performance on languages containing a large number of characters. Moreover, we show that CAFE-Net is easily applicable to various STR models.
## 1 Introduction
Recent studies on scene text recognition (STR) models have shown remarkable performance. As the most commonly spoken language worldwide, the English language has been the main focus of existing STR studies [38, 3, 4, 41]. However, achieving high performance on other languages with the existing models is a non-trivial task, especially when the languages have numerous characters (_e.g.,_ letters, numbers, and symbols), unlike English. More specifically, English has only 26 letters, while Asian languages like Chinese and Korean have thousands of letters.
There exist a few studies that try to improve STR performance on languages other than English [5, 18]. However, they overlook the fact that languages with a large number of characters have a _long-tailed distribution at the character level_. Due to the character-level long-tailed distribution, the model mainly focuses on learning the head characters (_i.e.,_ those which frequently appear when forming words) while focusing less on learning the tail characters (_i.e.,_ those which rarely appear in words). This leads to significant performance degradation on the tail classes, a commonly observed phenomenon in long-tailed recognition [21, 36], as shown in Fig. 1.
Although synthetic datasets such as SynthText [13] are often utilized in STR, constructing synthetic datasets with a balanced number of characters is challenging. To be more specific, to alleviate the performance degradation due to the long-tailed distribution, existing image classification studies generally proliferate the data samples of the tail classes when constructing a balanced set of classes. In STR, however, increasing the number of words including the tail characters also increases the number of the head characters when they are included in the same word. While generating words only including tail classes is one straightforward solution, those words generally do not include the contexts people use in their everyday life, since tail classes are rarely used in common words. Such an issue makes it challenging to construct a synthetic dataset for STR that can improve the performance on the tail characters, especially when the characters show a long-tailed distribution.
This paper is the first work to address the STR task in terms of the _character-level long-tailed distribution_. Such a long-tailed distribution of the characters causes a significant performance drop on tail characters in STR. We investigate the character-level long-tailed distribution by constructing two synthetic datasets having different character-level distributions: (1) one created from common words to preserve the context (_i.e.,_ WikiSynth) and (2) the other with randomly combined characters, which has a balanced distribution but lacks such contextual information (_i.e.,_ RandomSynth). While training with WikiSynth encourages the model to learn contextual information, the model fails to predict the tail classes correctly due to the long-tailed distribution of characters. In contrast, using RandomSynth helps to correctly predict characters individually by focusing on the visual representation and enhances the performance on tail classes significantly, but such training interferes with the model's learning of the contextual information.
Based on the findings, we propose a Context-Aware and Free Experts Network (CAFE-Net), a _simple yet effective_ approach, which utilizes the confidence score for aggregating experts handling different character-level distributions. At a high level, we train two experts separately: (1) a context-aware expert that focuses on learning the contextual representation using a dataset including characters with a long-tailed distribution and (2) a context-free expert that learns the visual representation by utilizing a dataset with a balanced number of characters. Additionally, we propose a new evaluation metric termed 'character-level (char) F1 score', which is more suitable than existing word-level evaluation metrics (_e.g.,_ accuracy) for character-level analysis. Extensive experiments demonstrate that CAFE-Net significantly outperforms the existing methods in predicting the tail characters, while improving the performance of predicting whole words in languages containing a large number of characters. Furthermore, we demonstrate the applicability of CAFE-Net by combining it with various STR models.
The main contributions of our work are as follows:
* To the best of our knowledge, this is the first work to address STR for languages with character-level long-tailed distributions.
* To take care of learning both contextual and visual information, we propose a novel CAFE-Net using context-aware and context-free experts, which separately handle different character-level distributions.
* We demonstrate the superior performance of our method and its applicability through experiments.
## 2 Related Work
**Scene Text Recognition.** A recent study [3] proposes a widely used STR framework composed of four stages by analyzing the performance gains of each module in a model. Leveraging such a well-performing framework, the majority of studies in STR mainly focused on English [32, 46]. Several studies [5, 18] propose unified methods to handle multiple languages. However, such existing multilingual STR approaches do not consider the characteristics of each language (_e.g.,_ the long-tailed distribution of characters). Another recent work tackled the vocabulary reliance problem [41] at the word level, which mitigates the poor generalization on images with words not included in the vocabulary of a training dataset. In contrast to the previous STR studies, to the best of our knowledge, this is the first work to address the _character-level_ long-tailed distribution in STR.
**Long-tailed Recognition.** There exist numerous datasets which have long-tailed distributions in the real world. Previous studies addressing the long-tailed distribution focused on improving loss functions [28, 9], augmenting the data samples of the tail classes [7, 27], and adjusting the logits [21, 26, 30, 36]. Recent studies proposed using multiple experts specialized for correctly predicting under a certain label distribution [49, 44, 48]. Such a design enables handling different label distributions within a single model. Inspired by such a design, we train two different experts specialized to learn contextual and visual representations, respectively, by taking account of the characteristics of STR.
## 3 Motivation
**Overview.** This section investigates the impacts of character-level long-tailed distribution in STR. We first describe several synthetic datasets, which are generated by shifting the character-level distribution (_e.g._, varying from long-tailed datasets to balanced datasets), in Section 3.1. Moreover, we introduce the character-level F1 score in Section 3.2. Next, we show the effectiveness of each synthetic dataset and analyze them in Section 3.3. We use a TRBA model [3], a representative STR framework, for the experiments in this section. The details of the STR framework we use are described in the supplementary.
### Synthetic Data
As widely used in the previous studies of STR [38, 3, 45, 11, 2], we utilize synthetic data for training. We use Korean and Chinese as the target languages, which exhibit long-tailed distributions at the character level. We construct the training datasets for each language by following SynthText [13], one of the synthetic datasets generated by using a large corpus and diverse backgrounds. We generate new synthetic datasets for the study by shifting the character-level distribution, as shown in Fig. 2.
**WikiSynth (WS)** This dataset utilizes the Wikipedia text corpus. The wiki corpus is split into word units using a tokenizer for each language. The maximum word length is set to 25. The numbers of samples in the training and test sets for Chinese and Korean are 5,000,000 and 10,000, respectively. Since WS is generated from common words, it has a long-tailed distribution at the character level, as is generally observed in languages with a large number of characters.
**RandomSynth (RS)** In contrast to WS, RS is a character-level balanced dataset, where words are generated by randomly combining characters. Since RS samples the characters uniformly, the dataset does not consider the context, so it does not contain the words generally used in the real world. RS contains the same number of images as WS for a fair comparison. As previous studies in long-tailed recognition [6] evaluate models with balanced test sets, we use RS as the character-level balanced test set in STR.
**CombinedSynth (CS)** WS and RS each have their own limitations. To be more specific, models trained with WS fail to learn _few_ characters, while training with RS interferes with the model's learning of the contextual information between characters. A viable option for solving these problems is to mix WS and RS. CS is composed of WS and RS with an equal number of images from each dataset to compensate for the limitation of each dataset.
### Character-Level F1 Score
Accuracy is a widely used evaluation metric, which evaluates whether a model correctly outputs all the characters in a given word. Since the accuracy only considers the performance of STR at the word level, we propose a novel evaluation metric termed 'char F1 score' to evaluate the performance at the character level. When obtaining the char F1 score, we 1) perform the sequence alignment of ground truth and predicted characters, 2) compute the F1 score per character, and 3) average these scores. We report the F1 score in addition to the accuracy since it is more suitable than accuracy when evaluating models with an imbalanced number of data samples. The details of the char F1 score are described in the supplementary.
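A hedged sketch of the metric is given below; difflib's alignment is used as a stand-in for the (unspecified) sequence-alignment step detailed in the supplementary, so exact scores may differ from the authors' implementation.

```python
from collections import Counter
from difflib import SequenceMatcher

def char_f1(ground_truth: str, prediction: str) -> float:
    """1) align gt and prediction, 2) F1 per character, 3) average the scores."""
    matcher = SequenceMatcher(None, ground_truth, prediction, autojunk=False)
    matched = Counter()                                # aligned (true-positive) chars
    for block in matcher.get_matching_blocks():
        matched.update(ground_truth[block.a:block.a + block.size])
    gt_counts, pred_counts = Counter(ground_truth), Counter(prediction)
    scores = []
    for ch, n_gt in gt_counts.items():
        tp = matched[ch]
        recall = tp / n_gt
        precision = tp / pred_counts[ch] if pred_counts[ch] else 0.0
        scores.append(2 * precision * recall / (precision + recall) if tp else 0.0)
    return sum(scores) / len(scores)

print(char_f1("세종대왕", "세종머왕"))  # 0.75: three of four characters aligned
```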
Since we address the long-tailed distribution of characters, we categorize the characters into three groups. For simplicity, we denote \(n_{i}\) as the number of training samples including the \(i^{\text{th}}\) character in a given dataset. The characters are categorized according to \(n_{i}\): 1) _many_ (_i.e._, \(n_{i}\geq 1500\)), 2) _medium_ (_i.e._, \(100\leq n_{i}<1500\)), and 3) _few_ (_i.e._, \(n_{i}<100\)). Straightforwardly, char F1 scores of _few_ characters are much lower than those of _many_ characters when training models with WS, as shown in Fig. 3 (b).

Figure 2: Character-level distribution of WikiSynth (WS), RandomSynth (RS), and CombinedSynth (CS). Unlike WS, both RS and CS include a sufficient number of characters for all classes.

Figure 3: (a) Accuracy on \(\textbf{Real}_{Easy}\) of models trained with WS, CS, and RS individually using Korean. Since RS and CS include randomly combined characters, the models trained with RS and CS exhibit lower accuracy compared to the model trained on WS. (b) Char F1 score on \(\textbf{Real}_{Hard}\). We observe that training models with RS and CS improves the recognition performance on individual characters.
### Tradeoff between Context-Free and Context-Aware Learning
We use the AI Hub dataset [1], a publicly available Korean dataset, as the Korean test set, noted as '**Real**'. Additionally, we divide the **Real** dataset into two types of test sets: 1) a test set without _few_ characters (_i.e.,_ **Real\({}_{Easy}\)**) and 2) a test set including _few_ characters (_i.e._, **Real\({}_{Hard}\)**). The details of the experimental setup are described in the supplementary.
We evaluate the models with **Real\({}_{Easy}\)** and **Real\({}_{Hard}\)** by individually training them with WS, RS, and CS using Korean. Note that the model trained with WS primarily relies on contextual information for making predictions, whereas the one trained with RS mainly uses visual information while lacking contextual information. We observe a tradeoff between using WS and RS as the training set. Fig. 3 (a) demonstrates that training with WS improves the _accuracy_ on **Real\({}_{Easy}\)** compared to training with CS or RS. On the other hand, Fig. 3 (b) shows that training with CS or RS improves the _char F1 score_ for all _many_, _medium_, and _few_ characters when evaluated with **Real\({}_{Hard}\)** compared to training with WS.
Through the experiments, we found that the model focused on learning visual information without contexts (_i.e.,_ trained with RS or CS) can correctly predict individual characters, which is important for improving the performance of long-tailed recognition, especially for _few_ characters. However, the model focusing on learning the contextual information (_i.e.,_ trained with WS) shows improved accuracy even with a low char F1 score. This indicates that capturing the contextual information is crucial for correctly predicting all characters of a given word, especially for those words frequently appearing. Without such an understanding of the contextual information, models show limited accuracy even with a high char F1 score. Therefore, to improve recognizing individual characters and the whole word, we need to enhance both visual and contextual representations.
## 4 Method
**Overview.** Based on the empirical analysis, we propose a Context-Aware and Free Experts Network termed 'CAFE-Net'. Different from previous STR methods, we utilize two types of training datasets, which have different label distributions (_e.g.,_ WS and RS). As described in Fig. 4, our model consists of two main experts: (1) context-aware expert trained with WS to focus on the contextual representation via utilizing an external language model; (2) context-free expert trained with a balanced number of characters (_i.e.,_ RS) to improve the performance on _few_ characters. By dividing the roles of two experts, it is possible to improve the performance on _few_ characters while understanding the contextual information.
Different from the existing STR methods, we utilize two synthetic datasets (_i.e._, WS and RS) separately during training. Let \(\{x_{ca},y_{ca}\}\sim\mathcal{D}_{ca}\) and \(\{x_{cf},y_{cf}\}\sim\mathcal{D}_{cf}\) denote training images and labels sampled for training the context-aware expert and the context-free expert, respectively. Specifically, we utilize WS and RS for \(\mathcal{D}_{ca}\) and \(\mathcal{D}_{cf}\), respectively. In the following, we illustrate the details of our method and its objective functions.
**Feature Extractor.**\(x_{ca}\) and \(x_{cf}\) are fed into the feature extractor to acquire the context-aware and context-free feature representations \(f_{ca}\) and \(f_{cf}\), respectively. In our framework, two experts share the same feature extractor. Sharing weights largely reduces the computational complexity in the inference phase. For the feature extractor, various model architectures can be utilized such as ResNet [15] and vision transformer (ViT) encoder [2].
**Context-Free Expert.** Given the feature representation \(f_{cf}\) that is extracted from \(x_{cf}\), the context-free expert produces the output feature \(h_{cf}=\{h_{cf}^{(1)},\ldots,h_{cf}^{(T)}\}\) of the corresponding words \(\hat{y}_{cf}=\{\hat{y}_{cf}^{(1)},\ldots,\hat{y}_{cf}^{(T)}\}\). Here, \(T\) denotes the maximum length of the word. Due to the balanced number of characters, the context-free expert predicts _few_ characters more accurately than the context-aware expert. This is mainly due to the fact that the random sequences of characters, devoid of semantic meaning, make the context-free expert prioritize learning visual representation over contextual representation.
**Context-Aware Expert.** Different from the context-free expert, the context-aware expert is trained with \(\mathcal{D}_{ca}\) to focus on learning the contextual information, which is essential to predict the whole words accurately. Inspired by recent context-aware STR methods [45, 11], we leverage an external language model to capture semantic information to assist STR. Specifically, with the feature representations \(f_{ca}\) and \(f_{cf}\), the context-aware expert produces the output feature. Then, an external language model refines the output of the context-aware expert. Finally, the outputs of the context-aware expert and the language model are fused to produce the final output feature. In summary, the context-aware expert with the external language model produces the final output feature \(h_{ca}=\{h_{ca}^{(1)},\ldots,h_{ca}^{(T)}\}\) of the corresponding words \(\hat{y}_{ca}=\{\hat{y}_{ca}^{(1)},\ldots,\hat{y}_{ca}^{(T)}\}\).
**Objective Functions.** The context-free and context-aware experts are trained with the same objective function, which minimizes the negative log-likelihood of the conditional probability of the word label. Formally, the loss function \(\mathcal{L}\) is as follows:
\[\mathcal{L}=-\frac{1}{T}\sum_{t=1}^{T}\log p(y^{t}|h^{t}), \tag{1}\]
where \(y^{t}\) is the \(t\)-th ground truth character.
**Confidence Ensemble.** During inference, we aggregate the outputs of two experts. The output probability of each expert is defined as:
\[p(\hat{y})=\prod_{t=1}^{l}p(\hat{y}^{t}|\hat{y}^{<t}), \tag{2}\]
where \(\hat{y}^{<t}=\hat{y}^{1}\cdots\hat{y}^{t-1}\) and \(l\) is the length of the predicted words. Specifically, we ignore the \(pad\) token and only consider the characters preceding the \(eos\) token, which indicates the end of the word.
To ensemble the outputs of two experts, we leverage the maximum softmax probability, which represents the probability \(p(\hat{y}^{t}|\hat{y}^{<t})\) of the predicted character. The confidence score score\((\hat{y})\) of each expert is calculated based on the maximum softmax probability of the characters as follows:
\[\text{score}(\hat{y})=\frac{1}{l}\sum_{t=1}^{l}\log(\max(p(\hat{y}^{t}|\hat{y }^{<t}))), \tag{3}\]
where we apply length normalization, which normalizes the score using the length \(l\) of the predicted word. Since the probabilities \(p(\hat{y}^{t})\) are all values less than one, summing a non-trivial number of their logarithms penalizes longer words, so unnormalized scores would favor shorter words. To address this issue, we normalize the confidence score by dividing it by the word length \(l\). We denote the confidence scores of the context-aware expert and context-free expert as \(\text{score}(\hat{y}_{ca})\) and \(\text{score}(\hat{y}_{cf})\), respectively. Among \(\text{score}(\hat{y}_{ca})\) and \(\text{score}(\hat{y}_{cf})\), we select the output with the higher confidence score. Then, the final predicted words are computed by taking the highest-probability character at each time step \(t\). Intuitively, since the maximum softmax probabilities of the two experts vary depending on the characters, CAFE-Net is capable of selecting the word prediction properly during inference by utilizing the confidence scores obtained from the two experts.
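A minimal sketch of the word-level confidence ensemble is shown below, assuming each expert returns its per-step softmax distributions already truncated at the \(eos\) token; the variable names and toy outputs are ours.

```python
import numpy as np

def confidence_score(step_probs):
    """Eq. 3: length-normalized sum of log max-softmax probabilities.
    step_probs: (l, num_classes) array with pad/eos steps already removed."""
    return float(np.log(step_probs.max(axis=1)).mean())

def ensemble(probs_ca, probs_cf):
    """Select the whole word from whichever expert is more confident."""
    if confidence_score(probs_ca) >= confidence_score(probs_cf):
        return probs_ca.argmax(axis=1)   # context-aware expert's characters
    return probs_cf.argmax(axis=1)       # context-free expert's characters

rng = np.random.default_rng(0)
def toy_output(length, peak):            # toy per-step distributions, 5 classes
    p = rng.random((length, 5))
    p[np.arange(length), 0] = peak
    return p / p.sum(axis=1, keepdims=True)

print(ensemble(toy_output(4, peak=5.0), toy_output(6, peak=2.0)))
```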
**Applicability of CAFE-Net.** Our proposed method provides a practical solution for addressing character-level long-tailed distribution in various STR models. In the supplementary, we describe how to integrate our method with representative STR models such as CNN-based models [3, 34] and ViT-based models [2]. While ensembling or utilizing multiple experts has been widely explored in other fields [25, 44, 49, 48], we want to emphasize that we shed light on _how_ to utilize ensembling in the character-level long-tailed STR. Notably, the key difference between character-level long-tailed STR and previous studies is that STR includes both vision and language modalities, where the model requires both visual and contextual information to predict the whole words. Due to this fact, simply adopting previous ensembling methods may not be directly applicable in STR. To solve such an issue, we first discover a crucial finding and propose a _simple yet effective_ method based on our finding.
## 5 Experiments
**Experimental Setup.** Since we only use synthetic datasets for Chinese and Korean, we also utilize the ICDAR 2017 MLT dataset (MLT17) [33], a real-world dataset, for each language to reduce the domain gap with the real-world datasets. We filter the images of MLT17 to those including Chinese and Korean for each language. We evaluate the model using accuracy, a widely used evaluation metric in STR.
Figure 4: Overview of CAFE-Net. CAFE-Net utilizes two different experts: a context-aware expert and a context-free expert. The context-aware expert is trained with \(\mathcal{D}_{ca}\) (_e.g._, WS) to focus on learning the contextual information. On the other hand, the context-free expert focuses on learning to recognize individual characters and is trained with \(\mathcal{D}_{cf}\) (_e.g._, RS), a balanced dataset with images including randomly sequenced characters. As evidenced by the visualization of the maximum softmax probabilities of the two experts, it is clear that the experts have different certainty depending on the characters. Based on this characteristic, CAFE-Net selects the prediction with the higher confidence score from the two experts during inference.

We evaluate the performance of the models on large-scale real-world datasets. We utilize real-world datasets as test sets, noted as '**Real**'. Specifically, the AI Hub dataset [1] and the ICDAR 2019 ReCTS dataset [39] are publicly available real-world Korean and Chinese datasets, respectively. The AI Hub dataset, a Korean real STR dataset, includes 151,105 cropped images of street signs, traffic signs, and brands. ReCTS, a Chinese real STR dataset, contains 77,709 cropped images of Chinese signboards in the street view with diverse backgrounds and fonts, which is a widely used benchmark dataset in the STR field. We choose these two datasets for evaluation since they contain a sufficient number of tail characters. The details of preprocessing the real-world datasets are described in the supplementary.
We assess the performance of the models using the synthetic test datasets (_e.g._, WS\({}_{test}\) and RS\({}_{test}\)) in addition to real-world datasets. WS\({}_{test}\) is an imbalanced test set built from a real-world corpus, which contains common words. In contrast, RS\({}_{test}\) is a balanced test set that fails to preserve the contexts. Since WS\({}_{test}\) maintains the contexts, accuracy is an important evaluation metric on WS\({}_{test}\), as it requires a model to predict all characters of a given word correctly. However, WS\({}_{test}\) does not contain a sufficient number of _few_ characters. On the other hand, RS\({}_{test}\) is a balanced test set at the character level, so the char F1 score is a more meaningful evaluation metric than accuracy. Therefore, we measure only accuracy for WS\({}_{test}\) and only char F1 score for RS\({}_{test}\), which are collectively referred to as '**Synth\({}_{test}\)**' in our experiments.
**Effectiveness of CAFE-Net.** We implement four models for the experiments: (i) CNN-based STR models: TRBA [3] and TextAdaIN [34]; (ii) ViT-based STR models: ViTSTR+Linear and ViTSTR+Attn [2]. Table 1 demonstrates that integrating CAFE-Net consistently improves the accuracy across evaluation datasets for both Korean and Chinese, except for TextAdaIN+Ours on Chinese **Real\({}_{Hard}\)**. We want to emphasize that our method leads to a large performance improvement compared to utilizing only a long-tailed dataset (_e.g._, WS), which is widely used in the STR field. These results demonstrate that appropriately solving the character-level long-tailed distribution can enhance overall performance for languages with a large number of characters. Notably, our method achieves consistent performance improvement regardless of the model architecture, demonstrating its wide applicability.
**Comparison with Baselines.** A myriad of methods for handling long-tailed datasets [28, 21, 17, 36] have been introduced in recent years. Since we tackle the long-tailed distribution of characters in STR, we compare our proposed method with the existing long-tailed recognition approaches. For the long-tailed recognition approaches, we adopt the simple techniques that can be applied to the STR model: (1) Softmax: the model is trained with the standard cross-entropy loss; (2) Focal loss [28]: relatively easy classes (_i.e._, _many_ characters) are de-emphasized; (3) \(\tau\)-Normalization [21]: the weights of the classifier are normalized with the hyper-parameter \(\tau\); (4) PC Softmax [17]: the logits are modified based on the label distribution during inference; (5) Balanced Softmax [36]: the output logits are adjusted using the training label distribution. In this experiment, we apply the baselines to the TRBA model [3]. The implementation details of the baselines and our method are described in the supplementary. For a fair comparison with our method, we train the TRBA [3] model using CS.
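For concreteness, here is a hedged sketch of the Balanced Softmax adjustment [36] applied to per-character logits; integrating it into a full attention decoder requires more bookkeeping than this toy version.

```python
import numpy as np

def balanced_softmax_logits(logits, class_counts):
    """Balanced Softmax [36]: shift logits by the log class prior so that
    training on a long-tailed character distribution optimizes a balanced
    posterior; the shift is applied inside the training loss only."""
    log_prior = np.log(class_counts / class_counts.sum())
    return logits + log_prior

counts = np.array([10000.0, 500.0, 20.0])  # many / medium / few character classes
logits = np.array([1.0, 1.0, 1.0])         # an uninformative prediction
adjusted = balanced_softmax_logits(logits, counts)
print(np.exp(adjusted) / np.exp(adjusted).sum())  # tail class receives a large
                                                  # gradient when it is the label
```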
Table 2 provides a summary of the performances of the baselines and our method. The results demonstrate that our method significantly outperforms the baselines in accuracy, while showing comparable performance in char F1 score.
| **Method** | **Train Data** | Kr **Real** | Kr **Real\({}_{Easy}\)** | Kr **Real\({}_{Hard}\)** | Kr **Synth\({}_{test}\)** | Cn **Real** | Cn **Real\({}_{Easy}\)** | Cn **Real\({}_{Hard}\)** | Cn **Synth\({}_{test}\)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TRBA | WS | 78.25 | 79.43 | 34.14 | 87.47 | 39.19 | 42.99 | 9.25 | 83.23 |
| TRBA | CS | 77.43 | 77.87 | 61.25 | 86.37 | 41.83 | 41.72 | 42.71 | 83.31 |
| TRBA + Ours | CS | **81.35** | **81.75** | **66.68** | **88.93** | **47.67** | **48.09** | **44.34** | **86.22** |
| TextAdaIN | WS | 80.35 | 81.57 | 34.54 | 86.97 | 41.33 | 45.78 | 6.30 | 81.73 |
| TextAdaIN | CS | 80.43 | 80.80 | 66.60 | 85.82 | 45.76 | 45.82 | **45.30** | 81.80 |
| TextAdaIN + Ours | CS | **82.34** | **82.75** | **66.85** | **88.88** | **47.21** | **47.85** | 42.15 | **85.83** |
| ViTSTR + Linear | WS | 80.92 | 81.74 | 50.46 | 90.57 | 44.14 | 46.12 | 28.51 | 89.81 |
| ViTSTR + Linear | CS | 81.82 | 82.19 | 68.07 | 90.93 | 49.15 | 48.84 | 51.63 | 90.51 |
| ViTSTR + Linear + Ours | CS | **82.78** | **83.14** | **69.16** | **92.09** | **51.37** | **51.10** | **53.47** | **91.09** |
| ViTSTR + Attn | WS | 83.39 | 84.05 | 58.90 | 91.24 | 48.22 | 50.39 | 31.17 | 89.35 |
| ViTSTR + Attn | CS | 83.56 | 83.91 | 70.38 | 90.82 | 50.94 | 51.14 | 49.34 | 89.27 |
| ViTSTR + Attn + Ours | CS | **85.39** | **85.75** | **72.17** | **91.66** | **55.21** | **55.45** | **53.38** | **91.24** |

Table 1: Accuracy on Korean (Kr) and Chinese (Cn) datasets. The 2nd column indicates the training synthetic dataset. Applying CAFE-Net consistently improves the performance on various evaluation datasets: **Real**, **Real\({}_{Easy}\)**, **Real\({}_{Hard}\)**, and **Synth\({}_{test}\)**.
While \(\tau\)-norm [21] generally achieves the best char F1 score, it shows degraded performance in accuracy. Such a result shows that the model fails to learn the contextual information, even with an improved char F1 score. CAFE-Net, however, shows a comparable char F1 score (visual representation) while achieving the best accuracy (contextual representation). This result demonstrates the motivation of our work, which is to improve both contextual and visual representations for enhancing performance on STR with languages including a large number of characters.
**Analysis on Confidence Score.** To better comprehend why the confidence ensemble has the capability to appropriately select the expert, we study the confidence score qualitatively and quantitatively. Fig. 5 shows the prediction and the maximum softmax probability of each expert on several samples. Since the context-free expert focuses on the visual representation, it mispredicts confusing characters (Fig. 5, left column). In contrast, we observe that the context-aware expert incorrectly predicts _few_ characters as _many_ characters by resorting to the context when making predictions (Fig. 5, right column).
\begin{table}
\begin{tabular}{c c c|c c c c c|c} \hline \hline
**Lang.** & **Test Data** & **Metric** & Softmax & Focal & \(\tau\)-norm & PC-Sofmtax & Bal-Softmax & Ours \\ \hline \multirow{6}{*}{_Kr_} & \multirow{2}{*}{**Real**} & Acc & 77.43 & 77.10 & 77.59 & 77.51 & 78.37 & **81.35** \\ & & Char F1 & 0.66/0.79/**0.88** & 0.60/0.75/0.86 & 0.68/0.80/**0.88** & 0.65/0.78/0.87 & 0.62/0.76/0.87 & **0.69/0.81**/0.88 \\ \cline{2-10} & \multirow{2}{*}{**Real\({}_{Easy}\)**} & Acc & 77.87 & 77.52 & 78.02 & 77.94 & 78.75 & **81.75** \\ & & Char F1 & —/0.80/0.88 & —/0.76/0.86 & —/**0.81**/0.88 & —/0.78/0.88 & —/0.77/0.87 & —/**0.81**/**0.89** \\ \cline{2-10} & \multirow{2}{*}{**Real\({}_{Hard}\)**} & Acc & 61.25 & 61.18 & 61.46 & 61.23 & 63.86 & **66.68** \\ & & Char F1 & 0.79/0.78/0.77 & 0.75/0.72/0.73 & 0.79/**0.79**/0.77 & 0.80/0.78/0.77 & 0.79/0.73/0.75 & **0.80**/0.75/**0.78** \\ \cline{2-10} & \multirow{2}{*}{**Synth\({}_{test}\)**} & Acc & 86.37 & 84.63 & 86.34 & 86.04 & 86.84 & **88.93** \\ & & Char F1 & 0.86/0.85/**0.83** & 0.85/0.84/0.81 & **0.87**/**0.86**/0.82 & 0.86/0.85/**0.83** & 0.86/0.85/0.82 & **0.87**/0.85/0.79 \\ \hline \multirow{6}{*}{_Cn_} & \multirow{2}{*}{**Real**} & Acc & 41.83 & 41.45 & 41.85 & 41.74 & 41.26 & **47.67** \\ & & Char F1 & 0.48/0.54/0.57 & 0.45/0.50/0.54 & **0.49**/**0.55**/**0.59** & 0.47/0.52/0.56 & 0.47/0.52/0.55 & 0.48/0.53/0.57 \\ \cline{2-10} & \multirow{2}{*}{**Real\({}_{Easy}\)**} & Acc & 41.72 & 41.48 & 41.76 & 41.63 & 41.14 & **48.09** \\ & & Char F1 & —/0.55/0.58 & —/0.52/0.55 & —/**0.57**/**0.60** & —/0.54/0.57 & —/0.53/0.56 & —/0.55/0.58 \\ \cline{2-10} & \multirow{2}{*}{**Real\({}_{Hard}\)**} & Acc & 42.71 & 41.24 & 42.57 & 42.59 & 42.25 & **44.34** \\ & & Char F1 & 0.57/**0.60**/0.55 & 0.54/0.56/0.53 & 0.57/**0.60**/**0.56** & 0.57/0.59/0.55 & **0.58**/0.59/0.55 & 0.53/0.58/0.55 \\ \cline{2-10} & \multirow{2}{*}{**Synth\({}_{test}\)**} & Acc & 83.31 & 80.06 & 83.18 & 83.02 & 82.07 & **86.22** \\ & & Char F1 & **0.83**/**0.83**/**0.78** & 0.80/0.79/0.76 & **0.83**/**0.83**/**0.78** & 0.82/0.82/**0.78** & 0.82/**0.83**/**0.78** & **0.83**/**0.73** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between long-tailed recognition baselines and our method trained on CS, where we employ TRBA [3] for the STR framework. Char F1 scores represent the scores of _few_ / _medium_ / _many_ characters, respectively. Our method achieves state-of-the-art STR performance on languages with a long-tailed distribution of characters.
Figure 5: The left and right columns indicate the samples correctly predicted by the context-aware and context-free experts, respectively. For each plot under each image, the x-axis and the y-axis indicate the character sequence and the maximum softmax probability of each character, respectively.
Figure 6: Visualization of the relation between the characters and the output probability of each expert on **Real**. The x-axis and y-axis indicate the characters sorted by the number of samples and the averaged probability of each character, respectively. The red and the blue dots indicate the probabilities of the context-aware and context-free experts, respectively.
We observe that each expert outputs a low maximum softmax probability on confusing samples (_e.g_., visually confusing characters for the context-free expert, and _few_ characters for the context-aware expert). Our confidence ensemble filters out such low-confidence predictions of one expert and selects the high-confidence prediction of the other expert, improving the overall STR performance.
Fig. 6 visualizes the averaged prediction probability at the ground truth character. We observe that the context-aware expert (red) achieves a higher prediction probability on _many_ characters than the context-free expert (blue). On the other hand, the context-free expert shows a higher prediction probability, by a large margin, on the _few_ characters compared to the context-aware expert. Such a visualization demonstrates that the confidence ensemble enables the two experts to compensate for each other's limitations.
**Expert Selection Ratio.** We analyze the relation between the proportion of the samples allocated to each expert and the character category ratio in the test sets. Interestingly, we discover that the proportion of predictions selected by the context-aware expert is proportional to the ratio of _many_ characters in a dataset, as shown in Fig. 7. In summary, these results indicate that the context-free expert tends to predict the instances containing _few_ or _medium_ characters, whereas the context-aware expert more frequently predicts the remaining instances, which contain only _many_ characters. We also report the accuracy of each expert in the supplementary.
**Effectiveness of Confidence Ensemble.** In Table 3, we show that careful consideration of how to ensemble two different experts is important. We observe that our method, a word-level confidence ensemble, outperforms the character-level confidence ensemble, which aggregates the outputs at the character level using the maximum softmax probability. The main reason is that the word-level ensemble is more robust than the character-level ensemble when the words predicted by the two experts are misaligned. While ensembling is a straightforward and widely used approach, accounting for this property of scene text recognition is important. We emphasize that our method reflects this characteristic well and improves STR performance.
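To make the comparison in Table 3 concrete, the following is a minimal sketch of the two aggregation rules (our illustration, not the authors' released code; the per-character softmax interface and the mean-of-maxima word confidence are assumptions):

```python
import torch

def word_confidence(probs):
    # probs: (T, C) per-character softmax outputs of one expert for one image.
    # Word-level confidence: aggregate the per-character maxima (here, their mean).
    return probs.max(dim=-1).values.mean()

def word_level_ensemble(p_aware, p_free):
    # Keep the *whole* prediction of whichever expert is more confident overall;
    # this stays robust when the two decoded words are misaligned.
    return p_aware if word_confidence(p_aware) >= word_confidence(p_free) else p_free

def char_level_ensemble(p_aware, p_free):
    # Pick each position independently by maximum softmax probability;
    # this implicitly assumes the two predictions are position-aligned.
    mask = p_aware.max(dim=-1).values >= p_free.max(dim=-1).values  # shape (T,)
    return torch.where(mask.unsqueeze(-1), p_aware, p_free)

# Toy usage: T = 4 decoding steps over a C = 10 symbol alphabet.
p_aware = torch.softmax(torch.randn(4, 10), dim=-1)
p_free = torch.softmax(torch.randn(4, 10), dim=-1)
print(word_level_ensemble(p_aware, p_free).argmax(-1))
print(char_level_ensemble(p_aware, p_free).argmax(-1))
```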
**Computational Cost.** Fig. 8 summarizes the accuracy on the **Real** dataset and the computational costs (_e.g_., FLOPs and the number of parameters). While applying our method consistently improves performance regardless of the model architecture, we observe that our method requires a negligible amount of additional computation. The main reason is that we only require an additional classifier, which adds a negligible number of weight parameters. For example, FLOPs increase by about 1% and parameters by 3\(\sim\)7% when applying our method to ViTSTR [2].
## 6 Conclusions
This paper investigates the character-level long-tailed distribution in STR, which has previously been overlooked. Our empirical analysis indicates that improving both contextual and visual representation is crucial for improving STR on languages whose characters follow a long-tailed distribution. Based on this finding, we propose a Context-Aware and Free Experts Network (CAFE-Net), which trains two different experts to focus on learning contextual information and visual representation, respectively.
Figure 8: Comparisons on accuracy and computational costs. We use **Real** dataset for the analysis.
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline
**Lang.** & **Method** & **Real** & **Real\({}_{Easy}\)** & **Real\({}_{Hard}\)** \\ \hline \multirow{2}{*}{_Kr_} & Char-level & 80.62 & 81.20 & 59.07 \\ & Word-level & **81.35** & **81.75** & **66.68** \\ \hline \multirow{2}{*}{_Cn_} & Char-level & 45.36 & 47.42 & 29.14 \\ & Word-level & **47.67** & **48.09** & **44.34** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on the ensemble technique using the **Real** Korean and Chinese datasets. The TRBA model [3] is utilized.
Figure 7: (a) We observe that CAFE-Net selects more predictions from the context-aware branch for WS\({}_{test}\) and **Real\({}_{Easy}\)**, while selecting more predictions from the context-free branch for **Real\({}_{Hard}\)** and RS\({}_{test}\). CA and CF indicate the context-aware branch and the context-free branch, respectively. (b) We show the proportion of _many_, _medium_, and _few_ characters in each test set.
To aggregate the two different experts, we propose the confidence ensemble to improve STR performance on all of the _many_, _medium_, and _few_ characters. Extensive experiments show that we achieve state-of-the-art performance on languages exhibiting long-tailed distributions at the character level. We believe that our work will inspire future researchers to improve STR on languages with numerous characters, which is relatively under-explored compared to STR on English.
| Despite recent remarkable improvements in scene text recognition (STR), a majority of the studies focused mainly on the English language, which includes only a small number of characters. However, STR models show a large performance degradation on languages with a large number of characters (e.g., Chinese and Korean), especially on characters that rarely appear due to the long-tailed distribution of characters in such languages. To address this issue, we conducted an empirical analysis using synthetic datasets with different character-level distributions (e.g., balanced and long-tailed distributions). While increasing a substantial number of tail classes without considering the context helps the model to correctly recognize characters individually, training with such a synthetic dataset interferes with the model learning the contextual information (i.e., relations among characters), which is also important for predicting the whole word. Based on this motivation, we propose a novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1) a context-aware expert |
2309.07885 | Generating Sets and Algebraic Properties of Pure Mapping Class Groups of
Infinite Graphs | We completely classify the locally finite, infinite graphs with pure mapping
class groups admitting a coarsely bounded generating set. We also study
algebraic properties of the pure mapping class group: We establish a semidirect
product decomposition, compute first integral cohomology, and classify when
they satisfy residual finiteness and the Tits alternative. These results
provide a framework and some initial steps towards quasi-isometric and
algebraic rigidity of these groups. | George Domat, Hannah Hoganson, Sanghoon Kwak | 2023-09-14T17:31:35 | http://arxiv.org/abs/2309.07885v1 | # Generating sets and algebraic properties of pure mapping class groups of infinite graphs
###### Abstract.
We completely classify the locally finite, infinite graphs with pure mapping class groups admitting a coarsely bounded generating set. We also study algebraic properties of the pure mapping class group: We establish a semidirect product decomposition, compute first integral cohomology, and classify when they satisfy residual finiteness and the Tits alternative. These results provide a framework and some initial steps towards quasi-isometric and algebraic rigidity of these groups.
## 1. Introduction
A recent surge of interest in mapping class groups of infinite-type surfaces has prompted the emergence of a "big" analog of \(\mathrm{Out}(F_{n})\) as well. Algom-Kfir-Bestvina [1] propose that the appropriate analog is the group of self proper homotopy equivalences up to proper homotopy of a locally finite, infinite graph.
One main difficulty of studying these "big" groups is that the classical approach of geometric group theory is not applicable. In particular, the mapping class groups of infinite-type surfaces and those of locally finite, infinite graphs are generally not finitely generated, and not even compactly generated. Fortunately, they are still Polish groups (separable and completely metrizable topological groups), to which Rosendal provides a generalized geometric group theoretic approach. The role of a finite or compact generating set is replaced with a _coarsely bounded_ (CB) generating set. For example, a group that admits a coarsely bounded generating set has a well-defined quasi-isometry type [13, Theorem 1.2 and Proposition 2.72], and a coarsely bounded group is quasi-isometric to a point. A group with a coarsely bounded neighborhood around the identity is said to be _locally coarsely bounded_, which is equivalent to having a well-defined _coarse equivalence_ type, and is necessary to have a coarsely bounded generating set. Using this framework, Mann-Rafi [12] gave a classification of (tame) surfaces whose mapping class groups are coarsely bounded, locally coarsely bounded, and generated by a coarsely bounded set. This established a first step toward studying the coarse geometry of mapping class groups of infinite-type surfaces. Recently, Thomas Hill [10] gave a complete classification of surfaces that have _pure_ mapping class groups with the aforementioned coarse geometric properties, without the tameness condition.
In the authors' previous work [11], we gave a complete classification of graphs with coarsely bounded, and locally coarsely bounded, pure mapping class groups, the subgroup of the mapping class group fixing the ends of the graph pointwise. In this paper, we provide the complete classification of infinite graphs with CB-generated pure mapping class groups, fulfilling our goal to provide a foundation for studying the coarse geometry of these groups. In the following statement, \(E\) refers to the space of ends of the graph \(\Gamma\) and \(E_{\ell}\) is the subset of ends accumulated by loops.
**Theorem A**.: _Let \(\Gamma\) be a locally finite, infinite graph. Then its pure mapping class group, \(\mathrm{PMap}(\Gamma)\), is CB generated if and only if either \(\Gamma\) is a tree, or satisfies both:_
1. \(\Gamma\) _has finitely many ends accumulated by loops, and_
2. _there is no accumulation point in \(E\setminus E_{\ell}\)._
**Remark 1.1**.: Alternatively, we have a constructive description: \(\mathrm{PMap}(\Gamma)\) is CB generated if and only if \(\Gamma\) can be written (not necessarily uniquely) as a finite wedge sum of the four graphs from Figure 1.
Table 1 illustrates the complete classification of graphs with CB, locally CB, and CB-generated pure mapping class groups. Observe the trend that \(\mathrm{PMap}(\Gamma)\) admits more complicated coarse geometric properties when \(\Gamma\) has more complicated geometry.
The main tool used to prove Theorem A is the following semidirect decomposition of the pure mapping class group (reminiscent of [1, Corollary 4] in the surface setting):
**Theorem B**.: _Let \(\Gamma\) be a locally finite graph. Let \(\alpha=\max\{0,|E_{\ell}(\Gamma)|-1\}\) for \(|E_{\ell}(\Gamma)|<\infty\) and \(\alpha=\aleph_{0}\) otherwise. Then we have the following short exact sequence,_
\[1\longrightarrow\overline{\mathrm{PMap}_{c}(\Gamma)}\longrightarrow\mathrm{ PMap}(\Gamma)\longrightarrow\mathbb{Z}^{\alpha}\longrightarrow 1\]
_which splits. In particular, we have \(\mathrm{PMap}(\Gamma)=\overline{\mathrm{PMap}_{c}(\Gamma)}\rtimes\mathbb{Z}^{\alpha}\)._
Here, \(\overline{\mathrm{PMap}_{c}(\Gamma)}\) is the closure of the group of compactly supported mapping classes and \(\mathbb{Z}^{\alpha}\) is generated by commuting loop shifts.
As a corollary, we compute the rank of the first integral cohomology of \(\mathrm{PMap}(\Gamma)\). This allows us to see that the number of ends accumulated by loops of a graph \(\Gamma\) is an algebraic invariant of \(\mathrm{PMap}(\Gamma)\).
**Corollary C**.: _For every locally finite, infinite graph \(\Gamma\),_
\[\mathrm{rk}\left(H^{1}(\mathrm{PMap}(\Gamma);\mathbb{Z})\right)=\begin{cases}0& \text{if }|E_{\ell}|\leq 1,\\ n-1&\text{if }2\leq|E_{\ell}|=n<\infty,\\ \aleph_{0}&\text{otherwise}.\end{cases}\]
We also show that \(\mathrm{PMap}(\Gamma)\) distinguishes graphs of finite rank from graphs of infinite rank. Recall a group is _residually finite_ if it can be embedded into a direct product of finite groups.
**Theorem D**.: \(\mathrm{PMap}(\Gamma)\) _is residually finite if and only if \(\Gamma\) has finite rank._
Figure 1. Every graph with a CB-generated pure mapping class group can be written as a finite wedge sum of these four graphs. From left to right these are: a single loop, a single ray, a Loch Ness monster graph, and a Millipede monster graph.
A group satisfies the _Tits Alternative_ if every subgroup is either virtually solvable or contains a nonabelian free group. Interestingly, it is exactly the graphs with \(\mathrm{PMap}(\Gamma)\) residually finite that satisfy the Tits Alternative.
**Theorem E**.: \(\mathrm{PMap}(\Gamma)\) _satisfies the Tits Alternative if and only if \(\Gamma\) has finite rank._
These three results are steps towards determining when the isomorphism type of \(\mathrm{PMap}(\Gamma)\) determines the graph \(\Gamma\), as in the surface case [1].
If \(\Gamma\) is the infinite rank graph with a single end (the Loch Ness monster graph) and \(\Gamma^{\prime}\) is the wedge sum of \(\Gamma\) with a single ray, then the groups \(\mathrm{PMap}(\Gamma)\) and \(\mathrm{PMap}(\Gamma^{\prime})\) inject into \(\mathrm{Out}(F_{\infty})\) and \(\mathrm{Aut}(F_{\infty})\), respectively, by [1, Theorem 3.1 and Lemma 3.2]. Thus we immediately get the following corollary. We note that one can instead prove this directly, e.g. see [3].
**Corollary F**.: _For \(F_{\infty}\), the free group on a countably infinite set, \(\mathrm{Aut}(F_{\infty})\) and \(\mathrm{Out}(F_{\infty})\) are not residually finite and do not satisfy the Tits alternative._
**Comparison with Surfaces.** The statement of Theorem B is exactly the same as for pure mapping class groups of surfaces, seen in Aramayona-Patel-Vlamis [1]. Although the proof we give is similar in spirit as well, we have to make use of different tools. In [1] the authors make use of the _homology of separating curves_ on a surface and build an isomorphism between the first cohomology of the pure mapping class group and this homology group. For graphs, we do not have any curves to take advantage of. Instead we use partitions of the space of ends accumulated by loops. In order to make
\begin{table}
\begin{tabular}{c|c c|c c c} \hline \hline \(t=\) the number of components of \(\Gamma\setminus\Gamma_{c}\) & \multicolumn{2}{c|}{Finite rank \(r\)} & \multicolumn{3}{c}{Infinite rank with \(|E_{\ell}|=:n\)} \\ with infinite end spaces & \(r=0\) & \(r\in[1,\infty)\) & \(n=1\) & \(n\in[2,\infty)\) & \(n=\infty\) \\ \hline \(t=0\) & CB & CB-generated & CB & CB-generated & not locally CB \\ \(t\in[1,\infty)\) & CB & locally CB & locally CB & locally CB & not locally CB \\ \(t=\infty\) & \multicolumn{2}{c|}{N/A} & not locally CB & not locally CB & not locally CB \\ \hline \hline \end{tabular}
\end{table}
Table 1. The classification of graphs with CB, locally CB, and CB-generated pure mapping class groups; each cell records the strongest of the three properties that holds for every graph in that class.
this precise and give it an algebraic structure, we make use of the group of locally constant integral functions on \(E_{\ell}(\Gamma)\), i.e., the zeroth Čech cohomology of \(E_{\ell}(\Gamma)\), denoted as \(\mathring{C}(E_{\ell}(\Gamma))\). On a surface, any separating simple closed curve determines a partition of the end space. We can use this to show that the first cohomology groups of pure mapping class groups of graphs and surfaces are often in fact _naturally_ isomorphic. This also gives a slightly alternate proof of the main results in [1].
**Corollary G**.: _Let \(S\) be an infinite-type surface of genus at least one and \(\Gamma\) a locally finite, infinite graph. If \(E_{g}(S)\) is homeomorphic to \(E_{\ell}(\Gamma)\), then both \(H^{1}(\operatorname{PMap}(S);\mathbb{Z})\) and \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) are isomorphic to \(\mathring{C}(E_{g}(S))\cong\mathring{C}(E_{\ell}(\Gamma))\)._
### Rigidity Questions
The results above fit naturally into a general question about (pure) mapping class groups of infinite graphs. Namely: "How much does the group \(\operatorname{PMap}(\Gamma)\) determine the graph \(\Gamma\)?" One can obtain more concrete questions by considering certain types of rigidity. We will focus on _algebraic_ and _quasi-isometric_ rigidity. In the finite-type setting, mapping class groups of surfaces and \(\operatorname{Out}(F_{n})\) are known to exhibit strong rigidity properties. Various results starting with Ivanov [14] (see also [14, 15, 16, 17]) establish strong forms of algebraic rigidity and Behrstock-Kleiner-Minsky-Mosher [1] establish quasi-isometric rigidity for \(\operatorname{Map}(S)\). For \(\operatorname{Out}(F_{n})\) we also have strong forms of algebraic rigidity from the work of Farb-Handel [13] building on [15, 16] (see also [12]). Quasi-isometric rigidity for \(\operatorname{Out}(F_{n})\) is still unknown.
For infinite-type surfaces, the work of Bavard-Dowdall-Rafi [1] established a strong form of algebraic rigidity a la Ivanov (see also [10]). The question of quasi-isometric rigidity is still open, but Mann-Rafi [16] give a classification of which mapping class groups of tame infinite-type surfaces have a well-defined quasi-isometry type and which of those are trivial. This allows one to begin to distinguish between some of the mapping class groups (see also [11]).
One can ask the same rigidity questions for infinite graphs. The picture becomes less clear than in the surface case. In particular, trees fail to have algebraic rigidity for the _pure_ mapping class group, as they all have trivial pure mapping class group. Failure is also present for the full mapping class group. Let \(T\) be the regular trivalent tree and let \(T^{\prime}\) be the wedge sum of \(T\) with a single ray. Note that \(E(T)=\mathcal{C}\), a Cantor set, and \(E(T^{\prime})=\mathcal{C}\sqcup\{*\}\), a Cantor set together with a single isolated point. Now we have that \(\operatorname{Map}(T)=\operatorname{Homeo}(\mathcal{C})\) and \(\operatorname{Map}(T^{\prime})=\operatorname{Homeo}(\mathcal{C}\sqcup\{*\})\). However, these two groups are isomorphic, as any homeomorphism fixes the extra end \(*\) of \(T^{\prime}\). There are even more complicated examples of this failure of algebraic rigidity for mapping class groups of trees that come from work on Boolean algebras by McKenzie [14] answering a rigidity conjecture of Monk [15].
The results in this paper allow one to ask several natural rigidity questions for the pure mapping class groups of infinite graphs. We will restrict to some nice classes of graphs in order to state concrete questions. All of the following families of graphs are CB generated by Theorem A and hence have a well-defined quasi-isometry type. Let \(\Gamma_{n}\) denote the graph with exactly \(n\) ends, each of which is accumulated by loops.
**Question 1.2**.: Let \(n,m\geq 2\). If \(\operatorname{PMap}(\Gamma_{n})\) is quasi-isometric to \(\operatorname{PMap}(\Gamma_{m})\), then does \(n=m\)?
By Corollary C we do know that \(\operatorname{PMap}(\Gamma_{n})\) is algebraically isomorphic to \(\operatorname{PMap}(\Gamma_{m})\) if and only if \(n=m\). We can also use the fact that \(\operatorname{PMap}(\Gamma_{1})\) is CB to see that \(\operatorname{PMap}(\Gamma_{1})\) is not quasi-isometric to \(\operatorname{PMap}(\Gamma_{n})\) for any \(n\neq 1\). However, the general question is still open. In the authors' previous work [10], we computed asymptotic dimension for
all of these groups. However, it is infinite for \(n>1\). Therefore, in order to answer this question one would need to study and/or develop other "big" quasi-isometry invariants.
Instead of comparing the effect of changing the number of ends accumulated by loops, one could ask the same question for rays. Namely, let \(\Gamma_{n,r}\) denote the graph with \(n\) ends accumulated by loops and \(r\) rays. We start by asking for distinguishing features of "no ray" versus "one ray."
**Question 1.3**.: Is \(\operatorname{PMap}(\Gamma_{n,0})\) quasi-isometric to \(\operatorname{PMap}(\Gamma_{n,1})\)?
In fact, here we do not even know algebraic rigidity.
**Question 1.4**.: Is \(\operatorname{PMap}(\Gamma_{n,0})\) isomorphic to \(\operatorname{PMap}(\Gamma_{n,1})\)?
The other large family of graphs with CB-generated pure mapping class groups consists of the finite-type ones. Let \(\Omega_{n,r}\) denote the graph of finite rank \(n\) with \(r<\infty\) rays attached. We know that no \(\operatorname{PMap}(\Omega_{n,r})\) is isomorphic to any \(\operatorname{PMap}(\Gamma_{m})\) by using either residual finiteness (Theorem D) or the Tits alternative (Theorem E). We do not know if any of them are quasi-isometric. Note that \(\operatorname{PMap}(\Omega_{n,r})\) is always finitely generated, but this does not preclude it from being quasi-isometric to an uncountable group.
**Question 1.5**.: Is \(\operatorname{PMap}(\Omega_{m,r})\) ever quasi-isometric to \(\operatorname{PMap}(\Gamma_{n})\), for \(m,r,n>1\)?
### Outline
In Section 2, we give background on mapping class groups of infinite graphs, examples of elements in the pure mapping class group, and coarse geometry of groups. In Section 3, we prove our semidirect product decomposition, Theorem B. We also prove Corollary C in Section 3.5. By exploiting the semidirect decomposition of \(\mathrm{PMap}(\Gamma)\), we prove the CB-generation classification, Theorem A, in Section 4. In Sections 5 and 6, we finish by proving the residual finiteness characterization (Theorem D) and the Tits alternative characterization (Theorem E).
### Acknowledgments
Thank you to Mladen Bestvina for providing an idea of the proof of Lemma 3.14 and the suggestion to use the zeroth Čech cohomology to prove Lemma 3.18. We also thank Priyam Patel for helpful conversations toward Section 5 and Theorem D, along with answering questions regarding [20] and [1]. We also thank Camille Horbez for clarifying the proof of Fact 6.6.
The first author was supported in part by the Fields Institute for Research in Mathematical Sciences, NSF DMS-1745670, and NSF DMS-2303262. The second author was supported by NSF DMS-2303365. The third author acknowledges the support from the University of Utah Graduate Research Fellowship.
###### Contents
* 2 Preliminaries
* 2.1 Mapping class groups of infinite graphs
* 2.2 Elements of \(\operatorname{PMap}(\Gamma)\)
* 2.2.1 Loop swaps
* 2.2.2 Word maps
* 2.2.3 Loop shifts
* 2.3 Coarse geometry of groups
* 3 Semidirect product structure and cohomology
* 3.1 The case \(|E_{\ell}|\leq 1\)
* 3.2 Flux maps
* 3.3 Flux zero maps
* 3.4 Space of flux maps
## 2. Preliminaries
### Mapping class groups of infinite graphs
Let \(\Gamma\) be a locally finite, infinite graph. Informally, an _end_ of a graph is a way to travel to infinity in the graph. The space of ends (or, the end space), denoted by \(E(\Gamma)\), is defined as:
\[E(\Gamma)=\varprojlim_{K\subset\Gamma}\pi_{0}(\Gamma\setminus K),\]
where \(K\) runs over compact sets of \(\Gamma\) in the inverse limit. Then each element of \(E(\Gamma)\) is called an **end** of \(\Gamma\). An end \(e\) of \(\Gamma\) is said to be **accumulated by loops** if the sequence of complementary components in \(\Gamma\) corresponding to \(e\) consists only of infinite rank graphs. Colloquially, \(e\) is accumulated by loops if one continues to see loops along the way to \(e\). We denote by \(E_{\ell}(\Gamma)\) the set of ends of \(\Gamma\) accumulated by loops. Note \(E_{\ell}(\Gamma)\) is a closed subset of \(E(\Gamma)\), and \(E(\Gamma)\) can be realized as a closed subset of a Cantor set (hence so is \(E_{\ell}(\Gamma)\)). We say that the **characteristic triple** of \(\Gamma\) is the triple \((r(\Gamma),E(\Gamma),E_{\ell}(\Gamma))\), where \(r(\Gamma)\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\) is the rank of \(\pi_{1}(\Gamma)\).
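For intuition, the inverse limit can be probed one compact set at a time. The following sketch (a toy computation of ours, not from the paper) truncates the biinfinite line graph, removes growing metric balls about a basepoint, and counts complementary components; the count stabilizes at \(2\), witnessing the two ends of the line.

```python
import networkx as nx

R = 20
# Finite truncation of the biinfinite line graph, on the vertex set {-R, ..., R}.
G = nx.path_graph(range(-R, R + 1))
x0 = 0
for n in (1, 4, 8):
    K = {v for v in G if abs(v - x0) <= n}        # a compact set: the ball B_n(x0)
    complement = G.subgraph(set(G.nodes) - K)
    # pi_0 of the complement at this stage; it stabilizes at 2 components,
    # one for each end of the line (as long as n < R).
    print(n, nx.number_connected_components(complement))
```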
Now we define the mapping class group of a locally finite, infinite graph \(\Gamma\). Recall that a map is **proper** if the pre-image of every compact set is compact.
**Definition 2.1**.: [1] The **mapping class group** of \(\Gamma\), denoted \(\operatorname{Map}(\Gamma)\), is the group of proper homotopy classes of proper homotopy equivalences of \(\Gamma\). The **pure mapping class group**, denoted \(\operatorname{PMap}(\Gamma)\), is the closed subgroup consisting of maps that fix the ends of \(\Gamma\) pointwise. More precisely, it is the kernel of the action of \(\operatorname{Map}(\Gamma)\) on the end space \((E(\Gamma),E_{\ell}(\Gamma))\) by homeomorphisms, hence fitting into the following short exact sequence:
\[1\longrightarrow\operatorname{PMap}(\Gamma)\longrightarrow\operatorname{ Map}(\Gamma)\longrightarrow\operatorname{Homeo}(E,E_{\ell})\longrightarrow 1\]
When \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) is nonempty and compact, we can further decompose \(\operatorname{PMap}(\Gamma)\) into subgroups of _core maps_ and of _ray maps_. To state the result, we need to introduce a few concepts.
**Definition 2.2**.: Let \(\Gamma\) be a locally finite, infinite graph. Denote by \(\Gamma_{c}\) the **core graph** of \(\Gamma\), the smallest connected subgraph of \(\Gamma\) that contains all immersed loops in \(\Gamma\). When \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) is nonempty, pick \(e_{0}\in E(\Gamma)\setminus E_{\ell}(\Gamma)\) and denote by \(\Gamma_{c}^{*}\) the subgraph consisting of \(\Gamma_{c}\) and a choice of embedded ray in \(\Gamma\) limiting to \(e_{0}\) such that the ray intersects \(\Gamma_{c}\) in exactly one point.
Define \(\pi_{1}(\Gamma_{c}^{*},e_{0})\) to be the set of proper homotopy equivalence classes of lines in \(\Gamma_{c}^{*}\), both ends of which limit to \(e_{0}\). We endow it with a group structure by concatenation. This group is naturally isomorphic to \(\pi_{1}(\Gamma_{c}^{*},p)\) for any choice of basepoint \(p\in\Gamma_{c}^{*}\). Finally, define \(\mathcal{R}\) as the group of maps \(h:E(\Gamma)\rightarrow\pi_{1}(\Gamma_{c}^{*},e_{0})\) such that
1. \(h(e_{0})=1\), and
2. \(h\) is locally constant,
where the group operation is the pointwise multiplication in \(\pi_{1}(\Gamma_{c}^{*},e_{0})\).
We have the following decomposition of \(\mathrm{PMap}(\Gamma)\):
**Proposition 2.3** ([1, Corollary 3.9]).: _Let \(\Gamma\) be a locally finite, infinite graph with \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) nonempty and compact. Then_
\[\mathrm{PMap}(\Gamma)\cong\mathcal{R}\rtimes\mathrm{PMap}(\Gamma_{c}^{*}).\]
_In particular, when \(\Gamma\) has finite rank \(n\geq 0\) and finitely many ends, say \(|E(\Gamma)|=e\), then_
\[\mathrm{PMap}(\Gamma)\cong\begin{cases}\mathrm{Out}(F_{n}),&\text{if $e=0$,}\\ F_{n}^{e-1}\rtimes\mathrm{Aut}(F_{n}),&\text{if $e\geq 1$.}\end{cases}\]
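For instance (our specialization of the finite-type formula, for illustration): a graph of finite rank \(n\geq 1\) with a single end (\(e=1\)) has
\[\mathrm{PMap}(\Gamma)\cong F_{n}^{0}\rtimes\mathrm{Aut}(F_{n})=\mathrm{Aut}(F_{n}),\]
and in the smallest case \(n=1\) this is \(\mathrm{Aut}(F_{1})\cong\mathbb{Z}/2\mathbb{Z}\), a finite group.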
**Remark 2.4**.: Any time \(K\) is a connected, compact subgraph of a locally finite, infinite graph \(\Gamma\), we use \(\mathrm{PMap}(K)\) to refer to the group of proper homotopy equivalences of \(K\) that fix \(\partial K\) pointwise up to proper homotopy fixing \(\partial K\). This group is naturally isomorphic to the group \(\mathrm{PMap}(\tilde{K})\) where \(\tilde{K}\) is the graph \(K\) together with a ray glued to each point in \(\partial K\). Applying the above proposition we see that \(\mathrm{PMap}(K)\) is always of the form \(F_{n}^{e-1}\rtimes\mathrm{Aut}(F_{n})\) for some \(n\) and \(e\) because \(K\) is always a proper subset of \(\Gamma\), so \(\partial K\) is nonempty.
The pure mapping class group \(\mathrm{PMap}(\Gamma)\) records the internal symmetries of \(\Gamma\). Contractible graphs (trees) have no internal symmetries. This follows from the work of Ayala-Dominguez-Marquez-Quintero [1]. They give a proper homotopy equivalence classification of locally finite, infinite graphs.
**Theorem 2.5** ([1, Theorem 2.7]).: _Let \(\Gamma\) and \(\Gamma^{\prime}\) be two locally finite graphs of the same rank. A homeomorphism of end spaces \((E(\Gamma),E_{\ell}(\Gamma))\to(E(\Gamma^{\prime}),E_{\ell}(\Gamma^{\prime}))\) extends to a proper homotopy equivalence \(\Gamma\to\Gamma^{\prime}\). If \(\Gamma\) and \(\Gamma^{\prime}\) are trees, then this extension is unique up to proper homotopy._
The second statement of Theorem 2.5 implies the following.
**Proposition 2.6**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(\pi_{1}(\Gamma)=1\). Then \(\mathrm{PMap}(\Gamma)=1\)._
In [1] the authors endow \(\mathrm{Map}(\Gamma)\) with the compact-open topology and show that this gives \(\mathrm{Map}(\Gamma)\), and hence \(\mathrm{PMap}(\Gamma)\), the structure of a Polish group. A neighborhood basis about the identity for the topology is given by sets of the form
\[\mathcal{V}_{K}:=\{[f]\in\mathrm{Map}(\Gamma)\mid\exists f^{\prime}\in[f]\text{ s.t. }f^{\prime}|_{K}=\mathrm{id}\text{ and }f^{\prime}\text{ preserves the complementary components of }K\text{ setwise}\}\]
where \(K\) is a compact subset of \(\Gamma\).
Recall the **support** of a continuous map \(\phi:X\to X\) is the closure of the set of points \(x\in X\) such that \(\phi(x)\neq x\). The group of compactly supported mapping classes, denoted by \(\mathrm{PMap}_{c}(\Gamma)\), is the subgroup of \(\mathrm{PMap}(\Gamma)\) consisting of classes that have a compactly supported representative. Its closure in this topology is denoted by \(\overline{\mathrm{PMap}_{c}(\Gamma)}\) and it is a closed (hence Polish) subgroup of \(\mathrm{PMap}(\Gamma)\).
As proper homotopy equivalences are not necessarily injective, unlike homeomorphisms, we need the following alternate notion of support for a proper homotopy equivalence.
**Definition 2.7**.: We say that \([f]\in\operatorname{Map}(\Gamma)\) is **totally supported** on \(K\subset\Gamma\) if there is a representative \(f^{\prime}\in[f]\) so that \(f^{\prime}(K)=K\) and \(f^{\prime}|_{\Gamma\setminus K}=\operatorname{id}\).
To see how a proper homotopy equivalence can have different support and total support, consider a rose graph with two loops labeled by \(a_{1}\) and \(a_{2}\). Then a (proper) homotopy equivalence mapping \(a_{1}\) to \(a_{1}a_{2}\) which is the identity elsewhere is supported on \(a_{1}\), but not totally supported on \(a_{1}\). It is totally supported on \(a_{1}\cup a_{2}\). This is in contrast with homeomorphisms on surfaces, where \(f\) is supported on \(K\) if and only if \(f\) is totally supported on \(K\).
As mapping class groups of graphs are independent of the proper homotopy equivalence representative of the graph, it is often useful to consider a 'standard' representative within a proper homotopy equivalence class of graphs.
**Definition 2.8**.: A locally finite graph, \(\Gamma\), is in **standard form** if \(\Gamma\) is a tree with loops attached at some of the vertices. We endow \(\Gamma\) with the path metric that assigns each edge length \(1\).
We also give special names to specific graphs that we will reference often.
**Definition 2.9**.: The **Loch Ness Monster** graph is the graph with characteristic triple \((\infty,\ \{*\},\ \{*\})\). The **Millipede Monster** graph is the graph with characteristic triple \((\infty,\ \{0\}\cup\{\frac{1}{n}\mid n\in\mathbb{Z}_{>0}\},\ \{0\})\). A **monster** graph refers to either one of these.
### Elements of \(\operatorname{PMap}(\Gamma)\)
Here we give a brief treatment of elements of \(\operatorname{PMap}(\Gamma)\). For more detailed definitions with examples, see [1, Section 3].
#### 2.2.1. Loop swaps
A loop swap is an order-\(2\) proper homotopy equivalence induced from a transposition automorphism of a free group. It is totally supported on a compact set. More precisely, we define it as follows.
**Definition 2.10**.: Let \(\Gamma\) be a locally finite graph in standard form with \(\operatorname{rk}\Gamma\geq 2\). Let \(A\) and \(B\) be disjoint finite subsets of loops such that \(|A|=|B|\). Then the **loop swap**\(\mathcal{L}(A,B)\) is a proper homotopy equivalence induced from the group isomorphism on \(\pi_{1}(\Gamma)\) swapping the free factors corresponding to \(A\) and \(B\).
More concretely, pick a basepoint \(p\in\Gamma\) and collapse each maximal tree of the subgraphs corresponding to \(A\) and \(B\) in \(\pi_{1}(\Gamma,p)\). This results in two roses of \(|A|=|B|\) petals. Then swap the two roses, followed by blowing up each rose to the original subgraph. Define \(\mathcal{L}(A,B)\) as the composition of these three maps. Note \(\mathcal{L}(A,B)\in\operatorname{PMap}_{c}(\Gamma)\).
As mentioned above, loop swaps of a graph correspond to the transposition automorphisms of the free group, which form part of a generating set for \(\mathrm{Aut}(F_{n})\) (see Section 4).
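For example (an illustration of ours): on the rose with two petals \(a_{1}\) and \(a_{2}\), the loop swap \(\mathcal{L}(\{a_{1}\},\{a_{2}\})\) induces the transposition
\[a_{1}\mapsto a_{2},\qquad a_{2}\mapsto a_{1}\]
on \(\pi_{1}(\Gamma,p)\cong F_{2}\), and its square is properly homotopic to the identity.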
#### 2.2.2. Word maps
Next, we introduce word maps, which are the most diverse kind of mapping classes among the three kinds of maps introduced in this section.
**Definition 2.11**.: Let \(\Gamma\) be a locally finite graph with \(\operatorname{rk}\Gamma\geq 1\), with a base point \(p\in\Gamma\). Let \(w\in\pi_{1}(\Gamma,p)\) and \(I\) be an interval in an edge of \(\Gamma\). Then the **word map**\(\varphi_{(w,I)}\) is a proper homotopy equivalence that maps \(I\) to a path in \(\Gamma\) determined by \(w\in\pi_{1}(\Gamma,p)\) and is the identity outside of \(I\).
See [1, Section 3.3] for a careful construction of these maps. Note \(\varphi_{(w,I)}\) is supported on \(I\), but in general not _totally_ supported on \(I\). Rather, it is totally supported on the compact set that is the union of \(I\) with a path in \(\Gamma\) determined by \(w\in\pi_{1}(\Gamma,p)\).
The following two properties of word maps will be important in Section 4.
**Lemma 2.12** ([14, Lemma 3.5]).: _If \(I\) is contained in an edge of \(\Gamma\setminus\Gamma_{c}\) and \(w_{1},w_{2}\) are elements in \(\pi_{1}(\Gamma,p)\), then_
\[[\varphi_{(w_{1},I)}\circ\varphi_{(w_{2},I)}]=[\varphi_{(w_{1}w_{2},I)}]\]
_in \(\operatorname{PMap}(\Gamma)\)._
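For example (an immediate instance of Lemma 2.12): taking \(w_{2}=w_{1}^{-1}\) shows that
\[[\varphi_{(w,I)}]^{-1}=[\varphi_{(w^{-1},I)}]\]
in \(\operatorname{PMap}(\Gamma)\), so \(w\mapsto[\varphi_{(w,I)}]\) is a homomorphism from \(\pi_{1}(\Gamma,p)\) to \(\operatorname{PMap}(\Gamma)\) for any fixed interval \(I\) in an edge of \(\Gamma\setminus\Gamma_{c}\).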
**Lemma 2.13** ([14, Lemma 3.10]).: _Let \(I\) be an interval of \(\Gamma\) which is outside of \(\Gamma_{c}\), and \(\psi\in\operatorname{PMap}(\Gamma)\) be totally supported on a compact subgraph of \(\Gamma_{c}\). Then_
\[\psi\circ[\varphi_{(w,I)}]\circ\psi^{-1}=[\varphi_{(\psi_{*}(w),I)}].\]
In particular, we can use Lemma 2.13 when \(\psi\) is a loop swap.
#### 2.2.3. Loop shifts
Loop shifts are to graphs what handle shifts, introduced in Patel-Vlamis [11], are to surfaces. We first define a loop shift on the standard form of the graph \(\Lambda\), the graph with characteristic triple \((\infty,\{e_{-},e_{+}\},\{e_{-},e_{+}\})\) (see Definition 2.8). Embed \(\Lambda\) in \(\mathbb{R}^{2}\) by identifying the maximal tree with the \(x\)-axis such that \(e_{\pm}\) is identified with \(\pm\infty\) of the \(x\)-axis, and each vertex is identified with an integer point on the \(x\)-axis. Identify the loops with the circles \(\{(x-n)^{2}+(y-\frac{1}{4})^{2}=\frac{1}{16}\}_{n\in\mathbb{Z}}\). Note these circles are tangent to the integer points \(\{(n,0)\}_{n\in\mathbb{Z}}\), thus representing the loops in \(\Lambda\). Now define the **primitive loop shift**\(h\) on \(\Lambda\) as the horizontal translation \(x\mapsto x+1\). One can also omit some loops from \(\Lambda\) and define the loop shift to avoid those loops. For a more general definition, see [14, Section 3.4].
**Definition 2.14**.: Now we define the loop shift on a locally finite, infinite graph \(\Gamma\) with \(|E_{\ell}|\geq 2\). Pick two distinct ends \(e_{-},e_{+}\in E_{\ell}(\Gamma)\) accumulated by loops. By considering a standard form of \(\Gamma\), we can find an embedded ladder graph \(\Lambda\) in \(\Gamma\) such that \(e_{\pm}\) is identified with \(e_{\pm}\) of \(\Lambda\), respectively. Now define the **primitive loop shift**\(h\) on \(\Gamma\) associated to \((e_{-},e_{+})\) as the proper homotopy equivalence induced from the primitive loop shift on the embedded ladder graph \(\Lambda\). For the rest of the graph, define \(h\) to be the identity outside of the \(\frac{1}{2}\)-neighborhood of \(\Lambda\) and interpolate between the shift and the identity on the \(\frac{1}{2}\)-neighborhood.
Finally, a proper homotopy equivalence \(f\) on \(\Gamma\) is a **loop shift** if \(f=h^{n}\) for some primitive loop shift \(h\) and \(n\in\mathbb{Z}\setminus\{0\}\).
### Coarse geometry of groups
**Definition 2.15**.: Let \(A\) be a subset of a topological group \(G\). Then \(A\) is **coarsely bounded (CB)** in \(G\) if for every continuous isometric action of \(G\) on a metric space, every orbit is bounded.
We say a group is **CB-generated** if it has an algebraic generating set that is CB. Similarly, a group is **locally CB** if it admits a CB neighborhood of the identity. In Section 4, we will construct a CB-generating set for the pure mapping class groups of certain graphs, proving the if direction of Theorem A. On the other hand, we have previously classified which graphs have CB or locally CB mapping class groups:
**Theorem 2.16** ([14, Theorem A, D]).: _Let \(\Gamma\) be a locally finite, infinite graph. Then its pure mapping class group \(\operatorname{PMap}(\Gamma)\) is coarsely bounded if and only if one of the following holds:_
* \(\Gamma\) _has rank zero, or_
* \(\Gamma\) _has rank one, and has one end, or_
* \(\Gamma\) _is a monster graph with finitely many rays attached._
_Moreover, \(\operatorname{PMap}(\Gamma)\) is locally coarsely bounded if and only if one of the following holds:_
* \(\Gamma\) _has finite rank, or_
* \(\Gamma\) _satisfies both:_ 1. \(|E_{\ell}(\Gamma)|<\infty\)_, and_ 2. _only finitely many components of_ \(\Gamma\setminus\Gamma_{c}\) _have infinite end spaces._
**Remark 2.17**.: Mirroring the constructive description in Remark 1.1 of the CB-generated \(\mathrm{PMap}(\Gamma)\) classification, we can alternatively characterize the locally CB condition as: \(\mathrm{PMap}(\Gamma)\) is locally CB if and only if \(\Gamma\) can be written as a finite wedge sum of single loops, monster graphs, and trees.
After confirming that a group is CB-generated, the Rosendal framework enables the exploration of the group through the lens of coarse geometry.
**Theorem 2.18**.: _[_13_, Theorem 1.2, Proposition 2.72]_ _Let \(G\) be a CB-generated Polish group. Then \(G\) has a well-defined quasi-isometry type. Namely, any two CB-generating sets for \(G\) give rise to quasi-isometric word metrics on \(G\)._
## 3. Semidirect product structure and cohomology
In this section, we prove:
**Theorem 3.1** (Theorem B, revisited).: _Let \(\Gamma\) be a locally finite graph. Let \(\alpha=\max\{0,|E_{\ell}(\Gamma)|-1\}\) for \(|E_{\ell}(\Gamma)|<\infty\) and \(\alpha=\aleph_{0}\) otherwise. Then we have the following short exact sequence,_
\[1\longrightarrow\overline{\mathrm{PMap}_{c}(\Gamma)}\longrightarrow\mathrm{ PMap}(\Gamma)\longrightarrow\mathbb{Z}^{\alpha}\longrightarrow 1\]
_which splits. In particular, we have \(\mathrm{PMap}(\Gamma)=\overline{\mathrm{PMap}_{c}(\Gamma)}\rtimes\mathbb{Z}^{\alpha}\)._
The map to \(\mathbb{Z}^{\alpha}\) is defined using _flux maps_, which were first defined for locally finite, infinite graphs in [1]. We quickly treat the case when the graph has at most one end accumulated by loops in Section 3.1. Then in Section 3.2, we recap the necessary definitions for flux maps and further expand on their properties. In Section 3.3, we characterize \(\overline{\mathrm{PMap}_{c}(\Gamma)}\) as the common kernel of all flux maps (Theorem 3.11), which provides the left side of the desired splitting short exact sequence. Then in Section 3.4, we construct the other side of the short exact sequence by finding a section, proving Theorem B. This requires us to study the space of flux maps, which is done in the same subsection. As an application, in Section 3.5 we compute the first integral cohomology of \(\mathrm{PMap}(\Gamma)\). Finally, we show the same approach could have been applied to infinite-type surfaces in Section 3.6 to recover the surface version of Theorem B by Aramayona-Patel-Vlamis [1], by showing that there is a natural isomorphism between the first cohomology of the pure mapping class groups of infinite-type surfaces and infinite graphs.
### The case \(|E_{\ell}|\leq 1\)
**Proposition 3.2**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(|E_{\ell}|\leq 1\). Then \(\mathrm{PMap}(\Gamma)=\overline{\mathrm{PMap}_{c}(\Gamma)}\). Furthermore, if \(|E_{\ell}|=0\), then \(\mathrm{PMap}(\Gamma)=\mathrm{PMap}_{c}(\Gamma)\)._
Proof.: The case when \(|E_{\ell}(\Gamma)|=1\) is the result of [1, Corollary 4.5]. Now we assume \(|E_{\ell}(\Gamma)|=0\), i.e., \(\Gamma\) has finite rank.
Let \(f\in\mathrm{PMap}(\Gamma)\). Because \(f\) is proper, \(f^{-1}(\Gamma_{c})\) is compact. Thus, there is some connected compact set \(K\) such that \(\Gamma_{c}\cup f^{-1}(\Gamma_{c})\subset K\). Now \(f|_{\Gamma\setminus K}\) is a proper homotopy equivalence between two contractible sets and thus \(f\) can be homotoped to be totally supported on \(K\). Hence, we conclude \(f\in\mathrm{PMap}_{c}(\Gamma)\).
### Flux maps
We begin with the case when \(|E_{\ell}|\geq 2\), where the flux maps come onto the scene. Here we recap the definitions and properties of flux maps developed in [10, Section 7].
Let \(\Gamma\) be a locally finite, infinite graph with \(|E_{\ell}|\geq 2\). For each nonempty, proper, clopen subset \(\mathcal{E}\) of \(E_{\ell}\), we will construct a flux map \(\Phi_{\mathcal{E}}\), which will evaluate to \(1\) for every primitive loop shift that goes from an end in \(E_{\ell}\setminus\mathcal{E}\) to an end in \(\mathcal{E}\). We fix such a subset \(\mathcal{E}\) for this discussion.
After potentially applying a proper homotopy equivalence, we can put \(\Gamma\) into a standard form so that there is a maximal tree \(T\) and a choice of \(x_{0}\) in \(T\) such that \(\Gamma\setminus\{x_{0}\}\) defines a partition of the ends that is compatible with the partition \(\mathcal{E}\sqcup(E_{\ell}\setminus\mathcal{E})\) of \(E_{\ell}\). That is, the components of \(\Gamma\setminus\{x_{0}\}\) determine a partition \(E=\bigsqcup_{i=1}^{m}\mathcal{F}_{i}\) so that we can rewrite as \(\mathcal{E}=\bigsqcup_{i=1}^{k}(\mathcal{F}_{i}\cap E_{\ell})\) and \(E_{\ell}\setminus\mathcal{E}=\bigsqcup_{i=k+1}^{m}(\mathcal{F}_{i}\cap E_{ \ell})\).
Now we group the components of \(\Gamma\setminus\{x_{0}\}\) by the set \(\mathcal{E}\). Let \(\Gamma_{+}\) and \(\Gamma_{-}\) be the unions of the closures of the components of \(\Gamma\setminus\{x_{0}\}\) so that \(E_{\ell}(\Gamma_{+})=\mathcal{E}\) and \(E_{\ell}(\Gamma_{-})=E_{\ell}\setminus\mathcal{E}\). More precisely, \(\Gamma_{+}\) is exactly the union of the complementary components of \(x_{0}\) with end spaces corresponding to \(\mathcal{F}_{1},\dots,\mathcal{F}_{k}\) together with adding back in \(x_{0}\). Similarly, \(\Gamma_{-}\) is the union of the components corresponding to \(\mathcal{F}_{k+1},\dots,\mathcal{F}_{m}\), together with \(x_{0}\). Finally, let \(T_{-}\) be the maximal tree of \(\Gamma_{-}\) contained in \(T\). Define for each \(n\in\mathbb{Z}\):
\[\Gamma_{n}:=\begin{cases}\overline{\Gamma_{-}\cup B_{n}(x_{0})}&\text{ if }n \geq 0,\\ (\Gamma_{-}\setminus B_{n}(x_{0}))\cup T_{-}&\text{ if }n<0,\end{cases}\]
where \(B_{n}(x_{0})\) is the open metric ball of radius \(n\) about \(x_{0}\). See [10, Section 7.2] for more details and pictures of the \(\Gamma_{n}\)'s.
Recall that a subgroup \(A\) of a group \(G\) is a **free factor** if there exists another subgroup \(P\) such that \(G=A*P\). Given a free factor \(A\) of \(B\), we define the **corank** of \(A\) in \(B\), denoted by \(\operatorname{cork}(B,A)\), as the rank of \(B/\langle\!\langle A\rangle\!\rangle\), the quotient of \(B\) by the normal closure of \(A\). For the \(\Gamma_{n}\) defined above we write \(A_{n}=\pi_{1}(\Gamma_{n},x_{0})\), the free factor determined by the subgraph \(\Gamma_{n}\).
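For a toy computation (ours): if \(B=F(a,b,c)\) and \(A=\langle a\rangle\), then \(B/\langle\!\langle A\rangle\!\rangle\cong F(b,c)\), so
\[\operatorname{cork}(B,A)=\operatorname{rk}F(b,c)=2.\]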
Denote by \(\operatorname{PPHE}(\Gamma)\) the group of proper homotopy equivalences on \(\Gamma\) that fix the ends of \(\Gamma\) pointwise and fix the basepoint \(x_{0}\), i.e., the group of _pure_ proper homotopy equivalences. Any pure mapping class can be properly homotoped to fix a point, hence every pure mapping class has a representative in \(\operatorname{PPHE}(\Gamma)\). Note a proper homotopy equivalence on \(\Gamma\) induces an isomorphism on the level of fundamental group. Hence, with our choice of basepoint \(x_{0}\in\Gamma\), for each element \(f\in\operatorname{PPHE}(\Gamma)\), we denote by \(f_{*}\) the induced map on \(\pi_{1}(\Gamma,x_{0})\).
**Definition 3.3** ([10, Definition 7.9]).: Given \(f\in\operatorname{PPHE}(\Gamma)\), we say that a pair of integers, \((m,n)\), with \(m>n\), is **admissible** for \(f\) if
1. \(A_{n}\) and \(f_{*}(A_{n})\) are free factors of \(A_{m}\), and
2. both \(\operatorname{cork}(A_{m},A_{n})\) and \(\operatorname{cork}(A_{m},f_{*}(A_{n}))\) are finite.
In [10, Corollary 7.8], we showed that for every \(f\in\operatorname{PPHE}(\Gamma)\) and \(n\in\mathbb{Z}\), there exists \(m\in\mathbb{Z}\) such that \(m>n\) and \((m,n)\) is admissible for \(f\). Hence, we can define:
**Definition 3.4**.: For a map \(f\in\operatorname{PPHE}(\Gamma)\) and an admissible pair \((m,n)\) for \(f\), we let
\[\phi_{m,n}(f):=\operatorname{cork}(A_{m},A_{n})-\operatorname{cork}(A_{m},f_{*} (A_{n})).\]
Call such a \(\phi_{m,n}\) a **PPHE-flux map**.
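As a schematic sanity check (our simplified indexing on the ladder graph \(\Lambda\), not taken verbatim from the text): label the loops of \(\Lambda\) by \(a_{i}\), \(i\in\mathbb{Z}\), so that \(A_{n}=\langle a_{i}\mid i\leq n\rangle\), and let \(h\) be the primitive loop shift with \(h_{*}(a_{i})=a_{i+1}\). Then \(h_{*}(A_{n})=A_{n+1}\), any pair \((m,n)\) with \(m>n\) is admissible for \(h\), and
\[\phi_{m,n}(h)=\operatorname{cork}(A_{m},A_{n})-\operatorname{cork}(A_{m},h_{*}(A_{n}))=(m-n)-(m-n-1)=1,\]
consistent with the flux map evaluating to \(1\) on primitive loop shifts, as described at the start of this subsection.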
**Lemma 3.5** ([10, Lemma 7.10]).: _The PPHE-flux of a map \(f\in\operatorname{PPHE}(\Gamma)\) is well-defined over the choice of admissible pair \((m,n)\). That is, if \((m,n)\) and \((m^{\prime},n^{\prime})\) are two admissible pairs for the map \(f\in\operatorname{PPHE}(\Gamma)\) then \(\phi_{m,n}(f)=\phi_{m^{\prime},n^{\prime}}(f)\)._
Furthermore:
**Proposition 3.6** ([13, Proposition 7.11 and Lemma 7.12]).: _The PPHE-flux maps are homomorphisms. Moreover, for any nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), if \(f,g\in\mathrm{PPHE}(\Gamma)\) are properly homotopic, then \(\phi_{\mathcal{E}}(f)=\phi_{\mathcal{E}}(g)\)._
Hence, the PPHE-flux map factors through \(\mathrm{PMap}(\Gamma)\), so we can define the flux map on \(\mathrm{PMap}(\Gamma)\) as follows.
**Definition 3.7**.: For each nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), we define the **flux map** as:
\[\Phi_{\mathcal{E}}:\mathrm{PMap}(\Gamma)\to\mathbb{Z},\qquad[f]\mapsto\phi_{\mathcal{E}}(f),\]
which is a well-defined homomorphism by Proposition 3.6.
This independence of the choice of admissible pairs further implies the independence of the choice of the basepoint \(x_{0}\).
**Lemma 3.8** (Independence to choice of \(x_{0}\)).: _For a nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), let \(x_{0}\) and \(x^{\prime}_{0}\) be two different points that realize the partition \(E_{\ell}=\mathcal{E}\sqcup(E_{\ell}\setminus\mathcal{E})\). Say \(\phi_{\mathcal{E}}\) and \(\phi^{\prime}_{\mathcal{E}}\) are the flux maps constructed from \(x_{0}\) and \(x^{\prime}_{0}\) respectively, with the same orientation; \(E_{\ell}(\Gamma_{+})=E_{\ell}(\Gamma^{\prime}_{+})=\mathcal{E}\). Then \(\phi_{\mathcal{E}}=\phi^{\prime}_{\mathcal{E}}\)._
Proof.: Note \(x_{0}\) and \(x^{\prime}_{0}\) together cut \(\Gamma\) into three parts (not necessarily connected), where two of them are of infinite rank and realize \(\mathcal{E}\) and \(E_{\ell}\setminus\mathcal{E}\) respectively, and the middle part is of finite rank (but not necessarily compact), and we call it \(M\).
Let \(\{\Gamma_{n}\}\) and \(\{\Gamma^{\prime}_{n}\}\) be the chains of graphs used to define \(\phi_{\mathcal{E}}\) and \(\phi^{\prime}_{\mathcal{E}}\) respectively. Then since \(\phi_{\mathcal{E}}\) and \(\phi^{\prime}_{\mathcal{E}}\) are in the same direction, there exists \(k\in\mathbb{Z}\) such that \(A_{n+k}=A^{\prime}_{n}\) for all \(n\in\mathbb{Z}\). To be precise, this holds for \(k\) such that \(\Gamma_{k}\) and \(\Gamma^{\prime}_{0}\) have the same core graph. Now, given \(f\in\mathrm{PMap}(\Gamma)\) and an admissible pair \((m,n)\) for \(f\) at \(x_{0}\), the pair \((m-k,n-k)\) is admissible for \(f\) at \(x^{\prime}_{0}\). Then
\[(\phi_{\mathcal{E}})_{m,n}(f) =\mathrm{cork}(A_{m},A_{n})-\mathrm{cork}(A_{m},f_{*}(A_{n}))\] \[=\mathrm{cork}(A^{\prime}_{m-k},A^{\prime}_{n-k})-\mathrm{cork}(A^ {\prime}_{m-k},f_{*}(A^{\prime}_{n-k}))=(\phi^{\prime}_{\mathcal{E}})_{m-k,n- k}(f).\]
All in all, the independence of the choice of admissible pairs by Lemma 3.5 proves that \(\phi_{\mathcal{E}}(f)=\phi^{\prime}_{\mathcal{E}}(f)\). Since \(f\) was chosen arbitrarily, this concludes the proof.
Therefore, for each nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), we can write the resulting flux map as \(\phi_{\mathcal{E}}\) without specifying \(x_{0}\).
We end this subsection by exploring basic properties of flux maps, to be used in subsequent subsections. Note that flux maps inherit the group operation from \(\mathrm{Hom}(\mathrm{PMap}(\Gamma),\mathbb{Z})\); pointwise addition.
**Proposition 3.9**.: _Let \(\mathcal{E}\subset E_{\ell}\) be a nonempty proper clopen subset of \(E_{\ell}\), where \(|E_{\ell}|\geq 2\). Let \(A,B\) and \(B^{\prime}\) be nonempty proper clopen subsets of \(E_{\ell}\), such that \(A\) and \(B\) are disjoint, and \(B\) is a proper subset of \(B^{\prime}\). Then the following hold:_
1. \(\Phi_{\mathcal{E}^{c}}=-\Phi_{\mathcal{E}}\)_._
2. \(\Phi_{A\sqcup B}=\Phi_{A}+\Phi_{B}\)_._
3. \(\Phi_{B^{\prime}\setminus B}=\Phi_{B^{\prime}}-\Phi_{B}\)_._
Proof.: We first note that (iii) follows from (ii), noting that \(B^{\prime}\setminus B\) and \(B\) are disjoint. Hence, it suffices to prove (i) and (ii).
1. Let \(f\in\mathrm{PPHE}(\Gamma)\) and \(\mathcal{E}\subset E_{\ell}\) be a nonempty proper clopen subset. Choose \(g\in\mathrm{PPHE}(\Gamma)\) to be a proper homotopy inverse of \(f\). Take \(\Gamma_{L}\) and \(\Gamma_{R}\) with \(\Gamma_{L}\subset\Gamma_{R}\) to be an admissible pair of graphs for \(f\) and \(g\) with respect to \(\mathcal{E}.\) Fixing \(\Gamma_{L}\), we can enlarge \(\Gamma_{R}\) so that \((\Gamma\setminus\Gamma_{L},\Gamma\setminus\Gamma_{R})\) is an admissible pair for \(f\) with respect to \(\mathcal{E}^{c}\). Note \((\Gamma_{R},\Gamma_{L})\) is still an admissible pair of graphs for \(f\) with respect to \(\mathcal{E}\). In summary, we have: * \(f(\Gamma_{L})\subset\Gamma_{R},\quad g(\Gamma_{L})\subset\Gamma_{R}\) * \(f(\Gamma\setminus\Gamma_{R})\subset\Gamma\setminus\Gamma_{L}\), * \(\mathrm{cork}(\pi_{1}(\Gamma_{R}),\pi_{1}(\Gamma_{L}))<\infty,\quad\mathrm{ cork}(\pi_{1}(\Gamma_{R}),f_{*}(\pi_{1}(\Gamma_{L})))<\infty.\) * \(\mathrm{cork}(\pi_{1}(\Gamma_{R}),g_{*}(\pi_{1}(\Gamma_{L})))<\infty.\) * \(\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),\pi_{1}(\Gamma\setminus\Gamma _{R}))<\infty,\quad\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),f_{*}(\pi_ {1}(\Gamma\setminus\Gamma_{R})))<\infty.\) Because \(f_{*}\) is a \(\pi_{1}\)-isomorphism, we have the following three different free factor decompositions of \(\pi_{1}(\Gamma)\): \[\pi_{1}(\Gamma) =f_{*}(\pi_{1}(\Gamma_{R}))*f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{ R})),\] \[\pi_{1}(\Gamma) =\pi_{1}(\Gamma_{R})*\pi_{1}(\Gamma\setminus\Gamma_{R}),\text{ and}\] \[\pi_{1}(\Gamma) =\pi_{1}(\Gamma_{L})*\pi_{1}(\Gamma\setminus\Gamma_{L}).\] We also have the free factor decompositions \[f_{*}(\pi_{1}(\Gamma_{R})) =\pi_{1}(\Gamma_{L})*B,\text{ and}\] \[\pi_{1}(\Gamma\setminus\Gamma_{L}) =f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R}))*C,\] for some free factors \(B\) and \(C\) of \(\pi_{1}(\Gamma)\). Putting together these decompositions, we get: \[\pi_{1}(\Gamma) =\pi_{1}(\Gamma_{L})*B*f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R}))\] \[\pi_{1}(\Gamma) =\pi_{1}(\Gamma_{L})*f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R}))*C.\] Therefore, we have \(\mathrm{rk}(B)=\mathrm{rk}(C)\). Translating these equalities, we compute: \[\Phi_{\mathcal{E}^{c}}(f) =\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),f_{*}(\pi_{1}( \Gamma\setminus\Gamma_{R})))-\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),\pi_{1}(\Gamma\setminus\Gamma_{R}))\] \[=\mathrm{cork}(f_{*}(\pi_{1}(\Gamma_{R})),\pi_{1}(\Gamma_{L}))- \mathrm{cork}(\pi_{1}(\Gamma_{R}),\pi_{1}(\Gamma_{L}))\] \[=\mathrm{cork}(\pi_{1}(\Gamma_{R}),g_{*}(\pi_{1}(\Gamma_{L})))- \mathrm{cork}(\pi_{1}(\Gamma_{R}),\pi_{1}(\Gamma_{L}))\] \[=\Phi_{\mathcal{E}}(g)=-\Phi_{\mathcal{E}}(f),\] where the last equation follows from that \(g\) is a proper inverse of \(f\) and \(\Phi_{\mathcal{E}}\) is a homomorphism.
2. Let \(f\in\mathrm{PPHE}(\Gamma)\). Choose an \(x_{0}\) that determines a partition that is compatible with both \(A^{c}\) and \(B^{c}\) as in the beginning of this section. Then there exist admissible pairs \((\Gamma_{R_{A^{c}}},\Gamma_{L_{A^{c}}})\) and \((\Gamma_{R_{B^{c}}},\Gamma_{L_{B^{c}}})\) of \(f\) with respect to \(A^{c}\) and \(B^{c}\) respectively. By taking small enough \(\Gamma_{L_{A^{c}}}\) and \(\Gamma_{L_{B^{c}}}\), we can ensure that \(\Gamma_{R_{A^{c}}}\) and \(\Gamma_{R_{B^{c}}}\) have contractible intersection in \(\Gamma\); See Figure 2. Then we observe that \((\Gamma_{R_{A^{c}}}\cup\Gamma_{R_{B^{c}}},\Gamma_{L_{A^{c}}}\cup\Gamma_{L_{B^{ c}}})\) is an admissible pair for \(f\) with respect to \(A^{c}\cap B^{c}=(A\sqcup B)^{c}\) (still with the basepoint \(x_{0}\)). We then have a free decomposition \[\pi_{1}(\Gamma_{R_{A^{c}}}\cup\Gamma_{R_{B^{c}}},x_{0})\cong\pi_{1}(\Gamma_{R_ {A^{c}}},x_{0})*\pi_{1}(\Gamma_{R_{B^{c}}},x_{0}),\]
and the same for \(\pi_{1}(\Gamma_{L_{A^{c}}}\cup\Gamma_{L_{B^{c}}},x_{0})\). Finally, we compute
\[\Phi_{(A\sqcup B)^{c}} =\operatorname{\mathrm{cork}}\left(A_{R_{A^{c}}}*A_{R_{B^{c}}},f_{* }(A_{L_{A^{c}}}*A_{L_{B^{c}}})\right)-\operatorname{\mathrm{cork}}\left(A_{R_{A^ {c}}}*A_{R_{B^{c}}},A_{L_{A^{c}}}*A_{L_{B^{c}}}\right)\] \[=(\operatorname{\mathrm{cork}}(A_{R_{A^{c}}},f_{*}(A_{L_{A^{c}}})) +\operatorname{\mathrm{cork}}(A_{R_{B^{c}}},f_{*}(A_{L_{B^{c}}})))\] \[\qquad-(\operatorname{\mathrm{cork}}(A_{R_{A^{c}}},A_{L_{A^{c}}}) +\operatorname{\mathrm{cork}}(A_{R_{B^{c}}},A_{L_{B^{c}}}))\] \[=(\operatorname{\mathrm{cork}}(A_{R_{A^{c}}},f_{*}(A_{L_{A^{c}}}) )-\operatorname{\mathrm{cork}}(A_{R_{A^{c}}},A_{L_{A^{c}}}))\] \[\qquad+(\operatorname{\mathrm{cork}}(A_{R_{B^{c}}},f_{*}(A_{L_{B^ {c}}}))-\operatorname{\mathrm{cork}}(A_{R_{B^{c}}},A_{L_{B^{c}}}))\] \[=\Phi_{A^{c}}+\Phi_{B^{c}}.\]
Finally we apply Part (i) to see that
\[\Phi_{A\sqcup B}=-\Phi_{(A\sqcup B)^{c}}=-\Phi_{A^{c}}-\Phi_{B^{c}}=\Phi_{A}+ \Phi_{B}.\qed\]
**Remark 3.10**.: We remark that by Proposition 3.9 (i) and Proposition 3.9 (ii), we can even formally define the flux map with respect to the empty set or the whole set \(E_{\ell}\):
\[\Phi_{\emptyset}:=\Phi_{A}-\Phi_{A}\equiv 0,\qquad\Phi_{E_{\ell}}:=\Phi_{A}+ \Phi_{A^{c}}\equiv 0.\]
This allows us to define a flux map for any clopen \(\mathcal{E}\subset E\) by \(\Phi_{\mathcal{E}}=\Phi_{\mathcal{E}\cap E_{\ell}}\).
### Flux zero maps
In this section we will prove the following characterization of flux zero maps.
**Theorem 3.11**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(|E_{\ell}(\Gamma)|\geq 2\), and \(f\in\operatorname{\mathrm{PMap}}(\Gamma)\). Then \(f\in\overline{\operatorname{\mathrm{PMap}}_{c}(\Gamma)}\) if and only if \(\Phi_{\mathcal{E}}(f)=0\) for every clopen subset \(\mathcal{E}\) of \(E(\Gamma)\)._
We have proved the forward direction already in a previous paper.
Figure 2. Illustration of the choices of subgraphs for the proof of Proposition 3.9 (ii). Here the paths from \(x_{0}\) to each subgraph are omitted. We can choose pairs of graphs \((\Gamma_{R_{A}^{c}},\Gamma_{L_{A}^{c}})\) and \((\Gamma_{R_{B}^{c}},\Gamma_{L_{B}^{c}})\) such that the graphs from different pairs have contractible intersections.
**Proposition 3.12** ([14, Proposition 7.13]).: _If \(f\in\overline{\mathrm{PMap}_{c}(\Gamma)}\), then \(\Phi_{\mathcal{E}}(f)=0\) for every clopen subset \(\mathcal{E}\) of \(E(\Gamma)\)._
We will first assume that \(\Gamma\) is a core graph, i.e., \(E_{\ell}(\Gamma)=E(\Gamma)\). For brevity, we will temporarily drop the subscript \(\ell\) for \(E_{\ell}\) while we work under this assumption. To leverage the algebraic information (flux \(0\)) to obtain topological information (homotopy equivalence), we need the following fact:
**Lemma 3.13** ([14, Proposition 1B.9]).: _Let \(X\) be a connected CW complex and let \(Y\) be \(K(G,1)\). Then every homomorphism \(\pi_{1}(X,x_{0})\to\pi_{1}(Y,y_{0})\) is induced by a continuous map \((X,x_{0})\to(Y,y_{0})\) that is unique up to homotopy fixing \(x_{0}\)._
Recall that a graph is \(K(F,1)\) for \(F\) a free group (the fundamental group of the graph). Now we prove a preliminary lemma to construct a compact approximation of a proper homotopy equivalence.
**Lemma 3.14**.: _Let \(\mathcal{E}\subset E(\Gamma)\) be a nonempty proper clopen subset and \(f\in\mathrm{PMap}(\Gamma)\). If \(\Phi_{\mathcal{E}}(f)=0\), then given any compact \(K\subset\Gamma\), there exists \(\psi\in\mathrm{PMap}(\Gamma)\) such that_
1. **(Compact approximation)**__\(\psi f^{-1}\in\mathcal{V}_{K}\)_,_
2. **(Truncation)** _there exist disjoint subgraphs_ \(\Gamma_{\mathcal{E}}\) _and_ \(\Gamma_{\mathcal{E}^{c}}\) _of_ \(\Gamma\) _with end spaces_ \(\mathcal{E}\) _and_ \(\mathcal{E}^{c}\) _respectively, such that_ \(\psi|_{\Gamma_{\mathcal{E}}}=\mathrm{id}\) _and_ \(\psi|_{\Gamma_{\mathcal{E}^{c}}}=f|_{\Gamma_{\mathcal{E}^{c}}}\)_, and_
3. **(Same flux)**__\(\Phi_{\eta}(\psi)=\Phi_{\eta}(f)\) _for every clopen subset_ \(\eta\subset E(\Gamma)\setminus\mathcal{E}\)_._
Proof.: Let \(\{\Gamma_{n}\}_{n\in\mathbb{Z}}\) be as in the definition of \(\Phi_{\mathcal{E}}\), for some choice of basepoint \(x_{0}\). Now, given \(f\in\mathrm{PMap}(\Gamma)\) and any \(n\) there is some \(m_{n}>n\) that makes \((m_{n},n)\) into an admissible pair for \(f\). See Figure 3.
Since \(\Phi_{\mathcal{E}}(f)=0\) we have

\[\mathrm{cork}(\pi_{1}(\Gamma_{m_{n}},x_{0}),\pi_{1}(\Gamma_{n},x_{0}))=\mathrm{cork}(\pi_{1}(\Gamma_{m_{n}},f(x_{0})),f_{*}(\pi_{1}(\Gamma_{n},x_{0})))\tag{$*$}\]
for each \(n\in\mathbb{Z}\). This allows us to define an isomorphism \(\Psi_{n}:\pi_{1}(\Gamma,x_{0})\to\pi_{1}(\Gamma,f(x_{0}))\) for each \(n\). Here we use the notation \(G\mathbin{\backslash\!\!\backslash}H\) to denote the complementary free factor of \(H\) in \(G\). Define

\[\Psi_{n}=\begin{cases}\operatorname{Id}&\text{on }\pi_{1}(\Gamma,x_{0})\mathbin{\backslash\!\!\backslash}\pi_{1}(\Gamma_{m_{n}},x_{0}),\\ \sigma_{n}&\text{on }\pi_{1}(\Gamma_{m_{n}},x_{0})\mathbin{\backslash\!\!\backslash}\pi_{1}(\Gamma_{n},x_{0}),\\ f_{*}&\text{on }\pi_{1}(\Gamma_{n},x_{0}),\end{cases}\]

where \(\sigma_{n}:\pi_{1}(\Gamma_{m_{n}},x_{0})\mathbin{\backslash\!\!\backslash}\pi_{1}(\Gamma_{n},x_{0})\to\pi_{1}(\Gamma_{m_{n}},f(x_{0}))\mathbin{\backslash\!\!\backslash}f_{*}(\pi_{1}(\Gamma_{n},x_{0}))\) is any isomorphism. Such a \(\sigma_{n}\) is guaranteed to exist by \((*)\).
Now by Lemma 3.13, for each \(n\) there exists a homotopy equivalence \(\psi_{n}:(\Gamma,x_{0})\to(\Gamma,f(x_{0}))\) such that
\[\psi_{n}=\begin{cases}\operatorname{Id}&\text{on }\Gamma\setminus\Gamma_{m_{n}},\\ f&\text{on }\Gamma_{n}.\end{cases}\]
Also note \(\psi_{n}\) is a _proper_ homotopy equivalence, as it can be defined in pieces as proper maps. Further, \(\psi_{n}\) fixes the ends of \(\Gamma\), because \(f\) does and \(\Gamma_{m_{n}}\setminus\Gamma_{n}\) is compact. One can similarly define its proper homotopy inverse. Hence, for each \(n\) we have \([\psi_{n}]\in\operatorname{PMap}(\Gamma)\).
The subgraphs \(\{\Gamma_{n}\}_{n\in\mathbb{Z}}\) form an exhaustion of \(\Gamma\), so \(\psi_{n}\to f\) in \(\operatorname{PMap}(\Gamma)\). Therefore, for a compact \(K\subset\Gamma\), there exists an \(n^{\prime}\in\mathbb{Z}\) such that \(\psi_{n^{\prime}}f^{-1}\in\mathcal{V}_{K}\). Take \(\psi=\psi_{n^{\prime}}\) and set \(\Gamma_{\mathcal{E}^{c}}=\Gamma_{n^{\prime}}\) and \(\Gamma_{\mathcal{E}}=\overline{\Gamma\setminus\Gamma_{m_{n^{\prime}}}}\). This gives (i) and (ii) by construction.
We now check that (iii) follows from (ii). Let \(\eta\) be a clopen subset of \(E(\Gamma)\) that is disjoint from \(\mathcal{E}\). We will actually check that \(\Phi_{\eta^{c}}(\psi)=\Phi_{\eta^{c}}(f)\). This will imply (iii) by Proposition 3.9 (i).
Note \(\eta\subset\mathcal{E}^{c}\). Now let \(\Gamma_{m}\) be a subgraph from the definition of \(\Phi_{\eta^{c}}\) so that \(\Gamma_{m}\subset\Gamma_{\mathcal{E}^{c}}\). Then there exists \(n\leq m\) such that \((m,n)\) is admissible for \(\psi\) with respect to the flux map \(\Phi_{\eta^{c}}\). Since \(f=\psi\) on \(\Gamma_{n}\subset\Gamma_{m}\subset\Gamma_{\mathcal{E}^{c}}\) by (ii), we see that \(\Phi_{\eta^{c}}(\psi)=\Phi_{\eta^{c}}(f)\) with the admissible pair of graphs \((\Gamma_{m},\Gamma_{n})\).
**Remark 3.15**.: The reader may wonder why in the proof above we chose to define this sequence of maps and argue via convergence in place of constructing the map \(\psi\) by hand as in [1]. While it is not too difficult to construct a \(\psi\) so that \(\psi f^{-1}\) is the identity on a given compact \(K\), it is significantly more finicky to guarantee that \(\psi f^{-1}\) preserves the complementary components of \(K\). The convergence argument given above allows us to avoid the messy details of this.
**Proposition 3.16**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(E(\Gamma)=E_{\ell}(\Gamma)\), \(|E(\Gamma)|\geq 2\), and \(f\in\operatorname{PMap}(\Gamma)\). If \(\Phi_{\mathcal{E}}(f)=0\) for every clopen subset \(\mathcal{E}\) of \(E(\Gamma)\), then \(f\in\overline{\operatorname{PMap}_{c}(\Gamma)}\)._
Proof.: Assume \(f\in\operatorname{PMap}(\Gamma)\) has \(\Phi_{\mathcal{E}}(f)=0\) for every nonempty proper clopen subset \(\mathcal{E}\) of the end space \(E(\Gamma)\). Given any compact \(K\subset\Gamma\) we will find \(\psi\in\operatorname{PMap}_{c}(\Gamma)\) such that \(\psi f^{-1}\in\mathcal{V}_{K}\).
Without loss of generality we may enlarge \(K\) so that it is connected, has at least two complementary components, and every complementary component of \(K\) is infinite. Then the complement of \(K\) induces a partition of the ends. Write
\[\mathcal{P}_{K}=\mathcal{E}_{1}\sqcup\ldots\sqcup\mathcal{E}_{n}\]
for this partition.
Apply Lemma 3.14 to \(f\) using \(\mathcal{E}_{1}\) to obtain \(\psi_{1}\). Note that by (iii) we still have \(\Phi_{\mathcal{E}_{2}}(\psi_{1})=\Phi_{\mathcal{E}_{2}}(f)=0\). Thus we can apply the lemma again to \(\psi_{1}\) using \(\mathcal{E}_{2}\) to obtain a \(\psi_{2}\). Continue this process recursively to obtain \(\psi_{n}\).
Now, by (i) of Lemma 3.14, there exist \(v_{1},\dots,v_{n}\in\mathcal{V}_{K}\) such that
\[\psi_{i}=\begin{cases}v_{i}\psi_{i-1}&\text{for $1<i\leq n$},\\ v_{1}f&\text{for $i=1$}.\end{cases}\]
Putting these together gives \(\psi_{n}f^{-1}=v_{n}v_{n-1}\cdots v_{1}\in\mathcal{V}_{K}\) as \(\mathcal{V}_{K}\) is a subgroup.
It remains to check that \(\psi_{n}\in\operatorname{PMap}_{c}(\Gamma)\). However, by (ii), we have that \(\psi_{n}\) is equal to the identity on \(\bigcup_{i=1}^{n}\Gamma_{\mathcal{E}_{i}}\). This exactly covers all of the ends of \(\Gamma\) as \(\mathcal{P}_{K}\) was a partition of the ends. Therefore we see that \(\psi_{n}\) is supported on \(\overline{\bigcap_{i=1}^{n}\Gamma\setminus\Gamma_{\mathcal{E}_{i}}}\), a compact set. Taking \(\psi=\psi_{n}\) gives the desired compact approximation of \(f\).
Finally, since the \(K\) above was taken to be arbitrary, starting with a compact exhaustion of \(\Gamma\) we can apply the above to obtain a sequence of compactly supported maps that converge to \(f\).
Now we turn to the case where \(\Gamma\) is not necessarily a core graph.
Proof of Theorem 3.11.: The forward direction follows from Proposition 3.12.
For the backward direction, we first homotope \(f\) so that it fixes the vertices of \(\Gamma\). Then we see that we can write \(f=f_{T}f_{c}\) where \(f_{T}\) has support on \(\Gamma\setminus\Gamma_{c}\) and \(f_{c}\) has support on \(\Gamma_{c}\).
We can see that \(f_{T}\in\overline{\operatorname{PMap}_{c}(\Gamma)}\). Indeed, enumerate the components of \(\Gamma\setminus\Gamma_{c}\) as \(\{R_{i}\}_{i\in I}\) where each \(R_{i}\) is a tree and \(I\) is either finite or \(I=\mathbb{N}\). Then we can decompose \(f_{T}=\prod_{i\in I}f_{i}\) where each \(f_{i}\) has compact support on \(R_{i}\). Each \(f_{i}\) has compact support because \(f_{T}\) is proper: the pre-image of the cutpoint \(\overline{R_{i}}\cap\Gamma_{c}\) is compact, and \(f_{i}\) can be homotoped to have support contained within the convex hull of this full pre-image of the cutpoint. Furthermore, all of the \(f_{i}\) pairwise commute, as each \(f_{i}\) can be homotoped so that it is totally supported away from the support of each other \(f_{j}\). Thus, we see that \(f_{T}\in\overline{\operatorname{PMap}_{c}(\Gamma)}\) as it is realized as the limit of partial products of the \(f_{i}\).
This also shows that given any flux map \(\Phi_{\mathcal{E}}\) we must have that \(\Phi_{\mathcal{E}}(f_{T})=0\), again by Proposition 3.12. Therefore, given an \(\mathcal{E}\) with \(\Phi_{\mathcal{E}}(f)=0\) we must have that \(\Phi_{\mathcal{E}}(f_{c})=0\) as \(\Phi_{\mathcal{E}}\) is a homomorphism. We can then apply Proposition 3.16 to conclude the desired result.
### Space of flux maps
Before we can prove Theorem B we need to endow the set of flux maps with an algebraic structure. In the surface case, [1] could utilize the first integral (co)homology of separating curves on the surface to give structure to the flux maps they defined. Here we will be using the group of locally constant \(\mathbb{Z}\)-valued functions on \(E_{\ell}(\Gamma)\) in place of the homology of separating curves. We remark that this is really the zeroth Čech cohomology of \(E_{\ell}(\Gamma)\) with coefficients in the constant sheaf \(\mathbb{Z}\). In Section 3.6 we observe that this perspective also works in the surface case.
For a topological space \(X\), we denote by \(\check{C}(X)\) the group of locally constant \(\mathbb{Z}\)-valued functions on \(X\). The group operation is given by addition of functions. We let \(\hat{C}(X)=\check{C}(X)/\mathbb{Z}\), the quotient obtained by identifying the constant functions with zero. We now collect some facts about \(\hat{C}(E)\) when \(E\) is a compact, totally disconnected, and metrizable space (i.e. a closed subset of a Cantor set).
We identify the Cantor set, \(\mathcal{C}=2^{\mathbb{N}}=\{0,1\}^{\mathbb{N}}\), with the set of countable binary sequences. A countable basis of clopen sets for the topology is then given by the cylinder sets
\[C_{a_{1}\cdots a_{k}}:=\{(x_{n})\in 2^{\mathbb{N}}\ |\ x_{i}=a_{i},\ i=1, \dots,k\}\]
where \(a_{1}\cdots a_{k}\) is some finite binary sequence of length \(k\). Say such a cylinder set has **width** \(k\). For \(E\) a closed subset of the Cantor set \(\mathcal{C}\), a **cylinder set** of \(E\) is the intersection of a cylinder set for \(\mathcal{C}\) with \(E\), i.e., a set of the form \(C_{w}\cap E\) where \(w\in 2^{k}\) for some \(k\geq 0\). The standard tree model for the Cantor set is the usual rooted binary tree, and for an arbitrary closed subset \(E\subset\mathcal{C}\) we take the subtree with the end space \(E\). Given a subset \(A\) of a topological space, we let \(\chi_{A}\) denote the indicator function on \(A\).
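For instance, \(C_{01}=\{(x_{n})\in 2^{\mathbb{N}}\ |\ x_{1}=0,\ x_{2}=1\}\) is a cylinder set of width \(2\), and every cylinder set decomposes into the two cylinder sets of one greater width extending its defining word:

\[C_{w}=C_{w0}\sqcup C_{w1},\qquad\text{e.g.}\quad C_{01}=C_{010}\sqcup C_{011}.\]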
**Theorem 3.17** (Countable Basis for \(\hat{C}(E)\)).: _Let \(E\) be a compact, totally disconnected, and metrizable space. There exists a countable collection \(\mathcal{A}=\{A_{i}\}_{i\in I}\) of cylinder sets of \(E\) so that_
1. _Any cylinder set_ \(C\) _of_ \(E\) _that is not in_ \(\mathcal{A}\) _can be written as_ \(C=A_{0}\setminus(A_{1}\sqcup\dots\sqcup A_{n})\) _for some_ \(A_{0}\in\mathcal{A}\)_, and some_ \(A_{j}\in\mathcal{A}\)_, with_ \(A_{j}\subset A_{0}\) _and_ \(A_{j}\cap A_{k}=\emptyset\) _for all distinct_ \(j,k\in\{1,\dots,n\}\)_,_
2. \(\mathcal{B}=\{\chi_{A_{i}}\}_{i\in I}\) _is a free basis for_ \(\hat{C}(E)\)_. In particular,_ \(\hat{C}(E)=\oplus_{i\in I}\mathbb{Z}\)_, and_
3. _for_ \(T\) _the standard tree model of the end space_ \((E,\emptyset)\)_, there exists an injective map_ \(\iota:\mathcal{A}\to T\) _so that_ \(\iota\) _maps into the interior of edges and_ \(\iota(\mathcal{A})\) _cuts_ \(T\) _into a collection of one-ended graphs._
Proof.: Note that if \(|E|=n<\infty\) then the result is immediate by taking \(\mathcal{A}\) to be the collection of all individual ends except one. Hence, we will assume that \(E\) is infinite.
We first prove the result for \(E=\mathcal{C}\) the Cantor set. We define \(\mathcal{A}^{\prime}\) to be the set of all cylinder sets of the form \(C_{a_{1}\cdots a_{k-1}0}\), together with the whole space \(\mathcal{C}\). That is,
\[\mathcal{A}^{\prime}=\{\mathcal{C},C_{0},C_{00},C_{10},C_{000},C_{100},C_{010 },C_{110},\dots\}\]
We claim that \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) forms a free basis for \(\check{C}(\mathcal{C})\). We first have
**Claim**.: _For every \(f\in\check{C}(\mathcal{C})\), there exist finitely many disjoint clopen subsets \(B_{1},\dots,B_{n}\) and integers \(b_{1},\dots,b_{n}\) such that_
\[f=\sum_{j=1}^{n}b_{j}\chi_{B_{j}}.\]
Proof.: Suppose \(f\) is a locally constant function on \(\mathcal{C}\) with _infinitely many_ distinct \(\mathbb{Z}\)-values \(b_{1},b_{2},\dots\). Then \(\{f^{-1}(b_{j})\}_{j=1}^{\infty}\) forms a clopen cover of \(\mathcal{C}\) which does not have a finite subcover, contradicting the compactness of \(\mathcal{C}\). Therefore, \(f\) can assume at most finitely many different values in \(\mathbb{Z}\), and taking \(B_{j}=f^{-1}(b_{j})\) proves the claim.
Thus we can check that \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) generates \(\check{C}(\mathcal{C})\) by verifying that for an arbitrary clopen set \(B\) of \(\mathcal{C}\), we can write \(\chi_{B}\) as a finite linear combination of elements from \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\). Since the cylinder sets form a clopen basis for the topology, we only need to check when \(B\) is a cylinder set. Take \(B=C_{a_{1}\cdots a_{k}}\) for some \(k>0\) and \(a_{1}\cdots a_{k}\in 2^{k}\). Then we have either \(B\in\mathcal{A}^{\prime}\) or \(a_{k}=1\). Supposing the latter, let
\[m=\begin{cases}0&\text{if }a_{1}=\dots=a_{k}=1,\\ \max\{j|a_{j}=0\}&\text{otherwise}\end{cases}\]
Then we can write
\[\chi_{B}=\chi_{C_{a_{1}\cdots a_{k}}}=\chi_{C_{a_{1}\cdots a_{m}}}-\left(\sum _{j=m}^{k-1}\chi_{C_{a_{1}\cdots a_{j}0}}\right),\]
where we take \(a_{1}\cdots a_{m}\) as an empty sequence when \(m=0\). Thus we see that \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) generates \(\check{C}(\mathcal{C})\). This also shows that property (1) holds.
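As a concrete illustration of this expansion, take \(B=C_{11}\), so that \(k=2\) and \(m=0\); the formula then reads

\[\chi_{C_{11}}=\chi_{\mathcal{C}}-\chi_{C_{0}}-\chi_{C_{10}},\]

which can be checked directly against the decomposition \(\mathcal{C}=C_{0}\sqcup C_{10}\sqcup C_{11}\).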
Next we verify that the set \(\mathcal{B}^{\prime}:=\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) is linearly independent. Suppose
\[0=\sum_{j=1}^{n}a_{j}\chi_{A_{j}},\]
for some distinct \(A_{1},\ldots,A_{n}\in\mathcal{A}^{\prime}\). We will proceed by induction on \(n\). The case when \(n=1\) is straightforward. Now let \(n>1\) and without loss of generality we can assume that \(A_{n}\) is of minimal width. Let \(w\) be the word defining \(A_{n}\), i.e. \(A_{n}=C_{w}\). Note that \(w\) may be the empty word (when \(A_{n}=\mathcal{C}\)). Consider the sequence \(w\bar{1}\) consisting of the starting word \(w\) followed by the constant infinite sequence of \(1\)s. By minimality of the width of \(A_{n}\), the point \(w\bar{1}\) lies in \(A_{n}\) but in no other \(A_{j}\): such an \(A_{j}\) is a cylinder \(C_{u0}\) with \(|u0|\geq|w|\), and since no prefix of \(w\bar{1}\) of length greater than \(|w|\) ends in \(0\), the word \(u0\) would have to equal \(w\), i.e. \(A_{j}=A_{n}\). Hence we have
\[0=\sum_{j=1}^{n}a_{j}\chi_{A_{j}}(w\bar{1})=a_{n}.\]
Therefore, we have \(0=\sum_{j=1}^{n}a_{j}\chi_{A_{j}}=\sum_{j=1}^{n-1}a_{j}\chi_{A_{j}}\), so by induction on \(n\) we see that \(a_{j}=0\) for all \(j\). Thus we see that \(\mathcal{B}^{\prime}\) is a free basis for \(\check{C}(\mathcal{C})\). Taking \(\mathcal{A}:=\mathcal{A}^{\prime}\setminus\{\mathcal{C}\}=\{C_{0},C_{00},C_{10},C_{000},C_{100},C_{010},C_{110},\ldots\}\), the free basis \(\mathcal{B}^{\prime}\) for \(\check{C}(\mathcal{C})\) descends (allowing for a slight abuse of notation) to a free basis \(\mathcal{B}:=\{\chi_{A}\}_{A\in\mathcal{A}}\) for \(\hat{C}(\mathcal{C})\), proving (2).
Finally, we can define \(\iota:\mathcal{A}\to T\) by using the labels on each of the cylinder sets to map each cylinder set to the midpoint of its corresponding edge in the standard binary tree model of the Cantor set. See Figure 4 for a picture of the map. Each component of \(T\setminus\iota(\mathcal{A})\) contains exactly one end of \(T\).
Now to go from the Cantor set to a general infinite end space we identify \(E\) with a subspace of \(\mathcal{C}\) and take \(\mathcal{A}=\{C_{0}\cap E,C_{00}\cap E,C_{10}\cap E,\ldots\}\), deleting duplicated sets if necessary. Then the set \(\{\chi_{A}\}_{A\in\mathcal{A}}\) will still determine a free basis for \(\hat{C}(E)\).
Apply this theorem to \(E_{\ell}(\Gamma)\) in order to obtain the set \(\mathcal{A}=\{A_{i}\}_{i\in I}\). We now define the homomorphism
\[\Pi:\operatorname{PMap}(\Gamma) \to\prod_{i\in I}\mathbb{Z}\] \[f \mapsto(\Phi_{A_{i}}(f))_{i\in I}.\]
Figure 4. The image of the map \(\iota:\mathcal{A}\to T\) is given in blue.
We will check that this map is surjective and has kernel exactly \(\overline{\operatorname{PMap}_{c}(\Gamma)}\), i.e. it forms the following short exact sequence:
\[1\longrightarrow\overline{\operatorname{PMap}_{c}(\Gamma)}\longrightarrow \operatorname{PMap}(\Gamma)\stackrel{{\Pi}}{{\longrightarrow}}\prod_{ i\in I}\mathbb{Z}\longrightarrow 1.\]
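For orientation: when \(|E_{\ell}(\Gamma)|=n<\infty\), the finite case of Theorem 3.17 gives \(|I|=|\mathcal{A}|=n-1\), so the sequence specializes to

\[1\longrightarrow\overline{\operatorname{PMap}_{c}(\Gamma)}\longrightarrow\operatorname{PMap}(\Gamma)\stackrel{{\Pi}}{{\longrightarrow}}\mathbb{Z}^{n-1}\longrightarrow 1.\]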
**Lemma 3.18**.: _Let \(\mathcal{E}\) be a clopen subset of \(E(\Gamma)\) so that \(\mathcal{E}\cap E_{\ell}(\Gamma)\) is a proper nontrivial subset. If \(f\in\operatorname{PMap}(\Gamma)\) satisfies \(\Phi_{A}(f)=0\) for all \(A\in\mathcal{A}\), then \(\Phi_{\mathcal{E}}(f)=0\) as well._
Proof.: We first note that \(\mathcal{E}\) can be written as a disjoint union of finitely many cylinder sets. Thus, by Proposition 3.9 (ii) it suffices to check when \(\mathcal{E}\) is a cylinder set \(C\) of \(E(\Gamma)\). Assume that \(f\in\operatorname{PMap}(\Gamma)\) satisfies \(\Phi_{A_{i}}(f)=0\) for all \(i\in I\). Then \(C\cap E_{\ell}(\Gamma)\) is again a cylinder set of \(E_{\ell}\). Applying property (1) of Theorem 3.17 we have either \(C\in\mathcal{A}\), or \(C=A_{0}\setminus(\bigsqcup_{j=1}^{n}A_{j})\) for some \(A_{0}\in\mathcal{A}\) and \(A_{j}\in\mathcal{A}\). If \(C\in\mathcal{A}\), then we conclude \(\Phi_{C}(f)=0\). For the other case, we can apply Proposition 3.9 (ii) to write
\[\Phi_{C}(f)=\Phi_{A_{0}}(f)-\sum_{j=1}^{n}\Phi_{A_{j}}(f)=0-0=0.\]
**Corollary 3.19**.: _For \(\Gamma\) and \(\Pi\) as above, \(\ker(\Pi)=\overline{\operatorname{PMap}_{c}(\Gamma)}\)._
Proof.: The forward direction of Theorem 3.11 implies \(\ker(\Pi)\supset\overline{\operatorname{PMap}_{c}(\Gamma)}\). On the other hand, Lemma 3.18 together with the backward direction of Theorem 3.11 imply the other containment \(\ker(\Pi)\subset\overline{\operatorname{PMap}_{c}(\Gamma)}\).
Next, we will build a section to show \(\Pi\) is surjective, and more importantly, this sequence splits. This gives us our desired semidirect product decomposition in Theorem B.
**Proposition 3.20**.: _There exists an injective homomorphism \(\hat{\iota}:\prod_{i\in I}\mathbb{Z}\to\operatorname{PMap}(\Gamma)\) so that \(\Pi\circ\hat{\iota}\) is the identity on \(\prod_{i\in I}\mathbb{Z}\)._
Proof.: Let \(T\) be the maximal tree of the graph \(\Gamma_{c}\) in standard form. Note that the end space of \(T\) is homeomorphic to \(E_{\ell}(\Gamma)\) and let \(\mathcal{A}=\{A_{i}\}_{i\in I}\) be the set obtained from (2) of Theorem 3.17 applied to the set \(E_{\ell}(\Gamma)\) and \(\iota:\mathcal{A}\to T\) be the map given by property (3) of Theorem 3.17. The closure in \(\Gamma_{c}\) of every complementary component of \(\iota(\mathcal{A})\) is a one-ended subgraph with infinite rank. Call one such component \(\Gamma^{\prime}\). It has at most a countably infinite number of half edges coming from the points of \(\iota(\mathcal{A})\). Now we will modify \(\Gamma^{\prime}\) via a proper homotopy equivalence that fixes \(\partial\Gamma^{\prime}\) so that the new graph has a "grid of loops" above \(\partial\Gamma^{\prime}\). See Figure 5 for how this replacement is done. Such a replacement by a proper homotopy equivalence is possible by the classification of infinite graphs.
After replacing each component of \(\Gamma_{c}\setminus\iota(\mathcal{A})\) we obtain a new graph that is proper homotopy equivalent to the original \(\Gamma_{c}\). We can also extend this proper homotopy equivalence to the entire graph \(\Gamma\), as our proper homotopy equivalence fixes the boundary points of each complementary component of \(\iota(\mathcal{A})\). Now for each \(i\), there are exactly two complementary components whose closures in \(\Gamma_{c}\) contain \(\iota(A_{i})\). Let \(\ell_{i}\in\operatorname{PMap}(\Gamma)\) be the loop shift supported on the two columns of loops sitting above \(\iota(A_{i})\) in these components. Orient the loop shift so that it is shifting towards the end in \(A_{i}\).
Note that each \(\ell_{i}\) has total support disjoint from each other \(\ell_{j}\) so that \(\ell_{i}\ell_{j}=\ell_{j}\ell_{i}\) for all \(i,j\in I\). Therefore, \(\prod_{i\in I}\langle\ell_{i}\rangle<\operatorname{PMap}(\Gamma)\), and we can define the homomorphism
\(\hat{\iota}:\prod_{i\in I}\mathbb{Z}\rightarrow\mathrm{PMap}(\Gamma)\) by
\[\hat{\iota}\left((n_{i})_{i\in I}\right):=\prod_{i\in I}\ell_{i}^{n_{i}}.\]
It remains to check that \(\Pi\circ\hat{\iota}\) is the identity on \(\prod_{i\in I}\mathbb{Z}\). By the construction of the loop shifts, \(\ell_{i}\) crosses exactly one of the clopen subsets in \(\mathcal{A}\), namely \(A_{i}\). Therefore, we have

\[\Phi_{A_{j}}(\ell_{i})=\delta_{ij}:=\begin{cases}1&\text{if }i=j,\\ 0&\text{if }i\neq j.\end{cases}\]

Figure 5. The new replacement graphs for each component of \(T\setminus\iota(\mathcal{A})\). The top picture shows the case when a component has infinitely many cut points and the bottom for finitely many. Note that above each cut point one sees a “column of loops” within the grid.
Now, given any tuple \((n_{i})_{i\in I}\in\prod_{i\in I}\mathbb{Z}\) we compute
\[(\Pi\circ\hat{\iota})\left((n_{i})_{i\in I}\right)=\Pi\left(\prod_{i\in I}\ell _{i}^{n_{i}}\right)=\left(\Phi_{A_{j}}\left(\prod_{i\in I}\ell_{i}^{n_{i}} \right)\right)_{j\in I}=(n_{i})_{i\in I}.\qed\]
Proof of Theorem B.: Corollary 3.19 and Proposition 3.20 above give the desired splitting short exact sequence \(1\longrightarrow\overline{\mathrm{PMap}_{c}(\Gamma)}\longrightarrow\mathrm{PMap}(\Gamma)\longrightarrow\mathbb{Z}^{\alpha}\longrightarrow 1\), with \(\alpha=|\mathcal{A}|\).
### The rank of integral cohomology
As pointed out in Remark 3.10, we define \(\Phi_{\emptyset}=\Phi_{E_{\ell}}\equiv 0\).
**Lemma 3.21**.: _Let \(\{A_{i}\}_{i\in I}\) be a collection of clopen subsets of \(E_{\ell}(\Gamma)\) such that \(\mathcal{B}=\{\chi_{A_{i}}\}_{i\in I}\) is a free basis for \(\hat{C}(E_{\ell})\) as in Theorem 3.17. Then the map_

\[\Theta:\hat{C}(E_{\ell}(\Gamma)) \longrightarrow H^{1}(\mathrm{PMap}(\Gamma);\mathbb{Z}),\] \[\sum_{i\in I}n_{i}\chi_{A_{i}} \longmapsto\sum_{i\in I}n_{i}\Phi_{A_{i}}.\]
_is a well-defined injective homomorphism._
Proof.: Since \(\mathcal{B}\) is a free basis for \(\hat{C}(E_{\ell})\), the map \(\chi_{A_{i}}\mapsto\Phi_{A_{i}}\) on \(\mathcal{B}\) extends to a well-defined homomorphism on the whole group \(\hat{C}(E_{\ell})\). To see \(\Theta\) is injective, suppose \(\Theta(\sum_{i}n_{i}\chi_{A_{i}})=\sum_{i}n_{i}\Phi_{A_{i}}=0\) for \(\chi_{A_{i}}\in\mathcal{B}\). Then for each \(j\) that arises as an index of the summation, we evaluate the sum at the loop shift \(\ell_{j}\) constructed in the proof of Proposition 3.20:
\[0=\sum_{i}n_{i}\Phi_{A_{i}}(\ell_{j})=n_{j}\Phi_{A_{j}}(\ell_{j})=n_{j},\]
which implies that \(\sum_{i}n_{i}\chi_{A_{i}}\equiv 0\), concluding that \(\Theta\) is injective.
Here we collect relevant results on the first homology of the pure mapping class group of graphs of rank \(n\) with \(s\) rays.
**Fact 3.22** ([13, Theorem 1.1]).: \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Q})=0\) _for all \(n\geq 1\)._
**Fact 3.23** ([13, Section 4]).: _For \(n\geq 3\) and \(s\geq 1\),_
\[H_{1}(F_{n}^{s-1}\rtimes\mathrm{Aut}(F_{n});\mathbb{Z})\cong H_{1}(F_{n}^{s} \rtimes\mathrm{Aut}(F_{n});\mathbb{Z}).\]
_This still holds for \(n=1,2\) if \(s\geq 2\)._
**Proposition 3.24**.: \(H^{1}(\mathrm{PMap}_{c}(\Gamma);\mathbb{Z})=0\) _for every locally finite, infinite graph \(\Gamma\)._
Proof.: Let \(\{\Gamma_{k}\}\) be a compact exhaustion of \(\Gamma\). Then \(\mathrm{PMap}_{c}(\Gamma)\) is a direct limit of \(\mathrm{PMap}(\Gamma_{k})\)'s, each of which is isomorphic to \(F_{n_{k}}^{e_{k}}\rtimes\mathrm{Aut}(F_{n_{k}})\) for some \(e_{k}\geq 0\) and \(n_{k}\geq 1\) (recall Remark 2.4). Since \(H^{1}(-;\mathbb{Z})\equiv\mathrm{Hom}(-,\mathbb{Z})\) turns this direct limit into an inverse limit, it suffices to show that groups of the form \(F_{n}^{e}\rtimes\mathrm{Aut}(F_{n})\) have trivial first cohomology. We first show \(H^{1}(\mathrm{Aut}(F_{n});\mathbb{Z})=0\). By the universal coefficient theorem for cohomology,
\[0\longrightarrow\mathrm{Ext}\left(H_{0}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z}\right)\longrightarrow H^{1}(\mathrm{Aut}(F_{n});\mathbb{Z})\longrightarrow\mathrm{Hom}(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z})\longrightarrow 0\]

where \(\mathrm{Ext}\left(H_{0}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z}\right)=0\) as \(H_{0}(\mathrm{Aut}(F_{n});\mathbb{Z})\cong\mathbb{Z}\) is free. Also, \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Q})=0\) by Fact 3.22, so \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z})\) is torsion and \(\mathrm{Hom}(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z})=0\). It follows that \(H^{1}(\mathrm{Aut}(F_{n});\mathbb{Z})=0\).
On the other hand, repeatedly applying Fact 3.23 together with the universal coefficient theorem for homology shows that for \(n\geq 3\),
\[H_{1}(F_{n}^{s}\rtimes\operatorname{Aut}(F_{n});\mathbb{Q})=H_{1}(F_{n}^{s-1} \rtimes\operatorname{Aut}(F_{n});\mathbb{Q})=\ldots=H_{1}(\operatorname{Aut}( F_{n});\mathbb{Q})=0.\]
The last equality comes from Fact 3.22. For \(n=1,2\), the argument is the same, except we reduce the problem of showing \(H^{1}(F_{n}^{s-1}\rtimes\operatorname{Aut}(F_{n});\mathbb{Z})=0\) to checking \(H_{1}(F_{n}\rtimes\operatorname{Aut}(F_{n});\mathbb{Q})=0\). One can check \(\mathbb{Z}\rtimes\mathbb{Z}_{2}\) and \(F_{2}\rtimes\operatorname{Aut}(F_{2})\) have finite abelianization to conclude this. (See e.g. [1, Corollary 2] for a finite presentation of \(\operatorname{Aut}(F_{2})\).) This completes the proof of \(H^{1}(\operatorname{PMap}_{c}(\Gamma);\mathbb{Z})=0\).
**Theorem 3.25**.: _The map \(\Theta\) in Lemma 3.21 is an isomorphism._
Proof.: We only need to check the surjectivity of \(\Theta\). Pick \(\phi\in H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})=\operatorname{Hom}( \operatorname{PMap}(\Gamma),\mathbb{Z})\). By Proposition 3.24, we have \(\phi(\operatorname{PMap}_{c}(\Gamma))=\{0\}\). By Dudley's automatic continuity [1], \(\phi\) is continuous, so \(\phi(\overline{\operatorname{PMap}_{c}(\Gamma)})=\{0\}\). Recall the semidirect product decomposition \(\operatorname{PMap}(\Gamma)\cong\overline{\operatorname{PMap}_{c}(\Gamma)}\rtimes L\) from Theorem B, where \(L\cong\prod_{i\in I}\langle\ell_{i}\rangle\), the product of commuting loop shifts. Furthermore, these loop shifts are dual to the collection of \(\{\Phi_{A_{i}}\}_{i\in I}\subset H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) so that \(\Phi_{A_{j}}(\ell_{i})=\delta_{ij}\). Since \(\phi\) is zero on the \(\overline{\operatorname{PMap}_{c}(\Gamma)}\)-factor, it follows that \(\phi\) is completely determined by its value on \(L\). Note also that \(L\cong\prod_{i\in I}\mathbb{Z}\) so that \(H^{1}(L;\mathbb{Z})\cong\oplus_{i\in I}\mathbb{Z}\) where a basis for \(H^{1}(L;\mathbb{Z})\) is given exactly by the set \(\{\Phi_{A_{i}}\}_{i\in I}\), as in Theorem 3.17(2). Hence, \(\phi=\phi|_{L}\in H^{1}(L;\mathbb{Z})\) can be described by a finite linear combination of \(\Phi_{A_{i}}\)'s. Such a finite linear combination is the image of a finite linear combination of \(\chi_{A_{i}}\) under \(\Theta\), so \(\Theta\) is surjective.
**Corollary 3.26** (Corollary C, revisited).: _For every locally finite, infinite graph \(\Gamma\),_
\[\operatorname{rk}\big{(}H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\big{)}= \begin{cases}0&\text{if }|E_{\ell}|\leq 1\\ n-1&\text{if }2\leq|E_{\ell}|=n<\infty\\ \aleph_{0}&\text{otherwise}.\end{cases}\]
Proof.: This follows from the isomorphism \(\Theta:\hat{C}(E_{\ell}(\Gamma))\cong H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) in Theorem 3.25.
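To spell out the three cases of this formula: the Loch Ness monster graph has \(|E_{\ell}|=1\) and hence trivial first cohomology; a graph with exactly two ends accumulated by loops (such as the graph \(\Delta_{n}\) of Section 6) has rank \(1\), detected by the flux of a loop shift; and a graph whose ends accumulated by loops form a Cantor set \(\mathcal{C}\) realizes the countable case:

\[|E_{\ell}|=1\ \Rightarrow\ \operatorname{rk}=0,\qquad|E_{\ell}|=2\ \Rightarrow\ \operatorname{rk}=1,\qquad E_{\ell}\cong\mathcal{C}\ \Rightarrow\ \operatorname{rk}=\aleph_{0}.\]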
### Relation to surfaces
Aramayona-Patel-Vlamis in [1] obtain a result similar to Theorem 3.25 in the infinite-type _surface_ case using the homology of separating curves in place of \(\hat{C}(E_{\ell}(\Gamma))\). Here we show that these approaches can be unified, as they each rely solely on the subspace of ends accumulated by loops or genus. Let \(S\) be an infinite-type surface and let \(\hat{S}\) be the surface obtained from \(S\) by forgetting the planar ends of \(S\). Let \(H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\) be the subgroup of \(H_{1}(\hat{S};\mathbb{Z})\) generated by homology classes that have separating simple closed curves of \(\hat{S}\) as representatives. Note that when \(S\) has only planar ends, \(H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\) is trivial.
**Theorem 3.27** ([1, Theorem 4] for genus \(\geq 2\), [1, Theorem 1.1] for genus \(1\)).: _Let \(S\) be an infinite-type surface of genus at least one. Then_
\[H^{1}(\operatorname{PMap}(S);\mathbb{Z})\cong H_{1}^{\text{sep}}(\hat{S}; \mathbb{Z}).\]
Let \(E_{g}(S)\) denote the space of ends of \(S\) accumulated by genus (i.e., the non-planar ends).
**Proposition 3.28**.: _Let \(S\) be an infinite-type surface. Then_
\[H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\cong\hat{C}(E_{g}(S)).\]
Proof.: We first note that by definition, \(E_{g}(S)=E(\hat{S})\). Let \(v\in H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\) be a primitive element, i.e. \(v\) has a representative \(\gamma\) that is an oriented and separating simple closed curve. Now \(v\) determines a partition of \(E(\hat{S})\) into two clopen subsets, \(v^{+}\), those ends to the right of \(\gamma\), and \(v^{-}\), those ends to the left of \(\gamma\). Note that these are proper subsets if and only if \(v\neq 0\) if and only if \(\chi_{v^{+}}\neq 0\) in \(\hat{C}(E)\). Define
\[\Xi(v):=\chi_{v^{+}}\in\hat{C}(E),\]
for each nonzero primitive element \(v\) of \(H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\). This extends linearly to define an isomorphism \(\Xi:H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\xrightarrow{\sim}\hat{C}(E_{g}(S))\).
**Corollary 3.29**.: _Let \(S\) be an infinite-type surface of genus at least one and \(\Gamma\) be a locally finite, infinite graph. If \(E_{g}(S)\) is homeomorphic to \(E_{\ell}(\Gamma)\), then there is a natural isomorphism between \(H^{1}(\operatorname{PMap}(S);\mathbb{Z})\) and \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\)._
Proof.: We first note that if \(E_{g}(S)\) is empty (i.e. \(S\) has finite genus), then \(H^{1}(\operatorname{PMap}(S);\mathbb{Z})\) is trivial by [1, Theorem 1] and [1, Theorem 1.1]. Similarly, if \(E_{\ell}(\Gamma)\) is empty, then \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) is trivial by Proposition 3.2 and Proposition 3.24.
Otherwise, the isomorphism is obtained by composing the maps from Theorem 3.25, Theorem 3.27, and Proposition 3.28:
\[H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\stackrel{{\Theta}}{ {\cong}}\hat{C}(E_{\ell}(\Gamma))\cong\hat{C}(E_{g}(S))\stackrel{{ \Xi}}{{\cong}}H^{sep}_{1}(\hat{S};\mathbb{Z})\cong H^{1}( \operatorname{PMap}(S);\mathbb{Z}).\]
## 4. CB generation classification
As an application of Theorem 3.11 and Theorem B, in this section we obtain Theorem A, the classification of locally finite, infinite graphs whose pure mapping class groups are CB generated. We restate the theorem for convenience.
**Theorem 4.1** (Theorem A, revisited).: _Let \(\Gamma\) be a locally finite, infinite graph. Then \(\operatorname{PMap}(\Gamma)\) is CB generated if and only if either \(\Gamma\) is a tree, or satisfies the following:_
1. \(\Gamma\) _has finitely many ends accumulated by loops, and_
2. _there is no accumulation point in_ \(E\setminus E_{\ell}\)_._
The only if direction of Theorem A comes from [1]:
**Proposition 4.2** ([1, Theorem 6.1]).: _Let \(\Gamma\) be a locally finite, infinite graph. If \(\operatorname{rk}(\Gamma)>0\) and \(E\setminus E_{\ell}\) has an accumulation point, then \(\operatorname{PMap}(\Gamma)\) is not CB-generated._
**Proposition 4.3** ([1, Theorem 8.2]).: _Let \(\Gamma\) be a locally finite, infinite graph. If \(\Gamma\) has infinitely many ends accumulated by loops, then \(\operatorname{PMap}(\Gamma)\) is not locally CB. In particular, \(\operatorname{PMap}(\Gamma)\) is not CB-generated._
Now we show that those conditions are also sufficient for CB generation. First, recall by Proposition 2.6 that when \(\Gamma\) is a tree, \(\operatorname{PMap}(\Gamma)\) is the trivial group. We then show that Conditions (1) and (2) together imply that \(\operatorname{PMap}(\Gamma)\) is CB generated. We start with the case where \(\Gamma\) has finite rank and satisfies Condition (2):
**Proposition 4.4**.: _Let \(\Gamma\) be a locally finite, infinite graph. If \(\Gamma\) has finite rank with no accumulation point in \(E\), then \(\operatorname{PMap}(\Gamma)\) is finitely generated._
Proof.: Note in this case \(E_{\ell}\) is the empty set, so having no accumulation point in \(E\setminus E_{\ell}\) is equivalent to having a finite end space. Hence \(\operatorname{PMap}(\Gamma)\) is isomorphic to one of \(\operatorname{Out}(F_{n}),\operatorname{Aut}(F_{n})\) or \(F_{n}^{e}\rtimes\operatorname{Aut}(F_{n})\) for some \(e=|E|-1\geq 1\), all of which are finitely generated, concluding the proof.
Now assume \(\Gamma\) has infinite rank but finitely many ends accumulated by loops with no accumulation point in \(E\setminus E_{\ell}\). As in Remark 1.1, \(\Gamma\) can be realized as a finite wedge sum of rays, Loch Ness monster graphs (infinite rank graphs with end space \((E,E_{\ell})\cong(\{*\},\{*\})\)), and Millipede monster graphs (infinite rank graphs with end space \((E,E_{\ell})\cong(\mathbb{N}\cup\{\infty\},\{\infty\})\)). Then \(\Gamma\) is characterized by the triple \((r,l,m)\), where \(r\) is the number of ray summands, \(l\) is the number of Loch Ness monster summands, and \(m\) is the number of Millipede monster summands. Then \(\Gamma\) is as in Figure 6. Note that this triple is not unique; in fact, if \(m>0\) then we do not need to keep track of \(r\), as any additional ray can simply be moved via a proper homotopy into a Millipede monster summand. However, in order to avoid a case-by-case analysis we prove that \(\operatorname{PMap}(\Gamma)\) is CB-generated for _any_ triple \((r,l,m)\). Note that we already know by Theorem 2.16 that both the Loch Ness monster graph, \((0,1,0)\), and the Millipede monster graph, \((0,0,1)\), have CB and thus CB-generated pure mapping class groups. Therefore we will ignore these two graphs throughout this section.
The foundation for our choice of CB-generating set for \(\operatorname{PMap}(\Gamma)\) will be the set \(\mathcal{V}_{K}\), where \(K\) is the wedge point as in Figure 6. Recall that an appropriate choice of a compact set \(K\) provides a CB neighborhood of the identity certifying that \(\operatorname{PMap}(\Gamma)\) is locally CB.
Figure 6. The graphs \(\Gamma\) that we prove have a CB-generating pure mapping class group. Each such \(\Gamma\) has a single wedge point \(K\) and \(\Gamma\setminus K\) has \(r\) ray components, \(l\) Loch Ness monster components, and \(m\) Millipede monster components.
**Proposition 4.5** ([4, Proposition 8.3]).: _Let \(\Gamma\) be a locally finite, infinite graph with finitely many ends accumulated by loops. Then \(\operatorname{PMap}(\Gamma)\) is locally CB if and only if \(\Gamma\setminus\Gamma_{c}\) has only finitely many components whose end space is infinite. Moreover, for any choice of connected compact subgraph \(K\) whose complementary components are either trees or monster graphs, \(\mathcal{V}_{K}\) is a CB neighborhood of the identity in \(\operatorname{PMap}(\Gamma)\)._
We remark that the moreover statement is absent in [4, Proposition 8.3]; however, it can be deduced readily from the proof. We thus have that our choice of \(\mathcal{V}_{K}\) is CB. This is the starting point for our CB generating set; we now describe how to choose the remaining elements.
Enumerate each of the ray summands of \(\Gamma\) as \(R_{1},\ldots,R_{r}\), the Loch Ness monster summands as \(L_{1},\ldots,L_{l}\), and the Millipede monster summands as \(M_{1},\ldots,M_{m}\) (skip the enumeration if there are no summands of a given type). We also sequentially label the loops in \(L_{i}\) by \(a_{i,j}\) where \(a_{i,1}\) is the loop closest to \(K\). We similarly label the loops in \(M_{i}\) by \(b_{i,j}\). For each \(R_{i}\) let \(I_{i}\) be an interval in the interior of \(R_{i}\). Then we have the following finite collection of word maps:
\[W:=\{\phi_{(a_{1,1},I_{i})}\}_{i=1}^{r}.\]
If \(l=0\) then we use \(W:=\{\phi_{(b_{1,1},I_{i})}\}_{i=1}^{r}\) instead. Note we cannot have \(l=m=0\) as \(\Gamma\) has infinite rank. If \(r=0\), we set \(W:=\emptyset\).
Next, we have the following finite collection of loop swaps:
\[B:= \{\alpha_{ij}:=\text{swaps }a_{i,1}\leftrightarrow a_{j,1}\ |\ 1 \leq i<j\leq l\}\] \[\cup\{\beta_{ij}:=\text{swaps }b_{i,1}\leftrightarrow b_{j,1}\ |\ 1 \leq i<j\leq m\}\] \[\cup\{\gamma_{ij}:=\text{swaps }a_{i,1}\leftrightarrow b_{j,1}\ |\ 1 \leq i\leq l,\ 1\leq j\leq m\}.\]
In words, \(B\) is the collection of all loop swaps between loops that are adjacent to \(K\).
Finally, we need a finite collection of loop shifts. The graph \(\Gamma\) has only finitely many ends accumulated by loops, so by Corollary C, \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) has finite rank. Let \(H\) be a finite collection of primitive loop shifts dual to a finite basis of \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\).
We claim that the set
\[\mathcal{S}:=\mathcal{V}_{K}\cup W\cup B\cup H\]
is a CB generating set for \(\operatorname{PMap}(\Gamma)\). Note that \(\mathcal{S}\) is CB since \(\mathcal{V}_{K}\) is CB by Proposition 4.5 and each of \(W,B\), and \(H\) is simply a finite set. Thus we only need to verify that \(\mathcal{S}\) is a generating set for \(\operatorname{PMap}(\Gamma)\). We will first check that \(\mathcal{S}\) generates \(\operatorname{PMap}_{c}(\Gamma)\).
**Lemma 4.6**.: _If \(K^{\prime}\subset\Gamma\) is any connected compact subset of \(\Gamma\), then \(\operatorname{PMap}(K^{\prime})\subset\langle\mathcal{S}\rangle\)._
Before we give the proof of this lemma we review finite generating sets for \(\operatorname{Aut}(F_{n})\). Let \(F_{n}\) be a free group of rank \(n\), and denote by \(a_{1},\ldots,a_{n}\) its free generators. In 1924, Nielsen [10] gave a finite presentation of \(\operatorname{Aut}(F_{n})\), with the generating set \(\{\tau_{i}\}_{i=1}^{n}\cup\{\sigma_{ij},\lambda_{ij},\rho_{ij}\}_{1\leq i\neq j\leq n}\), where:
\[\tau_{i}=\begin{cases}a_{i}\mapsto a_{i}^{-1},\\ a_{j}\mapsto a_{j}\qquad\text{for }j\neq i.\end{cases} \sigma_{ij}=\begin{cases}a_{i}\leftrightarrow a_{j},\\ a_{k}\mapsto a_{k}\quad\text{for }k\neq i,j.\end{cases}\] \[\lambda_{ij}=\begin{cases}a_{i}\mapsto a_{j}a_{i},\\ a_{k}\mapsto a_{k}\qquad\text{for }k\neq i,j.\end{cases} \rho_{ij}=\begin{cases}a_{i}\mapsto a_{i}a_{j},\\ a_{k}\mapsto a_{k}\qquad\text{for }k\neq i,j.\end{cases}\]
We call \(\tau_{i}\) a **flip**, \(\sigma_{ij}\) a **transposition**, and \(\lambda_{ij},\rho_{ij}\) **left/right Nielsen automorphisms** respectively. In fact, Armstrong-Forrest-Vogtmann [1, Theorem 1] reduced this generating set to consist only of involutions:
\[\{\tau_{i}\}_{i=1}^{n}\cup\{\sigma_{i,i+1}\}_{i=1}^{n-1}\cup\{\tau_{2}\lambda_{12}\}\tag{$\dagger$}\]
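As a quick sanity check (with the convention that \(\tau_{2}\lambda_{12}\) applies \(\lambda_{12}\) first and then \(\tau_{2}\)), the last generator is indeed an involution: it sends \(a_{1}\mapsto a_{2}^{-1}a_{1}\) and \(a_{2}\mapsto a_{2}^{-1}\), so applying it twice yields

\[a_{1}\mapsto a_{2}\cdot a_{2}^{-1}a_{1}=a_{1},\qquad a_{2}\mapsto a_{2}.\]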
Proof of Lemma 4.6.: Let \(K^{\prime}\) be a connected compact subset of \(\Gamma\). Without loss of generality, we can increase the size of \(K^{\prime}\) so that it is as in Figure 7. In particular, \(K^{\prime}\) satisfies the following:
* \(K^{\prime}\) contains at least two loops of \(L_{1}\) (or \(M_{1}\) if \(l=0\)),
* \(K^{\prime}\) contains at least one loop from every monster summand,
* the vertices in \(K^{\prime}\) are contained in its interior,
* every component of \(\Gamma\setminus K^{\prime}\) is infinite,
* \(K^{\prime}\) is connected and contains the wedge point \(K\).
Note that the last two properties imply that \(K^{\prime}\) contains a subsegment of every ray summand \(R_{i}\).
By Proposition 2.3 and Remark 2.4, we have that \(\operatorname{PMap}(K^{\prime})\cong F_{m}^{k}\rtimes\operatorname{Aut}(F_{m})\) for some \(k>0\) and \(m=\operatorname{rk}(K^{\prime})\). We first check that \(\langle\mathcal{S}\rangle\) contains an Armstrong-Forrest-Vogtmann generating set for \(\operatorname{Aut}(F_{m})\). Relabel the loops of \(K^{\prime}\) by \(a_{1},\dots,a_{m}\) in the following manner. The loop of \(L_{1}\) closest to \(K\) is labeled \(a_{1}\), the next loop in \(L_{1}\) is \(a_{2}\), continuing until the loops of \(L_{1}\) are exhausted. Then the next loop, say \(a_{j+1}\), is the first loop on \(L_{2}\), etc., until all of the loops in all of the \(L_{i}\) are exhausted. Finally, continue relabeling the loops in \(M_{1}\) through \(M_{m}\) by \(a_{\bullet}\)'s in the same manner. Note that when \(l=0\), then \(a_{1}\) and \(a_{2}\) are contained in \(M_{1}\).
Figure 7. Illustration of \(K^{\prime}\) in \(\Gamma\). Here we have \(l=2,m=3,r=3\) and \(\operatorname{PMap}(K^{\prime})\cong F_{9}^{11}\rtimes\operatorname{Aut}(F_{9})\). We may assume \(K^{\prime}\) contains: at least one loop from every monster summand, at least two loops from one of the monster summands, and initial segments of the ray summands, as well as \(K\). If needed, we can further enlarge \(K^{\prime}\) such that it contains the vertices in the interior, and it only contains the entirety of the loops.
Note that we immediately have \(\tau_{1},\ldots,\tau_{m},\lambda_{12}\in\mathcal{V}_{K}\subset\mathcal{S}\). Therefore it remains to check that \(\sigma_{i,i+1}\in\langle\mathcal{S}\rangle\) for all \(i=1,\ldots,m-1\). Each such \(\sigma_{i,i+1}\) either swaps two adjacent loops in a single component of \(K^{\prime}\setminus K\) or swaps the last loop in a component of \(K^{\prime}\setminus K\) with the first loop in the next component. In the former case we already have that \(\sigma_{i,i+1}\in\mathcal{V}_{K}\). For the latter case, let \(a_{t}\) be the first loop in the component of \(K^{\prime}\setminus K\) containing \(a_{i}\). Then consider the loop swap \(\sigma_{i,t}\) that swaps \(a_{i}\) with \(a_{t}\) (note those two loops could coincide, and then \(\sigma_{i,t}\) is the identity) and let \(\sigma_{t,i+1}\) be the loop swap that swaps \(a_{t}\) with \(a_{i+1}\), which is the first loop in the component of \(K^{\prime}\setminus K\) containing \(a_{i+1}\). Then we have that \(\sigma_{i,t}\in\mathcal{V}_{K}\), \(\sigma_{t,i+1}\in B\), and \(\sigma_{i,i+1}=\sigma_{i,t}\sigma_{t,i+1}\sigma_{i,t}\in\langle\mathcal{S}\rangle\). Thus we see that every Armstrong-Forrest-Vogtmann generator for the \(\operatorname{Aut}(F_{m})\) subgroup of \(\operatorname{PMap}(K^{\prime})\cong F_{m}^{k}\rtimes\operatorname{Aut}(F_{m})\) is contained in \(\langle\mathcal{S}\rangle\).
Finally we need to be able to obtain each of the \(k\) factors of \(F_{m}\) in \(\operatorname{PMap}(K^{\prime})\). Each \(F_{m}\) factor can be identified with the group of word maps on an interval adjacent to the boundary of \(K^{\prime}\). Recall by Proposition 2.3 there are \(k+1\) such boundary adjacent intervals, say \(I_{1},\ldots,I_{k+1}\). Since we have already generated the \(\operatorname{Aut}(F_{m})\) subgroup of \(\operatorname{PMap}(K^{\prime})\) with \(\mathcal{S}\) and we can change the word of the word map using Lemma 2.12 and Lemma 2.13, it suffices to show that a _single_ word map on each interval \(I_{j}\) that maps onto a generator of \(F_{m}\) is in \(\langle\mathcal{S}\rangle\). However, we even have one such word map in \(\mathcal{S}\) already. Indeed, if \(I_{j}\) is contained in some ray then we have already added a corresponding word map to \(W\). Otherwise, if \(I_{j}\) is contained in some monster summand, then there is an appropriate word map already in \(\mathcal{V}_{K}\) obtained by mapping \(I_{j}\) over the first loop of that summand. We can thus conclude that \(\operatorname{PMap}(K^{\prime})\cong F_{m}^{k}\rtimes\operatorname{Aut}(F_{m})\) is contained in \(\langle\mathcal{S}\rangle\).
We are now ready to prove Theorem A. Note that in the above lemma we never made use of the loop shifts in \(H\). They will now be used to push an arbitrary mapping class into the closure of the compactly supported mapping classes.
Proof of Theorem A.: As discussed in the beginning of the section, the only if direction comes from Proposition 4.2 and Proposition 4.3. Now we prove the if direction. When \(\Gamma\) is a tree, we have \(\operatorname{PMap}(\Gamma)=1\) by Proposition 2.6. If \(\Gamma\) has finite rank, \(\operatorname{PMap}(\Gamma)\) is finitely generated by Proposition 4.4. Also if \(\Gamma\) is either the Loch Ness monster or Millipede monster graph, then by Theorem 2.16, \(\operatorname{PMap}(\Gamma)\) is CB. Hence we may assume \(1\leq|E_{\ell}|<\infty\), there is no accumulation point in \(E\setminus E_{\ell}\), and \(\Gamma\) is neither the Loch Ness monster nor the Millipede monster graphs.
Let \(\mathcal{S}\) be as defined above; \(\mathcal{S}=\mathcal{V}_{K}\cup W\cup B\cup H\). We will show that \(\mathcal{S}\) generates \(\operatorname{PMap}(\Gamma)\). Let \(f\in\operatorname{PMap}(\Gamma)\). If \(|E_{\ell}|=1\), then \(\operatorname{PMap}(\Gamma)=\overline{\operatorname{PMap}_{c}(\Gamma)}\) by Theorem B, so we obtain \(f\in\overline{\operatorname{PMap}_{c}(\Gamma)}\). Otherwise, if \(|E_{\ell}|\geq 2\), then by postcomposing \(f\) with primitive loop shifts in \(H\), we may assume the flux of \(f\) is zero with respect to any \(2\)-partition of \(E_{\ell}\). By Theorem 3.11, we can assume \(f\in\overline{\operatorname{PMap}_{c}(\Gamma)}\) for this case as well.
Then there exists a compact set \(K^{\prime}\) containing \(K\), and \(g\in\operatorname{PMap}_{c}(\Gamma)\) such that \(g\) is totally supported in \(K^{\prime}\) and \(fg^{-1}\in\mathcal{V}_{K}\). Therefore, it suffices to show that \(g\) is contained in the group generated by \(\mathcal{S}\). Since \(g\) is totally supported in \(K^{\prime}\), the map \(g\) can be identified with an element in \(\operatorname{PMap}(K^{\prime})\), which is contained in \(\langle\mathcal{S}\rangle\) by Lemma 4.6. This concludes the proof that \(\operatorname{PMap}(\Gamma)\) is generated by \(\mathcal{S}\). Finally, \(\mathcal{S}\) is CB as it is the union of three finite sets \(W,B\), and \(H\) and the set \(\mathcal{V}_{K}\), which is CB by Proposition 4.5.
## 5. Residual finiteness
In this section, we prove Theorem D:
**Theorem 5.1** (Theorem D, revisited).: \(\operatorname{PMap}(\Gamma)\) _is residually finite if and only if \(\Gamma\) has finite rank._
### Forgetful map
Throughout this section, we let \(\Gamma\) be a locally finite, infinite graph with no ends accumulated by loops. That is, \(E_{\ell}(\Gamma)=\emptyset\) but \(E(\Gamma)\neq\emptyset\). Fix an end \(\alpha_{0}\in E(\Gamma)\). Define \(E_{<\infty}(\Gamma)\) as the collection of finite subsets of \(E(\Gamma)\) containing \(\alpha_{0}\):
\[E_{<\infty}(\Gamma)=\{\mathcal{E}\subset E(\Gamma):\alpha_{0}\in\mathcal{E}, \text{ and }|\mathcal{E}|<\infty\}.\]
For each \(\mathcal{E}\in E_{<\infty}(\Gamma)\), we define the graph \(\Gamma_{\mathcal{E}}\) as a subgraph of \(\Gamma\) such that:
* \(\operatorname{rk}\Gamma_{\mathcal{E}}=\operatorname{rk}\Gamma\), and
* \(E(\Gamma_{\mathcal{E}})=\mathcal{E}\).
Note \(\Gamma_{\mathcal{E}}\) is properly homotopy equivalent to the core graph \(\Gamma_{c}\) of \(\Gamma\) with \(|\mathcal{E}|\) rays attached.
Now we use Proposition 2.3: \(\operatorname{PMap}(\Gamma)\cong\mathcal{R}\rtimes\operatorname{PMap}(\Gamma_{c}^{*})\) if \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) is nonempty and compact. In our case \(\Gamma\) is of infinite type and has no ends accumulated by loops, so \(E(\Gamma)\setminus E_{\ell}(\Gamma)=E(\Gamma)\) is automatically nonempty and compact. Now we denote by \(\mathcal{R}_{\mathcal{E}}\) the \(\mathcal{R}\) subgroup for \(\Gamma_{\mathcal{E}}\). Then we have a map \(\rho_{\mathcal{E}}:\mathcal{R}\to\mathcal{R}_{\mathcal{E}}\) by 'restricting' the domain to \(E(\Gamma_{\mathcal{E}})\). Namely, given a locally constant map \([f:E(\Gamma)\to\pi_{1}(\Gamma,\alpha_{0})]\in\mathcal{R}\), we define \(\rho_{\mathcal{E}}(f):=f|_{E(\Gamma_{\mathcal{E}})}\), where we note that \(f|_{E(\Gamma_{\mathcal{E}})}:E(\Gamma_{\mathcal{E}})\to\pi_{1}(\Gamma,\alpha_{0})=\pi_{1}(\Gamma_{\mathcal{E}},\alpha_{0})\) is a locally constant map to \(\pi_{1}(\Gamma_{\mathcal{E}},\alpha_{0})\), so \(\rho_{\mathcal{E}}(f)\in\mathcal{R}_{\mathcal{E}}\).
**Lemma 5.2**.: _The restriction map \(\rho_{\mathcal{E}}:\mathcal{R}\to\mathcal{R}_{\mathcal{E}}\) is a group homomorphism._
Proof.: Recall the group operation in \(\mathcal{R}\) is the pointwise multiplication in \(\pi_{1}(\Gamma,\alpha_{0})\). Hence the restriction on \(f\cdot g\) for \(f,g\in\mathcal{R}\) is just the product of their restrictions:
\[\rho_{\mathcal{E}}(f\cdot g)=(f\cdot g)|_{E(\Gamma_{\mathcal{E}})}=f|_{E( \Gamma_{\mathcal{E}})}\cdot g|_{E(\Gamma_{\mathcal{E}})}=\rho_{\mathcal{E}}(f )\cdot\rho_{\mathcal{E}}(g).\qed\]
Observe \(\Gamma_{c}^{*}=(\Gamma_{\mathcal{E}})_{c}^{*}\). Hence, from the lemma we can build the homomorphism \(\mathcal{F}_{\mathcal{E}}\) from \(\operatorname{PMap}(\Gamma)\) to \(\operatorname{PMap}(\Gamma_{\mathcal{E}})\) by letting \(\mathcal{F}_{\mathcal{E}}=\rho_{\mathcal{E}}\times\operatorname{Id}\) on the decomposition \(\operatorname{PMap}(\Gamma)=\mathcal{R}\rtimes\operatorname{PMap}(\Gamma_{c}^{*})\) given in Proposition 2.3:
\[\mathcal{F}_{\mathcal{E}}:\operatorname{PMap}(\Gamma)\to\operatorname{PMap}( \Gamma_{\mathcal{E}}),\]
which we call the **forgetful homomorphism** to \(\mathcal{E}\subset E(\Gamma)\).
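In coordinates, \(\mathcal{F}_{\mathcal{E}}\) simply restricts the \(\mathcal{R}\)-factor and leaves the \(\operatorname{PMap}(\Gamma_{c}^{*})\)-factor untouched:

\[\mathcal{F}_{\mathcal{E}}(g,h)=(\rho_{\mathcal{E}}(g),h)\qquad\text{for }(g,h)\in\mathcal{R}\rtimes\operatorname{PMap}(\Gamma_{c}^{*}).\]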
### Finite rank if and only if residually finite
Now we prove Theorem D. The following lemma is used for the only if direction of the proof. Denote by \(\operatorname{SAut}(F_{n})\) the unique index \(2\) subgroup of \(\operatorname{Aut}(F_{n})\).
**Fact 5.3** ([1, Theorem 9.1]).: _There exists a strictly increasing sequence of integers \(\{a_{n}\}_{n\geq 3}\) such that for \(n\geq 3\), every nontrivial finite quotient of \(\operatorname{SAut}(F_{n})\) has cardinality at least \(a_{n}\)._
Proof of Theorem D.: Suppose that \(\Gamma\) has finite rank \(n\). If \(\Gamma\) has no ends then \(\operatorname{PMap}(\Gamma)\) is isomorphic to \(\operatorname{Out}(F_{n})\), which is residually finite by [10]. If \(\Gamma\) has only one end, then \(\operatorname{PMap}(\Gamma)\) is isomorphic to \(\operatorname{Aut}(F_{n})\), which is residually finite by [1]. If \(\Gamma\) has finitely many ends, then \(\operatorname{PMap}(\Gamma)\cong F_{n}^{|E|-1}\rtimes\operatorname{Aut}(F_{n})\) which is again residually finite as both factors are residually finite and \(F_{n}^{|E|-1}\) is finitely generated [15, Theorem 1].
Now we assume \(\Gamma\) has finite rank and infinitely many ends. The proof is similar to the proof for infinite-type surfaces; see [14, Proposition 4.6]. Let \(f\in\operatorname{PMap}(\Gamma)\) be a nontrivial element. Since \(\Gamma\) is of finite rank and \(f\) is proper, it follows that \((\Gamma\setminus\Gamma_{c})\cap\operatorname{supp}(f)\) is compact. In particular, there exists some finite set \(\mathcal{E}\subset E\) such that \((\Gamma_{\mathcal{E}}\setminus\Gamma_{c})\cap\operatorname{supp}(f)\) is still nonempty. This implies that the forgetful map \(\mathcal{F}_{\mathcal{E}}:\operatorname{PMap}(\Gamma)\to\operatorname{PMap}(\Gamma_{\mathcal{E}})\) sends \(f\) to a nontrivial element \(\mathcal{F}_{\mathcal{E}}(f)\in\operatorname{PMap}(\Gamma_{\mathcal{E}})\). However, we know that \(\Gamma_{\mathcal{E}}\) has finite end space, so \(\operatorname{PMap}(\Gamma_{\mathcal{E}})\) is residually finite by the previous paragraph. Therefore, there
exists a homomorphism \(\psi:\mathrm{PMap}(\Gamma_{\mathcal{E}})\to F\) for some finite group \(F\) so that \(\psi(\mathcal{F}_{\mathcal{E}}(f))\) is nontrivial. Thus \(\mathrm{PMap}(\Gamma)\) is residually finite.
Conversely, let \(\Gamma\) have infinite rank and assume it is in standard form. Let \(\{\Gamma_{k}\}\) be a compact exhaustion of \(\Gamma\) by connected subgraphs. Then there exist non-decreasing sequences \(\{n_{k}\},\{e_{k}\}\) such that \(\mathrm{PMap}(\Gamma_{k})\cong F_{n_{k}}^{e_{k}}\rtimes\mathrm{Aut}(F_{n_{k}})\). Here \(e_{k}+1\) is the number of boundaries of \(\Gamma_{k}\) (i.e. the size of \(\overline{\Gamma\setminus\Gamma_{k}}\cap\Gamma_{k}\)), and \(n_{k}\) is the rank of \(\Gamma_{k}\). As \(\Gamma\) has infinite rank, we have \(\lim_{k\to\infty}n_{k}=\infty.\) Also, note \(\mathrm{Aut}(F_{n_{k}})\) has the unique index \(2\) subgroup \(\mathrm{SAut}(F_{n_{k}})\) for each \(k\), whose isomorphic copy in \(\mathrm{PMap}(\Gamma_{k})\) we denote by \(G_{k}\). The group \(\mathrm{Aut}(F_{n_{k}})\) can be identified with the subgroup of mapping classes totally supported on the core graph of \(\Gamma_{k}\), and \(G_{k}\cong\mathrm{SAut}(F_{n_{k}})\) with the set of those mapping classes that preserve orientation. Since the core graph of \(\Gamma_{k}\) is contained in the core graph of \(\Gamma_{k+1}\), and orientation preserving mapping classes on \(\Gamma_{k}\) are orientation preserving on \(\Gamma_{k+1}\), it follows that we have the inclusion \(G_{k}\hookrightarrow G_{k+1}\). Hence the direct limit \(\mathrm{SAut}_{\infty}(\Gamma):=\varinjlim G_{k}\) is a well-defined subgroup of \(\mathrm{PMap}(\Gamma)\).
We claim that \(\mathrm{SAut}_{\infty}(\Gamma)\) has no nontrivial finite quotients. To see this, suppose \(H\) is a proper normal subgroup of \(\mathrm{SAut}_{\infty}(\Gamma)\) with finite index \(r\geq 2\). Then as \(H\) is a proper subgroup of \(\mathrm{SAut}_{\infty}(\Gamma)\) and \(\varinjlim G_{k}=\mathrm{SAut}_{\infty}(\Gamma)\), it follows that there exists some \(k_{0}\) such that whenever \(k\geq k_{0}\), \(G_{k}\) is not contained in \(H\). Hence \(H\cap G_{k}\) is a _proper_ normal subgroup of \(G_{k}\). Note
\[1\neq[G_{k}:G_{k}\cap H]\leq[\mathrm{SAut}_{\infty}(\Gamma):H]=r,\]
but the minimal finite index of proper subgroups of \(G_{k}\cong\mathrm{SAut}(F_{n_{k}})\) increases as \(k\) does by Fact 5.3. Therefore, \([G_{k}:G_{k}\cap H]\) cannot be uniformly bounded by \(r\), giving a contradiction. Therefore \(\mathrm{SAut}_{\infty}(\Gamma)\) has no nontrivial finite quotient. Since residual finiteness passes to subgroups, and a nontrivial group with no nontrivial finite quotients is not residually finite, it follows that both \(\mathrm{PMap}(\Gamma)\) and \(\mathrm{Map}(\Gamma)\) are not residually finite.
## 6. Tits alternative
In a series of three papers [1, 1, 1], Bestvina, Feighn, and Handel prove that \(\mathrm{Out}(F_{n})\) satisfies what we call the **strong Tits alternative**: every subgroup either contains a nonabelian free group or is virtually abelian. The same was previously known for mapping class groups of compact surfaces by work of Ivanov, McCarthy, and Birman-Lubotzky-McCarthy [14, 15, 16]. However, it was shown by Lanier and Loving [13] that this is not the case for big mapping class groups. They prove that big mapping class groups _never_ satisfy the strong Tits alternative by showing that they always contain a subgroup isomorphic to the wreath product \(\mathbb{Z}\wr\mathbb{Z}\). In [1], the authors extend this idea to find many subgroups isomorphic to wreath products. Allcock [16] further showed that most big mapping class groups fail the (standard) Tits alternative by finding a poison subgroup that surjects onto a Grigorchuk group. A group satisfies the **Tits alternative** if every subgroup either contains a nonabelian free subgroup or is virtually solvable. Note that some references require the subgroups to be finitely generated, but we do not need to make that restriction.
### Infinite rank: Fails to satisfy TA
In this section, we find poison subgroups (analogous to the surface case) in \(\mathrm{PMap}(\Gamma)\) for graphs \(\Gamma\) of infinite rank.
**Theorem 6.1**.: _Let \(\Gamma\) be a locally finite graph of infinite rank. Then \(\mathrm{PMap}(\Gamma)\) contains a subgroup isomorphic to \(\mathrm{Aut}(F_{n})\wr\mathbb{Z}\) for all \(n\in\mathbb{N}\)._
Proof.: Recall that to define the wreath product \(G\wr\mathbb{Z}\), we need a \(\mathbb{Z}\)-indexed set of copies of \(G\), denoted by \(\{G_{i}\}_{i\in\mathbb{Z}}\). Then \(\mathbb{Z}\) acts on the index set by translation, so it also acts
on \(\oplus_{i\in\mathbb{Z}}G_{i}\) by translation on the indices. Now set \(G=\operatorname{Aut}(F_{n})\) and denote by \(\phi\) the translation action by \(\mathbb{Z}\) on the direct sum. We then define
\[\operatorname{Aut}(F_{n})\wr\mathbb{Z}:=\left(\bigoplus_{\mathbb{Z}} \operatorname{Aut}(F_{n})\right)\rtimes_{\phi}\mathbb{Z}.\]
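Concretely, unpacking the semidirect product (with our choice of sign convention for the shift), the group law reads
\[\big((g_{i})_{i},\,m\big)\cdot\big((h_{i})_{i},\,n\big)=\big((g_{i}h_{i-m})_{i},\,m+n\big),\]
since the \(\mathbb{Z}\)-factor acts by translating indices.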
To realize this group as a subgroup of \(\operatorname{PMap}(\Gamma)\), we will find \(\mathbb{Z}\) copies of \(\operatorname{Aut}(F_{n})\) together with a translation action.
For each \(n\in\mathbb{N}\), let \(\Delta_{n}\) be the graph obtained from a line identified with \(\mathbb{R}\) with a wedge of \(n\) circles attached by an edge at each integer point; see Figure 8. If \(\Gamma\) has at least two ends accumulated by loops, we can properly homotope \(\Gamma\) to have \(\Delta_{n}\) as a subgraph.
For each \(i\in\mathbb{Z}\), let \(R_{i}\) be the wedge of circles supported above the integer point \(i\) in \(\Delta_{n}\subset\Gamma\). Let \(G_{i}\) be the subgroup of elements of \(\operatorname{PMap}(\Gamma)\) which are totally supported on \(R_{i}\). Each \(G_{i}\) is isomorphic to \(\operatorname{Aut}(F_{n})\) (see Remark 2.4) and the \(G_{i}\)'s have disjoint total support, so \(\langle G_{i}\rangle_{i\in\mathbb{Z}}\cong\oplus_{\mathbb{Z}}\operatorname{ Aut}(F_{n})\). There is a shift map, call it \(h\), that translates along \(\Delta_{n}\) by \(+1\) on the line and maps \(R_{i}\) to \(R_{i+1}\) isometrically. Because \(h^{m}G_{i}=G_{i+m}h^{m}\), the subgroup of \(\operatorname{PMap}(\Gamma)\) generated by \(G_{0}\) and \(h\) is isomorphic to \(\operatorname{Aut}(F_{n})\wr\mathbb{Z}\).
In general, if \(\Gamma\) has at least one end accumulated by loops, we can embed a copy of \(\Delta_{n}\) into \(\Gamma\) where the images of the two ends of \(\Delta_{n}\) are not distinct. The corresponding "shift map" will no longer be shifting between distinct ends, but this does not affect the construction of an \(\operatorname{Aut}(F_{n})\wr\mathbb{Z}\) subgroup.
Theorem 6.1 immediately tells us that \(\operatorname{PMap}(\Gamma)\) fails the strong Tits alternative because \(\mathbb{Z}\wr\mathbb{Z}\) is a subgroup of \(\operatorname{Aut}(F_{n})\wr\mathbb{Z}\). In [11], Allcock shows that big mapping class groups of surfaces with infinite genus fail the (standard) Tits alternative. His idea is to find elements of the mapping class group that "look like" the action of the Grigorchuk group on a rooted binary tree. Because these elements are not of finite order, the resulting subgroup of the mapping class group is an extension of the Grigorchuk group. When this same idea is implemented in the pure mapping class group of a graph, we instead find an exact copy of Grigorchuk's group. For many graphs, such as an infinite binary tree, the _full_ mapping class group also contains Grigorchuk's group as a subgroup in the obvious way.
**Theorem 6.2**.: _Let \(\Gamma\) be a locally finite graph of infinite rank. Then \(\operatorname{PMap}(\Gamma)\) contains a subgroup isomorphic to the Grigorchuk group._
Proof.: First, we define the proper homotopy equivalences called \(a,b,c,d\) on an infinite binary tree \(T\) as in Figure 9. Note that only \(a\) swaps the level \(1\) branches. Each of the other three homotopy equivalences \(b,c,d\) omits the \((3k+1)\)-st, \(3k\)-th, and \((3k-1)\)-st branch swaps for \(k\geq 1\),
Figure 8. The graph \(\Delta_{4}\), with two ends accumulated by roses with \(4\) petals. It admits a translation of roses, denoted by the green dotted arrow. Up to proper homotopy, such a graph arises as a subgraph of any graph with at least two ends accumulated by loops.
respectively, as well as the level-\(1\) swap. These four elements generate the Grigorchuk group \(G\) [10, 11].
Now let \(\Delta\) be the infinite graph with one end accumulated by loops, constructed as in Figure 10. Specifically, we start with a ray and label the countably many vertices by \(v_{1},v_{2},\ldots\) etc. Attach a finite binary tree \(T_{i}\) of level \(i\) to \(v_{i}\) for each \(i\geq 1\). Then attach a single loop at each leaf of the trees. For any graph \(\Gamma\) with infinite rank, we can apply a proper homotopy equivalence so that \(\Delta\) is a subgraph. Hence, \(\operatorname{PMap}(\Delta)\leq\operatorname{PMap}(\Gamma)\), so it suffices to find a copy of the Grigorchuk group inside \(\operatorname{PMap}(\Delta)\).
Define a proper homotopy equivalence \(\hat{b}\) as the map on _finite_ binary trees \(T_{1},T_{2},\ldots\) by 'mimicking' \(b\) defined on the _infinite_ binary tree \(T\). See Figure 10 for an illustration
Figure 9. Proper homotopy equivalences \(a,b,c\) and \(d\) on infinite binary tree \(T\). Each green arrow denotes the swap of the two subtrees induced by the swap of the two branches.
of \(\hat{b}\), denoted in green arrows. Similarly define \(\hat{a},\hat{c}\) and \(\hat{d}\) from \(a,c\) and \(d\). Denote by \(\widetilde{a},\widetilde{b},\widetilde{c}\) and \(\widetilde{d}\) the proper homotopy classes of \(\hat{a},\hat{b},\hat{c}\) and \(\hat{d}\) respectively. Following the same proof as [10, Lemma 4.1], we see that \(\widetilde{a},\widetilde{b},\widetilde{c},\widetilde{d}\) satisfy exactly the defining relations of \(G\), and \(\widetilde{G}:=\langle\widetilde{a},\widetilde{b},\widetilde{c},\widetilde{d}\rangle\) is isomorphic to the Grigorchuk group.
**Corollary 6.3**.: _Let \(\Gamma\) be a locally finite graph of infinite rank. Then \(\operatorname{PMap}(\Gamma)\) and \(\operatorname{Map}(\Gamma)\) fail the Tits alternative._
### Finite rank: Satisfies TA
On the other hand, when \(\Gamma\) has finite rank, we get the following contrasting result.
**Theorem 6.4**.: _Let \(\Gamma\) be a locally finite graph with finite rank. Then \(\operatorname{PMap}(\Gamma)\) satisfies the Tits alternative. That is, every finitely generated subgroup is either virtually solvable or contains a free group._
We first need the following stability property of the Tits alternative.
**Proposition 6.5**.: _Satisfying the Tits alternative is stable under subgroups, finite-index supergroups, and group extensions. More precisely,_
1. _Let_ \(H\leq G\)_. If_ \(G\) _satisfies the Tits alternative, then so does_ \(H\)_._
2. _Let_ \(H\leq G\) _with_ \([G:H]<\infty\)_. If_ \(H\) _satisfies the Tits alternative, then so does_ \(G\)_._
3. _(cf. [10, Proposition 6.3]) Suppose the groups_ \(K,G,H\) _form a short exact sequence as follows:_ \[1\longrightarrow K\longrightarrow G\longrightarrow H\longrightarrow 1.\] _If_ \(K\) _and_ \(H\) _satisfy the Tits alternative, then so does_ \(G\)_._
Proof.: Claim (1) holds because every subgroup of \(H\) is a subgroup of \(G\). Claim (2) will follow from (3): replacing \(H\) by its normal core in \(G\) (which still has finite index and satisfies the Tits alternative by (1)), we may assume \(H\trianglelefteq G\), and the finite quotient \(G/H\) satisfies the Tits alternative since finite groups are virtually trivial.
Now we prove (3). Let \(L\leq G\) be a subgroup, and write \(q:G\to H\) for the quotient map. Then we have the following commutative diagram:
\[\begin{array}{ccccccccc}1&\longrightarrow&K\cap L&\longrightarrow&L&\stackrel{q}{\longrightarrow}&q(L)&\longrightarrow&1\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 1&\longrightarrow&K&\longrightarrow&G&\stackrel{q}{\longrightarrow}&H&\longrightarrow&1\end{array}\]
Indeed, \(K\cap L\trianglelefteq L\) and \(q(L)\cong L/(K\cap L)\leq H\). By (1), both \(K\cap L\) and \(q(L)\) satisfy the Tits alternative. If \(K\cap L\) has \(F_{2}\) as a subgroup, then \(L\) has \(F_{2}\) as a subgroup. If \(q(L)\) has \(F_{2}\) as a subgroup, then we can find a section of \(q\) over the free generators to lift \(F_{2}\) inside \(L\). Hence, we may assume both \(K\cap L\) and \(q(L)\) are virtually solvable. In this case, the following fact finishes the proof.
**Fact 6.6** ([12, Lemma 5.5], see also [10, Lemme 6.1]).: _Suppose \(N\) is a normal subgroup of a group \(G\). If both \(N\) and \(G/N\) are virtually solvable, then \(G\) is virtually solvable._
Hence \(L\) is virtually solvable, concluding that \(G\) satisfies the Tits alternative.
Now we are ready to prove Theorem 6.4.
Proof of Theorem 6.4.: Let \(\operatorname{rk}\Gamma=n.\) Then we have the following short exact sequence [1, Theorem 3.5]:
\[1\longrightarrow\mathcal{R}\longrightarrow\operatorname{PMap}(\Gamma) \longrightarrow\operatorname{Aut}(F_{n})\longrightarrow 1,\]
where \(\mathcal{R}\) is the group of locally constant functions from \(E\) to \(F_{n}\) with pointwise multiplication.
The subgroup of \(\operatorname{Out}(F_{n+1})\) fixing one \(\mathbb{Z}\) factor is naturally isomorphic to \(\operatorname{Aut}(F_{n})\). Recall that \(\operatorname{Out}(F_{n+1})\) satisfies the Tits alternative by [1], so \(\operatorname{Aut}(F_{n})\) does too. We will show that \(\mathcal{R}\) satisfies the (strong) Tits alternative; then Proposition 6.5 part (3) guarantees that \(\operatorname{PMap}(\Gamma)\) satisfies the Tits alternative as well.
**Claim**.: \(\mathcal{R}\) _satisfies the strong Tits alternative._
Consider a (not necessarily finitely generated) subgroup \(H\subset\mathcal{R}\). If all elements of \(H\) commute, then \(H\) is abelian. Otherwise, there exist \(\phi,\psi\in H\) that do not commute; so there exists an \(e\in E\) such that \(\phi(e)\psi(e)\neq\psi(e)\phi(e)\). Now we use the Ping-Pong lemma to prove \(\langle\phi,\psi\rangle\cong F_{2}\), which will conclude that \(\mathcal{R}\) satisfies the strong Tits alternative. Let \(X_{\phi(e)}\) and \(X_{\psi(e)}\) be the sets of words in \(F_{n}\) that start with \(\phi(e)\) and \(\psi(e)\) respectively. We note \(X_{\phi(e)}\) and \(X_{\psi(e)}\) are disjoint, as otherwise \(\phi(e)=\psi(e)\), contradicting the assumption \(\phi(e)\psi(e)\neq\psi(e)\phi(e)\). We consider the action of \(\mathcal{R}\) on \(F_{n}\) as:
\[\phi\cdot w:=\phi(e)w,\qquad\text{for $\phi\in\mathcal{R}$ and $w\in F_{n}$.}\]
Then the same assumption \(\phi(e)\psi(e)\neq\psi(e)\phi(e)\) implies that \(\phi\cdot X_{\psi(e)}\subset X_{\phi(e)}\) and \(\psi\cdot X_{\phi(e)}\subset X_{\psi(e)}\). Therefore, by the Ping-Pong lemma, we conclude \(\langle\phi,\psi\rangle\cong F_{2}\), so \(\mathcal{R}\) satisfies the (strong) Tits alternative.
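For convenience, we record the standard form of the Ping-Pong lemma being invoked here: if a group acts on a set containing two disjoint nonempty subsets \(X_{1},X_{2}\), and \(a,b\) are elements such that \(a^{n}\cdot X_{2}\subset X_{1}\) and \(b^{n}\cdot X_{1}\subset X_{2}\) for all \(n\neq 0\), then \(\langle a,b\rangle\cong F_{2}\).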
We now extend these results to determine which full mapping class groups satisfy the Tits alternative.
**Corollary 6.7**.: _Let \(\Gamma\) be a locally finite, infinite graph. Then \(\operatorname{Map}(\Gamma)\) satisfies the Tits alternative if and only if \(\Gamma\) has finite rank and finite end space._
Proof.: We divide into cases. First, if \(\Gamma\) has at least one end accumulated by loops, then \(\operatorname{Map}(\Gamma)\) fails the Tits alternative by Corollary 6.3. Otherwise, \(\Gamma\) has finite rank, and we continue to divide into cases. If \(\Gamma\) has finite end space, then \(\operatorname{Map}(\Gamma)\) is a finite extension of \(\operatorname{PMap}(\Gamma)\), so by Proposition 6.5 property (2), the full mapping class group \(\operatorname{Map}(\Gamma)\) satisfies the Tits alternative. If \(\Gamma\) has countably infinite end space, then we can modify the proof of Theorem 6.2 by replacing the loops with rays to realize Grigorchuk's group as a subgroup of \(\operatorname{Map}(\Gamma)\). If the end space of \(\Gamma\) is uncountable, then there is a closed subset of the ends which is homeomorphic to the whole Cantor set, so \(\operatorname{Map}(\Gamma)\) contains Grigorchuk's group in the natural way, and again \(\operatorname{Map}(\Gamma)\) fails the Tits alternative.
The strong Tits alternative is not stable under group extensions (consider \(\mathbb{Z}\wr\mathbb{Z}\)). So, the best we could conclude about \(\operatorname{PMap}(\Gamma)\) from the decomposition as \(\mathcal{R}\rtimes\operatorname{Aut}(F_{n})\) was Theorem 6.4. However, our proof that \(\mathcal{R}\) satisfies the strong Tits alternative actually shows the slightly stronger statement: Any two elements of \(\mathcal{R}\) which do not commute generate \(F_{2}\). This property could be useful in answering the question,
**Question 6.8**.: If \(\Gamma\) is a locally finite graph of finite rank, does \(\operatorname{PMap}(\Gamma)\) satisfy the strong Tits alternative?
| We completely classify locally finite, infinite graphs by their pure mapping class groups, and we also study the algebraic properties of the pure mapping class group. By applying theorems on semidirect product decomposition, first integral cohomology, residual finiteness, and the Tits alternative, we derive the quasi-isometric and algebraic rigidity of these groups. |
2309.09670 | DGM-DR: Domain Generalization with Mutual Information Regularized
Diabetic Retinopathy Classification | The domain shift between training and testing data presents a significant
challenge for training generalizable deep learning models. As a consequence,
the performance of models trained with the independent and identically
distributed (i.i.d) assumption deteriorates when deployed in the real world.
This problem is exacerbated in the medical imaging context due to variations in
data acquisition across clinical centers, medical apparatus, and patients.
Domain generalization (DG) aims to address this problem by learning a model
that generalizes well to any unseen target domain. Many domain generalization
techniques were unsuccessful in learning domain-invariant representations due
to the large domain shift. Furthermore, multiple tasks in medical imaging are
not yet extensively studied in existing literature when it comes to DG point of
view. In this paper, we introduce a DG method that re-establishes the model
objective function as a maximization of mutual information with a large
pretrained model to the medical imaging field. We re-visit the problem of DG in
Diabetic Retinopathy (DR) classification to establish a clear benchmark with a
correct model selection strategy and to achieve robust domain-invariant
representation for an improved generalization. Moreover, we conduct extensive
experiments on public datasets to show that our proposed method consistently
outperforms the previous state-of-the-art by a margin of 5.25% in average
accuracy and a lower standard deviation. Source code available at
https://github.com/BioMedIA-MBZUAI/DGM-DR | Aleksandr Matsun, Dana O. Mohamed, Sharon Chokuwa, Muhammad Ridzuan, Mohammad Yaqub | 2023-09-18T11:17:13 | http://arxiv.org/abs/2309.09670v1 | DGM-DR: Domain Generalization with Mutual Information Regularized Diabetic Retinopathy Classification
###### Abstract
The domain shift between training and testing data presents a significant challenge for training generalizable deep learning models. As a consequence, the performance of models trained with the independent and identically distributed (i.i.d) assumption deteriorates when deployed in the real world. This problem is exacerbated in the medical imaging context due to variations in data acquisition across clinical centers, medical apparatus, and patients. Domain generalization (DG) aims to address this problem by learning a model that generalizes well to any unseen target domain. Many domain generalization techniques were unsuccessful in learning domain-invariant representations due to the large domain shift. Furthermore, multiple tasks in medical imaging are not yet extensively studied in existing literature when it comes to DG point of view. In this paper, we introduce a DG method that re-establishes the model objective function as a maximization of mutual information with a large pretrained model to the medical imaging field. We re-visit the problem of DG in Diabetic Retinopathy (DR) classification to establish a clear benchmark with a correct model selection strategy and to achieve robust domain-invariant representation for an improved generalization. Moreover, we conduct extensive experiments on public datasets to show that our proposed method consistently outperforms the previous state-of-the-art by a margin of 5.25% in average accuracy and a lower standard deviation. Source code available at [https://github.com/BioMedIA-MBZUAI/DGM-DR](https://github.com/BioMedIA-MBZUAI/DGM-DR).
Keywords: Domain Generalization, Diabetic Retinopathy, Mutual Information Regularization
## 1 Introduction
Medical imaging has become an indispensable tool in diagnosis, treatment planning, and prognosis. Coupled with the introduction of deep learning, medical imaging has witnessed tremendous progress in recent years. Notwithstanding, a
major challenge in the medical imaging field is the domain shift problem, where the performance of a trained model deteriorates when, for instance, it is tested on a dataset that was acquired from a different device or patient population than the original dataset. This problem is especially prominent in tasks where acquiring large-scale annotated datasets from one center is costly and time-consuming. Domain generalization (DG) [29] aims to alleviate this challenge by training models that can generalize well to new unseen domains, without the need for extensive domain-specific data collection and annotation.
DG in medical image analysis still requires extensive research; however, there already exist a handful of works examining it. One of those works utilizes an adversarial domain synthesizer to create artificial domains using only one source domain to improve the generalizability of the model in downstream tasks [26]. Although such a method can synthesize a wide range of possible domains, it usually struggles to mimic realistic domain shifts. Another method is applying test-time augmentations such that the target image resembles the source domain, thus reducing the domain shift and improving generalization [25]. Moreover, DRGen [3] combines Fishr [20] and Stochastic Weight Averaging Densely (SWAD) [6] to achieve domain generalization in Diabetic Retinopathy (DR) classification. In DRGen, Fishr [20] is used to make the model more robust to variations in the data by penalizing large differences in the gradient variances between in-distribution and out-of-distribution data, and SWAD [6] is used to seek flatter minima in the loss landscape of the model. DRGen is currently the state-of-the-art in DR classification; however, it has been evaluated using samples from the testing set, which makes it harder to assess its true generalizability.
In natural images, the domain generalization problem has been explored extensively compared to medical imaging analysis. Some of the DG methods proposed over the past ten years include domain alignment [16], meta-learning [10], style transfer [28], and regularization methods [14]. More recently, the authors of [7] utilize a large pretrained model to guide a target model towards generalized feature representation through mutual information regularization. Another DG regularization method that can be applied orthogonally to many DG algorithms is SWAD [6], which improves domain generalizability by seeking flat minima in the loss landscape of the model. The flatter minima indicate that the loss is not changing significantly in any direction, thus reducing the risk of the model overfitting to domain biases [6]. However, when adapting a DG approach that demonstrates a good performance on natural images, there is no guarantee of a similar performance on medical imaging applications due to the typical complex nature of such problems.
DR is a complication of Diabetes Mellitus that affects the eyes and can lead to vision loss or blindness. It is caused by damage to the blood vessels in the retina due to high blood sugar levels, which often leads to blood leakage onto the retina [24]. This can cause swelling and distortion of vision. The prevalence of DR is increasing worldwide due to the growing number of people with diabetes. However, early detection and management of DR is critical to the prevention of vision deterioration or loss. DR can be classified into 4 classes: mild, moderate,
severe, and proliferative. Some of the visible features that are used to classify the first 3 classes include microaneurysms, retinal hemorrhages, intraretinal microvascular abnormalities (IRMA), and venous caliber changes, while pathologic preretinal neovascularization is used to classify proliferative DR [9].
In this paper, we propose DGM-DR, a Domain Generalization with Mutual information regularized Diabetic Retinopathy classifier. Our main contributions are as follows:
* We introduce a DG method that utilizes mutual information regularization with a large pretrained oracle model.
* We show the improvement of our proposed solution on the DR classification task over the previous state-of-the-art in both performance and robustness through rigorous investigations.
* We set a clear benchmark with the correct DG model selection method, in line with standard DG protocols, for the task of DR classification.
## 2 Methodology
Our work is inspired by [7], which aims to improve model generalizability when classifying natural images. In DGM-DR, we re-establish the domain generalization objective as a maximization of mutual information with a large pretrained model, named the oracle, to address DR classification. We aim to make the distribution of feature representations of the target model close to the generalized
Figure 1: Overview of the proposed method, DGM-DR. It consists of the oracle \(f^{0}\) and target \(f\) feature extractor, where \(f\) is initialized by the weights of \(f^{0}\). For each sampled mini-batch, feature representations are extracted using both feature extractors \(f^{0}\) and \(f\). Features \(Z_{f}\) are then passed to classifier \(g\). Lastly, the loss - a linear combination of cross entropy and mutual information regularization loss - is calculated, and \(f\) and \(g\) are updated.
one of the oracle by maximizing the mutual information between both. The oracle model is trained on a large-scale diverse dataset that contains information on many different domains in order to approximate, as closely as possible, a true oracle, which is a model that can generalize to any domain and is inaccessible in practice. Figure 1 shows an overview of DGM-DR's process. Initially, the oracle's weights are used to initialize the target model's feature extractor. Then, for each mini-batch, the oracle feature extractor \(f^{0}\) and the target feature extractor \(f\) are used to extract feature representations \(Z_{f^{0}}\) and \(Z_{f}\), respectively. The features \(Z_{f}\) are passed to the classifier \(g\) to produce the output. The oracle model is chosen as an ImageNet pretrained ResNet-50 [13] for a realistic and fair comparison with other DG algorithms. It is shown in [5] that maximization of the lower bound of the mutual information between \(Z_{f^{0}}\) and \(Z_{f}\) is equivalent to minimization of the term in Equation 1:
\[\mathbb{E}_{Z_{f^{0}},Z_{f}}\big{[}log|\Sigma(Z_{f})|+\|Z_{f^{0}}-\mu(Z_{f})\|_ {\Sigma(Z_{f})^{-1}}^{2}\big{]} \tag{1}\]
The final loss is calculated using Equation 2:
\[\mathcal{L}(h)=\mathcal{E}_{S}(h)+\lambda\mathbb{E}_{Z_{f^{0}},Z_{f}}\big{[} log|\Sigma(Z_{f})|+\|Z_{f^{0}}-\mu(Z_{f})\|_{\Sigma(Z_{f})^{-1}}^{2}\big{]} \tag{2}\]
where \(\mathcal{E}_{S}(.)=\sum_{d=1}^{m}\mathcal{E}_{S_{d}}(.)\) is an empirical loss over \(m\) source domains, which was chosen as the cross-entropy loss, \(\lambda\) is the regularization coefficient, and \(\|x\|_{A}=\sqrt{x^{T}Ax}\). The model \(h\) is a composition of a feature extractor \(f\) and a classifier \(g\), hence \(h=g\circ f\). Finally, the variational distribution that approximates the oracle model is modeled as a Gaussian distribution with mean vector \(\mu(Z_{f})\) and covariance matrix \(\Sigma(Z_{f})\). The term \(\|Z_{f^{0}}-\mu(Z_{f})\|\) enforces the mean feature representation \(Z_{f}\) to be as close as possible to the oracle feature representation \(Z_{f^{0}}\) when the variance term \(\Sigma(Z_{f})\) is low [7]. We anticipate that this optimization will yield robust representations, despite the substantial distribution shift between the oracle pretrained on natural images and the finetuning task involving retinal images. This is based on our hypothesis regarding the oracle's generalizability to any domain, owing to its extensive, diverse, and semantically rich features that surpass those found in any other medical dataset. The regularization coefficient \(\lambda\) controls the emphasis on minimizing the variance in the target features and encouraging similarity between the oracle and target features. This, in turn, facilitates the learning of domain-invariant representations that generalize well across different domains.
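As an illustration of Equation 2, the following is a minimal PyTorch sketch of the loss computation. The diagonal parameterization of \(\Sigma(Z_{f})\) and the tensor names (outputs of hypothetical mean/log-variance heads on the target features) are our assumptions, not details stated in the paper.

```python
import torch
import torch.nn.functional as F

def dgm_dr_loss(logits, labels, z_oracle, z_mean, z_logvar, lam=1.0):
    """Cross-entropy plus the mutual information regularizer of Eq. (2).

    For a diagonal covariance, log|Sigma(Z_f)| is the sum of log-variances,
    and the Mahalanobis norm is a variance-weighted squared distance.
    """
    ce = F.cross_entropy(logits, labels)
    mahalanobis = (z_oracle - z_mean) ** 2 / z_logvar.exp()
    reg = (z_logvar + mahalanobis).sum(dim=1).mean()
    return ce + lam * reg
```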
## 3 Experimental Setup
### Datasets
We utilize the four datasets used by [3], which are EyePACS [2], APTOS [1], Messidor and Messidor-2 [17]. The 4 datasets are composed of 5 classes, graded from 0 to 4: No DR (Grade 0), mild DR (Grade 1), moderate DR (Grade 2), severe DR (Grade 3), and proliferative DR (Grade 4). These datasets were
acquired from various geographical regions, encompassing India, America, and France [1, 2, 17]. As a result, domain shift emerges due to variations in the employed cameras [2, 4] and differences in population groups. Figure 2 shows example images for the 5 DR classes. A breakdown of the distribution of the classes is given in Table 4. In all 4 datasets, there is a high imbalance between the no DR class and the other 4 DR classes.
### Data Augmentations
All fundus images are resized to \(224\times 224\times 3\). We perform histogram equalization with probability \(p=0.5\), horizontal flipping, and color jitter of \(0.3\) in brightness, contrast, saturation, and hue with \(p=0.3\).
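A minimal torchvision sketch of this pipeline is given below; the transform ordering and the flip probability are assumptions not fixed by the text.

```python
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomEqualize(p=0.5),   # histogram equalization with p = 0.5
    transforms.RandomHorizontalFlip(),  # horizontal flip
    transforms.RandomApply(
        [transforms.ColorJitter(brightness=0.3, contrast=0.3,
                                saturation=0.3, hue=0.3)],
        p=0.3,
    ),
    transforms.ToTensor(),
])
```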
### Evaluation Methods
We utilize the DomainBed [11] evaluation protocols for fair comparison with DRGen [3] and other DG algorithms. The appropriate DG model selection method used is the training-domain validation set following DomainBed [11], in which we split each training domain into training and validation subsets, pool the validation subsets together to create an overall validation set, and finally choose the model that maximizes the accuracy on the overall validation set. We use 20% of the source training data for validation. We evaluate the performance scores using leave-one-domain-out cross validation, and average the cases where a specific domain is used as a target domain and the others as source domains.
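The protocol can be sketched as follows; the dataset objects here are stand-ins (lists of examples) rather than the real loaders.

```python
import random

domains = {"APTOS": [], "EyePACS": [], "Messidor": [], "Messidor-2": []}

def leave_one_domain_out(domains, val_frac=0.2, seed=0):
    rng = random.Random(seed)
    for target in domains:
        train_pool, val_pool = [], []
        for name, examples in domains.items():
            if name == target:
                continue  # held out as the unseen target domain
            examples = list(examples)
            rng.shuffle(examples)
            cut = int((1 - val_frac) * len(examples))
            train_pool += examples[:cut]
            val_pool += examples[cut:]  # pooled training-domain validation set
        # Train on train_pool, select the checkpoint maximizing accuracy on
        # val_pool, then report accuracy on the held-out target domain.
        yield target, train_pool, val_pool
```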
We also perform comparisons of the proposed and existing DG approaches with the Empirical Risk Minimization (ERM) technique that aims to minimize in-domain errors. Interestingly, [11] argues that carefully training a model using ERM achieves a near state-of-the-art performance. This was tested on a range of baselines and was shown to outperform a few DG models.
### Implementation Details
We implement all our models using the PyTorch v1.7 framework. The experiments were run on a 24GB Quadro RTX 6000 GPU. The backbone used is a ResNet-50 pretrained on ImageNet. We use the Adam optimizer [15] with a learning rate
Figure 2: Sample images from different DR classes obtained from APTOS [1].
of \(5e-5\) and no weight decay, chosen experimentally. The model was trained for 5000 steps. The batch size was fixed to 32 images. The \(\lambda\) regularization coefficient was set to 1.0. We experimented with different values of \(\lambda\); the results are discussed in Section 5.
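A sketch of this setup follows; the training loop and classifier head are elided, freezing the oracle is our assumption (consistent with Figure 1), and `pretrained=True` matches the torchvision API of the stated PyTorch 1.7 era.

```python
import torch
from torchvision.models import resnet50

target = resnet50(pretrained=True)  # target feature extractor f, ImageNet init
oracle = resnet50(pretrained=True)  # oracle f^0, kept frozen
for p in oracle.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(target.parameters(), lr=5e-5, weight_decay=0.0)
LAMBDA, STEPS, BATCH_SIZE = 1.0, 5000, 32
```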
To compare against other DG methods, we reproduce the results of all algorithms using the same implementation details mentioned previously for a fair comparison. For Fishr [20], we set the Fishr lambda (\(\lambda\)) to 1000, the penalty anneal iteration (\(\gamma\)) to 1500, and the exponential moving average to 0.95. For DRGen [3], we use SWAD as the model selection method as opposed to the test-domain validation used in the original paper [3], which is not suitable for DG evaluation. Moreover, we use the data augmentations in the official implementations of Fishr [20] and DRGen [3] for the respective algorithms; otherwise we use DGM-DR's augmentations. Finally, we use SWAD as the model selection method when combining DGM-DR with the SWAD [6] algorithm.
## 4 Results
Table 1 compares the performance of DGM-DR with three other methods, including the previous state-of-the-art DRGen. The experiments for the main results were repeated three times using three different seeds, and the average accuracy and standard deviation across the runs are reported. DGM-DR achieves a \(>5\%\) increase in average accuracy when compared with the other DG methods (Fishr and DRGen) and a \(1\%\) increase compared to the ERM-based model [22].
### Ablation Studies
**Changing the oracle pretraining datasets, methods, and backbones.** We investigate the effect of changing the oracle on the DR classification task and report the results in Table 2. We use ImageNet pretrained ResNet-50 using Barlow Twins [27] and MoCo [12], CLIP pretrained ResNet-50, and large-scale pretraining including CLIP pretrained ViT-B [8] and SWAG pretrained RegNetY-16GF [19]. All experiments were performed with the same implementation details mentioned previously, except for RegNetY-16GF, where the batch size was changed from 32 to 16 due to hardware limitations.
| **Algorithm** | **APTOS** | **EyePACS** | **Messidor** | **Messidor-2** | **Average Accuracy** |
| --- | --- | --- | --- | --- | --- |
| **ERM** [22] | 62.83 | 73.01 | **66.88** | 65.26 | 66.99\(\pm\)4.3 |
| **Fishr** [20] | 56.49 | 68.24 | 61.53 | 62.11 | 62.09\(\pm\)4.8 |
| **DRGen** [3] | 54.53 | **73.87** | 52.03 | 69.13 | 62.39\(\pm\)9.3 |
| **DGM-DR** | **65.39** | 70.12 | 65.63 | **69.41** | 67.64\(\pm\)2.4 |
| **DGM-DR + SWAD** [6] | 65.15 | 71.92 | 65.66 | 68.96 | **67.92\(\pm\)3.2** |

Table 1: Multi-class classification results with ERM and DG methods averaged over three runs. The best accuracy (%) is highlighted in bold.
**Binary classification of DR.**
We study the effect of changing the multiclass classification task into a binary classification task, where fundus images are classified as _DR_ or _No DR_. The results of this experiment are reported in Table 3.
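A plausible relabeling for this task (the grade 0 vs. grades 1-4 split is our assumption, though it is the natural one):

```python
def to_binary(grade: int) -> int:
    """Map the five DR grades to a binary label: 0 = No DR, 1 = DR."""
    return int(grade > 0)
```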
## 5 Discussion
In Table 1, we report the results of 4 different algorithms and show that DGM-DR outperforms all algorithms, including the previous state-of-the-art DRGen [3], by a significant margin of 5.25%. Additionally, DGM-DR demonstrates robustness with a relatively small standard deviation of 2.4 across three different experiments. As was concluded in [11], ERM-based methods can outperform a range of DG methods if carefully trained. We show in Table 1 that the ERM method outperforms the existing DG baselines that we compare with. On the other hand, we show that DGM-DR outperforms ERM for multiclass classification. We believe that even though the task of DR classification is challenging, the fundus images across all domains share common semantic structures, hence ERM is able to learn some domain-invariant features. However, the performance of DGM-DR is more stable, with a standard deviation almost half that of ERM's. This can be attributed to DGM-DR's novel learning technique that aims to minimize a combination of cross entropy and mutual information regularization with an oracle, which enables it to learn more robust domain-invariant representations. Lastly, with the addition of SWAD to DGM-DR, performance further increases slightly (by 0.28%), consistent with previous literature (e.g. [21]) where accuracy is improved when combined with SWAD.
| **Dataset** | **Pre-training** | **APTOS** | **EyePACS** | **Messidor** | **Messidor 2** | **Average Accuracy** |
| --- | --- | --- | --- | --- | --- | --- |
| ImageNet | ERM | **65.39** | 70.12 | 65.63 | **69.41** | **67.64\(\pm\)2.5** |
| ImageNet | Barlow Twins | 60.66 | 73.45 | 55.57 | 61.18 | 62.71\(\pm\)7.6 |
| ImageNet | MoCo v3 | 56.90 | 72.69 | 65.77 | 68.41 | 65.94\(\pm\)6.7 |
| CLIP | CLIP (ResNet) | 61.01 | 73.33 | 62.44 | 58.10 | 63.72\(\pm\)6.7 |
| CLIP | CLIP (ViT) | 64.25 | 68.54 | **66.29** | 66.05 | 66.28\(\pm\)1.8 |
| Instagram | SWAG* (RegNet) | 63.12 | **75.38** | 62.96 | 64.61 | 66.52\(\pm\)6.0 |

Table 2: Results of changing the oracle pretraining datasets, methods, and backbones. SWAG* is using a batch size of 16, while the rest of the experiments are using a batch size of 32. The average accuracy and the standard deviation across the 4 domains in a single run are given, with the best accuracy (%) highlighted in bold.
| **Algorithm** | **APTOS** | **EyePACS** | **Messidor** | **Messidor-2** | **Average Accuracy** |
| --- | --- | --- | --- | --- | --- |
| **ERM** | **95.42** | 74.70 | **86.98** | 77.47 | **83.63\(\pm\)9.5** |
| **Fishr** | 90.67 | 74.45 | 77.92 | **79.30** | 80.59\(\pm\)7.0 |
| **DRGen** | 82.05 | **75.41** | 81.67 | 72.42 | 77.89\(\pm\)4.7 |
| **DGM-DR** | 83.34 | 71.82 | 86.15 | 78.10 | 80.00\(\pm\)8.2 |
| **DGM-DR + SWAD** | 88.00 | 72.06 | 85.63 | 76.22 | 80.48\(\pm\)7.6 |

Table 3: Results of the binary classification task using different algorithms. The average accuracy and the standard deviation across the 4 domains in a single run are given, with the best accuracy (%) highlighted in bold.
In general, the performance of all algorithms on each of the datasets is consistent. This indicates that any decline or increase in performance of a dataset can be attributed to the distribution of the dataset itself, which is used as the target domain in the evaluation, and to the distribution of the combined source datasets on which the model is trained. For example, EyePACS [2] consistently performs better across all algorithms. A possible yet arguable hypothesis is that it is highly imbalanced, as demonstrated in Table 4, with the majority of images belonging to _No DR_. Since the _No DR_ class is the majority in all datasets, the model is also biased towards it. Hence the model could be correctly classifying the _No DR_ case and randomly guessing in the other four cases.
In Table 2, we study the effect of changing the oracle pretraining datasets, methods, and backbones. The large SWAG* pretrained RegNetY-16GF oracle yields the second-best accuracy in this experiment, behind only our ResNet-50 with ImageNet ERM pretraining, possibly due to the smaller batch size and the limit on the number of steps set for a fair comparison. In general, we observe that a larger oracle model trained on a bigger, more diverse dataset is able to guide the target model towards more generalized feature representations. However, it will require a longer training time to converge.
In Table 3, we notice that ERM is doing a better job at binary classification of DR than DGM-DR. Since the binary classification problem is simpler, as is visually evident in Figure 2, DG algorithms tend to negatively impact the results as they are likely to introduce more complexity. Furthermore, the generalization gap [23] is typically smaller in binary classification than a multiclass setup. Therefore, ERM-based methods are likely to outperform DG-based methods in such scenarios.
The selection of the mutual information regularization coefficient \(\lambda\), which controls the balance between the cross entropy loss and the mutual information regularization loss, is related to how informative the oracle model's knowledge is for the target model's task. A large \(\lambda\) encourages the model to reduce the variance in the target features and enforce similarity between the target and oracle features. Thus, the model will focus on learning domain-invariant patterns originating from the oracle's knowledge, which is ImageNet in our main experiment. On the other hand, a small \(\lambda\) reduces the emphasis on domain-invariance and thus may potentially lead to overfitting.
In our case, as shown in [18], ImageNet initialization of deep learning models is beneficial in the context of medical imaging analysis, including fundus images. Therefore, we conclude that the best \(\lambda\) for the case of DR classification is 1.0 for the ImageNet pretrained ResNet-50, in contrast with that of natural images where \(\lambda\) is typically set to be \(\{0.001,0.01,0.1\}\) as in [7]. We believe that in the DR classification problem, the oracle model has a significant impact on training the target DG model due to its rich low level feature representations which cannot be easily learnt from scratch or from a small size dataset.
As a final note, a very important part of a domain generalization solution is the model selection method, as it simplifies fair assessments by disregarding differences in results due to inconsistent hyperparameter tuning that may be
attributed to the algorithms under study [11]. Furthermore, utilizing the test-domain validation set as a model selection method is inappropriate for a DG algorithm, which was done by DRGen [3] in DR classification. Hence, one goal of this paper is to set a clear benchmark for DR classification using training-domain validation, thus allowing easy comparison with future work.
## 6 Conclusion
In this paper, we introduce DGM-DR to tackle the problem of DR classification with domain generalization. Our use of a large pretrained model to guide the target model towards learning domain-invariant features across different DR datasets through mutual information regularization achieves superior performance over the previous state-of-the-art DG methods. We also establish a clear benchmark for the task using a DG-appropriate model selection algorithm, thus allowing future work to make comparisons with our work. Further investigation to understand when and why DG-based methods could be superior or inferior to ERM-based approaches in medical imaging is needed. Although we believe that our work pushes the horizons of the DG field in medical image analysis, several DG-related research questions are yet to be investigated, e.g., unsupervised DG, interpretable DG, and performance evaluation of DG methods.
| The domain shift between training and testing data poses a major challenge for training generalizable deep learning models. As a result, the performance of models trained under the independent and identically distributed (i.i.d.) assumption degrades when they are deployed in the real world. This problem is especially pronounced in the medical imaging context, owing to variations in data acquisition across clinical centers, medical devices, and patients. Domain generalization (DG) aims to address this problem by learning a model that can generalize to any unseen target domain. Many domain generalization techniques have failed to learn domain-invariant representations because of the large domain shift. Moreover, several tasks in medical imaging have not yet been studied extensively in the existing literature from the DG point of view. In this paper, we introduce a method to address DG in medical imaging |
2309.06543 | Quantum character varieties | In this survey article for the Encyclopedia of Mathematical Physics, 2nd
Edition, I give an introduction to quantum character varieties and quantum
character stacks, with an emphasis on the unification between four different
approaches to their construction. | David Jordan | 2023-09-12T19:35:14 | http://arxiv.org/abs/2309.06543v1 | # Quantum Character Varieties
###### Abstract.
In this survey article for the Encyclopedia of Mathematical Physics, 2nd Edition, I give an introduction to quantum character varieties and quantum character stacks, with an emphasis on the unification between four different approaches to their construction.
## 1. Introduction
The purpose of this article is to introduce the notion of a character variety, to explain its central role in mathematical physics - specifically gauge theory - and to highlight four different approaches to constructing its deformation quantization, what are colloquially called "quantum character varieties". Rather remarkably, each distinct mechanism for quantization is motivated in turn by a distinct topological observation about the topology of surfaces and hence the geometry of classical character varieties.
The term "character variety" commonly refers to a moduli space of \(G\)-local systems on some topological space \(X\), equivalently of homomorphisms \(\rho:\pi_{1}(X)\to G\). In this article we discuss several different models for this moduli space - the ordinary, framed, and decorated character varieties - as well as their common stacky refinement, the character stack. The distinction between these different models becomes important due to the presence of stabilizers and singularities in the naively defined moduli problem.
The term "quantum character variety" typically refers to any of the following non-commutative deformations of character varieties in the case \(X=\Sigma\) is a real surface:
1. **Skein algebras** of surfaces yield deformation quantizations of the algebra of functions on ordinary character varieties of surfaces.
2. **Moduli algebras** attached to ciliated ribbon graphs yield deformation quantizations of the algebra of functions on framed character varieties of surfaces with at least one boundary component.
3. **Quantum cluster algebras** associated to marked and punctured surfaces quantize a cluster algebra structure on the decorated character variety.
4. **Factorization homology** of surfaces, with coefficients in the ribbon braided tensor category of representations of the quantum group yields linear abelian categories quantizing the category of quasi-coherent sheaves on character stacks.
Each of the above constructions depends on a non-zero complex parameter \(q\), and reproduces the corresponding "classical" or "undeformed" character variety upon setting \(q=1\). By definition the classical character variety depends on the underlying space \(X\) only through its fundamental group - in particular, only up to homotopy. The quantum character varieties are more subtle invariants, which depend on the homeomorphism type of the manifold in a way detected only when \(q\neq 1\).
In this survey we will recount the classical and quantum versions of each of these four constructions, outline their relation to one another, and how their study relates to super-symmetric quantum field theory and the mathematical framework of topological field theory. The topological input for each construction
above is a surface; however, in each case there are natural extensions to 3-manifolds - some more developed than others - which we will also discuss.
### Flat connections
One of the most fundamental notions in gauge theory is that of a principal \(G\)-bundle with connection. A principal \(G\)-bundle \(E\) over a space \(X\) consists of a map \(\pi:E\to X\), together with a free \(G\)-action on \(E\) which preserves fibers and makes each fiber of \(\pi\) into a \(G\)-torsor, so that \(E/G=X\). A connection on \(E\) is a 1-form \(A\in\Omega^{1}(X,\operatorname{ad}(E))\) valued in the adjoint bundle,
\[\operatorname{ad}(E)=\mathfrak{g}\times_{G}E=(\mathfrak{g}\times E)/G.\]
Among all connections on \(E\) are distinguished the _flat connections_\(A\) which satisfy the flatness equation \(dA+[A,A]=0\). Equivalently \(A\) is flat if the parallel transport along some curve \(\gamma\) defined by \(A\) depends on \(\gamma\) only up to homotopy of paths.
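As a minimal example, take \(G=U(1)\): the adjoint bundle is trivial, the bracket term vanishes, and the flatness equation reduces to \(dA=0\). The parallel transport \(\exp\big(\int_{\gamma}A\big)\) along a path \(\gamma\) is then homotopy invariant by Stokes' theorem, since two homotopic paths together bound a region over which \(\int dA=0\).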
### Local systems
From a principal \(G\)-bundle with flat connection we may extract the more combinatorial data of a principal \(G\)-bundle \(E\), together with parallel transport isomorphisms \(\nabla_{\gamma}:E_{x}\to E_{y}\) along the homotopy class of paths connecting \(x\) to \(y\). Such a pair \((E,\nabla)\) is called a \(G\)-local system. The choice of a base point \(x\) in \(X\), and a trivialisation of \(E\) at \(x\) reduces the data of a \(G\)-local system to that of a group homomorphism \(\pi_{1}(X)\to G\). Changes of basepoint and changes of framing are both implemented by conjugation in \(G\); hence two such homomorphisms represent the same \(G\)-local system if, and only if, they are related by post-composition with conjugation in \(G\).
### Appearances in physics
Classical character varieties appear very naturally in classical gauge theories such as Yang-Mills and Chern-Simons theory, in which a gauge field is by definition an adjoint valued 1-form, and the classical equations of motion involve the curvature of the connection - in particular, in Chern-Simons theory in 3 dimensions, the critical locus is precisely the space of flat connections.
Quantum character varieties/stacks play a similar role in super-symmetric quantum field theories. Some notable examples are:
1. Quantum character varieties - specifically in their skein-theoretic incarnation - describe topological operators, known as Wilson lines, in the quantization of Chern-Simons theory [83]. The parameter \(\hbar=\log q\) appears as the quantization parameter.
2. It is expected that state spaces of 3-manifolds, and categories of boundary conditions on surfaces, both attached to the Kapustin-Witten twist [56] of 4D \(\mathcal{N}=4\) super Yang-Mills may be described via skein modules, and skein categories, respectively. Here, the parameter \(\Psi=\log q\) identifies with the twisting parameter in Kapustin-Witten's construction.
3. Coulomb branches of 4d \(\mathcal{N}=2\) theories of class S, compactified on a circle, are naturally described by character varieties. In this case, the deformation parameter \(q\) appears as the exponentiated ratio of \(\Omega\)-deformation parameter for a pair of commuting circle actions coming from \(SO(2)\times SO(2)\subset SO(4)\). [40, 41, 52, 80] In more mathematical terms, Gaiotto has proposed quantizations of Coulomb branches (hence of character varieties) via \(\mathbb{C}^{\times}\)-equivariant cohomology. This has been discussed in [76].
### Acknowledgements
In preparing this survey article I have benefited from the inputs of a number of people. I particularly thank David Ben-Zvi, Adrien Brochier, Francesco Costantino, Charlie Frohman, Andy Neitzke, Thang Le, Gus Schrader, and Alexander Shapiro for fact-checking and helping to find complete references. I also thank the Editor Catherine Meusburger, whose detailed comments and suggestions considerably improved the exposition.
## 2. Classical character varieties
Each of the four quantization prescriptions highlighted in the introduction emerges from first understanding the relevant classical moduli space: its geometry, and its Poisson structure. When viewed from the correct perspective, each classical moduli space admits a very natural deformation quantization. For this reason, although the article is focused on quantum character varieties, a significant portion is dedicated to reviewing the classical structures.
Let us begin by recalling in more detail the precise construction of the framed character variety, the ordinary character variety (the word "ordinary" is non-standard, and appears here and throughout merely for emphasis), and the decorated character variety. We turn then to the ordinary and decorated character stacks, and then finally to enumerating the relations between them. Along the way we will introduce the Poisson brackets which will be the quasi-classical limits of the quantum constructions.
### Framed character variety
Given a group \(G\) and a compact manifold \(X\) with basepoint \(x\), the framed character variety \(\operatorname{Ch}_{G}^{fr}(X)\) is an affine algebraic variety which parameterises pairs \((E,\eta)\), where \(E\) is a \(G\)-local system, and \(\eta:E_{x}\to G\) is a trivialisation of the fiber \(E_{x}\cong G\) (equivalently, this is the data of a single point \(e\in E_{x}\) which becomes the identity element in \(G\) under the framing). In more concrete terms, we may identify the framed character variety with the set of representations \(\pi_{1}(X)\to G\), where we do not however quotient by the \(G\)-action. This alternative description makes it clear that \(\operatorname{Ch}_{G}^{fr}(X)\) is indeed an algebraic variety: choosing any presentation of \(\pi_{1}(X)\) with \(m\) generators and \(n\) relations identifies \(\operatorname{Ch}_{G}^{fr}(X)\) with a closed subvariety of the affine variety \(G^{m}\) defined by the \(n\) relations, regarded as \(G\)-valued equations. As there is no a priori reason for this closed subvariety to be equidimensional, the framed character variety will typically not be smooth.
An important special case is the framed character variety of a surface \(\Sigma_{g,r}\) of genus \(g\) with \(r\geq 1\) punctures. Since \(\pi_{1}(\Sigma_{g,r})\) is the free group on \(2g+r-1\) generators, we have that
\[\operatorname{Ch}_{G}^{fr}(\Sigma_{g,r})=G^{2g+r-1}.\]
### Ordinary character variety
The (ordinary) character variety \(\operatorname{Ch}_{G}(X)\) is defined as the GIT quotient1 of the framed character variety by the \(G\)-action by conjugation. By definition, this means that the character variety is an affine variety defined as
Footnote 1: with trivial stability condition, equivalently the “categorical quotient” or “coarse moduli space”
\[\operatorname{Ch}_{G}(X)=\operatorname{Spec}(\mathcal{O}(\operatorname{Ch}_{ G}^{fr}(X))^{G}),\]
the spectrum of the sub-algebra of \(G\)-invariant functions on the framed character variety.
More geometrically, the character variety so defined parameterises _closed \(G\)-orbits_ in the framed character variety. The map sending a point of the framed character variety to the closure of its \(G\)-orbit defines a surjection \(\operatorname{Ch}_{G}^{fr}(X)\to\operatorname{Ch}_{G}(X)\).
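As a basic example, take \(X=S^{1}\), so that \(\pi_{1}(X)=\mathbb{Z}\): then \(\operatorname{Ch}_{G}^{fr}(S^{1})=G\), and for connected reductive \(G\) the quotient \(\operatorname{Ch}_{G}(S^{1})=\operatorname{Spec}(\mathcal{O}(G)^{G})\) is isomorphic to \(T/W\), the quotient of a maximal torus by the Weyl group, by the Chevalley restriction theorem; here the closed orbits are precisely the conjugacy classes of semisimple elements.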
### Decorated character variety
For simplicity we will restrict now to the case where \(X=\Sigma\) is a compact surface possibly with boundary. An important enhancement of the notion of a \(G\)-local system is that of a decorated local system. In a series of three highly influential papers [30, 31, 32], Fock and Goncharov established that the moduli space of decorated local systems, known as the decorated character variety, has an open locus admitting the geometric structure of a cluster variety, and they exploited this structure to define its quantization (discussed in Section 4).
For the construction of decorated character varieties we fix in addition to the group \(G\) a Borel subgroup \(B\) and let \(T=B/[B,B]\) denote the quotient of \(B\) by its unipotent radical. A \(G\)-\(B\)-\(T\)-coloring of a surface \(\Sigma\) consists of a partition of \(\Sigma\) into three sets \(\Sigma=\Sigma_{G}\sqcup\Sigma_{B}\sqcup\Sigma_{T}\), where \(\Sigma_{G}\) and \(\Sigma_{T}\) are open and \(\Sigma_{B}=\partial\Sigma_{G}\cap\partial\Sigma_{T}\)
is closed (see Figure 1 for some examples). We will say that a "marked point" on a decorated surface \(\Sigma\) is a \(T\)-region which contracts onto an interval in the boundary of \(\Sigma\), while a "puncture" is a \(T\)-region contracting onto an entire boundary component of \(\Sigma\). We will call a connected decorated surface all of whose \(T\) regions are of those two types a "marked and punctured surface". We note that a marked and punctured surface necessarily has a unique \(G\)-region.
**Remark 1**.: _In Fock and Goncharov's original work, and most works which follow them, the marked points and punctures are indeed regarded as a finite set of points of \(\Sigma\) lying in the boundary and interior of \(\Sigma\), respectively, rather than as two-dimensional regions contracting to the boundary, as we have described above. However when one unpacks the data they attach to punctures and marked points, one sees that it expresses quite naturally in the framework of decorated surfaces, and in particular the resulting notion of decorated local system, and hence the decorated character variety defined in either convention is identical._
_For the topological field theory perspective it is important to "zoom in" on these points and see them as one-dimensional "defects" between adjoining 2-dimensional regions (the \(G\)- and \(T\)-regions discussed above). For example, the "amalgamation" prescription of Fock and Goncharov - by which one glues together charts on each triangle of a triangulation to obtain a chart on the decorated character variety - is just an instance of the excision axiom in factorization homology._
The decorated character variety is a moduli space parameterising triples \((E_{G},E_{B},E_{T})\), where \(E_{G}\) and \(E_{T}\) are \(G\)- and \(T\)-local systems over \(\Sigma_{G}\) and \(\Sigma_{T}\), respectively, and where \(E_{B}\) is a reduction of structure over \(\Sigma_{B}\) of the product \(E_{G}\times E_{T}\) restricted there. By a reduction of structure, we mean a \(B\)-sub-local system of the \(B\)-space \(E_{G}\times E_{T}\). At each point of the curve \(\Sigma_{B}\) this is simply the specification of a \(B\)-orbit \(B\cdot(g,t)\subset G\times T\), for some \((g,t)\in G\times T\), equivalently a point of \((G\times T)/B\cong G/N\); however the local systems \(E_{G}\) and \(E_{T}\) can twist as we traverse loops in the surface, so that monodromy around punctures can introduce multiplication by some fixed element \(t\in T\). In other words, the monodromy around punctures need only preserve the underlying flag \(\overline{\mathcal{F}}\in G/B\) obtained as the image of \(\mathcal{F}\in G/N\) under the projection \(G/N\to G/B\).
As with the ordinary character variety, one may introduce a framed variant of the character variety, by requiring additional data of a trivialisation of \(E\) at a sufficiently rich system of basepoints. It suffices to assume there is one basepoint in each connected region of \(E_{G}\) and of \(E_{T}\), and to require a trivialisation of the fiber there: in this case one can show that the resulting groupoid is rigid, so that the framed decorated character variety parameterising this data really is a variety. We may then construct the GIT quotient of \(\mathrm{Ch}_{G}^{fr,dec}(\Sigma)\) by \(G^{r}\times T^{s}\), where there are \(r\) basepoints in \(\Sigma_{G}\) and \(s\) basepoints in \(\Sigma_{T}\). It is an exercise analogous to that for the ordinary character variety to see that this construction is in fact independent of the chosen basepoints up to unique isomorphism.
Figure 1. Three marked surfaces: the “triangle”, a disk with three contractible T-regions (indicated by shading), the “punctured disk with two marked points”, an annulus with one annular \(T\)-region and two contractible \(T\) regions, and “the punctured torus”, the torus (opposite edges are identified) with a disk at the corner removed and an annular \(T\)-region around the resulting boundary circle. The thin lines on the latter two depict a triangulation.
In passing from the framed decorated character variety to the decorated character variety, we have quotiented by a typically non-free action of \(G^{r}\), which has the consequence that the GIT quotient is typically singular. Inspired by the Penner coordinate system [70] on decorated Teichmuller space, Fock and Goncharov found a remarkable open subvariety of the framed and decorated character variety, on which the \(G^{r}\)-action is actually free, hence the GIT quotient there is smooth. Moreover, they produced a remarkable set of "cluster charts" in the framework of cluster algebras as had been introduced in [36, 35, 12, 37, 43]. We recall the rudiments of their construction here.
Fock and Goncharov defined a family of subsets \(U_{\alpha}\) on the framed decorated character variety of a marked and punctured surface with three remarkable properties:
1. The \(G\)-action on \(\operatorname{Ch}_{G}^{dec,fr}(\Sigma)\) restricts to a _free_\(G\)-action on \(U_{\alpha}\), and
2. The quotient \(U_{\alpha}/G\) is a torus \((\mathbb{C}^{\times})^{k}\), for some explicitly given \(k\).
3. The transition maps between charts \(U_{\alpha}/G\) and \(U_{\beta}/G\) take the form of a "cluster mutation", an explicitly given birational transformation between tori.
The decorated character variety is defined as the union over the charts \(\alpha\) of the \(U_{\alpha}/G\); it is a subvariety of the decorated character stack, which, although a union of affine charts, is typically not affine.
We emphasise moreover that the remaining \(T^{s}\)-action on each \(U_{\alpha}/G\) is still not free; different ways of treating this non-free action, as well as specifying the monodromy of the \(G/N\)-fibers around punctures, leads to various related formulations of decorated character varieties. We will not recall their complete definitions in this survey article.
The collection of opens \(U_{\alpha}/G\) forms what is called a cluster structure: briefly, this means that one has a combinatorial prescription to reconstruct the union of the charts \(U_{\alpha}/G\) by starting from a single initial chart \(U_{0}/G\) - together with its coordinates and its Poisson structure, \(U_{0}/G\) is called a seed - and successively adding in new charts, glued via the cluster mutation. To construct the seed, one may choose a triangulation of the surface \(\Sigma\), which must be compatible with the decoration, in that each vertex of the triangulation should be some framed basepoint lying in \(\Sigma_{T}\cap\partial\Sigma\). Each triangle contributes a torus, and one combines the different tori together along the edges of the triangulation via a process called "amalgamation". The coordinates and the Poisson structure are also specified in this construction. The end result is also an algebraic torus, whose coordinates are indexed by the vertices of a quiver \(\Gamma\), and whose Poisson bracket is by construction log-canonical; the rest of the cluster charts \(U_{\alpha}/G\) are indexed by mutated graphs, and the precise form of the cluster mutation \(U_{\alpha}/G\to U_{\beta}/G\) is encoded in this graph.
We will see in Section 4 that the explicit combinatorial description of cluster charts underpins an equally explicit Poisson structure and its canonical quantization to a quantum cluster algebra.
### The character stack
In each of the above approaches, the presence of stabilisers inhibits a naive definition and introduces complications: for character varieties, we retreat to a moduli space of closed \(G\)-orbits, and for decorated character varieties, we retreat to an open locus where the \(G\)-action becomes free. On the other hand, in both the ordinary and decorated case, our presentation has passed implicitly through a more universal notion of a character stack.
Without recounting completely the framework of stacks, we will recall only that certain moduli problems - including both that of classifying ordinary and decorated local systems up to isomorphism - admit the structure of an Artin stack: this simply means that they may be presented as the group quotient of an algebraic variety by a reductive algebraic group. In fact, such a presentation is typically only required locally, but in our case we have a global such description.
One studies stacks algebraically via their locally presentable abelian categories of quasi-coherent sheaves, in particular we may consider the locally presentable abelian categories,
\[\mathcal{QC}(\underline{\operatorname{Ch}}_{G}(X)),\quad\mathcal{QC}( \underline{\operatorname{Ch}}_{G}^{dec}(X)).\]
As for any stack, \(\mathcal{QC}(\underline{\mathrm{Ch}}_{G}(X))\) carries a distinguished object called the _structure sheaf_\(\mathcal{O}\). It follows from basic definitions that the algebra of functions on the ordinary character variety is isomorphic to \(\mathrm{End}(\mathcal{O})\). On the other hand, we have a pullback square,
Hence we may describe structures on \(\underline{\mathrm{Ch}}_{G}(X)\) as \(G\)-equivariant structures on \(\mathrm{Ch}_{G}^{fr}(X)\), and conversely we recover \(\mathrm{Ch}_{G}^{fr}(X)\) by forgetting this equivariance.
The relation between decorated character stacks and decorated character varieties of Fock and Goncharov is somewhat more complicated. Each cluster chart \(U_{\alpha}\) of the cluster variety \(\mathrm{Ch}_{G}^{dec}(\Sigma)\) defines an object \(\mathcal{O}_{\alpha}\in\mathcal{QC}(\mathrm{Ch}_{G}^{dec}(\Sigma))\): this is just the sheaf of functions which are regular on \(U_{\alpha}\). We have that \(\mathrm{End}(\mathcal{O}_{\alpha})\) is a ring of Laurent polynomials (i.e. functions on the corresponding torus \(U_{\alpha}\)), that the full subcategory generated by \(\mathcal{O}_{\alpha}\) is indeed affine, and finally that the transition maps between the different \(U_{\alpha}\)'s define exact functors between these subcategories, which are written explicitly as cluster transformations.
A crucial feature of classical character stacks is that they fit into the framework of fully extended TQFT. For this we briefly recall that there is a "topological operad" \(E_{n}\) which encodes the embeddings of finite disjoint unions of disks \(\mathbb{R}^{n}\) inside one another. An \(E_{n}\)-algebra is an algebraic structure governed by \(E_{n}\). \(E_{n}\)-algebras may be regarded as the "locally constant" or "topological" specialisation of the notion of a factorisation algebra, which in physical terms captures the structure of local observables in a quantum field theory, and the condition of being locally constant is a consequence when the QFT is topological. Most relevant to our discussion are the examples that \(E_{1}\)-, \(E_{2}\)- and \(E_{3}\)-algebras in the bi-category of linear categories are monoidal, braided monoidal, and symmetric monoidal categories, respectively. We refer to [7] for more technical details, and to [15] for a gentle exposition.
One may regard the symmetric monoidal category \(\mathrm{Rep}(G)\) of representations of the reductive algebraic group \(G\) as an \(E_{n}\)-algebra, for any \(n\), in the symmetric monoidal \(2\)-category of categories. The collection of such \(E_{n}\)-algebras forms an \(n+1\)-category, and factorisation homology defines a fully extended \(n\)-dimensional topological field theory valued in the symmetric monoidal \(2\)-category of categories. We have an equivalence of categories [11]:
\[\mathcal{QC}(\underline{\mathrm{Ch}}_{G}(X))\simeq\int_{X}\mathrm{Rep}(G), \tag{1}\]
where the integral notation on the right-hand side denotes the factorization homology functor defined in [7]. This equivalence and its consequences are discussed in greater detail in Section 4 below. A similar TFT description can be given for decorated character stacks using stratified factorization [8]. For now, we note that it is not possible for either the ordinary or decorated character varieties to fit into the fully extended framework: if a manifold is given to us by gluing simpler manifolds, we can indeed build any \(G\)- or \(G,B,T\)-local system on it by gluing local systems on each piece; however, there will be automorphisms of the glued local system which are not the disjoint product of automorphisms on each piece. This simple observation prevents the ordinary and decorated character varieties from satisfying the gluing compatibilities satisfied by their stacky enhancements.
## 3. Poisson brackets
Each of the framed, ordinary, and decorated character varieties, as well as the ordinary and decorated character stacks carry canonically defined Poisson brackets, which form the quasi-classical - or leading order - degenerations of the quantizations constructed in the next section. We review these here.
### The Atiyah-Bott symplectic structure
For reductive group \(G\), Atiyah and Bott constructed in [5] a symplectic form on the smooth locus of the moduli space of flat \(G\)-connections on \(\Sigma\). Given a \(G\)-bundle \(E\) with a flat connection \(A\) on \(\Sigma\), the tangent space to \(E\) consists of sections of the associated adjoint bundle \(\operatorname{ad}(E)=E\times_{G}\mathfrak{g}\) over \(\Sigma\). We denote by \(\kappa\) the Killing form on \(\mathfrak{g}\). Given a pair \(\chi_{1},\chi_{2}\) of sections of \(\operatorname{ad}(E)\), regarded as tangent vectors to \(E\), we define their symplectic pairing by the formula:
\[\omega(\chi_{1},\chi_{2})=\int_{\Sigma}\kappa(\chi_{1}\wedge\chi_{2}).\]
We remark that this symplectic form arises naturally in the Chern-Simons action term for a 3-manifold of the form \(\Sigma\times I\). The skew-symmetry, non-degeneracy and closedness of \(\omega\) all follow relatively easily from the definition.
### The Goldman bracket on the ordinary character variety
The character variety carries a combinatorial analog of the Atiyah-Bott symplectic structure, due to Goldman [47] and Karshon [57], which can be defined purely algebraically without appeal to analysis. The fundamental group \(\pi_{1}(\Sigma)\) of a closed surface has as its second group cohomology \(H^{2}(\pi_{1}(\Sigma),\mathbb{C})=\mathbb{C}\). The tangent space to a given representation \(\rho:\pi_{1}(\Sigma)\to G\) identifies with \(H^{1}(\pi_{1}(\Sigma),\mathfrak{g})\), where \(\pi_{1}(\Sigma)\) acts on \(\mathfrak{g}\) through the conjugation action. In analogy with the Atiyah-Bott symplectic structure we obtain a skew pairing,
\[H^{1}(\pi_{1}(\Sigma),\mathfrak{g})\times H^{1}(\pi_{1}(\Sigma),\mathfrak{g}) \xrightarrow{\kappa\circ\cup}H^{2}(\pi_{1}(\Sigma),\mathbb{C})=\mathbb{C},\]
which again defines a symplectic form on the character variety. The corresponding Poisson bracket is known as the Goldman bracket.
### The Fock-Rosly Poisson structure on the framed character variety
While both the Atiyah-Bott and Goldman symplectic forms are clearly very natural and general, they don't lead immediately to explicit formulas for the Poisson brackets of functions on the character variety. In the case of the framed character variety of surfaces with at least one boundary component, a much more explicit reformulation was given by Fock and Rosly in [33].
First we recall that \(\pi_{1}(\Sigma_{g,r})\) is the free group on \(2g+r-1\) generators, where \(\Sigma_{g,r}\) denotes a surface of genus \(g\) with \(r\) punctures. Hence the framed character variety is simply the product \(G^{2g+r-1}\). The Poisson bracket between functions \(f\) and \(g\) of a single \(G\) factor is given by the Poisson bivector,
\[\pi_{STS}=\rho^{ad,ad}+t^{r,l}-t^{l,r}. \tag{2}\]
Here we denote with superscripts \(r,l,ad\) the right, left and adjoint vector fields on \(G\) determined by a Lie algebra element; we let \(r\in\mathfrak{g}^{\otimes 2}\) denote the classical \(r\)-matrix, and \(\rho\) and \(t\) its anti-symmetric and symmetric parts. The bivector \(\pi_{STS}\) induces on \(G\) a Poisson structure which has been introduced by Semenov-Tian-Shansky [78].
The Poisson bracket between functions \(f\) and \(g\) of the \(i\)th and \(j\)th factor is given by \(\pi_{ij}\), where
\[\pi_{ij}=\begin{cases}\pm(r^{ad,ad})&\text{if $i,j$ are $\pm$ unlinked}\\ \pm(r^{ad,ad}+2t^{r,l})&\text{if $i,j$ are $\pm$ linked}\\ \pm(r^{ad,ad}-2t^{r,r}+2t^{r,l})&\text{if $i,j$ are $\pm$ nested}\end{cases}\]
In total we have,
\[\pi=\sum_{i}\pi_{STS}^{(i)}+\sum_{i<j}(\pi_{ij}-\pi_{ji}), \tag{3}\]
where now \(\pi_{ij}\) is a 2-tensor acting on the \(i\)th component of the first factor and the \(j\)th component of the second factor, given by the formula above.
The appearance of the classical \(r\)-matrix in the Fock-Rosly Poisson bracket foreshadows the role of quantum groups and quantum \(R\)-matrices in the deformation quantization of Section 4.
### The Fock-Goncharov cluster Poisson structure on the decorated character variety
Recall that the decorated character variety of a marked and punctured surface contains a system of open charts, each isomorphic to a torus \((\mathbb{C}^{\times})^{k}\) for some \(k\) depending on \(G\) and on the decorated surface. Each chart \(U_{\alpha}\) carries a "log canonical" Poisson bracket:
\[\{x_{i},x_{j}\}=a_{ij}x_{i}x_{j},\]
where \(A=(a_{ij})\) is the adjacency matrix of the quiver attached to \(U_{\alpha}\). It is called log-canonical because the formal logarithm of the generators \(x_{i}\) satisfy \([\log(x_{i}),\log(x_{j})]=a_{ij}\), which resemble Heisenberg's "canonical commutation relations". The birational transformations given by cluster mutation intertwine the different Poisson brackets on each chart, so that they glue together to a globally defined Poisson bracket on the cluster Poisson variety.
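As a quick sanity check (ours, not taken from the text), one can verify symbolically that the bracket \(\{x_{i},x_{j}\}=a_{ij}x_{i}x_{j}\) is indeed log-canonical, and that it takes the same form on arbitrary Laurent monomials; a minimal sketch for a rank-2 chart:

```python
# Verify the "log-canonical" property: with {x1, x2} = a12*x1*x2 one has
# {log x1, log x2} = a12 (a constant), and the bracket of two Laurent
# monomials is again of log-canonical form.  Rank-2 sketch with sympy.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
a12 = sp.Symbol('a12')

def bracket(f, g):
    # Poisson bracket induced by the bivector a12*x1*x2 * d/dx1 ^ d/dx2
    return a12 * x1 * x2 * (sp.diff(f, x1) * sp.diff(g, x2)
                            - sp.diff(f, x2) * sp.diff(g, x1))

assert sp.simplify(bracket(sp.log(x1), sp.log(x2)) - a12) == 0

m, n, r, s = 2, -1, 3, 1                      # arbitrary exponents
lhs = bracket(x1**m * x2**n, x1**r * x2**s)
rhs = a12 * (m*s - n*r) * x1**(m + r) * x2**(n + s)
assert sp.simplify(lhs - rhs) == 0
```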
### Shifted symplectic structures and character stacks
The moduli problem given by the character stack can be phrased in terms of classifying spaces, and this allows a universal construction of the Atiyah-Bott/Goldman symplectic structure on character varieties, due to Pantev, Toen, Vaquie, and Vezzosi [69], see [20] for an exposition. Specifically, there exists a classifying stack \(BG\), such that for any surface \(\Sigma\), we have:
\[\underline{\mathrm{Ch}}_{G}(\Sigma)=\mathrm{Maps}(\Sigma,BG),\]
where Maps here denotes the stack of locally constant maps from a topological space into an algebraic stack, for instance as obtained by regarding \(\Sigma\) as presented by a simplicial set.
The classifying space \(BG\) has as its tangent and cotangent complexes the Lie algebra \(\mathfrak{g}\) and its dual \(\mathfrak{g}^{*}\) in homological degrees 1 and -1, respectively. Hence the Killing form gives an isomorphism from the tangent bundle to the 2-shifted cotangent bundle: such a structure (satisfying some additional properties which follow from those of the Killing form) is known as a 2-shifted symplectic structure. One may then transgress the 2-shifted symplectic structure on \(BG\) through the mapping stack construction to give an ordinary - or 0-shifted - structure on the character stack. Remarkably, the descent of this symplectic structure to the smooth part of the character variety recovers the Atiyah-Bott/Goldman symplectic structure. Hence the PTVV structure on the character stack may be regarded as a stacky version of the Atiyah-Bott/Goldman construction.
### Hamiltonian reduction interpretation
The framework of Hamiltonian reduction gives a natural and very general procedure to pass from the framed character variety of the surface \(\Sigma_{g,1}\) to the ordinary character variety of the closed surface \(\Sigma_{g}\).
Let \(\mu:\mathrm{Ch}_{G}^{fr}(\Sigma_{g,1})\to G\) denote the map which sends the \(G\)-local system to its monodromy around the unique boundary component (we assume for simplicity that the basepoint is on the boundary to ensure this map is well-defined, not only up to conjugation). Sealing the boundary component means imposing \(\mu=\mathrm{Id}\), and forgetting the framing means quotienting by the \(G\)-action. Hence we have,
\[\mathrm{Ch}_{G}(\Sigma_{g})=\mu^{-1}(\mathrm{Id})/G.\]
Formulas such as the above are common in the theory of _Hamiltonian reduction_, where one interprets \(\mu\) as a _moment map_ for a Hamiltonian action of the group \(G\) on some phase space - in this case the phase space is \(\mathrm{Ch}_{G}^{fr}(\Sigma_{g,1})\). The target of the moment map is typically \(\mathfrak{g}^{*}\) rather than \(G\); however, Alekseev, Kosmann-Schwarzbach, Malkin, and Meinrenken developed in [3, 2], following [63], a formalism of "quasi-Hamiltonian" \(G\)-spaces, which feature "group-valued moment maps" valued in \(G\) rather than \(\mathfrak{g}^{*}\). The Hamiltonian reduction of a quasi-Hamiltonian \(G\)-space is symplectic, and recovers the Atiyah-Bott/Goldman symplectic structure on the closed surface.
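The formula \(\mathrm{Ch}_{G}(\Sigma_{g})=\mu^{-1}(\mathrm{Id})/G\) has a concrete finite-group analogue that is easy to test by computer: for a finite group \(G\), the set \(\mu^{-1}(\mathrm{Id})\) consists of tuples in \(G^{2g}\) whose product of commutators is trivial, and its cardinality is given by Frobenius' classical character formula. A minimal sketch (ours; the choices \(G=S_{3}\) and \(g=1\) are arbitrary):

```python
# Count mu^{-1}(Id) = {(a,b) in G^2 : [a,b] = e} for G = S_3, g = 1, and
# compare with Frobenius' formula |G|^(2g-1) * sum_chi (|G|/chi(1))^(2g-2),
# which for g = 1 reduces to |G| * (number of irreducible characters).
from itertools import permutations, product

G = list(permutations(range(3)))          # S_3 as permutation tuples

def mul(a, b):                            # composition: (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0, 0, 0]
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

e = (0, 1, 2)
count = sum(1 for a, b in product(G, G)
            if mul(mul(a, b), mul(inv(a), inv(b))) == e)
print(count)              # 18 = 6 * 3 (S_3 has 3 irreducible characters)
assert count == 6 * 3
```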
## 4. Quantum character varieties
In this final section, let us recount the four most well-known constructions of quantum character varieties.
### The moduli algebra quantization of the framed character variety
The Fock-Rosly Poisson structure on the framed character variety of \(\Sigma_{g,r}\), with \(r\geq 1\), admits a highly explicit and algebraic quantization, introduced independently by Alekseev, Grosse, and Schomerus [4, 1] and by Buffenoir and Roche [17]. This quantization was originally called the "moduli algebra", and is often referred to subsequently as the "AGS algebra". The construction also goes under the names "combinatorial Chern-Simons theory" and "lattice gauge theory", and has been studied in many different contexts, see e.g. [73], [64], [65]. The starting point is that the classical \(r\)-matrices appearing in the Fock-Rosly Poisson bracket have well-known quantizations into quantum \(R\)-matrices, which themselves describe the braiding of representations of the quantum group. Artful insertion of quantum \(R\)-matrices in place of the classical \(r\)-matrices gives the deformation quantization of framed character varieties: let us now describe their construction in more detail.
In the case \(g=1,r=1\), the Fock-Rosly Poisson bracket is identical to the Semenov-Tian-Shansky Poisson bracket \(\pi_{STS}\) from Equation (2). Replacing classical \(r\)-matrices with quantum \(R\)-matrices leads to a deformation quantization \(\mathcal{F}_{q}(G)\) of the algebra of functions2 on the group \(G\), known as the _reflection equation algebra_. The name refers to the fact that for \(G=\mathrm{GL}_{N}\), the commutation relations in this algebra are given3 via the "reflection equation",
Footnote 2: A point of disambiguation: the reflection equation algebra \(\mathcal{F}_{q}(G)\) here does not coincide with the so-called FRT algebra quantization, often denoted \(\mathcal{O}_{q}(G)\), which quantizes instead the Sklyanin Poisson bracket on \(G\).
Footnote 3: See, e.g. [55] for details about this notation.
\[R_{21}A_{1}R_{12}A_{2}=A_{2}R_{21}A_{1}R_{12}, \tag{4}\]
which appears as a defining relation in Coxeter groups of type \(B\), and in mathematical physical models for scattering matrices in 1+1-dimension in the presence of a reflecting wall.
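The quantum \(R\)-matrices entering this construction are solutions of the quantum Yang-Baxter equation. As a quick numerical sanity check (ours; we use the standard \(U_{q}(\mathfrak{sl}_{2})\) \(R\)-matrix in the defining representation, for which normalisation conventions vary in the literature):

```python
# Check the quantum Yang-Baxter equation R12 R13 R23 = R23 R13 R12 for the
# standard R-matrix of U_q(sl2) on C^2 x C^2, at a generic value of q.
import numpy as np

q = 1.7
R = np.array([[q, 0,       0, 0],
              [0, 1, q - 1/q, 0],
              [0, 0,       1, 0],
              [0, 0,       0, q]])

I2 = np.eye(2)
P = np.eye(4)[:, [0, 2, 1, 3]]              # flip of the two tensor factors
R12 = np.kron(R, I2)
R23 = np.kron(I2, R)
R13 = np.kron(I2, P) @ np.kron(R, I2) @ np.kron(I2, P)  # conjugate slots 2,3

assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
# At q = 1 the matrix degenerates to the identity; the classical r-matrix
# appears at first order in the expansion around q = 1.
```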
A general surface with boundary may be presented combinatorially as a "ciliated ribbon graph" - essentially a gluing of the surface from disks. According to this prescription, each edge contributes a factor of \(\mathcal{F}_{q}(G)\) to the moduli algebra (which may be regarded as the quantum monodromy of a connection along that edge), and the cross relations between different edge factors are given by explicit formulas resembling the reflection equation, but with asymmetries related to the unlinked, linked or nested crossings.
### The skein theory quantization of the ordinary character variety
The skein algebra quantization is premised on an elegant graphical formulation of the functions on the classical \(\mathrm{SL}_{2}\) character variety in terms of (multi-) curves drawn on the surface. This is then deformed to a similar graphical calculus for curves drawn instead in the surface times an interval. Skein algebras were independently introduced by Przytycki [71] and Turaev [82]. Following them, the vast majority of skein theory literature concerns the so-called Kauffman bracket skein relations, a particular normalisation of the skein relations deforming the \(\mathrm{SL}_{2}\)-character variety. In that same tradition, we will recall this special case first in detail, only outlining the extension to general groups, and indeed to general ribbon braided tensor categories afterwards.
Given a loop \(\gamma:S^{1}\to X\), a \(G\)-local system \(E\) over \(X\), and a finite-dimensional representation \(V\) of \(G\), we have a polynomial function, \(\operatorname{tr}_{\gamma,V}\), sending \(E\) to the trace of the parallel transport along \(\gamma\) of the associated vector bundle with connection \(E\times_{G}V\). The word "polynomial" here means, for instance, that \(\operatorname{tr}_{\gamma,V}\) defines a \(G\)-invariant function on the framed character variety, hence a polynomial function on the GIT quotient. The skein theoretic approach to quantizing character varieties begins by describing this commutative algebra structure "graphically", i.e. via the image of the curve \(\gamma\) sitting in \(\Sigma\), and then introducing a deformation parameter in the graphical presentation.
Let us consider a 3-manifold \(M=\Sigma\times I\), the cylinder over some surface \(\Sigma\). We may depict the function \(\operatorname{tr}_{\gamma,V}\) by drawing \(\gamma\) with the label \(V\) (one often projects onto the \(\Sigma\) coordinate, to draw \(\gamma\) as a curve with normal crossings on \(\Sigma\)). We depict the product \(\operatorname{tr}_{\gamma_{1},V_{1}}\cdot\operatorname{tr}_{\gamma_{2},V_{2}}\) of two such functions by super-imposing the drawings, as in Figure 3. The resulting diagrams will of course develop additional crossings when multiplied, however a basic observation - which is somewhat special to the case of \(\operatorname{SL}_{2}\) - is that one can use local "skein relations" to resolve crossings.
To better understand this, consider the Cayley-Hamilton identity for a matrix \(X\in\operatorname{SL}_{2}\):
\[X+X^{-1}=\operatorname{tr}(X)\cdot\operatorname{Id}_{2},\]
Multiplying by an arbitrary second matrix \(Y\) and taking traces gives an identity
\[\operatorname{tr}(XY)+\operatorname{tr}(X^{-1}Y)=\operatorname{tr}(X) \operatorname{tr}(Y). \tag{5}\]
Suppose now that we have two paths \(\gamma_{1}\) and \(\gamma_{2}\) intersecting at a point \(p\in\Sigma\), and that we consider the product \(\operatorname{tr}_{\gamma_{1},V_{1}}\cdot\operatorname{tr}_{\gamma_{2},V_{2}}\), where \(V_{1}=V_{2}=\mathbb{C}^{2}\) is the defining representation of \(\operatorname{SL}_{2}\). Then the identity (5) implies a graphical simplification as depicted in Figures 2 and 3.
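Both identities are easy to confirm numerically; a short sketch (ours) with random complex matrices of unit determinant:

```python
# Check X + X^{-1} = tr(X)*Id and tr(XY) + tr(X^{-1}Y) = tr(X) tr(Y)
# for random matrices in SL_2(C).
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return M / np.sqrt(np.linalg.det(M))    # rescale so det = 1

for _ in range(100):
    X, Y = random_sl2(), random_sl2()
    assert np.allclose(X + np.linalg.inv(X), np.trace(X) * np.eye(2))
    lhs = np.trace(X @ Y) + np.trace(np.linalg.inv(X) @ Y)
    assert np.isclose(lhs, np.trace(X) * np.trace(Y))
```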
A fundamental observation of Przytycki and Turaev (independently) is that the graphical relation in Figure 2 can be naturally deformed by introducing coefficients into the relations which mimic the defining relations of the Jones polynomial, more precisely its normalisation known as the Kauffman bracket. Around the same time, Edward Witten had understood these same "skein relations" to appear as fundamental relations between Wilson loops in a quantization of the Chern-Simons theory in 3-dimensions.
The \(\operatorname{SL}_{2}\)-skein algebra of a surface is therefore defined as the quotient of the vector space spanned by links embedded into \(\Sigma\times I\), modulo local skein relations: two skeins are declared equivalent if they agree outside of some cylindrical ball \(\mathbb{D}^{2}\times I\) and differ inside as indicated in the Figure. An example of a skein relation is depicted in Figure 3: we imagine a thickening of \(T^{2}\) to \(T^{2}\times I\), and a small cylindrical ball \(D^{2}\times I\) around the intersection point indicated on the left hand side of the equation. Within the cylindrical ball, we replace the skein with the expressions in the right hand side of the skein relation in Figure 2. We impose such relations for all cylindrical balls throughout \(\Sigma\times I\).
Figure 2. At left, the equations \(\operatorname{tr}(XY)+\operatorname{tr}(X^{-1}Y)=\operatorname{tr}(X)\operatorname{tr}(Y)\) and \(\operatorname{tr}(\operatorname{Id})=2\) are expressed graphically as skein relations; the top relation holds between any three curves which are identical outside of the dotted region, and differ as indicated within it, while the bottom relation states that any isolated unknot can be erased at the price of a factor of 2. At right is stated the quantum deformation which depends on a parameter \(A\in\mathbb{C}\).
Turaev showed that the resulting relations are flat in the parameter \(A\) - more precisely he showed that the skein module4 is free as a \(\mathbb{Z}[A,A^{-1}]\)-module, and that the specialisation at \(A=-1\) recovers the vector space of polynomial functions on the character variety of the surface5.
Footnote 4: The name “module” refers simply to the fact that the base ring may be taken to be \(\mathbb{Z}[A,A^{-1}]\) (to allow specialisation) rather than a field. However, when a 3-manifold has boundary, its skein module indeed becomes a module for the action by inserting skeins at the boundary.
Footnote 5: The careful reader may note a perhaps unexpected sign here, and in the skein relations at the right side of Figure 2; this is a standard convention taken to ensure the Kauffman bracket skein module of a 3-manifold depends only on a choice of orientation. Although in the case of surfaces, the signs can be absorbed into the normalisation, we include the standard normalisation here for the sake of consistency. The parameter \(A\) discussed here is a fixed square root of the parameter \(q\) discussed elsewhere in the article.
Moreover, the skein module of \(\Sigma\times I\) obtains an algebra structure by stacking skeins in the \(I\)-direction. With respect to this stacking operation, the skein module becomes a non-commutative algebra whose \(q=1\) specialisation is the algebra of functions on the classical ordinary character variety. Due to the flatness in \(q\), we obtain a Poisson bracket on the character variety, by setting:
\[\{f,g\}=\frac{f\cdot g-g\cdot f}{q-q^{-1}}\mod q-1. \tag{6}\]
It was shown in [19] that this Poisson bracket agrees with the Atiyah-Bott/Goldman bracket and so the skein algebra is a deformation quantization of the character variety with its Atiyah-Bott/Goldman Poisson bracket.
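The prescription (6) can be illustrated on the simplest non-commutative deformation, a quantum torus with \(XY=q^{a}YX\): there \(XY-YX=(q^{a}-1)YX\), and dividing by \(q-q^{-1}\) and setting \(q=1\) yields the log-canonical bracket \(\{x,y\}=\tfrac{a}{2}\,xy\) (the factor \(\tfrac{1}{2}\) is a normalisation convention). A symbolic sketch (ours, a toy model rather than the skein algebra itself):

```python
# In a quantum torus with X Y = q^a Y X the recipe of Eq. (6) gives
#   {x, y} = lim_{q->1} (q^a - 1)/(q - q^{-1}) * y x = (a/2) * x y,
# i.e. a log-canonical bracket.  Check the limit for a few integer a.
import sympy as sp

q = sp.Symbol('q', positive=True)
for a in (1, 2, 3):
    coeff = sp.limit((q**a - 1) / (q - 1/q), q, 1)
    assert coeff == sp.Rational(a, 2)
```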
The definition above extends naturally to define the \(\mathrm{SL}_{2}\)-skein module of an oriented 3-manifold: one considers the formal linear span of links in \(M^{3}\), modulo the skein relations imposed in each cylindrical ball \(D^{2}\times I\) embedded in the 3-manifold. The \(A=-1\) specialisation still coincides with the functions on the character variety of \(M\) [18]; however, the skein module in general no longer carries an algebra structure, as there is no distinguished direction along which to stack the skeins.
The above discussion has been formulated for simplicity in the case of \(\mathrm{SL}_{2}\)-skeins; however, it generalises naturally to any simple gauge group \(G\), indeed to any ribbon tensor category. The careful reader will have noticed that it sufficed in the \(\mathrm{SL}_{2}\)-case to consider only the defining two dimensional representation, and that we could always reduce to crossing-less diagrams. For a general group \(G\), or more generally for an arbitrary ribbon tensor category, this is no longer possible. Skeins for a general group \(G\) consist of embedded oriented ribbon graphs in the 3-manifold, together with a network of labels of representations of the quantum group (more generally objects of the ribbon braided tensor category) along each edge, and a morphism at every vertex, from the tensor product of incoming to outgoing edges. We refer to [22] or [49] for details about the general construction, or to [60] or [79] for early examples beyond the Kauffman bracket skein module.
An important ingredient in the definition is the Reshetikhin-Turaev evaluation map, which maps any skein on the cylindrical ball \(\mathbb{D}^{2}\times I\) to a morphism from the tensor product of incoming labels along \(\mathbb{D}^{2}\times\{0\}\)
Figure 3. The product \(\operatorname{tr}_{\gamma_{1},\,V}\cdot\operatorname{tr}_{\gamma_{2},V}\) of two curves is given by their stacking in the \(I\) direction on \(\Sigma\times I\). The skein relations express this as a linear combination of two new curves as depicted at the right.
to the tensor product of the outgoing labels along \(\mathbb{D}^{2}\times\{1\}\). The skein module is defined as the span of all skeins, modulo the kernel of the Reshetikhin-Turaev evaluation maps, ranging over all embedded disks in \(M\).
### The Fock-Goncharov quantum cluster algebra structure on the decorated character variety
Recall that the cluster algebra structure on the decorated character varieties consists of a collection of open subsets, each identified via a coordinate system with a torus \((\mathbb{C}^{\times})^{r}\), carrying a "log-canonical" Poisson bracket preserved by the birational transformations. There is a canonical quantization of such a torus given by introducing invertible operators satisfying \(X_{i}X_{j}=q^{a_{ij}}X_{j}X_{i}\). The Poisson bracket obtained from these relations as in equation (6) recovers the log-canonical Poisson bracket.
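At a root of unity these quantum torus relations have a familiar finite-dimensional realisation by clock and shift matrices, whose \(\ell\)-th powers are scalar; this is a toy model for the large centres at roots of unity discussed further below. A numerical sketch (ours):

```python
# Clock and shift matrices realise X Y = q Y X at q = exp(2*pi*i/l); their
# l-th powers are scalar, mimicking the large centre at a root of unity.
import numpy as np

l = 5
q = np.exp(2j * np.pi / l)
X = np.diag(q ** np.arange(l))                # clock
Y = np.roll(np.eye(l), 1, axis=0)             # shift

assert np.allclose(X @ Y, q * (Y @ X))        # quantum torus relation
assert np.allclose(np.linalg.matrix_power(X, l), np.eye(l))
assert np.allclose(np.linalg.matrix_power(Y, l), np.eye(l))
```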
An elementary and fundamental observation of Fock and Goncharov is that conjugation by the "quantum dilogarithm" power series induces a birational equivalence between different quantum charts. This birational isomorphism lifts the cluster mutation taking place at the classical level to a birational isomorphism of the associated quantum tori. Essentially by definition, the cluster quantization of the decorated character variety is the resulting collection of quantum tori, equipped with a preferred system of generators, known as cluster variables, related by quantum cluster mutations.
In summary, Fock and Goncharov construct the quantization of the decorated character variety as a quantum cluster algebra. The resulting algebraic, combinatorial, and analytic structures are of considerable interest and are heavily studied, however to survey them in proper depth would be beyond the scope of this survey article. Instead we highlight several important papers following in this tradition: [61, 77, 81, 53, 48, 34, 75].
### Quantum character stacks from factorization homology
A basic ingredient in non-commutative algebra and gauge theory is the quantum group. In categorical terms, the category \(\operatorname{Rep}_{q}(G)\) is a ribbon braided tensor category, which \(q\)-deforms the classical category of representations of the algebraic group \(G\).
Recall that the classical character stack is computed via factorization homology, as in Equation (1). The quantum character stack of a surface is defined, by fiat, by the equation,
\[Z(\Sigma)=\int_{\Sigma}\operatorname{Rep}_{q}(G).\]
Here, as in Section 2 we are regarding \(\operatorname{Rep}_{q}(G)\) with its braided monoidal structure equivalently as an \(E_{2}\)-category in the symmetric monoidal bi-category of categories, in order to define its factorization homology on surfaces. We also note that the ribbon structure on \(\operatorname{Rep}_{q}(G)\) equips it with an \(SO(2)\)-fixed point structure, so that it descends from an invariant of framed surfaces to one of oriented surfaces.
The construction via factorization homology opens up tools in topological field theory, extending the construction of quantum character stacks both _down_ to the level of the point, and _up_ to dimension 3. The algebraic framework for the extended theory involves a 4-category denoted \(\operatorname{Br}\)Tens, whose objects are braided tensor categories and whose higher morphisms encode notions of algebra, bimodules, functors and natural transformations, respectively, between braided tensor categories. The braided tensor category \(\operatorname{Rep}_{q}(G)\) defines a 3-dualizable object in \(\operatorname{Br}\)Tens, and so according to the cobordism hypothesis it defines a fully extended framed 3-dimensional topological field theory. The \(SO(2)\)-fixed structure descends this to an oriented topological field theory, and the resulting invariants of oriented surfaces coincide with the factorization homology functor \(Z\) as defined above.
The construction by factorization homology has a modification where we allow a pair of braided tensor categories and a morphism between them labelling a codimension 1-defect, and from this data produce an invariant of bipartite surfaces. Taking \(\operatorname{Rep}_{q}(G)\) and \(\operatorname{Rep}_{q}(T)\), with \(\operatorname{Rep}_{q}(B)\) labelling the defect, one obtains a quantum deformation of the decorated character stacks.
### Special structures at roots of unity
Each flavour of quantum character variety discussed above exhibits special behavior when the quantum parameter \(q\) is taken to be a root of unity:
1. Skein algebras at root-of-unity parameter \(q\) have large centers, over which the entire algebra is a finitely generated module, as first proved by Bonahon-Wong [14]. Following them, numerous papers have studied the implications for the representation theory of the skein algebra, especially the determination of the Azumaya locus - the points over the spectrum of the center for which the central reduction of the skein algebra is a full rank matrix algebra. Most such results are only proved in the case of \(\mathrm{SL}_{2}\)-skeins, but are expected to hold more generally. See [38, 42, 58].
2. AGS moduli algebras at a root of unity develop a large centre, and their Azumaya locus is known to contain the preimage \(\mu^{-1}(G^{\circ})\) of the big cell. This follows easily from the Brown-Gordon/Kac-de Concini technique of Poisson orders [16, 24]: the space \(\mu^{-1}(G^{\circ})\) is precisely the open symplectic leaf in the framed character variety, and the BG/KdC method gives an isomorphism between the fibers over any two points in the same symplectic leaf. See [42] for a precise formulation in the setting of AGS algebras.
3. Quantum cluster algebras attached to decorated quantum character varieties exhibit parallel behavior at roots of unity: chart by chart, the \(\ell\)th power of any cluster monomial is central, whenever \(q^{\ell}=1\). The cluster mutations respect this central structure, and lead to a quantum Frobenius embedding of the classical decorated character variety (of each respective type) into the corresponding quantum cluster algebra. It is much more straightforward to determine the Azumaya locus in this setting, given the explicit control on the individual charts. See [31], and in greater generality [68].
4. The Azumaya algebras described above are each an instance of _invertibility_: indeed, an algebra \(A\) is said to be invertible over its center if the algebra \(A\otimes_{Z(A)}A^{op}\) is Morita equivalent to \(Z(A)\). A proof has been announced by Kinnear that quantum character stacks are invertible relative to classical character stacks in the strongest possible sense: the symmetric monoidal category \(\mathcal{QC}(\mathrm{Ch}_{G}(\Sigma))\) acts monoidally on the quantum character stack, and the quantum character stack defines an invertible sheaf of categories for that action. At the level of surfaces, this implies that the factorization homology category is an invertible sheaf of categories over the quasi-coherent sheaves on the character stack. More fundamentally, the assertion is that the fully extended 3D TQFT defined by the quantum character stack construction at root-of-unity parameter is invertible relative to the fully extended 4D TQFT determined by \(\mathrm{Rep}(G)\).
### Unifications of various approaches to quantum character varieties
Recall that classically, each of the framed, ordinary, and decorated character stacks could be derived directly from the classical character stack under various geometric operations. This applies equally well to the quantum character stacks. We record the following unifications:
* By a result of Cooke [22], the skein category is computed via the factorization homology of the category \(\mathrm{Rep}_{q}(G)^{cp}\) of compact-projective objects. The skein algebra is the algebra of endomorphisms of the empty skein object.
* In [9] Alekseev-Grosse-Schomerus algebras are recovered from the quantum character stack via monadic reconstruction techniques. The quantum Hamiltonian reduction procedure is recast via monadic reconstruction in [10, 74].
* The Fock-Goncharov quantum cluster algebra may be recovered via an open subcategory of the decorated quantum character stack: this is presently only proved in the \(\mathrm{SL}_{2}\) case, in [54].
* The Alekseev-Grosse-Schomerus algebras have also been recovered directly in skein-theoretic terms, via the so-called "stated" [23], or "internal" [49] skein algebra construction. See [51].
* The Bonahon-Wong approach to the representation theory of skein algebras involves describing skein algebras via quantum trace maps (cf. [13, 59, 62]). This relationship has been studied in the more physical literature under the term non-abelianization in [52, 41, 66, 67].
### Quantum character varieties of 3-manifolds
In contrast to the vast literature we have surveyed pertaining to quantum character varieties and character stacks, much less is currently known about the quantization of character varieties and character stacks of 3-manifolds. Perhaps one reason for this, as discussed below, is that the very notion of quantization in the context of character stacks of 3-manifolds is different from that in the case of surfaces: neither character stacks nor character varieties of 3-manifolds are symplectic/Poisson, but they are rather (-1)-shifted symplectic. This implies a quantization theory of a different nature - in particular, as we discuss below, the skein module quantization of a 3-manifold is essentially never a flat deformation; this happens only when the character variety is a finite set of points.
Let us nevertheless highlight what is known and currently under investigation concerning each of the four perspectives on quantization.
1. The AGS algebra attached to the surface \(\Sigma_{g,1}\) acts naturally on the underlying vector space of the AGS algebra attached to \(\Sigma_{0,g}\), which we should regard instead as attached to the handlebody \(H_{g}\) of genus \(g\). In [49] it was established that the skein module of a closed oriented 3-manifold \(M\) may be computed by choosing a Heegaard splitting, \[M=H_{g}\cup_{\Sigma_{g}}H_{g},\] where the splitting involves twisting by a choice of \(\gamma\) in the mapping class group of \(\Sigma_{g}\).
2. This presentation of the skein module was used in [49] to establish the finite-dimensionality of skein modules of closed 3-manifolds, as had been previously conjectured by Witten. Several recent papers [21, 27, 25, 46, 45, 26, 50] are devoted to determining these dimensions in special cases, and Gunningham and Safronov have announced an identification of the skein module with the space of sections of a certain perverse sheaf introduced by Joyce as a quantization of the \((-1)\)-shifted structure on the character variety regarded as a critical locus.
3. Decorated character varieties of 3-manifolds (perhaps not by this name) were studied in the papers [28, 29]: one fixes an ideal triangulation of a hyperbolic 3-manifold, and directly computes a deformation quantization of the \(A\)-polynomial ideal using quantum cluster algebra techniques tuned to each ideal tetrahedron, regarded as a filling of a decorated \(\Sigma_{0,4}\).
4. It was established by Przytycki and Sikora [72] that the \(q=1\) specialisation of the skein module of \(M\) indeed recovers the algebra of functions on the character variety of \(M\). Echoing the Azumaya/invertibility at the level of surfaces, it is now expected that skein modules at root-of-unity parameters arise as the global sections of a line bundle on the classical character variety. Two approaches to this result have been announced, one by Kinnear using higher categorical techniques in parallel to the above discussion, and another by Frohman, Kania-Bartoszynska and Le, appealing to the Azumaya property for surfaces and invoking the structure of a (3,2) TQFT.
## 5. Further reading
The following references may be helpful to a reader hoping to learn this subject in more detail:
* "Lectures on gauge theory and Integrable systems", [6].
* "Quantum geometry of moduli spaces of local systems and representation theory" [48].
* _Cluster algebras and Poisson geometry_, [44].
* "Factorization homology of braided tensor categories" [15].
* GEAR Lectures on quantum hyperbolic geometry [39].
In this survey article for the 2nd Edition of the Encyclopedia of Mathematical Physics, we introduce quantum character varieties and quantum character stacks, together with four different approaches to their construction and the unifications between them. |
2309.12536 | Exceptional points in perturbed dielectric spheres: A resonant-state
expansion study | Exceptional points (EPs) in open optical systems are rigorously studied using
the resonant-state expansion (RSE). A spherical resonator, specifically a
homogeneous dielectric sphere in a vacuum, perturbed by two point-like defects
which break the spherical symmetry and bring the optical modes to EPs, is used
as a worked example. The RSE is a non-perturbative approach encoding the
information about an open optical system in matrix form in a rigorous way, and
thus offering a suitable tool for studying its EPs. These are simultaneous
degeneracies of the eigenvalues and corresponding eigenfunctions of the system,
which are rigorously described by the RSE and illustrated for perturbed
whispering-gallery modes (WGMs). An exceptional arc, which is a line of
adjacent EPs, is obtained analytically for perturbed dipolar WGMs. Perturbation
of high-quality WGMs with large angular momentum and their EPs are found by
reducing the RSE equation to a two-state problem by means of an orthogonal
transformation of a large RSE matrix. WGM pairs have opposite chirality in
spherically symmetric systems and equal chirality at EPs. This chirality at EPs
can be observed in circular dichroism measurements, as it manifested itself in
a squared-Lorentzian part of the optical spectra, which we demonstrate here
analytically and numerically in the Purcell enhancement factor for the
perturbed dipolar WGMs. | Kyle S. Netherwood, Hannah K. Riley, Egor A. Muljarov | 2023-09-21T23:23:58 | http://arxiv.org/abs/2309.12536v3 | # Exceptional points in optical systems: A resonant-state expansion study
###### Abstract
Exceptional points (EPs) in open optical systems are rigorously studied using the resonant-state expansion (RSE). A spherical resonator, specifically a homogeneous dielectric sphere in a vacuum, perturbed by two point-like defects which break the spherical symmetry and bring the optical modes to EPs, is used as a worked example. The RSE is a non-perturbative approach encoding the information about an open optical system in matrix form in a rigorous way, and thus offering a suitable tool for studying its EPs. These are simultaneous degeneracies of the eigenvalues and corresponding eigenfunctions of the system, which are rigorously described by the RSE and illustrated for perturbed whispering-gallery modes (WGMs). An exceptional arc, which is a line of adjacent EPs, is obtained analytically for perturbed dipolar WGMs. Perturbation of high-quality WGMs with large angular momentum and their EPs are found by reducing the RSE equation to a two-state problem by means of an orthogonal transformation of a large RSE matrix. WGM pairs of opposite chirality away from EPs are shown to have the same chirality at EPs. This chirality can be observed in circular dichroism measurements, as it manifested itself in a squared-Lorentzian part of the optical spectra, which we demonstrate here analytically and numerically in the Purcell enhancement factor for the perturbed dipolar WGMs.
## I Introduction
An exceptional point (EP), originally named by Kato (1966) [1], is a simultaneous degeneracy of the eigenvalues and the corresponding eigenfunctions of a system. An EP of \(N\)th-order has \(N\) degenerate eigenvalues and eigenfunctions. EPs are a typical feature of open systems, which are characterized by the presence of gain and/or loss of energy and information, and can be described by non-Hermitian matrices which have generally complex eigenvalues [2].
Matrices allow a mathematically rigorous and at the same time straightforward investigation of EPs as a special case of their eigenvalues and eigenvectors. To give a mathematical example of an EP, we introduce the \(2\times 2\) symmetric matrix
\[M=\begin{pmatrix}a&b\\ b&d\end{pmatrix} \tag{1}\]
where \(a\), \(b\), and \(d\) are complex numbers. The matrix \(M\) has the eigenvalues
\[\lambda=\frac{a+d}{2}\pm\frac{1}{2}\sqrt{(a-d)^{2}+4b^{2}}\,. \tag{2}\]
To find a point where the eigenvalues are degenerate, we let the square-root term in Eq.(2) vanish. This gives the degeneracy condition
\[b=\pm\frac{i(a-d)}{2}\,. \tag{3}\]
If \(b\neq 0\) and Eq.(3) is satisfied, \(a\), \(b\), and \(d\) are the matrix elements of \(M\) at an EP. If Eq.(3) is satisfied but \(b=0\), the degeneracy is called a diabolic point (DP) which is a degeneracy of eigenvalues but not eigenfunctions. DPs are equivalent to any degeneracies in a Hermitian system, but in a non-Hermitian system they are only the degeneracies that arise due to symmetry, and they generally do not have the characteristic shape of an EP. This characteristic shape along with other features of EPs can be demonstrated, for example, by setting the matrix elements of Eq.(1) to \(a=0\), \(b=ic\), and \(d=1\) where \(c\) is a real variable. Using Eq.(2), the eigenvalues of this example matrix around an EP at \(c=1/2\) are plotted in Fig.1.
Fig.1 shows the characteristic shape of the eigenvalues in the proximity of an EP. This shape is due to the fact that the eigenvalues vary non-linearly with the parameter \(c\) in the vicinity of the EP.
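The coalescence is easy to reproduce numerically (a minimal sketch, ours): below the EP the two eigenvalues are real, at \(c=1/2\) they merge and the matrix becomes defective (a single eigenvector survives), and above the EP they form a complex-conjugate pair.

```python
# Eigenvalues of M = [[0, i c], [i c, 1]] across the EP at c = 1/2.
import numpy as np

for c in (0.3, 0.5, 0.7):
    M = np.array([[0.0, 1j * c], [1j * c, 1.0]])
    print(c, np.linalg.eigvals(M))
# c = 0.3: two real eigenvalues, 0.1 and 0.9
# c = 0.5: doubly degenerate eigenvalue 1/2 (the EP)
# c = 0.7: complex-conjugate pair 0.5 +/- 0.49i

# At the EP the eigenvectors coalesce as well: M - 1/2 is nilpotent and
# the numerically returned eigenvector matrix is (nearly) singular.
M = np.array([[0.0, 0.5j], [0.5j, 1.0]])
assert np.allclose((M - 0.5 * np.eye(2)) @ (M - 0.5 * np.eye(2)), 0)
vals, vecs = np.linalg.eig(M)
print(abs(np.linalg.det(vecs)))    # close to 0: eigenvectors coalesce
```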
Figure 1: Eigenvalues of Eq.(1), where \(a=0\), \(b=ic\), and \(d=1\), varied against parameter \(c\), taking a value of \(c=1/2\) at an EP. | Exceptional points (EPs) in open optical systems are rigorously studied using the resonant-state expansion (RSE). A spherical resonator, specifically a homogeneous dielectric sphere in a vacuum, perturbed by two point-like defects which break the spherical symmetry, is used as a worked example for analysing the optical modes brought to EPs. The RSE is a non-perturbative approach which encodes the information about an open optical system in matrix form in a rigorous way, making it a suitable tool for studying its EPs. These are simultaneous degeneracies of the eigenvalues and the corresponding eigenfunctions of the system, which are rigorously described by the RSE and illustrated for perturbed whispering-gallery modes (WGMs) |
2310.00051 | Kinematically constrained vortex dynamics in charge density waves | We build a minimal model of dissipative vortex dynamics in two spatial
dimensions, subject to a kinematic constraint: dipole conservation. The
additional conservation law implies anomalously slow decay rates for vortices.
We argue that this model of vortex dynamics is relevant for a broad range of
time scales during a quench into a uniaxial charge density wave state. Our
predictions are consistent with recent experiments on uniaxial charge density
wave formation in $\mathrm{LaTe}_3$. | Marvin Qi, Andrew Lucas | 2023-09-29T18:00:03 | http://arxiv.org/abs/2310.00051v1 | # Kinematically constrained vortex dynamics in charge density waves
###### Abstract
We build a minimal model of dissipative vortex dynamics in two spatial dimensions, subject to a kinematic constraint: dipole conservation. The additional conservation law implies anomalously slow decay rates for vortices. We argue that this model of vortex dynamics is relevant for a broad range of time scales during a quench into a uniaxial charge density wave state. Our predictions are consistent with recent experiments on uniaxial charge density wave formation in LaTe\({}_{3}\).
## I Introduction
The dynamics of systems with multipolar symmetries and more general kinematic constraints have been the subject of intense study in recent years. Much of the interest in such systems derives from their natural relation to the study of fractons, which are quasiparticle excitations in many-body systems with restricted mobility [1; 2; 3; 4; 5; 6]. These mobility restrictions often originate from multipolar symmetries or gauged versions thereof [7; 8; 9]. Restricted mobility of microscopic degrees of freedom can lead to many observable consequences in dynamics, including ergodicity breaking [10; 11], Hilbert space fragmentation [12; 13; 14; 15; 16] and subdiffusive hydrodynamics [17; 18; 19; 20; 21; 22; 23; 24; 25; 26].
Given the unusual nature of the symmetries involved in fractonic theories, it is often challenging to realize the dynamical phenomena discovered above directly in experiment. One exception is the emergence of dipole conservation in tilted optical lattices [27; 28; 29]. Spin liquids and frustrated magnetism [30; 31; 32; 33] may also give rise to similar physics, though a conclusive experimental demonstration has not been found yet. The most "conventional" realization of such unusual symmetries in nature is in elastic solids: via fracton-elasticity duality [34; 35; 36; 37; 38], the charges and dipoles of a rank-two tensor gauge theory are mapped to disclination and dislocation defects of a two-dimensional crystal. The disclination is immobile in isolation while the dislocation can only move along its Burgers vector; these mobility constraints are shared respectively by the charge and dipole of the rank-two tensor gauge theory. Similar mobility constraints apply to defects of two-dimensional smectic liquid crystals [39; 40].
The purpose of this work is to show that in a rather related setting - the formation of (uniaxial) charge density waves (CDW) - emergent and approximate mobility constraints can have striking dynamical signatures that are experimentally observable. We will see that topological defects (vortices) have an anomalously long lifetime in uniaxial charge density wave formation. More concretely, we write down a minimal model for dissipative vortex dynamics in this setting, incorporating a dipolar kinematic constraint on vortex motion orthogonal to the density wave direction. Numerical simulations and analytical theory demonstrate that the dissipative dynamics of such constrained vortices is qualitatively different from usual dissipative vortex dynamics [41; 42; 43], which has been realized in e.g. thin films of superfluid helium [44].
Our predictions are quantitatively consistent with recent ultrafast experiments on LaTe\({}_{3}\)[45], which revealed anomalously slow subdiffusive vortex decay, incompatible with the existing Ginzburg-Landau model of uniaxial density wave formation. Hence, this work reveals a promising new avenue for studying "fractonic" dynamical universality classes in quantum materials.
## II Vortex dynamics in charge density waves
Topological defects of an order parameter naturally form when a system undergoes a quench from a disordered to an ordered phase [46; 47]. Relaxation toward the equilibrium steady state proceeds via annihilation of the topological defects. In a two dimensional uniaxial charge density wave, the topological defects are vortices, which correspond physically to dislocations of the CDW.
The _equilibrium_ properties of the transition into a uniaxial CDW are commonly described using the same Ginzburg-Landau (GL) theory describing the superfluid-insulator transition. However, we argue following [48] that the standard analysis of a dynamical GL theory will incorrectly describe dynamics of CDW vortices. Unlike superfluid vortices, vortices of the CDW are dislocations, which are subject to approximate kinematic constraints. If the layering occurs perpendicular to the \(\hat{x}\) axis, then local operators can translate vortices along the \(\hat{x}\) direction, as shown in Fig. 1(\(a\)). Motion along \(\hat{y}\), however, requires a non-local operator which translates the CDW layer, because translating the vortex in this direction requires _adding or removing charge_, which violates local charge conservation if the CDW is in its ground state. Hence, the simplest move in the \(\hat{y}\) direction is that of a pair of vortices: see Fig. 1(\(b\)) - (\(c\)). Such processes leave the \(\hat{y}\) dipole moment of the vortices unchanged.
At finite temperature, we expect a very small density of
mobile charged degrees of freedom thermally fluctuating on top of the CDW, which will give a single vortex a small mobility in the \(\hat{y}\) direction. In this Letter, we will focus on dynamics at short time scales, where this process can be neglected.
## III The model
We now develop a minimal model for vortex dynamics subject to the constraint above. The degrees of freedom are the positions \(\mathbf{r}^{\alpha}=(x^{\alpha},y^{\alpha})\) of the \(N\) vortices. Starting with the dissipationless component, we anticipate that this can be described by conventional point-vortex dynamics: after all, such dynamics _already_ conserves dipole [49]. The dissipationless dynamics is moreover Hamiltonian, if we define Poisson brackets
\[\{x^{\alpha},y^{\beta}\}=\frac{1}{\Gamma_{\alpha}}\delta_{\alpha\beta}. \tag{1}\]
Here \(\Gamma_{\alpha}\) is the vorticity of the \(\alpha\)-th vortex. Note that we do not sum over repeated indices. This can equivalently be written as
\[\{r_{i}^{\alpha},r_{j}^{\beta}\}=\frac{1}{\Gamma_{\alpha}}\delta_{\alpha\beta }\epsilon_{ij}. \tag{2}\]
The vortices interact via a logarithmic potential, so the Hamiltonian is (in dimensionless units)
\[\mathcal{H}=-\sum_{\alpha<\beta}\Gamma_{\alpha}\Gamma_{\beta}\,\log(|\mathbf{ r}_{\alpha}-\mathbf{r}_{\beta}|). \tag{3}\]
The corresponding Hamiltonian equations of motion are
\[\begin{split}\dot{x}^{\alpha}=\{x^{\alpha},\mathcal{H}\}=&\frac{1}{\Gamma_{\alpha}}\frac{\partial H}{\partial y^{\alpha}}\\ \dot{y}^{\alpha}=\{y^{\alpha},\mathcal{H}\}=&-\frac{1}{\Gamma_{\alpha}}\frac{\partial H}{\partial x^{\alpha}}\end{split} \tag{4}\]
In this setting, dipole conservation is a consequence of translation invariance. Indeed, the Poisson brackets (1) mean that \(\Gamma_{\alpha}y^{\alpha}\) plays the role of "momentum" of the \(\alpha\)-th vortex in the \(\hat{x}\) direction, and similarly for \(-\Gamma_{\alpha}x^{\alpha}\) in the \(\hat{y}\) direction. The total dipole moments are therefore identified with the generators of translation, whose conservation follows from translation invariance of \(\mathcal{H}\).
The dipole conservation can be seen in the exactly solvable two-body dynamics of vortices. Pairs with equal vorticity travel in a circular orbit around their center of mass, while pairs of opposite vorticity move in a straight line perpendicular to their dipole moment; in each case dipole moment is conserved [49].
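These statements are easy to verify numerically. A minimal sketch (ours; forward-Euler stepping and arbitrary initial data) integrates Eq. (4) for a \(\Gamma=(+1,-1)\) pair and checks that it translates uniformly, perpendicular to its dipole moment, with \(\sum_{\alpha}\Gamma_{\alpha}\mathbf{r}^{\alpha}\) constant:

```python
# Two-body point-vortex dynamics, Eq. (4): an opposite-sign pair moves in
# a straight line and the dipole D = sum_a Gamma_a r_a is conserved.
import numpy as np

Gam = np.array([1.0, -1.0])
r = np.array([[0.0, 0.0], [0.0, 1.0]])        # dipole along y

def velocities(r):
    v = np.zeros_like(r)
    for a in range(len(r)):
        for b in range(len(r)):
            if a == b:
                continue
            d = r[a] - r[b]
            grad = -Gam[a] * Gam[b] * d / (d @ d)   # dH/dr_a, pair term
            v[a] += np.array([grad[1], -grad[0]]) / Gam[a]
    return v

D0 = Gam @ r
dt = 1e-3
for _ in range(5000):
    r = r + dt * velocities(r)
print(r)              # pair translated along -x, separation unchanged
assert np.allclose(Gam @ r, D0)
```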
We now turn to the effects of dissipation. The standard model for dissipative dynamics of point vortices is
\[\begin{split}\dot{x}^{\alpha}&=\frac{1}{\Gamma_{ \alpha}}\frac{\partial H}{\partial y^{\alpha}}-\gamma\frac{\partial H}{ \partial x^{\alpha}}\\ \dot{y}^{\alpha}&=-\frac{1}{\Gamma_{\alpha}}\frac{ \partial H}{\partial x^{\alpha}}-\gamma\frac{\partial H}{\partial y^{\alpha}} \end{split}. \tag{5}\]
where the \(\gamma\) term is the mutual friction responsible for dissipation. Note, however, that it breaks the conservation of dipole moment.
Indeed, one can see the effect of \(\gamma\) in the two-body dynamics of vortices. It causes same-sign vortices to drift apart, and opposite-sign vortices to approach each other and collide in finite time; dipole moment conservation is violated in the latter case.
A minimal model for dissipative vortex dynamics which conserve both components of the dipole moment is (see Appendix A for a derivation)
\[\begin{split}\dot{x}^{\alpha}&=\frac{1}{\Gamma_{ \alpha}}\frac{\partial H}{\partial y^{\alpha}}-\gamma^{\prime}\frac{\tilde{f} _{\alpha}}{\Gamma_{\alpha}^{2}}\frac{\partial H}{\partial x^{\alpha}}+\gamma^{ \prime}\sum_{\beta}\frac{f_{\alpha\beta}}{\Gamma_{\alpha}\Gamma_{\beta}} \frac{\partial H}{\partial x^{\beta}}\\ \dot{y}^{\alpha}&=-\frac{1}{\Gamma_{\alpha}}\frac{ \partial H}{\partial x^{\alpha}}-\gamma^{\prime}\frac{\tilde{f}_{\alpha}}{ \Gamma_{\alpha}^{2}}\frac{\partial H}{\partial y^{\alpha}}+\gamma^{\prime} \sum_{\beta}\frac{f_{\alpha\beta}}{\Gamma_{\alpha}\Gamma_{\beta}}\frac{ \partial H}{\partial y^{\beta}}\end{split}. \tag{6}\]
where \(f_{\alpha\beta}\coloneqq f(|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|)\) is a function which depends only on the distance between vortices \(\alpha\) and \(\beta\). The function \(f_{\alpha\beta}\) is not constrained by the EFT; we choose \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\). When this dipole-conserving dissipative term is included, two vortices of opposite sign can approach each other and annihilate in the presence of a nearby third vortex. The motion of the third vortex compensates for the change of dipole moment caused by the annihilation of the two vortices, leaving total dipole moment unchanged. This process is depicted in Fig. 2(b). We find, however, that for our choice of \(f_{\alpha\beta}\), if the initial positions are sufficiently far apart then a vortex pair can simply escape off to infinity without annihilating.
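The function \(\tilde{f}_{\alpha}\) is fixed in Appendix A of the paper; in the sketch below (ours) we take \(\tilde{f}_{\alpha}=\sum_{\beta\neq\alpha}f_{\alpha\beta}\), the choice for which the two dissipative terms cancel in \(\sum_{\alpha}\Gamma_{\alpha}\dot{\mathbf{r}}^{\alpha}\), and verify numerically that Eq. (6) then conserves both components of the dipole moment for a three-vortex configuration:

```python
# Dipole-conserving dissipative dynamics, Eq. (6), with f_ab = 1/|r_a - r_b|
# and (assumed) f_tilde_a = sum_{b != a} f_ab.  D = sum_a Gamma_a r_a is
# conserved even though the gamma' terms dissipate energy.
import numpy as np

Gam = np.array([1.0, -1.0, 1.0])
r = np.array([[0.0, 0.0], [1.0, 0.5], [-1.5, 1.0]])
gp = 0.2                                       # gamma'

def gradH(r):
    g = np.zeros_like(r)
    for a in range(len(r)):
        for b in range(len(r)):
            if a != b:
                d = r[a] - r[b]
                g[a] += -Gam[a] * Gam[b] * d / (d @ d)
    return g

def rhs(r):
    g = gradH(r)
    v = np.stack([g[:, 1] / Gam, -g[:, 0] / Gam], axis=1)  # symplectic part
    f = np.zeros((len(r), len(r)))
    for a in range(len(r)):
        for b in range(len(r)):
            if a != b:
                f[a, b] = 1.0 / np.linalg.norm(r[a] - r[b])
    v += -gp * (f.sum(axis=1) / Gam**2)[:, None] * g       # f_tilde term
    v += gp * (f / np.outer(Gam, Gam)) @ g                 # pairwise term
    return v

D0 = Gam @ r
dt = 1e-4
for _ in range(5000):
    r = r + dt * rhs(r)
assert np.allclose(Gam @ r, D0, atol=1e-8)    # both dipole components kept
```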
We numerically simulate the \(N\)-body dynamics of dipole-conserving vortices given by the equations of motion (6). For the initial conditions we randomly sample \(N\) points uniformly from a box of size \(L\times L\), and randomly assign vorticities \(\pm 1\) to each. Dissipation causes vortices to come together and annihilate in finite time. Vortex annihilation is implemented in the simulation by manually removing pairs of opposite-sign vortices when
Figure 1: Kinematic constraints of dislocations of the charge density wave. Column \((a)\) shows a local process which translates the vortex freely in the \(\hat{x}\) direction. On the other hand, motion of same (resp. opposite) sign vortices along the \(\hat{y}\) direction only occurs in pairs, as depicted in \((b)\) (resp. \((c)\)). The pair process conserves dipole moment along the \(\hat{y}\) direction.
their distance decreases below a cutoff \(\epsilon\). We plot the average surviving fraction of vortices \(\langle n(t)\rangle\) as a function of (rescaled) time in Fig. 3 in blue for \(N=400\) and \(L=80\). This vortex relaxation process is well-described at early to intermediate times by a function of the form \((1+{\cal K}t)^{-\alpha_{\rm dipole}}\) with \(\alpha_{\rm dipole}=0.50\pm 0.01\), shown in orange in Fig. 3. At late times, vortex annihilation occurs much more slowly; this can be attributed to the annihilation process "freezing out" at sufficiently low density as alluded to above. This is qualitatively similar to the breakdown of thermalization in dipole-conserving systems found in [10; 11].
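The power-law fit itself is routine; here is a sketch (ours) of the fitting procedure on synthetic data (hypothetical numbers standing in for the simulated vortex counts):

```python
# Fit the surviving fraction <n(t)> to (1 + K*t)^(-alpha) with scipy.
import numpy as np
from scipy.optimize import curve_fit

def model(t, K, alpha):
    return (1 + K * t) ** (-alpha)

t = np.linspace(0.0, 50.0, 200)
rng = np.random.default_rng(1)
n_data = model(t, 2.0, 0.5) + 0.01 * rng.normal(size=t.size)  # fake data

(K_fit, a_fit), _ = curve_fit(model, t, n_data, p0=(1.0, 1.0))
print(K_fit, a_fit)           # recovers K ~ 2.0, alpha ~ 0.5
```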
Vortices of the CDW order parameter (approximately) conserve dipole moment only along the layering axis, and are unconstrained transversely. Assuming layering along the \(y\) axis, this anisotropic mobility constraint is implemented by including the \(i=x\) terms of the ordinary friction (A20) and the \(i=y\) terms of the dipole-conserving term (A21) in the EFT Lagrangian. The resulting equations of motion are
\[\begin{split}\dot{x}^{\alpha}&=\frac{1}{\Gamma_{\alpha}}\frac{\partial H}{\partial y^{\alpha}}-\gamma\frac{\partial H}{\partial x^{\alpha}}\\ \dot{y}^{\alpha}&=-\frac{1}{\Gamma_{\alpha}}\frac{\partial H}{\partial x^{\alpha}}-\gamma^{\prime}\frac{\tilde{f}_{\alpha}}{\Gamma_{\alpha}^{2}}\frac{\partial H}{\partial y^{\alpha}}+\gamma^{\prime}\sum_{\beta}\frac{f_{\alpha\beta}}{\Gamma_{\alpha}\Gamma_{\beta}}\frac{\partial H}{\partial y^{\beta}}\end{split}. \tag{7}\]
Since motion along the \(\hat{x}\) axis is unconstrained, the three-body annihilation process of Fig. 2(b) is no longer the only process by which vortices can annihilate. A pair of vortices can annihilate via the process of Fig. 2(c) if the pair has net zero \(y\)-dipole moment. However, such a process is fine-tuned, as it requires the \(y\)-dipole moment to vanish exactly. Nevertheless, we expect that the dynamics should proceed more quickly than in the isotropic dipole-conserving case, as it is less constrained.
We follow the same procedure to simulate the \(N\)-body dynamics of (7) with vortex annihilation. The average surviving fraction of vortices is plotted in green in Fig. 3, for \(N=400\) and \(L=80\). The red dashed line shows the fit at early to intermediate times to \((1+{\cal K}t)^{-\alpha_{y-{\rm dipole}}}\), with \(\alpha_{y\text{-dipole}}=0.65\pm 0.02\). The relation \(\alpha_{y\text{-dipole}}\gtrsim\alpha_{\rm dipole}\) is consistent with faster dynamics as a consequence of fewer kinematic constraints.
In an isotropic theory, we can successfully estimate \(\alpha\) for both the dipole-conserving and non-conserving vortex dynamics based on evaluating the two-point correlator for a conserved (vortex) density \(n\):
\[\langle n(t)n(0)\rangle\sim\int\mathrm{d}k_{x}\,\mathrm{d}k_{y}\,\mathrm{e}^{-D_{x}k_{x}^{a_{x}}t-D_{y}k_{y}^{a_{y}}t}\sim t^{-\alpha^{*}}, \tag{8}\]
with \(\alpha^{*}=1/a_{x}+1/a_{y}\), using \(a=4\) for dipole-conserving [17; 48] and \(a=2\) for dipole-non-conserving dissipative dynamics. If only \(y\)-dipole is conserved, this argument then suggests that \(\alpha=\frac{1}{2}+\frac{1}{4}=0.75\). The numerically observed \(\alpha_{y\text{-dipole}}=0.65\) is not consistent with this estimate. This suggests that the dynamical universality class observed is not fully captured by "mean field" scaling.
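The quoted mean-field value of \(\alpha^{*}\) follows from rescaling each momentum in (8) as \(u_{i}=(D_{i}t)^{1/a_{i}}k_{i}\):

\[\int\mathrm{d}k_{x}\,\mathrm{d}k_{y}\,\mathrm{e}^{-D_{x}k_{x}^{a_{x}}t-D_{y}k_{y}^{a_{y}}t}=(D_{x}t)^{-1/a_{x}}(D_{y}t)^{-1/a_{y}}\int\mathrm{d}u_{x}\,\mathrm{d}u_{y}\,\mathrm{e}^{-u_{x}^{a_{x}}-u_{y}^{a_{y}}}\sim t^{-(1/a_{x}+1/a_{y})}.\]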
## IV Reaction-diffusion dynamics
To further justify the \(\alpha=0.5\) scaling observed when both components of dipole moment are conserved, we employ the theory of stochastic reaction-diffusion equations [50; 51; 52; 53; 54; 55]. Our setup, however, contains some complicating features: the vortices experience long-range interactions given by the logarithmic potential (3), and their dynamics conserve the total dipole moment (along one or both axes). These features modify the kinetic rate equations and their scaling.
Moreover, in contrast to an ordinary \(A+B\to 0\) reaction-diffusion system, two isolated vortices are unable to annihilate each other, as doing so would change the dipole moment of the system. Rather, two vortices can only annihilate in the presence of another nearby vortex. In this case, the change in dipole moment arising from annihilation of the vortex pair is compensated by
Figure 2: Few body dissipative dynamics of kinematically constrained vortices. (\(a\)) Two same-sign vortices will tend to spiral away from each other preserving dipole moment. (\(b\)) The minimal three-body process by which two vortices can annihilate. The change in dipole moment is offset by motion of the third vortex. (\(c\)) When only the \(y\) dipole moment is conserved, two vortices can annihilate if their \(y\) dipole moment exactly vanishes.
Figure 3: Average number of vortices \(\langle n(t)\rangle\) normalized as a fraction of the initial number of vortices. The blue and green curves show \(\langle n(t)\rangle\) for vortices which conserve dipole moment in both directions and only the \(\hat{y}\) direction, respectively. The dashed lines show their corresponding fits to the function \(1/(1+{\cal K}t)^{\alpha}\), with \(\alpha_{\rm dipole}=0.50\pm 0.01\) and \(\alpha_{y\text{-dipole}}=0.65\pm 0.02\). Note that the \(x\) axis is scaled to \({\cal K}t\); \({\cal K}\) is a fit parameter which is different for the two systems.
motion of the nearby vortex. See Fig. 2(b) for an illustration of the minimal process by which a pair of vortices can annihilate while preserving dipole moment in both directions. Hence, the allowed reactions are of the form \(A+B+x\to x\), for \(x=A,B\).
Letting \(\rho_{A,B}\) denote the densities of the two species (positive/negative vortices), we then postulate a kinetic rate equation
\[\frac{\mathrm{d}\rho_{A}}{\mathrm{d}t}=\frac{\mathrm{d}\rho_{B}}{\mathrm{d}t}=-\mathcal{K}\rho_{A}\rho_{B}(\rho_{A}+\rho_{B}). \tag{9}\]
Defining the total vortex density \(n=\rho_{A}+\rho_{B}\) and the (signed) vorticity density \(\rho=\rho_{A}-\rho_{B}\), we have
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}&=- \frac{\mathcal{K}}{2}(n^{2}-\rho^{2})n\\ \frac{\mathrm{d}\rho}{\mathrm{d}t}&=0\end{split} \tag{10}\]
The second equation is the statement that the total charge is a conserved quantity since only opposite-sign vortices can annihilate. When the initial charge density vanishes, (10) can be solved to give
\[n(t)=(1/n_{0}^{2}+\mathcal{K}t)^{-1/2} \tag{11}\]
which is in sharp contrast to the case where no dipole symmetry is imposed. The exponent is in excellent agreement with the numerical results of the isotropic dipole-conserving vortex model of the previous section.
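For completeness, the solution (11) follows by separating variables in (10) at charge neutrality (\(\rho=0\)):

\[\frac{\mathrm{d}n}{\mathrm{d}t}=-\frac{\mathcal{K}}{2}n^{3}\;\Longrightarrow\;\frac{\mathrm{d}}{\mathrm{d}t}\big{(}n^{-2}\big{)}=\mathcal{K}\;\Longrightarrow\;n^{-2}(t)=n_{0}^{-2}+\mathcal{K}t,\]

so that \(n(t)\sim(\mathcal{K}t)^{-1/2}\) at late times.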
We now include the effects of spatial fluctuations and long-range interactions in the reaction-diffusion dynamics. The equations of motion are modified to be
\[\begin{split}\partial_{t}n-D\nabla^{2}n&=-\frac{\mathcal{K}}{2}[n^{2}-\rho^{2}]n-Q\nabla(\rho\nabla V)\\ \partial_{t}\rho+\tilde{D}\nabla^{4}\rho&=-Q\nabla(n\nabla V)\end{split} \tag{12}\]
where the potential \(V(\mathbf{r},t)\) is given by
\[V(\mathbf{r},t)=-\int\mathrm{d}^{2}\mathbf{r}^{\prime}\rho(\mathbf{r}^{\prime },t)\log|\mathbf{r}^{\prime}-\mathbf{r}| \tag{13}\]
and captures the drift of vortices due to the velocity fields created by the others. We have introduced (sub)diffusion coefficients \(D\) and \(\tilde{D}\) and a coefficient \(Q\sim\Gamma^{2}\) proportional to the square of the vorticity.
Following [56], we analyze these equations using a self-consistent approximation where the fluctuations of the total number density \(n\) are neglected while the fluctuations of the charge density \(\rho\) are kept. In other words, the number density \(n(\mathbf{r},t)=n(t)\) is approximated as a spatially independent function of time. Our goal is to determine the number density \(n(t)\) averaged over an ensemble of initial conditions for \(\rho(\mathbf{r},0)\), taken to be
\[\begin{split}\langle\rho(\mathbf{r},0)\rangle&=0\\ \langle\rho(\mathbf{r}_{1},0)\rho(\mathbf{r}_{2},0)\rangle& =n_{0}^{2}\delta^{(2)}(\mathbf{r}_{1}-\mathbf{r}_{2})\end{split}. \tag{14}\]
Substituting \(n(\mathbf{r},t)\simeq n(t)\) into the first equation of (12) and averaging over all space and initial conditions, we obtain
\[\frac{\mathrm{d}n(t)}{\mathrm{d}t}+\frac{\mathcal{K}}{2}n(t)^{3}=\frac{\mathcal{K}}{2}n(t)\int\mathrm{d}^{2}\mathbf{r}\ \langle\rho(\mathbf{r},t)^{2}\rangle, \tag{15}\]
with \(\langle\ldots\rangle\) denoting the average over initial conditions. Fourier transforming \(\rho(\mathbf{r},t)\) in space, the second equation of (12) becomes
\[\partial_{t}\rho(\mathbf{k},t)+\tilde{D}k^{4}\rho(\mathbf{k},t)=-Qn(t)\rho(\mathbf{k},t) \tag{16}\]
away from \(k=0\) and \(\partial_{t}\rho(\mathbf{0},t)=0\) at \(k=0\). This equation can be solved exactly to give
\[\rho(\mathbf{k},t)=\rho(\mathbf{k},0)\exp\left(-\tilde{D}k^{4}t-Q\int_{0}^{t} \mathrm{d}t^{\prime}n(t^{\prime})\right). \tag{17}\]
Substituting the solution \(\rho(\mathbf{k},t)\) into (15) and performing the average over initial conditions (14), the equation of motion for \(n(t)\) becomes
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}+\frac{\mathcal{K}}{2}n^{3}&=\frac{\mathcal{K}}{2}n\int\mathrm{d}^{2}\mathbf{k}\exp\left(-2\tilde{D}k^{4}t-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\\ &\simeq\mathcal{K}n\exp\left(-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\frac{1}{\sqrt{2\tilde{D}t}}\end{split} \tag{18}\]
This equation reduces to the mean field solution (11) when the right hand side can be ignored. The self-consistency of the mean field approximation can be determined as follows. Substituting the mean field solution (11) into (18), the terms on the left hand side decay as \(t^{-3/2}\), while the right hand side decays superpolynomially as \(t^{-1}\exp(-\sqrt{t})\). We see that the mean field solution is valid asymptotically for any nonzero \(Q\). This justifies the agreement between the numerical simulations of the isotropic dipole-conserving model and the fit obtained from mean-field reaction-diffusion theory.
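Explicitly, with \(n(t)=(\mathcal{K}t)^{-1/2}\) one has \(\int_{0}^{t}\mathrm{d}t^{\prime}\,n(t^{\prime})=2\sqrt{t/\mathcal{K}}\), so the right hand side of (18) scales as

\[\mathcal{K}\,n\,\exp\left(-2Q\int_{0}^{t}\mathrm{d}t^{\prime}\,n(t^{\prime})\right)\frac{1}{\sqrt{2\tilde{D}t}}\sim t^{-1}\,\mathrm{e}^{-4Q\sqrt{t/\mathcal{K}}},\]

while \(\mathrm{d}n/\mathrm{d}t\) and \(\mathcal{K}n^{3}/2\) both scale as \(t^{-3/2}\).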
## V Comparison to Experiment on LaTe\({}_{3}\)
A recent experiment, which served as motivation for this work, observed anomalously slow dynamics of vortex strings in LaTe\({}_{3}\)[45], whose behavior was not explained by a Ginzburg-Landau type phenomenological theory. The experiment pulses the CDW state of LaTe\({}_{3}\) to photoinduce vortex strings and uses ultrafast x-ray scattering to resolve the time dynamics of the resulting non-equilibrium state. X-ray scattering measures the structure factor \(S(\mathbf{k},t)\), which takes the form
\[S(\mathbf{k},t)=g(t)F[kL(t)] \tag{19}\]
where \(F\) is a universal function and \(L(t)\sim t^{\beta}\) corresponds to the average distance between topological defects. They find a scaling exponent \(\beta=0.29\), which is inconsistent with the standard result \(\beta=0.5\) for diffusive
vortex decay in superfluid thin films [42] or superfluid ultracold atomic gases [57].
Because vortex strings are a codimension-two defect, the density of defects scales as \(n(t)\sim L(t)^{-2}\sim t^{-2\beta}\). While our model of uniaxial CDW vortex relaxation is strictly two-dimensional, it may capture similar phenomena to the experiment, so long as the bending of vortex strings does not play a qualitatively important role in the dynamics (note that 2d vs. 3d does not change \(\beta\) in ordinary superfluids). Above, we computed \(n(t)\) within our model of CDW vortex relaxation and found \(n(t)\sim t^{-\alpha}\) with \(\alpha=0.65\pm 0.02\). This yields \(\beta=0.325\pm 0.01\), which is quite close to the experimentally observed value. Importantly, our computation of \(\beta\) goes beyond the Ginzburg-Landau type phenomenological theory, which produces \(\beta=1/4\) and \(\beta=1/2\) for conserved and non-conserved dipole moments, respectively.
## VI Conclusion
We have constructed a minimal dissipative model of kinematically-constrained vortices, relevant to dynamics over a large range of time scales in uniaxial CDWs. While our isotropic dipole-conserving model agrees well with simple mean-field-theoretic arguments, the experimentally relevant model where only one component of dipole is conserved exhibits anomalous exponents that are close to those observed in a recent experiment on LaTe\({}_{3}\)[45]. Our results therefore provide a quantitative theory for how experimentally-observed subdiffusive dynamics of solid-state defects follows from emergent mobility restrictions, with direct implications for experiment. Generalizing our theory to broader settings, including the constrained dynamics of topological defects in three dimensional charge density waves, remains an interesting open problem that is - at least in principle - straightforwardly tackled using the methods described here.
## Acknowledgements
We thank Leo Radzihovsky, Mariano Trigo and Gal Orenstein for useful discussions. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award number DE-SC0014415 (MQ), the Alfred P. Sloan Foundation through Grant FG-2020-13795 (AL), and the National Science Foundation through CAREER Grant DMR-2145544 (AL).
## Appendix A Review of effective field theory
### Formalism
We review the general effective field theory (EFT) for stochastic dynamics following [21]. Let \(\mathbf{q}=(q_{1},\ldots,q_{M})\) be the "slow" degrees of freedom that we keep track of in the EFT. To begin, we assume the existence of a stationary distribution
\[P_{eq}\propto e^{-\Phi[\mathbf{q}]}, \tag{A1}\]
where \(\Phi[\mathbf{q}]\) is a function(al) of the degrees of freedom \(\mathbf{q}\). The EFT describes the evolution of the probability distribution \(P[\mathbf{q},t]\) as it relaxes toward the stationary distribution (A1). The EFT is encoded by an action of the form
\[S=\int\mathrm{d}t\sum_{a}\left[\pi_{a}\partial_{t}q_{a}-H(\pi_{a},q_{a})\right] \tag{A2}\]
where \(\pi_{a}\) are "noise" variables conjugate to \(q_{a}\). We will refer to \(H(\pi_{a},q_{a})\) as the EFT Hamiltonian.
There are several general conditions that \(H(\pi_{a},q_{a})\) must obey for the action to represent a sensible EFT. We present the general principles that the dynamics must obey and the corresponding consequences for \(H(\pi_{a},q_{a})\); for a derivation of the latter from the former, see [21]. First, conservation of probability enforces
\[H(\pi_{a}=0,q_{a})=0. \tag{A3}\]
This ensures that all terms of \(H\) have at least one factor of \(\pi_{a}\). Stationarity of \(P_{eq}\) places another constraint on the allowed form of \(H(\pi_{a},q_{a})\). Define the generalized chemical potentials
\[\mu_{a}=\frac{\partial\Phi}{\partial q_{a}}. \tag{A4}\]
Stationarity of \(P_{eq}\) implies that
\[H(\pi_{a}=i\mu_{a},q_{a})=0. \tag{A5}\]
Finally, we assume that the degrees of freedom \(\mathbf{q}\) undergo stochastic dynamics whose fluctuations are bounded. This enforces the condition
\[\mathrm{Im}(H)\leq 0. \tag{A6}\]
In addition, we may require that the dynamics respect a version of time-reversal symmetry. Let \(\mathbb{T}\) denote the time-reversal operation, which sends \(t\to-t\) and \(\mathbf{q}\to\mathbb{T}(\mathbf{q})\). On the conjugate noise variables \(\pi_{a}\), time reversal acts as
\[\pi_{a}\to-\mathbb{T}(\pi_{a})+i\mu_{a}. \tag{A7}\]
Note that this is a \(\mathbb{Z}_{2}\) transformation.
The conditions on the EFT Hamiltonian \(H(\pi_{a},q_{a})\) significantly constrain the terms which can appear. At the quadratic level, the Hamiltonian must take the form
\[H(\pi_{a},q_{a})=\sum_{ab}i\pi_{a}\mathcal{Q}_{ab}(\pi_{b}-i\mu_{b}) \tag{A8}\]
for matrix \(\mathcal{Q}_{ab}\). Decomposing \(\mathcal{Q}_{ab}=A_{ab}-\frac{1}{2}S_{ab}\) into its symmetric part \(-\frac{1}{2}S\) and antisymmetric part \(A\), we see
that \(S\) must be positive definite due to (A6). Symmetric contributions to \(\mathcal{Q}_{ab}\) are dissipative, while antisymmetric contributions are non-dissipative.
We conclude the review of the formalism with a brief discussion of how additional conservation laws can be accounted for in the dynamics. Suppose we would like to enforce that a quantity \(F(\mathbf{q})\) is conserved. By Noether's theorem, this is equivalent to enforcing the shift symmetry
\[\pi_{a}\rightarrow\pi_{a}+\frac{\partial F}{\partial q_{a}} \tag{A9}\]
on the EFT Hamiltonian \(H(\pi_{a},q_{a})\). Naturally, enforcing this symmetry leads to constraints on the allowed terms which can appear in the EFT Hamiltonian. We will encounter many examples of conservation laws below.
### Example: diffusion
Let us illustrate how diffusion of a single conserved density can be recovered within this framework. The only degree of freedom we keep track of is \(\mathbf{q}=\rho(x)\), which is a conserved density. Its corresponding conjugate field is \(\pi(x)\). We take the stationary distribution to be
\[\Phi[\rho(x)]=\int\mathrm{d}^{d}x\ \frac{1}{2}\chi\rho^{2}+\ldots \tag{A10}\]
where the terms in \(\ldots\) are higher order in \(\rho\). The chemical potential to leading order is \(\mu=\chi\rho\). In addition to the aforementioned conditions on \(H\), we also demand that \(Q=\int\rho\) is conserved. Applying the continuum analogue of (A9), we require that the EFT Hamiltonian is invariant under
\[\pi\rightarrow\pi+c(t). \tag{A11}\]
To quadratic order \(H\) takes the form (A8); the above symmetry fixes it to be
\[H(\pi,\rho)=-i\sigma\nabla\pi\nabla(\pi-i\mu)+\cdots. \tag{A12}\]
This term is dissipative since it is a symmetric contribution to (A8). The equation of motion from varying \(\pi\) (and then setting \(\pi\) to zero) is
\[\partial_{t}\rho-\nabla(D\nabla\rho)=0 \tag{A13}\]
which is the diffusion equation with \(D=\chi\sigma\).
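Explicitly, varying the action with respect to \(\pi\) and then setting \(\pi=0\) (taking \(\sigma\) constant) gives

\[0=\frac{\delta S}{\delta\pi}\bigg{|}_{\pi=0}=\partial_{t}\rho-\sigma\nabla^{2}\mu=\partial_{t}\rho-\chi\sigma\nabla^{2}\rho,\]

which is (A13).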
### Example: Hamiltonian mechanics
The formalism can also capture Hamiltonian mechanics in the presence of dissipation. We take our degrees of freedom to be \(\mathbf{q}=(x_{1},\ldots,x_{M},p_{1},\ldots,p_{M})=(\mathbf{x},\mathbf{p})\), with the usual Poisson brackets
\[\begin{split}\{x_{i},p_{j}\}&=\delta_{ij}\\ \{x_{i},x_{j}\}&=\{p_{i},p_{j}\}=0\end{split}. \tag{A14}\]
We assume that there is a Hamiltonian \(\mathcal{H}(x,p)\) (to be distinguished from the \(H(\pi_{a},q_{a})\) of the effective field theory) which generates time evolution in the absence of noise and defines the equilibrium distribution
\[P_{eq}\propto e^{-\mathcal{H}(x,p)}. \tag{A15}\]
Correspondingly, the chemical potentials are \(\mathbf{\mu}=(\partial\mathcal{H}/\partial\mathbf{x},\partial\mathcal{H}/\partial\mathbf{p})\). Hamilton's equations in the absence of dissipation can be reproduced from the EFT action
\[S=\int\mathrm{d}t\left[\sum_{a}\pi_{a}\partial_{t}q_{a}-\sum_{ab}\pi_{a}\{q_{a},q_{b}\}\mu_{b}\right]. \tag{A16}\]
The second term is an antisymmetric contribution to the EFT Hamiltonian (A8), so it is dissipationless as expected. We can add dissipation by including in \(H(\pi_{a},q_{a})\) a term
\[-i\pi_{a}S_{ab}(\pi_{b}-i\mu_{b}) \tag{A17}\]
where \(S_{ab}\) is a positive definite symmetric matrix.
Additional conservation laws are implemented by enforcing invariance under (A9). In the absence of dissipation, this is equivalent to the condition
\[\{F,\mathcal{H}\}=0 \tag{A18}\]
for a conserved quantity in Hamiltonian mechanics.
### Dipole-conserving vortices
We now derive the model of dipole-conserving vortices with dissipation presented in the main text. Recall that the dynamics without dissipation are characterized by the Poisson brackets (1), (2) and Hamiltonian (3). Following the previous subsection, we define the generalized chemical potentials \(\mu_{i}^{\alpha}=\partial\mathcal{H}/\partial r_{i}^{\alpha}\) and write the dissipationless contribution to the EFT Lagrangian as
\[\begin{split} L&=\sum_{\alpha,i}\left(\pi_{i}^{\alpha}\partial_{t}r_{i}^{\alpha}-\pi_{i}^{\alpha}\sum_{\beta,j}\{r_{i}^{\alpha},r_{j}^{\beta}\}\mu_{j}^{\beta}\right)\\ &=\sum_{\alpha,i}\left(\pi_{i}^{\alpha}\partial_{t}r_{i}^{\alpha}-\pi_{i}^{\alpha}\frac{1}{\Gamma_{\alpha}}\sum_{j}\epsilon_{ij}\mu_{j}^{\alpha}\right).\end{split} \tag{A19}\]
The ordinary mutual friction term which appears in dissipative models of vortices is recovered by including
\[\sum_{i\alpha}-i\frac{\gamma}{2}\pi_{i}^{\alpha}\left(\pi_{i}^{\alpha}-i\mu_{i}^{\alpha}\right) \tag{A20}\]
as a term in \(H(\pi_{a},q_{a})\). This is simply (A17) where \(S_{ab}\) is diagonal. The resulting equations of motion are given by (5).
As noted in the main text, under these dynamics dipole moment is not conserved. The total dipole moment is given by \(D_{i}(\mathbf{q})=\sum_{\alpha}\Gamma_{\alpha}r_{i}^{\alpha}\), so conservation of dipole
moment corresponds via (A9) to invariance under \(\pi_{i}^{\alpha}\to\pi_{i}^{\alpha}+\Gamma_{\alpha}\delta_{ik}\). It is straightforward to see that the term (A20) is not invariant under this transformation.
To get a dissipative term which respects dipole conservation, the simplest term at quadratic order is given by
\[\sum_{i\alpha\beta}-i\frac{\gamma^{\prime}}{2}f_{\alpha\beta}\left(\frac{\pi_{i}^{\alpha}}{\Gamma_{\alpha}}-\frac{\pi_{i}^{\beta}}{\Gamma_{\beta}}\right)\left(\frac{\pi_{i}^{\alpha}-i\mu_{i}^{\alpha}}{\Gamma_{\alpha}}-\frac{\pi_{i}^{\beta}-i\mu_{i}^{\beta}}{\Gamma_{\beta}}\right) \tag{A21}\]
where \(f_{\alpha\beta}\coloneqq f(|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|)\) is a function which depends only on the distance between vortices \(\alpha\) and \(\beta\). This term is clearly invariant under the transformation \(\pi_{i}^{\alpha}\to\pi_{i}^{\alpha}+\Gamma_{\alpha}\delta_{ik}\). The function \(f_{\alpha\beta}\) is not constrained by the EFT; we choose \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\). While the nonlocality of \(f_{\alpha\beta}\) may seem unnatural, we emphasize that any microscopic dynamics preserving dipole moment leads to an effective dissipative term of this form, up to a choice of \(f_{\alpha\beta}\). The resulting equations of motion are given in (6).
### On the choice of \(f_{\alpha\beta}\)
Given the freedom within the EFT to make different choices of \(f_{\alpha\beta}\), it is natural to ask what, if anything, singles out the choice \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\). We argue via a simple scaling argument that this is the least nonlocal \(f_{\alpha\beta}\) which does not cause generic vortex dipoles to escape to infinity without annihilating.
Consider a vortex dipole together with a third vortex as in Fig. 2(b). Let us call the distance between the two vortices comprising the dipole \(d\) and the distance between the dipole and the third vortex \(R\). When dissipation is absent, an isolated dipole will travel at constant speed perpendicular to its dipole moment, so \(R\sim t\). In the presence of dissipation, these vortices will approach each other with speed \(\dot{d}\sim-\gamma\) where \(\gamma\) is the effective dissipation strength. Choosing \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-\eta}\), the effective dissipation scales as \(\gamma\sim 1/R^{\eta}\). Altogether, we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}d\sim-\frac{1}{t^{\eta}}. \tag{A22}\]
When \(\eta>1\), \(d\) asymptotes to a constant as \(t\to\infty\), and the vortex dipole escapes to infinity without annihilating; when \(\eta<1\), the dipole annihilates in finite time. When \(\eta=1\), the dipole always annihilates eventually, but the time to annihilation is exponential in the initial separation \(d_{0}\). For \(\eta=1\) we therefore expect to see the dynamics slow down dramatically when the density drops to a point where inter-vortex separation is \(O(1)\) in our dimensionless distance units, which is indeed seen in our numerics.
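Integrating this estimate makes the three cases explicit:

\[d(t)\simeq d_{0}-\int_{1}^{t}\frac{\mathrm{d}t^{\prime}}{t^{\prime\,\eta}}=\begin{cases}d_{0}-\dfrac{t^{1-\eta}-1}{1-\eta},&\eta\neq 1\\ d_{0}-\log t,&\eta=1\end{cases}\]

so for \(\eta>1\) the separation saturates at a finite value, for \(\eta<1\) it vanishes in finite time, and for \(\eta=1\) it vanishes only at \(t\sim e^{d_{0}}\).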
That the vortex dipole escapes to infinity is not an issue if we perform our calculations using periodic boundary conditions; however, this complicates the problem significantly and requires substantially more computational resources. To avoid this issue we simply choose \(f_{\alpha\beta}\) so that dipoles don't escape to infinity, which results in the choice in the main text.
## Appendix B Review of two-species annihilation
In this appendix we review the reaction-diffusion model governing two-species annihilation, considering the cases with and without long-range interactions. We closely follow the discussion in [56].
Ordinary two-species annihilation processes are governed at the mean-field level by a kinetic rate equation
\[\frac{\mathrm{d}\rho_{A}}{\mathrm{d}t}=\frac{\mathrm{d}\rho_{B}}{\mathrm{d}t}=-\mathcal{K}\rho_{A}\rho_{B}\;, \tag{B1}\]
which captures the fact that an annihilation process requires two species to be present at the same location. Let us introduce the number density \(n\) and the charge density \(\rho\) as
\[n=\rho_{A}+\rho_{B},\qquad\rho=\rho_{A}-\rho_{B}. \tag{B2}\]
In terms of \(n\) and \(\rho\), the rate equation becomes
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}&=-\frac{\mathcal{K}}{2}(n^{2}-\rho^{2})\\ \frac{\mathrm{d}\rho}{\mathrm{d}t}&=0\end{split} \tag{B3}\]
where the latter equation shows that charge is conserved. When there is an equal initial density of \(\rho_{A}\) and \(\rho_{B}\), _i.e._ at charge neutrality, the asymptotic behavior of this equation is given by
\[n(t)\sim(\mathcal{K}t)^{-1}. \tag{B4}\]
The mean-field description is valid in dimensions above the critical dimension \(d_{c}=4\). Below the critical dimension, diffusive and stochastic effects become important, modifying the long-time behavior to
\[n(t)\sim(Dt)^{-d/4}. \tag{B5}\]
Here \(D\) is the diffusion constant. This behavior and the critical dimension can be derived by considering the long-range interacting case discussed below and taking the limit where the interaction strength vanishes, as was shown in [56].
We now treat the case of an ordinary \(A+B\to 0\) reaction-diffusion system with long-range interactions, reviewing the calculation of [56]. In terms of the number density \(n\) and charge density \(\rho\), the equations of motion are
\[\begin{split}\partial_{t}n-D\nabla^{2}n&=-\mathcal{K}\left[n^{2}-\rho^{2}\right]-Q\nabla\left(\rho\nabla V\right)\\ \partial_{t}\rho-D\nabla^{2}\rho&=-Q\nabla\left(n\nabla V\right)\end{split} \tag{B6}\]
where the potential \(V(\mathbf{r},t)\) is given by
\[V(\mathbf{r},t)=-\int\mathrm{d}^{2}\mathbf{r}^{\prime}\rho(\mathbf{r}^{\prime},t)\log|\mathbf{r}^{\prime}-\mathbf{r}|. \tag{B7}\]
We have introduced a diffusion coefficient \(D\) and a coefficient \(Q\sim\Gamma^{2}\) proportional to the square of the vorticity. The authors in [56] analyzed these equations using a self-consistent approximation where fluctuations of the total number density \(n\) are neglected while fluctuations of the charge density \(\rho\) are kept. In other words, the number density \(n(\mathbf{r},t)=n(t)\) is taken to be a spatially independent function of time. Our goal will be to determine the number density \(n(t)\) averaged over an ensemble of initial conditions for \(\rho(\mathbf{r},0)\), which we take to be
\[\begin{split}\langle\rho(\mathbf{r},0)\rangle&=0\\ \langle\rho(\mathbf{r}_{1},0)\rho(\mathbf{r}_{2},0)\rangle&=n_{0}^{2}\delta^{(2)}(\mathbf{r}_{1}-\mathbf{r}_{2})\end{split}. \tag{B8}\]
We will normalize \(n_{0}=1\). Approximating \(n(\mathbf{r},t)\simeq n(t)\), we average the first equation of (B6) over all space and over initial conditions to obtain
\[\frac{\mathrm{d}n(t)}{\mathrm{d}t}+\mathcal{K}n(t)^{2}=\frac{\mathcal{K}}{V}\int\mathrm{d}^{2}\mathbf{r}\;\langle\rho(\mathbf{r},t)^{2}\rangle\;, \tag{B9}\]
with \(\langle\ldots\rangle\) denoting the average over initial conditions. Fourier transforming \(\rho(\mathbf{r},t)\) in space, the second equation of (B6) gives
\[\partial_{t}\rho(\mathbf{k},t)+Dk^{2}\rho(\mathbf{k},t)=-Qn(t)\rho(\mathbf{k},t) \tag{B10}\]
away from \(k=0\) and \(\partial_{t}\rho(\mathbf{0},t)=0\) at \(k=0\). The equation of motion for \(\rho(\mathbf{k},t)\) can be solved exactly to yield
\[\rho(\mathbf{k},t)=\rho(\mathbf{k},0)\exp\left(-Dk^{2}t-Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right). \tag{B11}\]
Substituting the solution \(\rho(\mathbf{k},t)\) into (B9) and performing the average (B8) gives
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}+\mathcal{K}n^{2}&=\mathcal{K}\int\mathrm{d}^{2}\mathbf{k}\exp\left(-2Dk^{2}t-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\\ &=\mathcal{K}\exp\left(-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\frac{\pi}{2Dt}\end{split} \tag{B12}\]
where in the second line we performed the integral over \(\mathbf{k}\). The mean field solution (B4) was obtained by ignoring the RHS of the above equation. This is valid if, at long times, terms on the RHS decay more quickly than terms on the LHS. Substituting the mean field solution, we see that the terms on the LHS decay as \(t^{-2}\), while the RHS decays as \(t^{-\alpha}\) with \(\alpha=2Q/\mathcal{K}+1\). In other words, the mean field solution is valid for \(2Q/\mathcal{K}>1\).
Repeating the calculation for general spatial dimension \(d\) (while modifying \(V\) to be the Coulomb potential in \(d\) dimensions), one finds that the RHS decays as \(t^{-\alpha}\) with \(\alpha=d/2+2Q/\mathcal{K}\). This gives the critical dimension above which mean field theory is valid as \(d_{c}=4\left(1-Q/\mathcal{K}\right)\). For a system without long-range interactions (\(Q=0\)) as in the previous subsection, this reproduces the upper critical dimension \(d_{c}=4\).
We construct a minimal model of dissipative vortex dynamics in two dimensions, subject to a kinematic constraint: conservation of dipole moment. This additional conservation law implies anomalously slow relaxation of the vortices. We argue that our vortex dynamics model is relevant over a wide range of time scales during a quench into a uniaxial charge density wave state. Our predictions are consistent with recent experiments on uniaxial charge density wave formation in $\mathrm{LaTe}_3$.
2309.15259 | SLIQ: Quantum Image Similarity Networks on Noisy Quantum Computers | Exploration into quantum machine learning has grown tremendously in recent
years due to the ability of quantum computers to speed up classical programs.
However, these efforts have yet to solve unsupervised similarity detection
tasks due to the challenge of porting them to run on quantum computers. To
overcome this challenge, we propose SLIQ, the first open-sourced work for
resource-efficient quantum similarity detection networks, built with practical
and effective quantum learning and variance-reducing algorithms. | Daniel Silver, Tirthak Patel, Aditya Ranjan, Harshitta Gandhi, William Cutler, Devesh Tiwari | 2023-09-26T20:33:26 | http://arxiv.org/abs/2309.15259v1 | # Sliq: Quantum Image Similarity Networks on Noisy Quantum Computers
###### Abstract
Exploration into quantum machine learning has grown tremendously in recent years due to the ability of quantum computers to speed up classical programs. However, these efforts have yet to solve unsupervised similarity detection tasks due to the challenge of porting them to run on quantum computers. To overcome this challenge, we propose Sliq, the first open-sourced work for resource-efficient quantum similarity detection networks, built with practical and effective quantum learning and variance-reducing algorithms.
## Introduction
**Brief Overview and Motivation.** Rapid advancements in quantum machine learning (QML) have increasingly allowed researchers to leverage the benefits of quantum computing in solving ML problems. Although the degree of advantage for different ML tasks is still being explored, recent advances suggest that classification and solving high energy physics problems are among the most promising candidates [14, 17]. In particular, recent efforts have focused on developing high-quality classification circuits for quantum computers. While the resulting classifiers have been highly effective, they are restricted because classification, like all other supervised learning methods, requires labeled data. In many real-world scenarios, labeled data is either not readily available (e.g., diagnosing complex diseases based on medical images without quantifiable ground truth [16]) or not feasible to obtain (e.g., a visual sketch of a suspect) [20]. In such scenarios, comparison across unlabeled inputs is critical for learning, prediction, and ground truth generation - hence the popularity of similarity detection for various tasks including recommendation systems [21]. However, no QML circuit designed to predict similarity on unlabeled data is currently available.
**Sliq: Solution and Approach.** To bridge this gap, a naive design for similarity detection might create pairs of similar and dissimilar inputs from the training dataset, much like classical Siamese and Triplet networks [15, 16] (e.g., Anchor-Positive and Anchor-Negative pairs), and pass them sequentially over a variational quantum circuit (VQC) [1] to minimize the loss function. While straightforward, this design is resource-inefficient and does not fully leverage the unique properties of quantum computing. To address this gap in unsupervised QML, we propose Sliq, which mitigates these challenges with several novel design elements.
First, Sliq addresses resource inefficiency by training both images in the pair at once via a superimposed state. This provides multiple advantages: (1) it reduces the overall quantum resource requirements by reducing the number of runs, and (2) it allows the VQC to learn more effectively since the superposition provides explicit hints about the similarity between the data. Next, to take advantage of entanglement in quantum computing systems, Sliq interweaves the features of both inputs and explicitly entangles them at each layer in the VQC, decreasing the distance of corresponding features in Hilbert space. Sliq's design ensures that interwoven features from different inputs are embedded into all physical qubits of the learning circuit so that the entanglement effects are captured in the measurement qubits. To ensure that Sliq is practical and effective on current error-prone quantum computers, Sliq keeps the number of parameters in the learning circuit minimal to mitigate the compounding noise effects on real quantum computers.
Unfortunately, incorporating superposition and entanglement properties creates new challenges. The identities of individual inputs in a pair (e.g., Anchor input in the Anchor-Positive pair) are indistinguishable due to entanglement, and the projection of the same input on the classical space is inconsistent across different runs. Sliq introduces a new training "loss" estimation and improves quantum embedding methods to reduce projection variance, resulting in more robust training and a network that is resilient to hardware errors on real quantum systems. Overall, Sliq demonstrates that the combination of training on entangled pairs and utilizing a projection variance-aware loss estimation yields effective similarity detection, even on current noisy quantum computers.
**Contributions of Sliq.**
**I.** To the best of our knowledge, Sliq is the first method to build a practical and effective _quantum learning circuit for similarity detection on NISQ-era quantum computers_. Sliq is available as open-source framework at [https://github.com/SilverEngineered/Sliq](https://github.com/SilverEngineered/Sliq).
**II.** SlIQ's design demonstrates how to exploit the superposition and entanglement properties of quantum computing systems for similarity detection. It builds a _resource-efficient training pipeline_ by creating interwoven, entangled input pairs on a VQC, and it applies new robust methods for the quantum embedding of classical inputs.
**III.** Our simulations and real-computer evaluations demonstrate that SlIQ achieves a 31 percentage point improvement in similarity detection over a baseline quantum triplet network on a real-world, unlabeled dataset [2], while prior state-of-the-art works in QML only perform classification and require labeled input data [14, 15]. We also show that SlIQ performs competitively for classification tasks on labeled data, despite classification not being a primary objective of a similarity network.
## Background
**Qubits, Quantum Gates, and Quantum Circuits.** A quantum bit (qubit) has the ability to attain a _superposition_ of its two basis states: \(\ket{\Psi}=\alpha_{0}\ket{0}+\alpha_{1}\ket{1}\). Here, \(\ket{0}\) and \(\ket{1}\) are the two basis states, \(\alpha_{0}\) and \(\alpha_{1}\) are normalized, complex-valued coefficients, and \(\Psi\) is the overall qubit state in superposition. For an \(n\)-qubit computing system, the overall system state is represented as: \(\ket{\Psi}=\sum_{k=0}^{k=2^{n}-1}\alpha_{k}\ket{k}\). When this state is _measured_, the _superposition collapses_, and the system is observed in state \(\ket{k}\) with probability \(|\alpha_{k}|^{2}\).
A qubit can be put in arbitrary superpositions using the \(R3(p_{1},p_{2},p_{3})\)_quantum gate_. The single-qubit \(R3\) gate has three parameters \((p_{1},p_{2},p_{3})\) that can be adjusted to achieve the desired state [1]. Multiple qubits can be _entangled_ together to form an \(n\)-qubit system using two-qubit gates (e.g., the \(CX\) gate). These non-parameterized two-qubit gates and tunable \(R3\) gates may be combined to achieve any \(n\)-qubit computation.
A sequence of quantum gates applied to a system of qubits forms a _quantum circuit_, at the end of which the qubits are measured to obtain the circuit's output. Fig. 1 shows an example of a quantum circuit in the "Variational Quantum Circuit (VQC)" box. The horizontal lines represent five qubit states, to which the \(R3\) and two-qubit gates are applied over time, and the measurement gates are applied at the end.
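As a minimal, self-contained sketch of these ingredients (using Pennylane's `qml.Rot` in the role of the three-parameter \(R3\) gate; the device and parameter values are illustrative):

```python
import pennylane as qml

# An R3-style rotation creates a superposition on qubit 0, a CX (CNOT)
# entangles it with qubit 1, and measurement returns the probabilities
# |alpha_k|^2 of observing each basis state.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(p1, p2, p3):
    qml.Rot(p1, p2, p3, wires=0)   # tunable single-qubit rotation (R3)
    qml.CNOT(wires=[0, 1])         # two-qubit entangling gate (CX)
    return qml.probs(wires=[0, 1])

print(circuit(0.3, 1.2, -0.7))     # probabilities of |00>, |01>, |10>, |11>
```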
**Variational Quantum Circuits and Quantum Machine Learning.** Whereas the gates in a normal quantum circuit are deterministic and predefined, a _Variational Quantum Circuit (VQC)_ is a quantum circuit that utilizes parameterized gates that are tuned to optimize a certain objective [11]. This objective can take many forms, from finding the minimal energy of a molecule's Hamiltonian to maximizing the rate of return of a financial portfolio or, in SlIQ's case, optimizing the loss function of a _quantum machine learning (QML)_ task [12, 13]. These circuits are "variational" because the gates' parameter values vary during the optimization process. These gates are adjusted according to a classical optimizer running on a classical machine, while the variational circuit itself is executed on a quantum computer. Fig. 1 demonstrates this hybrid feedback approach between quantum execution and classical optimization.
Although the optimization of VQC parameters is performed classically, a quantum advantage can be obtained from the circuit's execution on quantum hardware: the circuit has far fewer parameters to optimize than the classical version of the same algorithm [2]. This advantage is gained by utilizing the superposition and entanglement properties of quantum computers that are not available on classical computers. For example, a classical image classification neural network typically consists of millions of parameters while a well-designed quantum network only requires hundreds [11]. However, the accuracy of such quantum networks has been limited due to prevalent noise on current quantum computers [12], especially for unsupervised learning [11] - this paper aims to overcome this barrier.
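A hedged sketch of this hybrid loop is shown below; the layer structure, toy cost, and hyperparameters are illustrative rather than any specific network from the literature.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def vqc(params):
    # repeating layers of tunable R3 rotations plus an entangling CNOT
    for layer in params:
        for wire in range(2):
            qml.Rot(*layer[wire], wires=wire)
        qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))   # measured value serves as a toy cost

params = np.array(np.random.uniform(0, np.pi, (4, 2, 3)), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.01)
for _ in range(100):                   # classical optimizer tunes the gates
    params = opt.step(vqc, params)
```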
**Noise on NISQ Computers.** Contemporary quantum computers suffer from noise during program execution due to various imperfections in the hardware of physical qubits, causing errors in the program output. SlIQ aims to achieve effective results in the face of these challenges.
## SlIQ: Design and Implementation
In this section, we discuss the key design elements of SlIQ. Before presenting the details of SlIQ, we first describe a base design of a quantum machine learning circuit for similarity detection, which we refer to as _"Baseline"_. First, we discuss how the baseline design leverages widely-used variational quantum circuits to build a quantum learning network that performs the similarity detection task for labeled/unlabeled data. Next, we discuss the baseline design's resource inefficiency and its inability to exploit the power of superposition and entanglement. SlIQ's design addresses those limitations and provides superior performance, as shown in our evaluation.
**Baseline Design.** The baseline design has three major components. The first component is generating the input, which consists of triplets of Anchor, Positive, and Negative inputs - similar to the classical Siamese-based and Triplet models widely used in classical similarity networks [23, 14, 15]. Then the encoding of the input features to the physical qubits is performed. To achieve this, we perform amplitude embedding [16] on the inputs one by one
Figure 1: Hybrid quantum-classical procedure of executing and optimizing a variational quantum circuit (VQC).
for all three inputs (Fig. 2). The amplitude embedding procedure embeds classical data as quantum data in a Hilbert space. Although recent prior works have utilized principal component analysis (PCA) prior to amplitude embedding for feature encoding [2], the baseline does not employ PCA because higher performance is observed when keeping the features intact and padding with 0's as necessary to maintain the full variance of the features. The second component is to feed these encoded features to a VQC for training and to optimize the VQC parameters to minimize the training loss (Fig. 2). The training loss is estimated by calculating the distance between the projections of the inputs on a 2-D space; the projection is obtained by measuring two qubits at the end of the VQC circuit (this is the third and final component). The loss is calculated as the squared distance between the anchor projection, \(A\), and the positive projection, \(P\), minus the squared distance between the anchor projection and the negative projection, \(N\). This is an \(L2\) variant of Triplet Embedding Loss and is formally defined as
\[L_{2}=\Big{\{}(A_{x}-P_{x})^{2}+(A_{y}-P_{y})^{2}\Big{\}}-\Big{\{}(A_{x}-N_{x} )^{2}+(A_{y}-N_{y})^{2}\Big{\}} \tag{1}\]
We note that the effectiveness of training can be adapted to any choice of dimension and shape of the projection space (2-D square box bounded between -1 and 1, in our case) as long as the choice is consistent among all input types (Anchor, Positive, and Negative inputs). A more critical feature is the repeating layers of the VQC which the baseline design chooses to be the same as other widely-used VQCs to make it competitive [11, 13].
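As a sketch, the baseline loss of Eq. (1) applied to measured 2-D projections looks as follows (the projection values are illustrative):

```python
import numpy as np

def triplet_l2_loss(A, P, N):
    """L2 variant of triplet embedding loss, Eq. (1); A, P, N are the 2-D
    projections measured in the anchor, positive, and negative runs."""
    return np.sum((A - P) ** 2) - np.sum((A - N) ** 2)

A = np.array([0.10, 0.20])
P = np.array([0.15, 0.25])
N = np.array([-0.80, 0.70])
print(triplet_l2_loss(A, P, N))  # negative: positive sits closer than negative
```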
### SliQ: Key Design Elements
SliQ builds off the baseline design and introduces multiple novel design aspects to mitigate the limitations of the baseline design. First, SliQ introduces the concept of training an input pair in the same run to leverage the superposition and entanglement properties of quantum computing systems.
**Input Feature Entanglement and Interweaving.** Recall that in the baseline design, each image type (Anchor, Positive, Negative) traverses through the variational quantum circuit one-by-one. The corresponding measurements at the end of each image-run produce coordinates on a 2-D plane that allows us to calculate similarity distance between A-P and A-N inputs. This allows us to calculate the loss that is targeted to be minimized over multiple runs during training. Unfortunately, this procedure requires performing three runs before the loss for a single (A, P, N) triplet input can be estimated, which is resource inefficient.
SliQ's design introduces multiple new elements to address this resource inefficiency. The first idea is to create two training pairs (Anchor-Positive and Anchor-Negative), and each pair is trained in a single run (demonstrated visually in Fig. 3). This design element improves the resource-efficiency of the training process - instead of three runs (one for each image type), SliQ requires only two runs. Note that, theoretically, it is possible to combine all three input types and perform combined amplitude embedding. However, in practice, this process is not effective because it
Figure 3: SliQ's procedure of combining A-P and A-N inputs into two runs, interweaving their feature space, and updating the variational structure to leverage the properties of quantum superposition and entanglement to reduce the number of runs and generate better results.
Figure 2: The baseline design inputs one image at a time, requiring three separate runs for A, P, and N.
becomes challenging for the quantum network to learn the distinction between the positive and negative input relative to the anchor input. Creating two pairs provides an opportunity for the quantum circuit to learn the similarity and dissimilarity in different pairs without dilution.
The second idea is to interweave the two input types in a pair before performing the amplitude embedding, and then to feed the output of the amplitude embedding circuit to the quantum circuit (Fig. 3). Interweaving provides the advantage of mapping features from different inputs to different physical qubits. This is particularly significant in mitigating the side-effects of noise on current NISQ-era quantum machines, where different physical qubits suffer from different noise levels [16]. If interweaving of images is not performed, we risk the network not learning a direct comparison between positionally equivalent features. SlIQ's interweaving mitigates this risk and makes it effective on NISQ-era quantum computers, a benefit we observed when comparing interweaving against simply layering the images.
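A minimal sketch of this pairing step, assuming flattened feature vectors (the wire count and inputs are illustrative):

```python
import numpy as np
import pennylane as qml

def interweave(a, b):
    """Alternate the flattened features of two inputs."""
    a, b = np.ravel(a), np.ravel(b)
    out = np.empty(2 * a.size)
    out[0::2], out[1::2] = a, b
    return out

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def embed_pair(a, b):
    x = interweave(a, b)
    x = np.pad(x, (0, 2 ** n_wires - x.size))  # pad with 0s to a power of 2
    qml.AmplitudeEmbedding(x, wires=range(n_wires), normalize=True)
    return qml.probs(wires=range(n_wires))

print(embed_pair(np.arange(6.0), np.arange(6.0, 12.0)))  # 12 -> 16 amplitudes
```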
As a final remark, we note that all these ideas are combined to leverage the power of entanglement and superposition in quantum systems - by training multiple inputs together, interweaving their features, putting them in superposition, and entangling them. While SlIQ's design to exploit superposition and entanglement is useful, it creates new challenges too. Next, we discuss the challenge of attaining projection invariance and novel solutions to mitigate it.
**Projection Variance Mitigation (PVM).** Recall that in the baseline design, we measure two qubits and project the input's position in a 2-D space. Over three runs, we receive three separate coordinates in 2-D space, which we can use to calculate the loss - as shown in Fig. 4 (left). Our objective is to minimize the overall loss, defined as below:
\[L_{obj}=\left(\left|A_{x}-P_{x}\right|+\left|A_{y}-P_{y}\right|\right)-\left( \left|A_{x}-N_{x}\right|+\left|A_{y}-N_{y}\right|\right) \tag{2}\]
Optimizing for the above objective function is relatively straightforward. However, this loss function becomes non-trivial when SlIQ introduces the idea of training input pairs. Recall that the inputs are interwoven (Fig. 3), and hence, our measurements need to capture the outputs of anchor and positive/negative features separately. SlIQ resolves these issues by increasing the number of qubits we measure. Instead of two qubits per run, SlIQ measures four qubits. In Fig. 3, these qubits are denoted at the end of both runs. To distinguish the anchor input, SlIQ enforces two a priori designated qubit measurements to correspond to the anchor in both runs. We note it is not critical which qubits are chosen to "represent" the anchor input as long as the choice is consistent. For example, qubits 1 and 2 could be tied to the anchor image, or qubits 1 and 3. So long as the choice does not change through training and evaluation, these options are computationally identical. However, this idea creates a major challenge - the coordinates corresponding to the anchor input may not project to the same point in our 2-D space. This is visually represented by the two points \((A_{NX},A_{NY})\) and \((A_{PX},A_{PY})\) in Fig. 4. Ideally, these two points should project onto the same coordinates.
The baseline design inherently has zero projection variance because it only has one measurement corresponding to the anchor input, and the loss for the positive and negative input was calculated from this absolute pivot. To mitigate this challenge, SlIQ designs a new loss function that accounts for minimizing this projection variance over training. As shown below, SlIQ's novel loss function has two components: (1) the traditional loss between the positive/negative input and the anchor input, and (2) a new consistency loss, which enforces positional embeddings that separate the entangled images at the time of measurement.
\[L_{pvm}=\left|A_{px}-A_{nx}\right|+\left|A_{py}-A_{ny}\right| \tag{3}\]
\[L_{total}=\alpha*L_{obj}+\beta*L_{pvm} \tag{4}\]
In Eq. 4, \(\alpha\) and \(\beta\) are hyperparameters that weight the objective function to balance accurate embeddings against precise (consistent) embeddings. For \(L_{obj}\), we use \((A_{px},A_{py})\) and \((A_{nx},A_{ny})\) for the positive and negative anchor values, respectively. Additionally, to ensure robustness across the circuit, the samples are reversed for the negative case: the pair (A, P) is run along with the pair (N, A). The consistency loss is then applied between the mappings of the anchor image, which now lie on different parts of the circuit. This additional measure ensures robustness by making the entire output of the circuit comply with the decided-upon separability, as opposed to just a few qubits. This technique also enables scaling to entanglement of more than two images on a single machine.
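Putting Eqs. (2)-(4) together, a sketch of the total loss on measured projections (the weights \(\alpha\), \(\beta\) are illustrative):

```python
import numpy as np

def sliq_loss(Ap, P, An, N, alpha=1.0, beta=1.0):
    """Total loss of Eq. (4). Ap and An are the anchor projections
    recovered from the (A, P) and (N, A) runs, respectively."""
    l_obj = np.sum(np.abs(Ap - P)) - np.sum(np.abs(An - N))   # Eq. (2)
    l_pvm = np.sum(np.abs(Ap - An))                           # Eq. (3)
    return alpha * l_obj + beta * l_pvm
```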
In Fig. 5, we show an overview for the design of SlIQ. The first step is to take the input dataset and create pairs of the form (Anchor, Positive) and (Anchor, Negative) for training and testing. Once these are formed, the network trains on the dataset classically as part of the hybrid quantum-classical model used in VQCs. Once the data is
Figure 4: While the baseline design has zero projection variance, SlIQ has to take additional steps to mitigate it.
Figure 5: Overview of the design of SlIQ including the steps of preparing the data for training, training the quantum network, and using the network for inference post-training.
trained, SlIQ performs similarity detection by performing inference on new pairs of data. This inference can be used to identify the most similar samples to one another, the most distant samples, and even serve as a classifier if an unsupervised clustering algorithm is used to generate clusters.
## Experimental Methodology
**Training and Testing Datasets.** SlIQ is evaluated on NIH AIDS Antiviral Screen Data Kramer, De Raedt, and Helma (2001), MNIST Deng (2012), Fashion-MNIST Xiao, Rasul, and Vollgraf (2017), and Flickr Landscape Chen, Lai, and Liu (2018). These datasets are chosen because (1) they cover both unlabeled (e.g., Flickr Landscape) and labeled datasets (e.g., AIDS, MNIST, Fashion-MNIST), and (2) they represent different characteristics (e.g., medical dataset, images, handwritten characters) and have real-world use cases (e.g., AIDS detection). We note that the size of the Flickr Landscape dataset is \(\approx\)25\(\times\) larger than the commonly used quantum machine learning image datasets of MNIST and Fashion-MNIST Xiao, Rasul, and Vollgraf (2017); Silver, Patel, and Tiwari (2022). This presents additional scaling challenges that we mitigate with the scalable design of SlIQ.
Flickr Landscape is an unlabeled dataset consisting of 4300 images of different landscapes spanning general landscapes, mountains, deserts, seas, beaches, islands, and Japan. The images are of different sizes but are cropped to 80\(\times\)80\(\times\)3 for consistency. The center is kept intact with all color channels. This dataset is unlabeled and serves to show how SlIQ performs on unlabeled data. NIH AIDS Antiviral Screen Data contains 50,000 samples of features alongside a label to indicate the status of the AIDS virus ("CA" for confirmed active, "CM" for confirmed moderately active, and "CI" for confirmed inactive). The MNIST dataset is a grayscale handwritten digit dataset Deng (2012) where we perform binary classification on '1's and '0's. The Fashion-MNIST dataset contains the same number of pixels and classes as MNIST, but the images are more complex in nature. In all datasets, 80% of the data is reserved for training and 20% reserved for testing.
**Experimental Framework.** The environment for SlIQ is Python3 with the Pennylane Bergholm et al. (2018) and Qiskit Aleksandrowicz et al. (2019) frameworks. Our quantum evaluations are simulated classically; inference results collected on real IBM quantum machines are indicated as such in the text. For setting up the datasets for testing and training, triplets are created in the form of an anchor image, a positive image, and a negative image.
For the unlabeled dataset used, three images are selected at random. The color frequency is evaluated for each image on the R, G, and B channels individually, then placed into eight bins for each color. The 24 total bins for each image are used to establish a ground truth similarity between the images, where the first image is the anchor, the image closest in L1 norm to the anchor is set as the positive image, and the image further away is set as the negative image. Once the triplets are created, the pixels within the images are interwoven with one another. The image is then padded with 0s to the nearest power of 2 for the amplitude embedding process.
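A sketch of this triplet construction, assuming 8-bit RGB images (the helper names are ours):

```python
import numpy as np

def color_histogram(img):
    """24-bin color-frequency signature: 8 bins per R, G, B channel."""
    return np.concatenate([np.histogram(img[..., c], bins=8, range=(0, 256))[0]
                           for c in range(3)])

def make_triplet(anchor, img1, img2):
    """The image closer to the anchor in L1 norm becomes the positive."""
    a, x, y = map(color_histogram, (anchor, img1, img2))
    d1, d2 = np.abs(a - x).sum(), np.abs(a - y).sum()
    pos, neg = (img1, img2) if d1 <= d2 else (img2, img1)
    return anchor, pos, neg
```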
In the case of the labeled datasets, the anchor and positive are chosen to be from the same class, while the negative image is chosen at random from another class. For evaluation, an anchor and positive are passed into the embedding and the four dimensions are used in a Gaussian Mixture Model to form clusters, which are then evaluated for accuracy. We use a batch size of 30 for all experiments with a GradientDescentOptimizer and a learning rate of \(0.01\). We train for 500 epochs on a four-layer network. The size of the network changes based on the dataset used: a four-qubit circuit is used for the AIDS dataset, 14 for Flickr Landscape, and 11 for MNIST and Fashion-MNIST. For the baseline, one fewer qubit is used for all circuits as it does not necessitate the additional qubit to store an additional sample on each run.
**Competing Techniques.** Our baseline scheme is the same as described earlier: it is a quantum analogue of a triplet network, where weights are shared and a single network is used for training and inference. Although SlIQ is not designed for classification tasks on labeled data, we also provide a quantitative comparison with the state-of-the-art quantum machine learning classifiers: Projected Quantum Kernel (PQK) (published in Nature'2021) Huang et al. (2021) and Quilt (published in AAAI'2022) Silver, Patel, and Tiwari (2022). The former, PQK, trains a classical network on quantum features generated by a specially designed quantum circuit. Datasets are modified to have quantum features that show the quantum advantage in training. While not feasible to implement on current quantum computers, we modify PQK's architecture to use fewer intermediate layers to reduce the number of gates and training time. The other comparison used, Quilt, is
Figure 6: SlIQ’s loss normalized against ground truth distance between color frequencies. The anchor image was compared to 100 images. _Only the top 3 correlations are shown._ The steep drop in correlation metric shows that the baseline is not effective.
an image classifier built on an ensemble of variational quantum circuits which uses the ensemble to mitigate noise in the NISQ-era of quantum computing.
**Figures of Merit.** We categorize the figures of merit by dataset type: unlabeled or labeled. As image similarity is inherently visual, in addition to quantitative success, we demonstrate qualitative success by including a few relevant snapshots. For our unlabeled results, we also examine quantitatively how well the overall predictions match the ground truth similarities, showing how well SliQ learns over a distribution. The specific details of the metrics are described alongside the corresponding results. For qualitative success on the unlabeled dataset, we show the images the model predicts to be most similar to a given anchor image. For our labeled datasets, we report the accuracy of SliQ. Because SliQ performs an embedding and does not classify, accuracy cannot be obtained directly from the output. For accuracy metrics, Gaussian Mixture Models are used to assign clusters for classification.
## Evaluation and Analysis
**SliQ effectively detects the similarity of samples in unlabeled datasets using quantum networks; SliQ is the first work to target this area.** As SliQ is the first work to target quantum similarity detection for unlabeled data, we compare against the results for the baseline quantum triplet model. Using the Flickr Landscape dataset, we rank the image similarity based on triplets formed from color frequency histograms. For each image in a set of 100 images, we treat the image as an anchor and compute the ground truth distance in color frequency between the anchor and every other image. We compare this ground truth distance to the distance identified by the model and correlate the rankings.
We use Spearman correlation [14], which performs ranked correlation of two random variables, to interpret the relationship between the ground truth and the model's estimations. Spearman correlation is commonly used for this type of analysis; for example, [16] uses Spearman correlation to rank sentence embeddings in similarity networks. SliQ _has much better correlations than the baseline triplet model, with a median Spearman correlation of 0.36 compared to 0.05, i.e., SliQ is 0.31 (31 percentage points) more accurately correlated than the Baseline._ In Table 1, we show the distribution of Spearman correlations for SliQ compared to the baselines. At every percentile measured, SliQ has notable improvements in similarity detection, which demonstrates SliQ's overall improvement over an entire distribution.
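For reference, the per-anchor ranked correlation can be computed with scipy (the distance values below are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

truth = np.array([0.12, 0.80, 0.33, 0.55, 0.20])  # ground-truth distances
model = np.array([0.10, 0.70, 0.40, 0.60, 0.15])  # model-estimated distances
rho, _ = spearmanr(truth, model)
print(f"Spearman correlation: {rho:.2f}")
```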
This trend is also visually observed in Fig. 6(a) for the baseline and Fig. 6(b) for SliQ. The x-axis denotes the ground truth distance, where points further to the left indicate true similarity. The y-axis denotes the calculated loss of the model, indicating SliQ's ground truth distance estimate. Points closer to the diagonal line denote accurate estimations. Fig. 6 shows only the top-3 correlation examples for easier visual interpretation. We note that SliQ tends to cluster more around the ground truth correlation line, and its top-3 correlations drop only from 0.68 to 0.64; in contrast, the baseline's drop from 0.63 to 0.46.
Additionally, we show some of the closest predictions identified in Fig. 7. These demonstrate the different scenes of the Landscape dataset; for example, the demonstrated landscapes include mountains, aerial perspectives, forests, and oceans. SliQ is able to match similar landscapes efficiently, demonstrating its effectiveness at similarity detection, with better picks than the baseline's.
Table 1: Spearman correlation results show that SliQ outperforms the baseline in similarity detection. SliQ's median Spearman correlation is 0.36 vs. the baseline's 0.05.

| Percentile | Baseline | SliQ |
| --- | --- | --- |
| 25th | -0.06 | 0.23 |
| 50th | 0.05 | 0.36 |
| 75th | 0.19 | 0.48 |
| 100th | 0.63 | 0.68 |

Figure 8: SliQ performs competitively against the comparative classification techniques for all datasets, despite classification not being a learned objective of SliQ.

Figure 7: Anchor image, baseline-identified similar image, and SliQ-identified similar image for the same anchor image, using the unlabeled Flickr dataset. SliQ is effective in identifying images with similar color frequencies.

**Although SliQ was not designed to act as a classifier, it is effective at detecting the similarity of samples in labeled datasets and is competitive with prior state-of-the-art quantum classifiers.** On its own, SliQ performs embeddings, not classification, but we can use SliQ as a classifier by performing reasonable clustering on its output. To demonstrate its classification ability, we compare against state-of-the-art quantum classification methods [10, 17]. Our results (Fig. 8) indicate that SliQ yields competitive accuracy compared to the advanced quantum classifiers on a task that SliQ was never trained on (classification), which demonstrates the broad applicability of embeddings. To perform this clustering, we employ a Gaussian Mixture Model on our embeddings. The model is initialized with the true number of classes in the dataset and is fit to 1000 samples to form clusters. Because these clustering models do not provide labels, each permutation of clusters to labels is considered, and the permutation with the highest accuracy is taken as the correct labeling. With these techniques, our results show that SliQ performs well on a variety of datasets, averaging up to 96.44% accuracy on binary MNIST classification. We show the full classification accuracy results for the different techniques in Fig. 8.
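A minimal sketch of this cluster-to-label assignment, assuming `embeddings` is an array of SliQ outputs and `labels` a NumPy array of true classes; the exhaustive permutation search is feasible because the class counts involved here are small:

```python
from itertools import permutations
import numpy as np
from sklearn.mixture import GaussianMixture

def embedding_accuracy(embeddings, labels, n_classes, n_fit=1000):
    # Fit clusters on 1000 samples, as in the evaluation described above.
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    gmm.fit(embeddings[:n_fit])
    clusters = gmm.predict(embeddings)
    # Clusters carry no labels, so try every cluster-to-label permutation
    # and keep the one with the highest accuracy.
    best = 0.0
    for perm in permutations(range(n_classes)):
        mapped = np.array([perm[c] for c in clusters])
        best = max(best, float((mapped == labels).mean()))
    return best
```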
In Table 2, we show how SliQ performs on real quantum computers today. SliQ achieves a 68.8% accuracy on the AIDS dataset, running on IBM Oslo. SliQ _significantly outperforms the state-of-the-art quantum classifier (Quilt), even though SliQ was not originally designed for classification_. This is because SliQ's design is noise-aware, making it effective on error-prone NISQ computers. In particular, SliQ is designed with few parameters for current NISQ computers, where error compounds at each step and quickly explodes. Other quantum machine learning architectures tend to have higher degradation in accuracy on real computers, as they require larger architectures with ample opportunity for compounding error. For the AIDS dataset, SliQ has 48 tunable parameters, while Quilt requires 375 and PQK requires 1,633. With more parameters, the hardware error compounds, explaining the degradation shown above.
**Why does SliQ perform effectively?** By mitigating projection variance, SliQ is able to map the anchor input to a consistent location in the embedding space regardless of the second input in the pair. This is necessary because the images are entangled together throughout the entire circuit and will not be separated in the output unless a constraint is put in place to enforce this. This separability is demonstrated in Fig. 9, where SliQ is compared to a version trained without the consistency loss. SliQ has more precise outputs throughout the entire CDF, evaluated over 1000 MNIST test images. Enforcing consistency amounts to an average decrease of 80% in projection variance when changing the order of the inputs, demonstrating the effectiveness of SliQ's projection-invariance method. As shown in Fig. 9, interweaving increases robustness, leading to a decrease in projection variance.
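The actual consistency-loss terms are defined in SliQ's method section and are not reproduced here; the following illustrative-only sketch shows the quantity being driven down, assuming a model `embed(x, y)` that returns the anchor's projection for an interwoven input pair:

```python
import numpy as np

def order_projection_variance(embed, a, b):
    # How far the anchor's projection moves when the pair order is swapped;
    # the 80% reduction quoted above corresponds to shrinking this quantity.
    z_ab = np.asarray(embed(a, b))
    z_ba = np.asarray(embed(b, a))
    return float(np.mean((z_ab - z_ba) ** 2))
```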
## Related Work
**Classical Similarity Networks.** Siamese networks and triplet networks are commonly used classical similarity networks Johnson et al. (2021); Koch et al. (2015); Schroff et al. (2015); Roy et al. (2019); Li et al. (2020); Gichane et al. (2020); Li et al. (2020); Hoffer and Ailon (2015); Patel et al. (2022), as they are known to be the best choice for complex datasets Chicco (2021). For instance, Roy et al. (2019) use Riemannian geometry to train a Siamese network and obtain effective results for image classification, while FaceNet Schroff et al. (2015) achieves representational efficiency using an embedding for face recognition and clustering. On the other hand, TrimNet Li et al. (2020) uses a graph-based approach to enable a triplet network to learn molecular representations. However, while these works are effective classically, quantum theory enables the enhancement of machine learning workloads by reducing their size and speeding them up Aaronson (2015); Daley et al. (2022).
**Quantum Machine Learning.** Extensive work has been performed toward porting a wide variety of machine learning tasks to quantum computing Huang et al. (2021); Li and Kais (2021); Tiwari and Melucci (2019); Li et al. (2021); Khairy et al. (2020); Lockwood and Si (2020); Heidari et al. (2022); Lloyd et al. (2020); Radha and Jao (2022); Nandy Pal et al. (2022). This includes porting workloads such as generalized neural networks Beer et al. (2020), convolutional neural networks Hur et al. (2022), and even application-specific networks, such as models used to learn the metal-insulator transition of VO\({}_{2}\) Li and Kais (2021).
**Quantum Image Similarity Detection.** Liu et al. (2022) and Liu et al. (2019) have also worked on quantum image similarity detection; notably, these did not take a machine-learning approach to identifying similarity.
## Conclusion
In this work, we present SliQ, a resource-efficient quantum similarity network, which is the first method to build a practical and effective quantum learning circuit for similarity detection on NISQ-era computers. We show that SliQ improves similarity detection over a baseline quantum triplet network by 31 percentage points in Spearman correlation. SliQ is available at: [https://github.com/SilverEngineered/SliQ](https://github.com/SilverEngineered/SliQ).
Table 2: SliQ's real-quantum-computer results for the AIDS dataset are consistent with the simulation results, showing its resilience to hardware noise due to its low number of parameters. The PQK column is marked N/A because its circuit is prohibitively deep to run feasibly on error-prone NISQ computers (30\(\times\) more parameters than SliQ).

| Environment | Baseline | PQK | Quilt | SliQ |
| --- | --- | --- | --- | --- |
| Simulation | 60.8% | 81.6% | 51% | 71.54% |
| Real Computer | 64.4% | N/A | 21.4% | 68.8% |
Figure 9: SliQ achieves a lower projection variance compared to when it is run without projection variance mitigation or without interweaving of images.
## Acknowledgements
We thank the anonymous reviewers for their constructive feedback. This work was supported in part by Northeastern University and NSF Award 2144540.
| 量子機械学習の探索は、量子コンピュータの高速化能力により、近年劇的に成長しています。しかし、これらの努力は、量子コンピュータで実行できるように移植する難しさから、無監督類似性検出タスクを解決していません。この課題を克服するために、SliQを提案します。SliQは、実用的で効果的な量子学習と分散削減アルゴリズムを用いて構築された、資源効率の高い量子類似性検出ネットワークの最初のオープンソースプロジェクトです。 |
2310.20381 | A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical
Image Analysis | This work conducts an evaluation of GPT-4V's multimodal capability for
medical image analysis, with a focus on three representative tasks of radiology
report generation, medical visual question answering, and medical visual
grounding. For the evaluation, a set of prompts is designed for each task to
induce the corresponding capability of GPT-4V to produce sufficiently good
outputs. Three evaluation ways including quantitative analysis, human
evaluation, and case study are employed to achieve an in-depth and extensive
evaluation. Our evaluation shows that GPT-4V excels in understanding medical
images and is able to generate high-quality radiology reports and effectively
answer questions about medical images. Meanwhile, it is found that its
performance for medical visual grounding needs to be substantially improved. In
addition, we observe the discrepancy between the evaluation outcome from
quantitative analysis and that from human evaluation. This discrepancy suggests
the limitations of conventional metrics in assessing the performance of large
language models like GPT-4V and the necessity of developing new metrics for
automatic quantitative analysis. | Yingshu Li, Yunyi Liu, Zhanyu Wang, Xinyu Liang, Lei Wang, Lingqiao Liu, Leyang Cui, Zhaopeng Tu, Longyue Wang, Luping Zhou | 2023-10-31T11:39:09 | http://arxiv.org/abs/2310.20381v5 | # A Comprehensive Study of GPT-4V's Multimodal Capabilities in Medical Imaging
###### Abstract
This paper presents a comprehensive evaluation of GPT-4V's capabilities across diverse medical imaging tasks, including Radiology Report Generation, Medical Visual Question Answering (VQA), and Visual Grounding. While prior efforts have explored GPT-4V's performance in medical image analysis, to the best of our knowledge, our study represents the first quantitative evaluation on publicly available benchmarks. Our findings highlight GPT-4V's potential in generating descriptive reports for chest X-ray images, particularly when guided by well-structured prompts. Meanwhile, its performance on the MIMIC-CXR dataset benchmark reveals areas for improvement in certain evaluation metrics, such as CIDEr. In the domain of Medical VQA, GPT-4V demonstrates proficiency in distinguishing between question types but falls short of the VQA-RAD benchmark in terms of accuracy. Furthermore, our analysis finds the limitations of conventional evaluation metrics like the BLEU scores, advocating for the development of more semantically robust assessment methods. In the field of Visual Grounding, GPT-4V exhibits preliminary promise in recognizing bounding boxes, but its precision is lacking, especially in identifying specific medical organs and signs. Our evaluation underscores the significant potential of GPT-4V in the medical imaging domain, while also emphasizing the need for targeted refinements to fully unlock its capabilities.
## 1 Introduction
Large Language Models (LLMs) have consistently demonstrated remarkable ability across various domains and tasks (Touvron et al., 2023; OpenAI, 2023; Anil et al., 2023). The ongoing pursuit of enhancing LLMs' capacity for visual comprehension has spurred the emergence of a new research area: Large Multimodal Models (LMMs) (Ye et al., 2023; Li et al., 2023; Awadalla et al., 2023). The basic approach has been to either fine-tune the visual encoder to align with a fixed pre-trained LLM or to use a vision-language model to convert visual input into textual descriptions that can be understood by the LLM. These applications are all based solely on the use of the LLM and do not really explore the visual capabilities of the LLM. GPT-4V, a cutting-edge Large Multimodal Model (LMM) incorporating visual understanding capabilities, is constructed as an evolution of state-of-the-art Large Language Models (LLMs). This model is trained on an extensive corpus of multimodal data. Yang et al. conducted a comprehensive case study to assess GPT-4V's performance in general-purpose scenarios, revealing its robust visual comprehension ability (Yang
et al., 2023). Meanwhile, LMMs have been widely used in the medical field (Wang et al., 2023; Singhal et al., 2023). The introduction of visual capabilities into GPT-4V opens up opportunities for an in-depth examination of its potential in the domain of medical multimodality. In light of this, this paper evaluates GPT-4V on multimodal tasks in the field of medical image analysis.
The main contribution of this paper is to explore the capabilities of GPT-4V in medical image analysis. We selected three main medical multimodal tasks, **Radiology Report Generation**, **Medical Visual Question Answering**, and **Medical Visual Grounding**, to assess GPT-4V's performance in the context of medical images. Our evaluation encompassed _standard benchmarks_ and comparative analysis against current state-of-the-art models. Furthermore, we conducted in-depth case studies using representative examples for each task, enhancing our comprehension of GPT-4V's capabilities in medical image understanding.
## 2 Related Work
### Radiology Report Generation
Radiology report generation has emerged as a prominent research area within the domain of medical image analysis in recent years. While similar to image captioning (Vinyals et al., 2015; Xu et al., 2015; Pan et al., 2020), this task presents heightened complexity owing to the extended length of medical reports and the increased difficulty of identifying medical anomalies within images caused by data imbalance. Much research has relied on encoder-decoder architectures to address this task, and it can be grouped into two primary directions. The first direction concentrates on enhancing the model's architecture to facilitate improved extraction of visual features and the generation of high-quality medical reports. For example, Li et al. (2018) used a hierarchical architecture to generate reports covering normality and abnormality respectively. Similarly, Liu et al. (2019) employed a hierarchical structure to initially generate topics and subsequently produce related sentences. With the prevailing of the transformer (Vaswani et al., 2017), Chen et al. (2020) introduced a transformer-based model, enhancing it with relational memory and memory-driven conditional layer normalization to enhance image feature recognition and capture crucial report patterns. Another research direction is to solve the data bias problem by incorporating external knowledge. Zhang et al. (2020) constructed a predefined medical knowledge graph to augment the model's ability to capture valuable medical information. To further enrich this supplementary knowledge, Li et al. (2023) developed a dynamic approach that enables real-time updates to the knowledge graph.
Furthermore, in recent times, there has been a surge in radiology report generation methods leveraging Large Language Models (LLMs). These approaches leverage the capabilities of large language models to generate long-text content and utilize abundant knowledge sources to enhance the quality of radiology reports. Wang et al. employs Llama2 (Touvron et al., 2023) to elevate the quality of the generated reports. To achieve effective image-text alignment, the image embeddings are mapped to the feature space of the Llama2 (Touvron et al., 2023) via a visual mapper to ensure uniform dimensionality (Wang et al., 2023).
### Visual Question Answering
Visual Question Answering (VQA) is a crucial field that has gained significant importance, as demonstrated by various studies, including those by Jiang et al. (2020) and Wu et al. (2019). The goal of VQA is to teach machines to understand images and answer questions about them using natural language. Given a pair comprising an input image and a correlated question, the VQA model is engineered to generate the corresponding answer. A plethora of previous scholarly works have delved into VQA, revealing four critical components within these models: the image encoder, the text encoder, a fusion method, and either a generator or a classifier, contingent upon the model's architectural design. The posed questions fall into two categories based on the answer type: closed-ended (Nguyen et al., 2019; Finn et al., 2017; Eslami et al., 2021) and open-ended (Ambati and Dudyala, 2018; Khare et al., 2021). Predominantly, models address these two categories distinctly: they typically employ a classification-based approach for closed-ended questions, whereas for open-ended questions a generation-based method is utilized. Nevertheless, a select number of studies have attempted to integrate both question types within a singular model (Ren and Zhou, 2020). A notable example is the Q2ATransformer (Liu et al., 2023), which simultaneously tackles both varieties of questions, amalgamating the strengths of classification-based and generation-based methodologies, and subsequently achieving exemplary accuracy across both question types.
With the emergence of Large Language Models (LLMs), there has been a substantial influx of research leveraging LLMs to augment the linguistic inferencing capabilities of VQA (Li et al., 2023). Moreover, certain studies have pioneered the use of LLMs for facilitating continuous questioning in VQA. The introduction of models such as GPT-3.5 has led to the generation of more LLM-based datasets, mitigating the issue of data scarcity (Pellegrini et al., 2023). The advent of GPT-4V marks a significant milestone, as it incorporates image comprehension capabilities directly into the LLM framework. This eliminates the need for VQA systems to translate all tasks into a language understandable by traditional LLMs. With the ability to process multimodal inputs seamlessly, the evolution of LLMs has opened new horizons for research and development in VQA. This paper endeavors to elucidate the application of GPT-4V in the realm of medical image-based VQA, exploring its potential and implications in this specialized field.
### Visual Grounding
Visual grounding (Kamath et al., 2021) stands as a pivotal field at the crossroads of computer vision and natural language processing. Essentially, this task requires interpreting an image while taking into account a relevant textual description of an object, which could range from a single sentence or caption to a more elaborate description. The end goal is to produce a bounding box that accurately outlines the designated object. Given its critical role in integrating visual and textual information, visual grounding has established itself as a crucial application in the realm of multimodal interactions.
With the emergence of extensive language modeling, there has been a noteworthy blend of visual grounding techniques with Large Language Models (LLMs) (Peng et al., 2023; Zhao et al., 2023). In a conventional setup, data from bounding boxes, obtained through visual grounding, is fed into the LLM as a segment of the prompt. This approach steers the LLM towards making the right assessments. However, the debut of GPT-4V marks a significant transformation in this workflow. It eliminates the requirement for crafting prompts manually, allowing users to directly input images and text, and in turn, directly obtain the related bounding box outputs. This advancement simplifies the process, removing the need for extra steps and intermediaries.
Most research on visual grounding deals with regular, everyday images, and only a small number of studies focus on images from the medical field. This might be because few datasets are available for medical images. A recently published visual grounding dataset, MS-CXR, represents an advance for medical image visual grounding, and some publications (Huang et al., 2023; Sun et al., 2023) build on it. Nevertheless, even as this dataset becomes more widely recognized, there remains a limited body of academic work exploring its potential and applications, highlighting a crucial area for future research and development.
In this paper, we will embark on a comprehensive review of GPT-4V's applications within the domain of medical visual grounding, exploring its capabilities, impact, and potential avenues for future research and development.
## 3 Evaluation of Radiology Report Generation
The exponential growth of radiological imaging data has imposed an escalating burden on radiologists, leading to a heightened risk of diagnostic errors with potentially severe consequences. Consequently, there is a growing demand for automated radiology report generation, which is anticipated to alleviate the workload of radiologists and mitigate diagnostic inaccuracies. The rapid advancements in artificial intelligence, particularly in the domains of computer vision and natural language processing, have made automated medical report generation a feasible reality (e.g., Chen et al., 2020, 2021; Liu et al., 2021; Wang et al., 2023a). A prominent challenge in automated medical report generation is long text generation. Presently, large language models (LLMs) (e.g., Touvron et al., 2023; Chowdhery et al., 2022) have gained widespread prominence and demonstrate a strong proficiency in generating long text. Furthermore, LLM-based large multi-modal models (LMMs) (e.g., Zhu et al., 2023; Wu et al., 2023) possess a notable capability for multi-modal content generation. While LMMs show potential in multi-modal content generation, their efficacy in specialized tasks like radiology report generation is yet to be fully explored. The accuracy and reliability of such reports are paramount, making it crucial to evaluate LMMs in this domain rigorously. In the following sections, we examine GPT-4V's capability to generate radiology reports using distinct prompt strategies on a standard benchmark dataset.
### Evaluation
This section presents an evaluation of the GPT-4V model's capacity for medical report generation. We employ the MIMIC-CXR dataset (Johnson et al., 2019) for assessment. The model is tasked with generating diagnostic reports for given medical images. To facilitate comparison with established methodologies (e.g., Chen et al., 2020; Yang et al., 2021; Liu et al., 2021; Wang et al., 2022b; 2023a), we employ widely recognized metrics, specifically BLEU scores (Papineni et al., 2002), ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and CIDEr (Vedantam et al., 2015), to gauge the quality of the generated reports.
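As a concrete sketch of this metric computation, the snippet below uses the pycocoevalcap toolkit, a common choice in the report-generation literature; the toolkit choice is our assumption, since the evaluation code is not specified here (the METEOR scorer additionally requires a Java runtime).

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.cider.cider import Cider

def nlg_metrics(gts, res):
    # gts/res map a study id to a list with one report string each,
    # e.g. {"s1": ["no acute cardiopulmonary process."]}.
    scores = {}
    bleu, _ = Bleu(4).compute_score(gts, res)
    for n, b in enumerate(bleu, start=1):
        scores[f"BLEU-{n}"] = b
    scores["ROUGE-L"], _ = Rouge().compute_score(gts, res)
    scores["METEOR"], _ = Meteor().compute_score(gts, res)
    scores["CIDEr"], _ = Cider().compute_score(gts, res)
    return scores
```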
Our evaluation focuses on the model's performance with the MIMIC-CXR test set. Each evaluation instance comprises a single medical image coupled with a carefully crafted text prompt as the input.
#### 3.1.1 Dataset
MIMIC-CXR, the largest publicly available dataset in this domain, includes both chest radiographs and unstructured textual reports. This dataset comprises a total of 377,110 chest X-ray images and 227,835 corresponding reports, obtained from 64,588 patients who underwent examinations at the Beth Israel Deaconess Medical Center between 2011 and 2016. To facilitate fair and consistent comparisons, we followed the official partitioning provided by MIMIC-CXR, resulting in a test set containing 3,858 samples.
#### 3.1.2 Prompt Design Strategies
Our primary objective is to evaluate the baseline performance of GPT-4V in medical report generation. To better activate the capabilities of GPT-4V, we explored various prompt design strategies, including the zero-shot and few-shot approaches. In the **zero-shot** scenario, we provided a prompt without reference reports. For the few-shot approach, we investigated three settings: (1) two normal reports **(Few-shot normal examples prompt)**, (2) two abnormal reports **(Few-shot abnormal examples prompt)**, and (3) one normal report paired with one abnormal report **(Few-shot mixed examples prompt)**.
Our comprehensive evaluation unveiled that the inclusion of both a normal and an abnormal example consistently resulted in higher-quality report generation. Thus, we primarily employed the **Few-shot mixed examples prompt** to evaluate GPT-4V on the MIMIC-CXR benchmark. While our focus was on the **zero-shot prompt** and the **few-shot prompt**, we avoided complex techniques such as chain-of-thought (Wei et al., 2022b) or ensembling strategies (Wang et al., 2022a).
Illustrative examples of our prompt design strategies can be found in Appendix A.1. A detailed analysis of the reports generated by GPT-4V under various prompts will be presented in Section 3.2.
**Zero-shot Prompt Scenario** The zero-shot prompt is employed to assess GPT-4V's capacity to autonomously generate reports without external guidance. To facilitate a comprehensive comparison with the ground truth report, we tasked GPT-4V with generating both the impression and findings sections.
**Few-Shot Prompts Scenario** In-context few-shot learning represents a crucial methodology for enhancing the capabilities of large language models (Tsimpoukelli et al., 2021; Wei et al., 2022a; Dai et al., 2022). It enables the model to acquire the necessary output format by providing a set of examples. In contrast to fine-tuning, this method empowers the model to generate desired results without any parameter updates at inference time. We evaluated the in-context few-shot learning capability of GPT-4V using diverse prompt examples. Within the scope of our evaluation, we employ in-context learning to guide GPT-4V toward responses that closely align with the form of the ground truth, enabling meaningful comparisons.
In our investigation of few-shot prompts, we conducted experiments with a range of prompt strategies designed for GPT-4V. Specifically, we explored diverse compositions:
* Exclusively using normal examples (**Few-shot normal examples prompt**);
* Exclusively using abnormal examples (**Few-shot abnormal examples prompt**);
* Combining one normal and one abnormal example (**Few-shot mixed examples prompt**).
The details of example reports in prompts are shown in Appendix A.1. Our observations highlighted the substantial impact of prompt type on the model's output. Depending on the chosen prompt, the model displayed a clear preference either for generating normal reports or abnormal reports. Details will be discussed in Section 3.2.
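For readers without access to that appendix, the following is a hedged, illustrative-only reconstruction of the shape of the few-shot mixed examples prompt; the actual wording and example reports used in the study are given in Appendix A.1 and are not reproduced here.

```python
MIXED_EXAMPLES_PROMPT = """You are a radiologist. Write a report with FINDINGS
and IMPRESSION sections for the attached chest X-ray, in the style below.

Example 1 (normal):
FINDINGS: The lungs are clear. The cardiomediastinal silhouette is within
normal limits. No pleural effusion or pneumothorax.
IMPRESSION: No acute cardiopulmonary process.

Example 2 (abnormal):
FINDINGS: There is a focal opacity in the right lower lobe, concerning for
pneumonia. The heart size is mildly enlarged.
IMPRESSION: Right lower lobe pneumonia; mild cardiomegaly.

Now write the report for the attached image."""
```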
#### 3.1.3 Comparison with SOTA
Table 1 presents a performance comparison between the GPT-4V model and state-of-the-art methods on the MIMIC-CXR dataset (Johnson et al., 2019). The methods encompass standard image captioning techniques, including Show-Tell (Vinyals et al., 2015), Att2in (Xu et al., 2015), AdaAtt (Lu et al., 2017), Transformer (Vaswani et al., 2017), and M2Transformer (Cornia et al., 2020), as well as dedicated medical report generation methods, specifically R2Gen (Chen et al., 2020), R2GenCMN (Chen et al., 2021), MSAT (Wang et al., 2022b), and METransformer (Wang et al., 2023a). To ensure fair comparison, we employ the exact same prompting structure (**Few-shot mixed examples prompt**) to help GPT-4V generate the medical report.
From Table 1, it is clear that medical report generation models such as METransformer, MSAT, and R2Gen showcase top-tier performance. Nevertheless, GPT-4V's capability to generate medical reports is impressive, even though it is designed as a general-purpose model. Leveraging an extensive pre-training corpus, GPT-4V achieves competitive results on several metrics, including BLEU, ROUGE, and METEOR. Meanwhile, when compared to models specifically trained on MIMIC-CXR, GPT-4V exhibits a gap, particularly evident in the CIDEr metric. This discrepancy arises because the CIDEr metric assigns varying score weights to words based on their occurrence frequencies, potentially limiting GPT-4V's performance in generating certain MIMIC-CXR-specific words and consequently yielding relatively lower scores.
Furthermore, our testing has revealed that GPT-4V possesses the capacity to generate information that is absent in the ground truth but is visually evident in the image. This phenomenon contributes to GPT-4V's
relatively lower performance on metrics such as BLEU, which primarily assesses word-match rates. One example is shown in Figure 1.
### Case Study
#### 3.2.1 Zero-shot Behavior
In the zero-shot scenario, through a series of tests on multiple chest X-ray images, we observed that GPT-4V consistently generates reports with a focus on various anatomical organs. This phenomenon is illustrated in Figure 11. Notably, GPT-4V tends to follow a specific order, covering the lungs, cardiomediastinal silhouette, bones, diaphragm, and soft tissues, in the majority of the generated reports.

Table 1: Comparison on the MIMIC-CXR dataset.

| Methods | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE | METEOR | CIDEr |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Show-Tell (Vinyals et al., 2015) | 0.308 | 0.190 | 0.125 | 0.088 | 0.256 | 0.122 | 0.096 |
| Att2in (Xu et al., 2015) | 0.314 | 0.198 | 0.133 | 0.095 | 0.264 | 0.122 | 0.106 |
| AdaAtt (Lu et al., 2017) | 0.314 | 0.198 | 0.132 | 0.094 | 0.267 | 0.128 | 0.131 |
| Transformer (Vaswani et al., 2017) | 0.316 | 0.199 | 0.140 | 0.092 | 0.267 | 0.129 | 0.134 |
| M2Transformer (Cornia et al., 2020) | 0.332 | 0.210 | 0.142 | 0.101 | 0.264 | 0.134 | 0.142 |
| R2Gen (Chen et al., 2020) | 0.353 | 0.218 | 0.145 | 0.103 | 0.277 | 0.142 | - |
| R2GenCMN (Chen et al., 2021) | 0.353 | 0.218 | 0.148 | 0.106 | 0.278 | 0.142 | - |
| PPKED (Liu et al., 2021) | 0.360 | 0.224 | 0.149 | 0.106 | 0.284 | 0.149 | 0.237 |
| GSK (Yang et al., 2021b) | 0.363 | 0.228 | 0.156 | 0.115 | 0.284 | - | 0.203 |
| MSAT (Wang et al., 2022b) | 0.373 | 0.235 | 0.162 | 0.120 | 0.282 | 0.143 | 0.299 |
| METransformer (Wang et al., 2023a) | **0.386** | **0.250** | **0.169** | **0.124** | **0.291** | **0.152** | **0.362** |
| GPT-4V (OpenAI, 2023) | 0.338 | 0.190 | 0.109 | 0.061 | 0.240 | 0.125 | 0.033 |

Figure 1: One case with the few-shot mixed examples prompt. The ground truth does not reference a medical device; however, one is visibly present in the image (indicated by a red box). GPT-4V recognizes this medical device and describes it in the generated report.
While the format of the generated reports may vary from MIMIC-CXR, the content within these reports does convey both normal and abnormal aspects of the radiographic images. Figure 2 shows a selection of examples. The observations reveal that GPT-4V can describe the normal aspects in the images. Furthermore, as demonstrated in Example 3, GPT-4V exhibits the capacity to recognize abnormalities, '_suggestive of a possible infectious or inflammatory process'_. These instances collectively underscore that, even in the context of zero-shot prompts, GPT-4V may not replicate the exact report format found in MIMIC-CXR, yet it demonstrates a noteworthy ability to generate relevant reports and identify anomalies.
Figure 2: Zero-shot prompt example. GPT-4V can generate radiology reports without example reports and can convey both normal and abnormal aspects. For better illustration, the key medical information in the reports is highlighted using different colors.

#### 3.2.2 Few-shot Behavior

In this prompt scenario, we explored three kinds of prompt settings:

* **Few-shot normal examples prompt**
* **Few-shot abnormal examples prompt**
* **Few-shot mixed examples prompt**
In this section, we present a comprehensive analysis of reports generated by GPT-4V under three distinct few-shot prompts. We observe that different prompts significantly influence the generated reports. Specifically, Figure 3 illustrates the response to a normal chest X-ray image, where we employ three distinct prompt settings to guide GPT-4V in generating corresponding reports. Interestingly, the reports generated from the normal examples prompt and the mixed examples prompt both describe the image as normal. In contrast, the report from the abnormal examples prompt highlights anomalies in the image. This indicates that GPT-4V's inclination to generate normal or abnormal reports varies based on the provided example reports.
Figure 3: Few-shot normal case (The key medical information in the reports is highlighted using different colors). GPT-4V is more likely to generate abnormal reports when the prompt includes two abnormal examples. The words in red correspond to descriptions of abnormal conditions.
The analysis of reports generated for an abnormal chest X-ray image can be found in Appendix A.2 with a more detailed explanation. However, it is worth noting here that our subsequent tests have shown that the mixed examples prompt (illustrated in Figures 3 and 14) has a significant influence on GPT-4V's capacity to accurately determine the normalcy or abnormality of an image. Due to this observed consistency and reliability, we opted for the mixed examples prompt when testing the entire MIMIC-CXR test set and computing the related evaluation metrics.
For these examples, we can summarize the impact of different prompts on the generated reports as follows:
**Normal Examples Prompt** The generated report focuses on the normal aspects of the image, seemingly overlooking or not emphasizing the presence of abnormalities. This could be attributed to the inherent bias introduced by the normal examples in the prompt, steering GPT-4V's focus towards more routine or standard interpretations.

**Abnormal Examples Prompt** As expected, the report provides a clear and distinct description of the abnormalities evident in the X-ray. However, for normal chest radiographs, GPT-4V may also exhibit a heightened probability of generating erroneous indications of abnormality.

**Mixed Examples Prompt** The mixed examples prompt leads GPT-4V to accurately describe both the abnormal and normal conditions of the image. This suggests a balanced effect, where the model does not become overly biased by either the normal or abnormal examples but leverages both to arrive at an accurate interpretation.
From this in-depth examination, it becomes evident that the choice of prompt plays a pivotal role in guiding GPT-4V's performance, especially when anomalies are present in medical images. The mixed examples prompt, in particular, shows promise in achieving a balanced and accurate report, making it a potential choice for diverse medical scenarios.
#### 3.2.3 Prompt Augmentation with View Information
Additionally, our investigations revealed that augmenting the information content of a prompt, such as adding view information, enables GPT-4V to produce more pertinent information in its generated reports. As an illustrative example, we incorporated instances with view information for chest X-ray images within both the few-shot mixed examples prompt and the few-shot abnormal examples prompt (the example reports are displayed in Figure 13). Conversely, view information was omitted from the few-shot normal examples prompt. This deliberate contrast in prompt content demonstrated that prompts containing view information effectively instruct GPT-4V to incorporate considerations of image viewpoint into the report generation process.
More specifically, we supplemented the few-shot mixed examples prompt with _'Frontal and lateral views of the chest'_ and the few-shots abnormal examples prompt with _'PA and lateral views of the chest provided'_.
As illustrated in Figures 4 and 15, the inclusion of view information prompts GPT-4V to incorporate corresponding viewpoint details into the generated report. For instance, it generates phrases like _'PA view of the chest provided'_ and _'Frontal view of the chest demonstrates...'_. However, it is essential to acknowledge that while enhancing the prompt with view information empowers GPT-4V to produce reports enriched with these details, there are instances where GPT-4V inaccurately identifies viewpoint information. An incorrect case is shown in Appendix A.3.
This phenomenon can be attributed to two primary factors: firstly, potential constraints in GPT-4V's inherent recognition capabilities, and secondly, the potential inadequacy of prompt design in fully activating GPT-4V's ability to discern viewpoint information.
It is imperative to emphasize that, even with the incorporation of view information, the core content within the generated reports exhibits a high degree of consistency (crucial medical information in the reports is distinguished using diverse colours in Figure 4). This observation leads to a significant conclusion: the inclusion of supplementary information within the prompt broadens the spectrum of content integrated into the generated report, all while preserving GPT-4V's capability to fulfill common tasks.
These examples vividly illustrate the critical role of prompt design within the domain of in-context few-shot learning. In contrast to the fine-tuning approach, few-shot learning empowers GPT-4V to gain essential knowledge from the prompt and subsequently apply this knowledge in generative tasks. Consequently, the meticulous design of a logical and effective prompt emerges as a pivotal factor when leveraging GPT-4V for medical report generation tasks. This aspect of prompt design deserves future studies.
Figure 4: Viewpoint information Case 1 (The key medical information in the reports is highlighted using different colors). The inclusion of view information in the prompt results in a higher probability of GPT-4V generating view information, indicated in red text in the figure. Notably, GPT-4V does not generate view information when the prompt lacks such information, as seen in the normal examples prompt (in Figure 13).
### Discussion
Our extensive evaluation and case study of GPT-4V's capabilities in Radiology Report Generation reveal its potential as well as its current limitations. By employing various prompts, GPT-4V demonstrates the capacity to generate descriptive reports for chest X-ray images, covering both normal and abnormal aspects. Remarkably, the design of the prompt significantly influences GPT-4V's performance; prompts with more information lead to greater attention to the image and the generation of more detailed descriptions.
It is essential to highlight that GPT-4V was not trained specifically on MIMIC-CXR, which impacts its capacity to generate specific rare words, leading to relatively lower scores on commonly used evaluation metrics. Nevertheless, GPT-4V demonstrates the ability to generate content related to images that is not explicitly mentioned in the Ground Truth but is visually apparent. As a result, further research aimed at improving GPT-4V's report accuracy remains a valuable pursuit.
## 4 Evaluation of Medical Visual Question Answering
Visual Question Answering (VQA) has become a critical research area. The goal of VQA systems is to enable computers to understand natural language questions and provide accurate answers about images. In the following, we explore the medical image VQA performance of GPT-4V on the VQA-RAD dataset and compare it with the current SOTA methods.
### Evaluation
In order to assess GPT-4V's effectiveness on the medical VQA dataset, we embarked on a comprehensive series of experiments. We applied the GPT-4V model to generate predicted answers based on each input medical image and the question related to that image, and then calculated the accuracy of the results. Subsequently, we conducted a comparative analysis with the current state-of-the-art (SOTA) methods. Herein, we present our main observations and conclusions.
#### 4.1.1 Dataset
VQA-RAD (Lau et al., 2018) is one of the most widely utilized radiology datasets. It comprises 315 images along with 3515 question-answer pairs, ensuring that each image corresponds to at least one question-answer pair. The questions encompass 11 distinct categories, including "anomalies," "properties," "color," "number," "morphology," "organ type," "other," and "section." A noteworthy 58% of these questions are designed as closed-ended queries, while the remainder take the form of open-ended inquiries. These images predominantly feature the head, chest, and abdomen regions of the human body. It is essential to manually partition the dataset into training and test sets for accurate evaluation.
#### 4.1.2 Overview of Prompt Methods
GPT-4V not only possesses powerful natural language processing capabilities but also incorporates advanced computer vision techniques, which makes it excel at tasks that fuse images and text. It is trained to understand questions and extract information from images to generate accurate answers. However, the performance of GPT-4V also depends on how the prompt is designed.
To ensure that GPT-4V accurately grasps the answer style of the VQA-RAD dataset, we provided seven examples to guide the model in generating responses consistent with the dataset's format. Without these examples, GPT-4V tends to produce more unconstrained answer text, complicating the task of comparing predicted answers with the ground truth.
We designed the prompt by following the template in Figure 5:
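Since Figure 5 notes only that elements in double braces are replaced with specific questions, the following is a hedged sketch of such a template; the exact wording and the seven in-context examples actually used in the study are assumptions here.

```python
VQA_PROMPT = """You are a medical VQA assistant. Answer in the short style of
the examples (a word or phrase, lowercase, no explanation).

Q: Is there evidence of a pleural effusion?   A: yes
Q: Which imaging modality was used?           A: x-ray
# ... five further examples in the same style ...

Q: {{question}}   A:"""
```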
#### 4.1.3 Comparison with SOTA
Figure 5: VQA Prompt Method. Elements in double braces are replaced with specific questions.

Upon scrutinizing the results of GPT-4V on the VQA-RAD test set, we find that the accuracy for closed-ended questions is 61.4% (Table 2), which is significantly lower than other published results. For open-ended questions, the calculated BLEU score is 0.1155, which also does not reach a high standard. The majority of currently available research primarily employs classification-based models to tackle Visual Question Answering (VQA) problems. This approach results in a lack of evaluations using BLEU, making it challenging to draw comparisons between different methods. However, upon analyzing the cases produced by GPT-4V, we postulate that the low BLEU score may be attributed to the excessive flexibility of GPT-4V, resulting in substantial deviations from the correct answers. This might also be due to some clear limitations of BLEU itself. BLEU lacks semantic understanding, as it mainly relies on the literal matching of n-grams and does not deeply understand context and semantics. It is insensitive to synonyms and diverse ways of expression: even if two sentences mean the same thing, if they use different words or phrasings, the BLEU score might end up being quite low. In simpler terms, BLEU struggles to recognize when different words mean the same thing, and this can lead to unfairly low scores even when the answers are correct. We hope that in the future, more advanced criteria capable of deeply understanding the semantics of text will be developed, providing more accurate and reliable assessments.
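For reference, a minimal sketch of the closed-ended accuracy metric reported above; the exact string normalization applied before comparison is an assumption:

```python
def closed_ended_accuracy(predictions, ground_truths):
    # Exact match after a light, assumed normalization.
    norm = lambda s: s.strip().lower().rstrip(".")
    hits = sum(norm(p) == norm(g) for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)
```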
### Case Study
We present case studies of VQA in Figures 6 and 7. From the case studies, we can tell that GPT-4V shows some limitations in the medical VQA domain. It shows a strong ability to determine whether a question is closed-ended or open-ended, and is almost always able to make a correct judgment. However, in answering some open-ended questions, it does not make full use of the image information, relying instead on the medical terms mentioned in the question itself and failing to make effective reference to the information in the medical images. For example, in the last case, GPT-4V only expanded on the nouns that appeared in the question without taking the medical image into account, resulting in an incorrect answer. There were also some instances of incorrect responses to closed-ended questions. Overall, performance did not meet expectations, and further improvement and optimization are needed for medical VQA tasks.
### Discussion
Our extensive evaluation and in-depth case studies of GPT-4V's performance on the VQA-RAD dataset have highlighted its potential capabilities as well as the areas that necessitate substantial improvement within the Medical Visual Question Answering (VQA) field.
While GPT-4V demonstrates proficiency in distinguishing between closed-ended and open-ended questions, its accuracy of 61.4% for closed-ended questions and low BLEU score of 0.1155 for open-ended questions signify a performance level that is considerably below the published benchmarks in this domain. This discrepancy underscores the need for more refined and optimized models that can more accurately interpret and respond to medical imagery. The capability to accurately identify whether a question is open-ended or closed-ended demonstrates GPT-4V's substantial reasoning skills. However, its occasional low accuracy could be attributed to an insufficient amount of training data. Acquiring general Visual Question Answering (VQA) data is relatively easy compared to procuring medical VQA data. This discrepancy is due to the labor-intensive and expensive nature of labeling medical data. Consequently, as the volume of training data in the medical domain increases, we can anticipate an enhancement in the performance of VQA applications.

Table 2: Results on the VQA-RAD benchmark (closed-ended accuracy, %).

| Reference Methods | Fusion Method | Closed-ended |
| --- | --- | --- |
| StAn (He et al., 2020) | SAN | 57.2 |
| BiAn (He et al., 2020) | BAN | 67.9 |
| MAML (Finn et al., 2017) | SAN | 69.7 |
| MAML (Finn et al., 2017) | BAN | 72.4 |
| MEVF (Nguyen et al., 2019) | SAN | 74.1 |
| MEVF (Nguyen et al., 2019) | BAN | 75.1 |
| MMQ (Do et al., 2021) | SAN | 75.7 |
| MMQ (Do et al., 2021) | BAN | 75.8 |
| PubMedCLIP (Eslami et al., 2021) | - | 80 |
| MMBERT (Khare et al., 2021) | - | 77.9 |
| Q2ATransformer (Liu et al., 2023) | - | 81.2 |
| GPT-4V (OpenAI, 2023) | - | 61.40 |
Furthermore, the limitations of BLEU as an evaluation metric, particularly its lack of semantic understanding and its insensitivity to diverse expressions and synonyms, have been highlighted. This brings to light the urgent need for the development of more advanced and semantically aware evaluation methods to provide accurate and reliable assessments of model performance in this field.
Figure 6: VQA prompt examples. Given few-shot prompts, GPT-4V can generate answers for the given image-question pairs; results for closed-ended questions are better than for open-ended questions.
## 5 Evaluation of Medical Visual Grounding
Visual grounding is one of the important tasks in the field of computer vision, aimed at enabling computers to understand natural language descriptions and associate them with specific regions in an image. This technique has great potential in areas such as medical image analysis. In this paper, we present the performance of GPT-4V on the MS-CXR dataset for visual grounding and compare it with current SOTA methods.
Figure 7: VQA Prompt examples. With the assistance of a few-shot prompt, GPT-4V has the capability to generate responses for open-ended questions, though there is room for refinement to enhance its performance.
### Evaluation
#### 5.1.1 Dataset
The MS-CXR (Boecking et al., 2022) dataset is a valuable resource for biomedical vision-language processing, featuring 1162 image-sentence pairs with bounding boxes and corresponding phrases. It was meticulously annotated by board-certified radiologists, covering eight cardiopulmonary radiological findings, each having an approximately equal number of pairs. This dataset offers both reviewed and edited bounding boxes/phrases and manually created bounding box labels from scratch. What sets MS-CXR apart is its focus on complex semantic modeling and real-world language understanding, challenging models with joint image-text reasoning and tasks like parsing domain-specific location references, complex negations, and variations in reporting style. It serves as a benchmark for phrase grounding and has been instrumental in demonstrating the effectiveness of principled textual semantic modeling for enhancing self-supervised vision-language processing.
#### 5.1.2 Overview of Prompt Methods
We explored various ways of instructing GPT-4V and found a specific format that helps it understand the task and makes it easier to produce bounding boxes. We chose this prompt after carefully checking which one worked best. We designed the prompt by following the template in Figure 8:
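Per the Figure 8 caption (shown further below), elements in double braces are replaced with the specific image width, height, and description text. A hedged sketch of such a template might look as follows; the exact wording is an assumption.

```python
GROUNDING_PROMPT = (
    "The attached image is {{width}} x {{height}} pixels. "
    "Locate the region described by: '{{description}}'. "
    "Reply with exactly one bounding box in pixel coordinates, "
    "formatted as [x_min, y_min, x_max, y_max]."
)
```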
#### 5.1.3 Comparison with SOTA
In order to compare with existing models, we use mean Intersection-over-Union (mIoU) as our evaluation metric. Upon evaluating GPT-4V's performance on the MS-CXR dataset, the calculated mIoU was found to be 0.0833. This result is markedly lower than all published benchmarks. Empirical evidence demonstrates that while GPT-4V comprehends the visual grounding task, it exhibits a deficiency in accurately identifying medical organs and pathological signs. Consequently, its bounding box predictions are imprecise.
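The mIoU computation itself is standard; a minimal sketch, assuming boxes are given as `[x_min, y_min, x_max, y_max]`:

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(pred_boxes, gt_boxes):
    return sum(iou(p, g) for p, g in zip(pred_boxes, gt_boxes)) / len(gt_boxes)
```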
Figure 8: VG Prompt Method. Elements in double braces are replaced with the specific image width, height, and description text related to the image.
Recently, SoM (Yang et al., 2023a) addressed this issue and made significant improvements. The approach in that paper involved first segmenting and labeling the image and then proceeding with grounding, which led to a substantial improvement in performance. However, that method was applied to generic images, and it is not certain whether it would yield equally impressive results for medical images, which require much finer-grained features. Further experiments will be necessary to validate its effectiveness in such contexts.
### Case study
From the case study, it appears that GPT-4V has the potential to generate bounding boxes, but its performance is notably poor. Although it attempts to localize the object, its predictions contain serious errors and uncertainty. This may be because GPT-4V has limitations in processing image information and is unable to fully understand and interpret the exact position of an object in the image. This is especially true for medical images, which demand attention to fine-grained features. Another possible reason for GPT-4V's poor performance could be that it was mainly trained on common, everyday images. The model needs far more data to work well and become reliable, so we conjecture that it underperforms because it did not have enough diverse labeled medical data to learn from.
### Discussion
Our comprehensive evaluation and case study of GPT-4V's capabilities in visual grounding highlight both its potential and its current limitations. While the model shows promise in producing bounding boxes, it falls significantly short in terms of accuracy, as evidenced by its low mIoU score on the MS-CXR dataset compared with existing benchmarks. Its inability to precisely identify medical organs and pathological signs leads to imprecise bounding box predictions, and this may be caused by a lack of training data. Given the inherent difficulty of acquiring sufficient labeled data in real-world scenarios, we hypothesize that GPT-4V's sub-optimal performance is attributable to the limited availability of training data.
In light of these findings, it is evident that GPT-4V requires further refinement and training to overcome its current limitations and to enhance its bounding box localization accuracy. Achieving better results in this area will require further model improvements and more data, making the model more useful and reliable in various applications.
Table 3: mIoU (%) results on the MS-CXR benchmark.

| Methods | mIoU |
| --- | --- |
| BioViL (Boecking et al., 2022) | 22.9 |
| BioViL-T (Bannur et al., 2023) | 24.3 |
| RefTR (Li & Sigal, 2021) | 50.11 |
| VGTR (Du et al., 2022) | 53.58 |
| SeqTR (Zhu et al., 2022) | 56.63 |
| TransVG (Deng et al., 2021) | 58.91 |
| MedRPG (Chen et al., 2023) | 59.37 |
| GPT-4V (OpenAI, 2023) | 8.33 |
Doing so will undoubtedly make GPT-4V a more reliable and valuable tool in various applications, fostering its integration and utility in practical, real-world scenarios, especially within the medical field. This journey towards improvement is not only necessary but also a crucial step in advancing the field of Visual Grounding and in unlocking the full potential of models like GPT-4V.
Figure 9: Visual grounding prompt examples. The bounding boxes in red are predicted by GPT-4V, and the green bounding boxes are ground truth. GPT-4V is capable of estimating bounding box coordinates for the reference text within an image; however, the results show that it cannot properly understand medical images.
## 6 Conclusion
The comprehensive assessment of GPT-4V's capabilities in Radiology Report Generation, Medical Visual Question Answering (VQA), and Visual Grounding offers a perspective on the model's potential and areas for improvement within the medical domain. GPT-4V's ability to generate radiology reports based on chest X-ray images is commendable, particularly when furnished with detailed prompts. This underscores the capacity of language models to aid in radiology diagnosis. Meanwhile, its challenges in recognizing uncommon terms and subtle differences specific to the MIMIC-CXR dataset underscore the necessity for domain-specific training and fine-tuning to elevate its proficiency in medical reporting.
Furthermore, although GPT-4V displays proficiency in distinguishing among various question types within the VQA-RAD dataset, its performance, especially for open-ended questions, falls short of public benchmarks. This sub-optimal performance reveals a gap in its comprehension and response capabilities related to medical imaging. Moreover, the limitations of current evaluation metrics like BLEU underscore the significance of constructing semantically aware evaluation methods to gain a holistic comprehension of the model's aptitude.

Figure 10: Some visual grounding prompt examples. The bounding boxes in red are predicted by GPT-4V, and the green bounding boxes are ground truth.
The Visual Grounding evaluation further showed the difficulties of GPT-4V in achieving high precision in bounding box localization within medical images. These limitations, particularly its struggles in identifying medical organs and pathological indicators, underscore the urgent requirement for specialized training and model improvements to enhance its grounding capabilities.
In summary, GPT-4V demonstrates remarkable potential across various medical image analysis domains. Meanwhile, its current limitations underscore the necessity for domain-specific enhancements. Exploring dedicated training on medical datasets, designing comprehensive prompt methodologies, and advancing evaluation techniques will need further research. | この研究は、GPT-4Vの多 multimodal 能力を評価し、特に放射線画像解析に焦点を当て、放射線画像報告生成、医療画像質問応答、および医療画像 grounding の3つの代表的なタスクを対象とした。評価のために、各タスクごとにプロンプトセットが設計されており、GPT-4Vの対応する能力を誘発し、十分な品質の出力を作成するように促している。計量分析、人間評価、事例検討の3つの評価方法が用いられ、深層的かつ広範な評価を実現している。評価結果によると、GPT-4Vは医学画像の理解に優れており、高品質な放射線画像報告を作成し、医療画像に関する質問に効果的に回答することができた。一方で、医療画像の grounding には性能が大幅に向上させる必要があることが示唆された。さらに、計量分析から得られる評価結果と人間評価から得られる評価結果の |
2309.14147 | Comparison of distance measurements to dust clouds using GRB X-ray halos
and 3D dust extinction | X-ray photons from energetic sources such as gamma-ray bursts (GRBs) can be
scattered on dust clouds in the Milky Way, creating a time-evolving halo around
the GRB position. X-ray observations of such halos allow the measurement of
dust clouds distances in the Galaxy on which the scattering occurs. We present
the first systematic comparison of the distances to scattering regions derived
from GRB halos with the 3D dust distribution derived from recently published
optical-to-near infrared extinction maps. GRB halos were observed around 7
sources by the Swift XRT and the XMM-Newton EPIC instruments, namely GRB
031203, GRB 050713A, GRB 050724, GRB 061019, GRB 070129, GRB 160623A and GRB
221009A. We used four 3D extinction maps that exploit photometric data from
different surveys and apply diverse algorithms for the 3D mapping of
extinction, and compared the X-ray halo-derived distances with the local maxima
in the 3D extinction density distribution. We found that in all GRBs we can
find at least one local maximum in the 3D dust extinction map that is in
agreement with the dust distance measured from X-ray rings. For GRBs with
multiple X-ray rings, the dust distance measurements coincide with at least 3
maxima in the extinction map for GRB 160623A, and 5 maxima for GRB 221009A. The
agreement of these independent distance measurements shows that the methods
used to create dust extinction maps may potentially be optimized by the X-ray
halo observations from GRBs. | Barbara Šiljeg, Željka Bošnjak, Vibor Jelić, Andrea Tiengo, Fabio Pintore, Andrea Bracco | 2023-09-25T14:00:01 | http://arxiv.org/abs/2309.14147v1 | # Comparison of distance measurements to dust clouds using GRB X-ray halos and 3D dust extinction
###### Abstract
X-ray photons from energetic sources such as gamma-ray bursts (GRBs) can be scattered on dust clouds in the Milky Way, creating a time-evolving halo around the GRB position. X-ray observations of such halos allow the measurement of dust clouds distances in the Galaxy on which the scattering occurs. We present the first systematic comparison of the distances to scattering regions derived from GRB halos with the 3D dust distribution derived from recently published optical-to-near infrared extinction maps. GRB halos were observed around 7 sources by the _Swift_ XRT and the _XMM-Newton_ EPIC instruments, namely GRB 031203, GRB 050713A, GRB 050724, GRB 061019, GRB 070129, GRB 160623A and GRB 221009A. We used four 3D extinction maps that exploit photometric data from different surveys and apply diverse algorithms for the 3D mapping of extinction, and compared the X-ray halo-derived distances with the local maxima in the 3D extinction density distribution. We found that in all GRBs we can find at least one local maximum in the 3D dust extinction map that is in agreement with the dust distance measured from X-ray rings. For GRBs with multiple X-ray rings, the dust distance measurements coincide with at least 3 maxima in the extinction map for GRB 160623A, and 5 maxima for GRB 221009A. The agreement of these independent distance measurements shows that the methods used to create dust extinction maps may potentially be optimized by the X-ray halo observations from GRBs.
keywords: X-rays: ISM - dust, extinction - gamma-ray burst: general
## 1 Introduction
The possibility of using X-ray scattering on interstellar dust grains to study the properties of dust such as its spatial distribution and the dust population, was pointed out early by several authors (e.g., Overbeck, 1965; Martin, 1970). The observations were limited by the imaging capabilities of the early X-ray telescopes, and the first dust halos were observed only in the eighties by the Einstein Observatory around bright Galactic sources (Rolf, 1983; Catura, 1983). The theory of X-ray scattering from astrophysical sources was detailed in a number of works (see e.g., Mauche & Gorenstein, 1986; Mathis & Lee, 1991; Smith & Dwek, 1998; Draine, 2003; Xiang et al., 2011). The observed intensity of the X-ray halo at different energies depends on the energy spectrum of the source, column density of dust and the differential scattering cross section. Investigating the energy- and time-dependence of scattering halos is thus crucial to infer the properties of grain sizes, chemical abundances, distances and spatial distribution of the dust layers (e.g. Trumper & Schonfelder, 1973; Mathis & Lee, 1991; Miralda-Escude, 1999; Predehl et al., 2000; Draine, 2003; Costantini et al., 2005). The differential cross section can be calculated using the exact Mie solution for scattering on spherical particles or adopting the Rayleigh-Gans approximation, which is valid above \(\sim 2\) keV (Mauche & Gorenstein, 1986; Mathis & Lee, 1991; Predehl & Schmitt, 1995; Smith & Dwek, 1998). The X-ray scattering by nonspherical grains was calculated by e.g. Draine & Allaf-Akbari (2006) who showed that substantial anisotropy of the X-ray halo may be expected for aligned interstellar grains and realistic size distributions.
The search for X-ray halos around bright Galactic X-ray sources was performed using different surveys, e.g. _ROSAT_ by Predehl & Schmitt (1995) or _Chandra_ and _XMM-Newton_ by Valencic & Smith (2015). Rings, or halos, were also detected around a plethora of
magnetars (e.g. Tiengo et al., 2010; Svirski et al., 2011; Mereghetti et al., 2020).
Gamma-ray bursts (GRBs), as impulsive bright X-ray events, offer a tool to infer the distance of the intervening dust when located behind sufficiently large Galactic column densities along the line of sight. For short X-ray impulses scattered by individual dust clouds, the X-rays scattered at larger angles with respect to the line of sight will arrive at the observer with a time delay, and expanding rings will be formed. To date, measurements of dust-layer distances and modelling of the energy-dependent radial profiles of X-ray halos have been performed only on a limited sample of observed halos surrounding GRB sources (Vaughan et al., 2004, 2006; Tiengo and Mereghetti, 2006; Vianello et al., 2007; Pintore et al., 2017; Tiengo et al., 2023; Vasilopoulos et al., 2023; Williams et al., 2023). The increasing importance of such observations was recently pointed out by Nederlander and Paerels (2020) who proposed X-ray halo observations as a tool for locating the electromagnetic counterparts to gravitational wave sources.
The cross-section for scattering by dust increases rapidly with grain size and X-ray sources can contribute to the current constraints obtained from optical, ultraviolet and infrared observations, providing complementary information on the properties of large particle grains (larger than a few \(\mu\)m, Mathis and Lee, 1991). The sky regions with detected halos around GRBs were studied in different energy bands: e.g. Vaughan et al. (2006) used X-ray 0.4 - 1.2 keV _ROSAT_ all-sky survey data and the _IRAS_ all-sky survey 100 \(\mu\)m map around GRB 050724, showing that the infrared (IR) dust emission and soft X-ray absorption were correlated and therefore caused by the same medium. On the contrary, the 21-cm map of atomic hydrogen (H I) of the region showed no correlation with these images, suggesting the comparatively lower density of H I. Pintore et al. (2017) measured distances of dust layers from X-ray observations of halo from GRB 160623A and compared them to a 3D map of interstellar reddening (Green et al., 2015). They found high levels of extinction at several distances, with the largest extinction coinciding with the main dust layer identified in the X-ray data. The H I profile showed a peak possibly associated with the closest clouds identified in X-ray data, \(\sim\)0.5-1 kpc, and an extended region. The H\({}_{2}\) profile showed a peak at a different distance, \(\sim\) 2 kpc (Pintore et al., 2017).
The brightest GRB of all time, GRB 221009A (Burns et al., 2023), occurred at low Galactic latitude (\(b=4\fdg 3\)) and produced more than 20 bright X-ray rings, observed by _Swift_ (Vasilopoulos et al., 2023; Williams et al., 2023), IXPE (Negro et al., 2023) and _XMM-Newton_ (Tiengo et al., 2023). In particular, Tiengo et al. (2023) reported _XMM-Newton_ observations of 20 rings around GRB 221009A, resulting from scattering on dust layers at distances from 300 pc to 18.6 kpc. They used the column density based on 3D extinction maps to estimate the GRB fluence, which allowed them to constrain the prompt X-ray emission of the burst in the 0.5-5 keV energy band.
Similar studies have been performed for the dust scattering halos in the supernova remnant HESS J1731-347 (Landstorfer et al., 2022), where the dust distribution estimated from the 3D extinction maps from Lallement et al. (2019) was used to constrain the source distance, in addition to _Chandra_ observations. These examples demonstrate that only combining different approaches helps to properly determine the distance of the scattering dust layers and to understand the physical process behind the observed X-ray halo intensity distribution.
In this work, for the first time, the distances to dust clouds obtained using GRB X-ray observations are systematically compared with 3D maps of Galactic interstellar dust reconstructed through the tomographic inversion of extinction measurements toward stars with known distances. A large sample of reliably measured stellar data is required for this method to be successful. This became possible with the availability of massive stellar surveys, such as 2MASS (Skrutskie et al., 2006), ALLWISE (Wright et al., 2010; Mainzer et al., 2011), Pan-STARRS (Chambers et al., 2016), and the recent arrival of the _Gaia_ mission (Gaia Collaboration et al., 2016, 2018, 2021). Among a number of available 3D maps (e.g. Sale and Magorrian, 2018; Chen et al., 2019; Rezaei Kh. et al., 2020; Hottier et al., 2020; Guo et al., 2021), we used a representative sample, produced by Green et al. (2019), Leike et al. (2020) and Lallement et al. (2019, 2022). They differ in the choice of the data used for extinction, stellar distance measurements, and applied inversion techniques.
The paper is organized as follows. In Sect. 2 we present the current status of the X-ray halo observations for a sample of gamma-ray bursts and the methods used to determine the distance to the scattering dust layers. Section 3 describes the available 3D extinction maps of Galactic interstellar dust and the methods used to generate them. We present the case study of GRB 160623A, for which we show the extinction density distribution and compare it with the distances of dust layers along the line of sight obtained from X-ray data. The same method was applied to the whole sample of GRBs for which X-ray halos were observed. We discuss our results and possible discrepancies between the dust layer distances determined from these two methods in Sect. 4.
## 2 Determination of distances from X-ray halo observations
Dust scattering halos are nowadays often observed around bright X-ray objects, the number of which is increasing thanks to the imaging capabilities of the current X-ray instruments onboard _XMM-Newton_, _Chandra_ and _Swift_. The basic process responsible for dust halos is the scattering of X-ray photons by grains of the interstellar dust layers between us and the X-ray source. The scattering angles involved in this process are small, see e.g. Draine (2003). Scattered X-ray photons arrive with a certain delay, related to the travelled path, with respect to the unscattered photons. Therefore, slow flux variations of the illuminating central X-ray object can be observed in changes of the dust halo flux. When the X-ray source is impulsive, as in the case of bursts, flares, or GRBs, the dust scattering halo is observed as an expanding ring. In the thin layer approximation for the intervening dust cloud, the angular radius \(\theta(t)\) of the ring can be expressed as:
\[\theta(t)=\left[\frac{2c}{d}\frac{(1-x)}{x}(t-\mathrm{T}_{0})\right]^{0.5}, \tag{1}\]
where \(x=d_{\mathrm{dust}}/d\) (\(d\) and \(d_{\mathrm{dust}}\) are the impulsive source and dust layer distances, respectively), \(c\) is the speed of light and \(\mathrm{T}_{0}\) is the time of the burst. This relation shows that, when \(d\) is much larger than \(d_{\mathrm{dust}}\) (as it is the case for GRBs), it could be simplified as follows:
\[\theta(t)\approx\left[\frac{2c(t-\mathrm{T}_{0})}{d_{\mathrm{dust}}}\right]^{0.5}, \tag{2}\]
removing the previous degeneracy between the source and the dust-layer distances. This is the case for GRBs illuminating dust layers in our Galaxy. Once the ring and its expansion rate are measured, the dust layer distance can be determined unambiguously. Therefore, this method can be used to map dust regions of our Galaxy with high precision.
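For a quick numerical check of Eq. (2), one can invert it to obtain \(d_{\mathrm{dust}}=2c(t-\mathrm{T}_{0})/\theta^{2}\). The following sketch (our illustration; the ring radius and time delay are made-up example values) performs this inversion:

```python
import numpy as np

C_PC_PER_S = 9.7156e-9                    # speed of light in pc/s
ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)  # arcsec -> radians

def dust_distance(theta_arcsec, dt_seconds):
    """Invert Eq. (2): d_dust = 2 c (t - T0) / theta^2.
    Valid when the GRB is much more distant than the dust layer."""
    theta = np.asarray(theta_arcsec) * ARCSEC_TO_RAD
    return 2.0 * C_PC_PER_S * np.asarray(dt_seconds) / theta**2

# Example: a ring of radius 200 arcsec observed one day after the burst
print(f"{float(dust_distance(200.0, 86400.0)):.0f} pc")   # ~1800 pc
```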
### Gamma-ray bursts with observed X-ray halo
Currently, expanding halos have been observed only for a handful of GRBs (see Table 1). The individual distances to dust layers were determined using different methods for different GRBs. For the analysis of the X-ray halo around GRB 031203, Vaughan et al. (2004) used _XMM-Newton_ EPIC MOS (0.7 - 2.5 keV) data and created a background-subtracted radial profile of counts from several time intervals of \(\sim 6000\) s duration. For this particular GRB, there were two peaks corresponding to the two expanding rings visible in the radial profiles; the change of radii as a function of time was found to be consistent with Eq. 2. The halo spectrum can be extracted from the annular region. For GRB 050724 the spectral model used to fit the halo spectrum was an absorbed power law (Vaughan et al., 2006). As expected, the halo spectrum was found to be steeper than the GRB X-ray spectrum due to the strong dependence of the scattering cross-section on energy.
Tiengo & Mereghetti (2006) proposed a new method to analyze time variations of the dust-scattering halos, based on the construction of the so-called dynamical image. It consists of a 3D histogram containing the number of counts, their arrival position with respect to the GRB, and their arrival time. In such a representation, the expanding ring is visible as a straight line whose slope is inversely proportional to the distance of the scattering layer (Tiengo & Mereghetti, 2006; Vianello et al., 2007). For each detected count in this histogram, the distance \(d_{i}=2c(t_{i}-T_{0})/\theta_{i}^{2}\) is computed. The distribution of \(d_{i}\) includes both the halo photons and the background counts. The dust scattering rings result in clear peaks superimposed on the background contribution, which, if homogeneous, is distributed as a power law with index \(-2\). The peaks were fitted with Lorentzian curves centered at the scattering layer distance. In Table 1, we present the dust scattering distances determined for the sample of all GRBs for which an X-ray halo has been observed to date. We provide the coordinates, the redshift, the fluence, the duration of the event, and the derived distances to the dust scattering layers, including the FWHM of the fitted Lorentzian functions.
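To make the procedure concrete, the following sketch (ours, on synthetic data; the layer distance, widths and count numbers are invented for illustration) builds the \(d_{i}\) distribution for one ring over a homogeneous background and recovers the layer distance with a Lorentzian-plus-power-law fit:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic stand-in for the per-count pseudo-distances d_i = 2c(t_i - T0)/theta_i^2:
# 2000 ring counts from a dust layer at 530 pc (Lorentzian, HWHM 25 pc) on top of
# 5000 background counts whose d_i distribution falls off as d^-2.
d_i = np.concatenate([
    530.0 + 25.0 * rng.standard_cauchy(2000),
    100.0 / rng.uniform(0.02, 1.0, 5000),
])
d_i = d_i[(d_i > 100.0) & (d_i < 5000.0)]

hist, edges = np.histogram(d_i, bins=200)
ctr = 0.5 * (edges[:-1] + edges[1:])

def model(d, n0, a, d0, g):
    """Power-law background (index -2) plus one Lorentzian peak."""
    return n0 * d**-2.0 + a * g**2 / ((d - d0) ** 2 + g**2)

popt, pcov = curve_fit(model, ctr, hist, p0=[1e7, 600.0, 500.0, 30.0])
print(f"dust layer at {popt[2]:.0f} pc, Lorentzian HWHM {abs(popt[3]):.0f} pc")
```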
## 3 3D maps of Galactic interstellar dust towards the GRBs
The distances to dust clouds measured using X-ray data can be better understood by examining the distribution of the dust along the GRB direction to the observer. In this section we study the local increases in the dust distribution towards the GRB, as measured by 3D extinction maps, and compare their locations with the distances of dust clouds obtained from X-ray halos. We used four 3D extinction maps from Green et al. (2019), Lallement et al. (2019, 2022) and Leike et al. (2020), hereafter G19, L19, L22, and Le20 map, respectively. They differ in the choice of input data, applied reconstruction techniques, and extent of the mapped volume in the Galaxy. In addition, the G19 map is based on the spherical coordinate system which voxelizes the sky into pencil beams centered at the Sun, while L19, Le20 and L22 maps are based on the Cartesian coordinate system centered at the Sun. Their brief overview is given in the following subsection.
### 3D dust extinction maps
The G19 map combines stellar photometry from _Gaia_ Data Release 2 (DR2), Pan-STARRS 1 and 2MASS for the extinction estimates towards the stars with _Gaia_ DR2 parallaxes for the stellar distances. The dust distribution is inferred along each sightline by taking into account a spatial prior that correlates nearby sightlines. Details of this technique are presented in Green et al. (2019). The map gives the cumulative extinction along sightlines in 120 logarithmically spaced bins of distances from 63 pc to 63 kpc. It covers sightlines in the sky north of a declination of \(-30^{\circ}\). The angular resolution varies between 3.4 to 13.7 arcmin depending on the sky region. The extinction is given in arbitrary units that can be converted to magnitude in different bands using the coefficients in Table 1 of Green et al. (2019). We use the \(r\)-band magnitude (\(A_{r}\), effectively a magnitude at 6170 Å) of the Pan-STARRS 1 survey. The map is publicly available at the website1 and within the Python package dustmaps (Green, 2018).
Footnote 1: [http://argonaut.skymaps.info](http://argonaut.skymaps.info)
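As an aside, querying the G19 map programmatically is straightforward; a minimal sketch (ours, assuming the Bayestar2019 data files have already been downloaded via the package's fetch utilities, and with an illustrative distance grid) could be:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from dustmaps.bayestar import BayestarQuery   # pip install dustmaps

# Query the G19 (Bayestar2019) map along the sightline of GRB 160623A
# (l = 84.2 deg, b = -2.7 deg).
bayestar = BayestarQuery(version='bayestar2019')
dists = np.linspace(63.0, 2500.0, 200) * u.pc
coords = SkyCoord(l=84.2 * u.deg, b=-2.7 * u.deg,
                  distance=dists, frame='galactic')

ebv = bayestar(coords, mode='median')   # cumulative extinction, arbitrary units
A_r = 2.617 * ebv                       # Pan-STARRS r band (Table 1 of G19)
```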
The L19 map combines _Gaia_ DR2 and 2MASS photometric data with _Gaia_ DR2 parallaxes. The dust distribution is inferred by the tomographic inversion of extinction measurements using a regularized Bayesian hierarchical technique described in Lallement et al. (2019). This technique takes into account the spatial correlation of structures and adapts the resulting map resolution to the availability of measurements within a given region. The map has a resolution of 25 pc for structures within 1 kpc from the Sun and up to 500 pc in a few regions more distant than 3 kpc from the Sun. It covers \(6\times 6\times 0.8\) kpc\({}^{3}\) volume around the Sun and is publicly available in VizieR2. The map provides the extinction densities (or differential extinction, \(\mathrm{d}A/\mathrm{d}r\)) in mag pc\({}^{-1}\) with magnitude defined at a wavelength of 5500 Å (\(A_{0}\)).
Footnote 2: [http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/625/A135](http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/625/A135)
Footnote 3: [https://explore-platform.eu](https://explore-platform.eu)
The L22 map is an updated version of the L19 map. It combines _Gaia_ Early Data Release 3 (EDR3) and 2MASS photometric data with _Gaia_ EDR3 parallaxes. The inversion technique and the computational volume used is the same as in the L19 map. The larger available sample of stars and better accuracy of the Gaia EDR3 data compared to DR2 improved contrast between the peak densities and void regions in the L22 map and increased distances at which the structures are reconstructed. The map provides extinction densities in mag pc\({}^{-1}\) with \(A_{0}\), as in L19. In addition, this map provides error estimates based on measured photometric and parallax errors, on availability of the measurements within some region, and on the correlation length used in the computation. The map has a resolution of 25 pc and is publicly available on the EXPLORE website3.
The Le20 map combines _Gaia_ DR2, 2MASS, PANSTARRS, and ALLWISE photometric data and _Gaia_ DR2 parallaxes. The tomographic reconstruction is done on a smaller volume but at a higher resolution than the G19, L19 and L22 maps using variational inference and Gaussian processes. The map covers \(0.74\times 0.74\times 0.54\) kpc\({}^{3}\) volume around the Sun and has a resolution of 2 pc. The map provides extinction densities in mag pc\({}^{-1}\) defined in the natural logarithmic units of the G-band magnitude (\(A_{G}\)), effectively at 6400 Å. The map is publicly available at VizieR4 and within dustmaps.
Footnote 4: [http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/639/A138](http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/639/A138)
### Extracting 3D maps towards the GRB halos
To compare positions of peak extinction densities corresponding to dust layers from these maps with measurements done by tracing the X-ray halos of GRBs, we extracted extinction density distributions along the line of sight of each GRB in the sample. In the next section, the methodology is described for the case of GRB 160623A, while for the other GRBs in the sample, results are presented in Sect. 3.2.2 and in Appendix A.
#### 3.2.1 Case study: GRB 160623A
We used GRB 160623A as a case study for our methodology, as X-ray observations of this GRB using _XMM-Newton_ in the 1-2 keV energy band clearly showed six distinct X-ray halos, corresponding to different locations of dust layers along the line of sight.
We used linear interpolation to extract the 3D extinction density from the L19 map in the direction of the GRB. For the L22 map, we used the G-Tomo app on the EXPLORE website, which queries the data for given coordinates and distances. Similarly, we used the Python package dustmaps for the Le20 and G19 maps. To obtain the extinction density from the cumulative extinction of the G19 map, we took the derivative of the output of the dustmaps query and multiplied it by the corresponding coefficient (2.617) from Table 1 in G19 to get the values in the \(r\)-band of the Pan-STARRS 1 survey.
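The extraction itself can be reproduced with standard tools; the sketch below (our illustration, with a zero-filled placeholder cube standing in for the real L19/L22 data) samples a Sun-centred Cartesian extinction-density cube along the GRB sightline and shows the derivative-plus-coefficient step used for the cumulative G19 profile:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical Sun-centred Cartesian extinction-density cube (mag/pc), as in
# the L19/L22 maps; the axes (pc) and the zero-filled cube are placeholders.
x = y = np.linspace(-3000.0, 3000.0, 241)
z = np.linspace(-400.0, 400.0, 33)
cube = np.zeros((x.size, y.size, z.size))

interp = RegularGridInterpolator((x, y, z), cube,
                                 bounds_error=False, fill_value=np.nan)

l, b = np.radians(84.2), np.radians(-2.7)   # Galactic coordinates of GRB 160623A
r = np.linspace(0.0, 2500.0, 500)           # distances along the sightline [pc]
pts = np.column_stack([r * np.cos(b) * np.cos(l),
                       r * np.cos(b) * np.sin(l),
                       r * np.sin(b)])
density_l19 = interp(pts)                   # dA/dr sampled along the line of sight

# For a cumulative-extinction map such as G19, differentiate instead and apply
# the r-band coefficient from Table 1 of Green et al. (2019):
# density_g19 = 2.617 * np.gradient(cumulative_extinction, r)
```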
The results for each map are presented in Fig. 1. The errors on the distributions are available for the L22, Le20 and G19 data and are plotted as grey areas in the figures. We were limited by the available distance coverage of the L19, L22 and Le20 maps. The G19 dust distribution is presented for distances up to 2.5 kpc to focus on the distances relevant for our analysis (see Table 1). Vertical red lines and red shaded areas mark the distances and corresponding errors of the X-ray halo measurements from Table 1. The blue shaded areas show the regions covered by the _FWHM_ of the Lorentzian functions fitted in the \(d_{i}\) distribution of counts from the dynamical image, see, e.g., Tiengo & Mereghetti (2006).
The presented distributions of the L19, L22 and Le20 maps are taken at the exact position of the GRB because the resolution of these maps is larger than the size of X-ray halos in the sky (of the order of a few arcmin). On the other hand, the G19 map has a resolution comparable to the halo sizes. We examined the surroundings of the GRB sky position at distances where the dust layer is measured from X-ray data. In Fig. 2, we show the integrated extinction from the L22 and G19 maps near the GRB 160623A position around the distance (\(\pm\)20 pc) of the closest dust cloud at 528.1 pc, as measured from the X-ray data. The position of the ring is marked with a red circle in the plot. Due to its high resolution, the G19 map shows small-scale fluctuations that are not present in the L22 map. Therefore, the simple extraction of one specific line of sight from this map is unreliable for this kind of study, as nearby sightlines (a few arcmin apart) can differ significantly in their extinction density distribution. Given this limitation, we are not using the G19 map for further analysis.
To visualize the GRB surroundings along the line of sight, we made 2D cuts through L19 and L22 maps perpendicular to the Galactic plane (Fig. 3). The line of sight toward the GRB is marked with a white line, while the corresponding measured distances of dust clouds are marked with red dots. The errors on distances and the
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline GRB & \(z\) & fluence & T\({}_{90}\) & \(l\) & \(b\) & instrument & distance & FWHM & ref. \\ & & [ \(10^{-7}\) erg/cm\({}^{2}\)] & [s] & [\({}^{\circ}\)] & [\({}^{\circ}\)] & & [pc] & [pc] & \\ \hline \hline
031203 & 0.105 & 10.6\({}^{*}\) & 19\({}^{*}\) & 256 & -5 & _XMM-Newton_ & 870 \(\pm\) 5 & 82 \(\pm\) 16 & T06, V04 \\ & & & & & & & 1384 \(\pm\) 9 & 240 \(\pm\) 30 & \\ \hline
050713A & & 7.1 & 125 & 112 & 19 & _XMM-Newton_ & 364 \(\pm\) 6 & 33 \(\pm\) 15 & T06 \\ \hline
050724 & 0.258 & 2.1 & 99 & 350.4 & 15.1 & _Swift_ XRT & 139\(\pm\) 9 & - & V06 \\ \hline
061019 & & 5.1 & 180 & 181.7 & 4.3 & _Swift_ XRT & 941 \(\pm\) 45 & 427 \(\pm\) 107 & V07 \\ \hline
070129 & 2.338 & 6.6 & 460 & 157.2 & -44.7 & _Swift_ XRT & 150 & - & V07 \\ & & & & & & 290 & & & \\ \hline
160623A & 0.367 & 120\({}^{*}\) & 107.8\({}^{*}\) & 84.2 & -2.7 & _XMM-Newton_ & 528.1 \(\pm\) 1.2 & 23.4 \(\pm\) 3.3 & P17 \\ & & & & & & & 679.2 \(\pm\) 1.9 & 32.2 \(\pm\) 5.7 & \\ & & & & & & & 789.0 \(\pm\) 2.8 & 75 \(\pm\) 10 & \\ & & & & & & & 952 \(\pm\) 5 & 116 \(\pm\) 15 & \\ & & & & & & & 1539 \(\pm\) 20 & 106 \(\pm\) 60 & \\ & & & & & & & 5079 \(\pm\) 64 & 1000 \(\pm\) 400 & \\ \hline
221009A & 0.151 & 740\({}^{*}\) & 284\({}^{*}\) & 52.9 & 4.3 & _XMM-Newton_ & 300 \(\pm\) 2 & 62 \(\pm\) 10 & T23 \\ & & & & & & & 406.3 \(\pm\) 0.2 & 26.9 \(\pm\) 0.7 & \\ & & & & & & & 439.8 \(\pm\) 0.5 & 14.6 \(\pm\) 1.9 & \\ & & & & & & & 475.2 \(\pm\) 0.3 & 30.9 \(\pm\) 0.9 & \\ & & & & & & & 553.6 \(\pm\) 0.3 & 27.7 \(\pm\) 1.0 & \\ & & & & & & & 695.4 \(\pm\) 1.2 & 23.1 \(\pm\) 3.7 & \\ & & & & & & & 728.6 \(\pm\) 1.1 & 42.7 \(\pm\) 2.5 & \\ & & & & & & & 1027.3 \(\pm\) 5.2 & 38.1 \(\pm\) 8.7 & \\ & & & & & & & 1161.7 \(\pm\) 2.5 & 99 \(\pm\) 21 & \\ & & & & & & & 1831 \(\pm\) 13 & 121 \(\pm\) 44 & \\ & & & & & & & 1973 \(\pm\) 10 & 141 \(\pm\) 52 & \\ & & & & & & & 2129 \(\pm\) 5 & 135 \(\pm\) 14 & \\ & & & & & & & 2599 \(\pm\) 5 & 164 \(\pm\) 18 & \\ & & & & & & & 3075.5 \(\pm\) 7.4 & 309 \(\pm\) 28 & \\ \hline \end{tabular}
\end{table}
Table 1: Gamma-ray bursts for which the time variable X-ray halo was observed and the dust layer distances were determined from X-ray observations by _Swift_ XRT/_XMM-Newton_. References for distance measurements: Tiengo & Mereghetti (2006) (T06); Vaughan et al. (2004) (V04); Vaughan et al. (2006) (V06); Vianello et al. (2007) (V07); Pintore et al. (2017) (P17); Tiengo et al. (2023) (T23). Fluences are reported for the lowest energy band available. For GRBs observed by _Swift_, fluences are measured in the 15-25 keV energy band, and the duration T\({}_{90}\) is determined using _Swift_ BAT (15-150 keV), see Lien et al. (2016). Values derived from different energy bands/instruments are marked with (\({}^{*}\)). For GRB 031203 we adopted the values from the _INTEGRAL_ GRB catalog, where the 20-200 keV energy band was used for the fluence and for T\({}_{90}\) (Bošnjak et al., 2014; Vianello et al., 2009). For GRB 160623A the fluence is obtained by extrapolating the Konus-Wind spectrum in the 0.3-10 keV range. The duration T\({}_{90}\) for this burst was determined in the _Fermi_ GBM energy band. The duration of GRB 221009A is adopted from Frederiks et al. (2023) and was estimated in the 80-320 keV band. The fluence for this burst is calculated in the 15-150 keV band (Krimm et al., 2022). FWHM refers to the width of the Lorentzian fitted in the distribution of distances derived from the dynamical image (Tiengo & Mereghetti, 2006). For GRB 070129 and GRB 050724, the analysis was based on a different method and no FWHM was estimated.
_FWHM_ of Lorentzians are given as red and blue lines perpendicular to the sightline, respectively.
The first three locations of the scattering dust regions are visible in the L19 and L22 maps. The farthest two scattering layers are not identified, as the resolution of these maps decreases at distances above \(\sim\) 1 kpc. There is good agreement between the positions of the maximum extinction in the L19 distribution and the positions of the first four scattering layers determined from the X-ray observations (see Table 1). The position of the fourth extinction maximum is instead shifted towards lower values in the L22 map. Moreover, the second extinction peak is higher than the third one in the L19 map, whereas the opposite is found in the L22 map. The reason for this can be understood from Fig. 3. As mentioned in Sect. 3.1, the number of sources used for the L22 map is larger than for the L19 map, resulting in increased contrast between the peak densities and void regions. Note that the Le20 data do not cover the distance to these dust layers and show only the structures within a distance of 100 pc in this case.
#### 3.2.2 Comparison of different distance measurements for the GRB sample
The same method was applied to the whole sample of GRBs with observed halos. The results of our analysis are shown in Appendix A for all GRBs. We compared the scattering layer distances determined from the extinction density distribution for each GRB with those determined from the X-ray data. In Table 2 we give the positions of the local maxima in the extinction density distributions for the L19, L22 and Le20 maps that are closest to the dust layer positions determined from the X-ray studies. For the L22 map, Gaussians were fitted to local maxima that do not overlap with nearby structures.
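One possible implementation of this bookkeeping (our sketch, not the authors' code; the fit window and prominence threshold are illustrative choices) locates the local maxima of an extinction-density profile and fits a Gaussian to each isolated peak:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def gauss(r, a, r0, sig):
    return a * np.exp(-0.5 * ((r - r0) / sig) ** 2)

def peak_positions(r, density, window=100.0):
    """Return (centre, FWHM) in pc for each local maximum of the profile."""
    idx, _ = find_peaks(density, prominence=0.1 * np.nanmax(density))
    results = []
    for i in idx:
        m = np.abs(r - r[i]) < window
        try:
            (a, r0, sig), _ = curve_fit(gauss, r[m], density[m],
                                        p0=[density[i], r[i], 25.0])
            results.append((r0, 2.3548 * abs(sig)))  # FWHM = 2 sqrt(2 ln 2) sigma
        except RuntimeError:                          # overlapping/unfittable peak
            results.append((r[i], np.nan))
    return results
```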
* _GRB 031203:_ the _XMM-Newton_ observations in the 0.7 - 2.5 keV band revealed two expanding rings centred on the GRB. The rings were associated with X-ray scattering on two distinct dust layers in the Galaxy, the closest located at \(\sim\) 880 pc and the farther one at \(\sim\) 1390 pc (Vaughan et al. 2004). As shown in Fig. A1, the L19 and L22 maps show one distinct extinction density maximum corresponding to the smaller distance, while the Le20 map does not cover the large distances at which the dust layers were identified. The second layer distance determined from the X-ray observations coincides with an elongated dust profile rather than a distinct peak. This is also visible in the 2D cut of the extinction density cube for GRB 031203 (last two rows in Fig. A1). Note that the FWHM of the Lorentzian fitted in the distance distribution from the dynamical image of GRB 031203 was rather large, 240 \(\pm\) 30 pc. It corresponds to the wider line in the dynamical image, and this FWHM reflects the size of the region where the increase in extinction is seen in the maps in Fig. A1.
* _GRB 050713A:_ the scattering halo was not visible, but the dynamical image identified a dust-scattering layer. There is only one clear maximum in the extinction density distribution, visible in all maps (L19, L22 and Le20) shown in the first two rows of Fig. A2, and it agrees well with the X-ray data. It is also visible in the 2D cut of the extinction density maps, shown in Fig. A2, as a single maximum along the line of sight.
* _GRB 050724:_ As shown in Fig. A3, there is structure in the extinction maps along the line of sight to this GRB, including the region of the Ophiuchus molecular cloud complex, which is therefore a plausible site for the scattering dust (Vaughan et al. 2006). The correlation between the X-ray absorption and the IR dust emission pointed towards the same material in which these processes occurred. There is good agreement with the highest maximum in the extinction density distribution from the L19 and L22 maps, but the rest of the complex structure visible in the maps is not seen in the X-ray data. On the other hand, Le20 has a higher resolution and better shows the distinction between the higher maximum at \(\sim\)150 pc and the much lower local maximum at \(\sim\)280 pc. Considering the error, the dust distance measured from the X-ray data agrees with the first (and largest) extinction peak in the Le20 map.
* _GRB 061019:_ The dynamical image for this burst showed a rather wide line formed by the halo events (Vianello et al. 2007). This is reflected in the large FWHM value (427 \(\pm\) 107 pc) of the Lorentzian fitted in the distance distribution. Interestingly, the extinction density profile, shown in Fig. A4, shows several maxima distributed over an extensive range of distances, with the X-ray scattering layer distance closest to the position of the most significant maximum in the L19 and L22 maps. The Le20 map shows only a shallow local maximum at \(\sim\)150 pc.
* _GRB 070129:_ The halo around this source had a relatively low number of counts, and therefore, the X-ray scattering layer distance was determined from the integral distribution of distances (Vianello et al. 2007). In this representation, the Lorentzian peaks become arctan profiles, and the distance to the scattering layer becomes the inflexion point. The results obtained using this method agree with the larger extinction maximum seen in L19, L22 and Le20 maps shown in Fig. A5. The distance to the closest dust layer determined from X-ray data is not visible in the L19 map, but there is an indication of an extended dusty region in the L22 and Le20 data. Note that the closer layer of dust was identified due to the statistical improvement of the fit when adding another inflexion point in the integral dust distance distribution.
* _GRB 221009A:_ The X-ray observations performed by _XMM-Newton_ \(\sim\)2-5 days after this exceptionally bright gamma-ray burst revealed 20 X-ray rings, produced by dust layers at distances ranging from 0.3 to 18.6 kpc (Tiengo et al. 2023). The observations of dust-scattering rings were also reported using _Swift_/XRT (Williams et al. 2023; Vasilopoulos et al. 2023) and IXPE (Negro et al. 2023) data. We used the data reported in Tiengo et al. (2023) for our comparison, as the _XMM-Newton_ observations could detect fainter X-ray rings and resolve multiple dust layers. We show the closest 14 peaks in the L19 and L22 maps and the first three peaks in the Le20 map in Fig. A6. The maxima closest to 406.3, 475.2, 553.6, and 728.6 pc (measured from the X-ray observations) are easily identifiable in the L19 and L22 maps and coincide with the most prominent maxima in the extinction maps below 1 kpc. The closest maximum in the extinction maps, corresponding to the layer at \(\sim\)240 pc, is poorly constrained in Tiengo et al. (2023) (a Lorentzian centred at 300 pc, with a 62 pc width) because the corresponding ring was already mostly outside the instrument field of view during the first _XMM-Newton_ observation. In the 2D distributions in Fig. A6, we also see that below 1 kpc there are several extended dust regions without distinct maxima.
## 4 Conclusions
The method proposed by Tiengo & Mereghetti (2006) to determine the distances to the dust scattering layers is based on the dynamical image, in which each count is binned according to its arrival time and distance from the burst (Eq. 2), yielding the distribution of scattering layer distances. Fitting Lorentzian functions superimposed on the power law representing the background allows one to determine the distances of the scattering layers. The width of the Lorentzian peaks in the distance distribution is determined by the instrumental PSF (resulting in broader peaks for smaller rings), the GRB duration (which is relevant only at early times for sufficiently long GRBs) and the distribution of dust along the line of sight. This last effect can be due either to a single (geometrically) thick cloud or to the combination of several clouds close to each other. In the latter case, since different distances imply different expansion rates, two nearby clouds could appear as an unresolved peak at early times and then be resolved into two separate peaks in later observations (or in observations with better PSF or counting statistics). For example, in the _XMM-Newton_ observation of GRB 221009A one can clearly distinguish two peaks at 698 and 729 pc (Tiengo et al., 2023) which appeared as a single peak in the _Swift_/XRT observations with poorer statistics (Vasilopoulos et al., 2023). Also, in the case of GRB 061019, Vianello et al. (2007) studied the width of the peak through simulations and found evidence for a significant intrinsic cloud width.
Figure 1: Extinction density distribution from the L19 (upper left), L22 (upper right), Le20 (lower left) and G19 (lower right) maps along the line of sight of GRB 160623A. Vertical red lines represent distances calculated from X-ray halos. Red shaded regions denote the errors on these distances, while blue regions denote the ranges covered by the FWHMs (see Table 1). When available (L22, Le20 and G19), the errors of the extinction maps are shown as grey-shaded regions.
Figure 2: Integrated extinction from G19 (left) and L22 (right) around the distance measured from the X-ray halo for the nearest dust cloud in the case of GRB 160623A. The red circle represents the position of the observed halo.
The extinction density distribution from three different extinction maps was extracted along the line of sight of each GRB for which a time-expanding halo has been observed to date (Table 1). We show the comparison of the distances derived using the X-ray halos with the distances of dust regions from the individual extinction maps in Fig. 4. The number of dust layers that we can constrain is a function of the fluence (Table 1) and the dust layer density. Therefore, the extinction maps and the X-ray observed distances are not always in accordance:
\begin{table}
\begin{tabular}{c c c c} \hline GRB & \multicolumn{3}{c}{maximum [pc]} \\ & L19 map & L22 map & Le20 map \\ \hline \hline
031203 & 880 & 870 (52) & - \\ & 1375 & 1360 & - \\ \hline
050713A & 355 & 365 & 360 \\ \hline
050724 & 160 & 130 (59) & 150 \\ \hline
061019 & 1030 & 1005 (57) & - \\ \hline
070129 & 295 & 305 (62) & 295 \\ \hline
160623A & 545 & 545 & - \\ & 690 & 685 & - \\ & 760 & 765 (42) & - \\ & 945 & - & - \\ \hline
221009A & 230 & 250 & 230 \\ & 415 & 415 (43) & 400 \\ & 465 & 465 (66) & - \\ & 575 & 555 (47) & - \\ & 725 & 735 (47) & - \\ \hline \end{tabular}
\end{table}
Table 2: The local maxima in the extinction density distribution (along the line of sight of the GRBs) closest to the distance determined from the X-ray studies. For the L22 map, we fitted Gaussian functions to individual peaks that do not overlap with nearby structures and report the obtained FWHMs (given in parentheses).
Figure 3: 2D cut of extinction density cube from L19 map (up) and L22 (down) perpendicular to the Galactic plane and in the direction of GRB 160623A. Height is measured with respect to the position of the Galactic plane. The white line represents the line of sight of the GRB, while red dots represent distances calculated from X-ray halos. Red perpendicular lines denote the errors on these distances, while the blue perpendicular lines denote ranges covered by the FWHMs (see Table 1).
The fainter the GRB and the less dense the cloud, the more difficult it is to constrain the position. In all GRBs that we examined, we found at least one local maximum in the 3D dust extinction maps that is in agreement with the dust distance measured from the X-ray rings. When multiple rings were detected for a GRB, the dust distance measurements coincide with 4 (3) maxima in the L19 (L22) map for GRB 160623A, and 5 maxima (in the L19 and L22 maps) for GRB 221009A. We fitted a linear function to the points corresponding to individual maxima in the extinction maps to check their agreement with the X-ray halo measurements. The fit to the L22 data results in a slope of 1.02 \(\pm\) 0.03, showing good agreement between the two independent distance measurements. For the errors in the dust distance, we used the FWHM of the Lorentzian functions reported in Table 1, as it better captures the region in which the scattering occurs in the case of extended scattering regions. The errors on the extinction maxima were estimated for the L22 map: we fitted Gaussian functions to individual peaks when they were identifiable (Table 2).
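As an illustration of this consistency check, the following sketch (ours) performs a weighted linear fit through the origin on a subset of the (X-ray distance, L22 maximum) pairs from Tables 1 and 2, using the X-ray FWHMs as weights; the exact value differs from the quoted 1.02 \(\pm\) 0.03, which is based on the full set of points and errors:

```python
import numpy as np
from scipy.optimize import curve_fit

# (X-ray distance, L22 extinction maximum, X-ray FWHM), all in pc;
# a subset of the values in Tables 1 and 2, for illustration.
pairs = np.array([
    (870.0,  870.0,  82.0), (364.0,  365.0,  33.0), (941.0, 1005.0, 427.0),
    (528.1,  545.0,  23.4), (679.2,  685.0,  32.2), (789.0,  765.0,  75.0),
    (406.3,  415.0,  26.9), (475.2,  465.0,  30.9), (553.6,  555.0,  27.7),
    (728.6,  735.0,  42.7),
])
d_xray, d_ext, fwhm = pairs.T

def line(x, slope):          # linear relation through the origin
    return slope * x

popt, pcov = curve_fit(line, d_xray, d_ext, sigma=fwhm)
print(f"slope = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")  # close to 1
```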
When individual dust layers are clearly separated, the distance measurements from the X-ray data are in good agreement with the local maxima in the extinction density distribution. This is clearly seen in the case of GRB 050713A. When there is no clear local maximum along the line of sight towards a GRB (see the 2D cuts of the extinction density cubes perpendicular to the Galactic plane along the lines of sight towards the GRBs, Figs. 3, A1-A6), but only extended regions where extinction occurs (e.g. in GRB 061019 or GRB 050724), we do not find a clear correspondence with the X-ray observations. If the separation between dust layers resolved by the X-ray rings is of the same order of magnitude as the resolution of the maps (\(\sim\) 25 pc), it is not possible to capture two separate maxima in the dust extinction profile, owing to the sparsity of the starlight data in a given direction.
Observations of X-ray halos can benefit from studies of dust extinction, which provide information on the location and morphology of the scattering layers. Vice versa, our comparison suggests that the methods applied to create the different dust extinction maps, such as L19, L22 and Le20, could potentially be optimized by using X-ray halo observations of GRBs as an independent distance measurement of dust layers in the Galaxy.
## Acknowledgements
We thank the anonymous referee for reviewing our manuscript and Rosine Lallement for her help with the G-TOMO module of the EXPLORE project. Z.B. and V.J. acknowledge support by the Croatian Science Foundation for a project IP-2018-01-2889 (LowFreqCRO). A.B. acknowledges support from the European Research Council through the Advanced Grant MIST (FP7/2017-2022, No.742719). This research has used data, tools or materials developed as part of the EXPLORE project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101004214.
## Data Availability
All of the data underlying this article are already publicly available from [http://argonaut.skymaps.info](http://argonaut.skymaps.info), [http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/625/A135](http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/625/A135), [https://explore-platform.eu](https://explore-platform.eu), [http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/639/A138](http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/639/A138).
| X-ray photons from energetic sources such as gamma-ray bursts (GRBs) can be scattered by dust clouds in the Milky Way, creating a time-evolving halo around the GRB position. X-ray observations of such halos allow the distances to the Galactic dust clouds on which the scattering occurs to be measured. We present the first systematic comparison of the distances to the scattering regions derived from GRB halos with the 3D dust distribution derived from recently published optical-to-near-infrared extinction maps. GRB halos were observed around 7 sources by the Swift XRT and XMM-Newton EPIC instruments, namely GRB 031203, GRB 050713A, GRB 050724, GRB 061019, GRB 070129, GRB 160623A and GRB 221009A. … |
2309.04354 | Mobile V-MoEs: Scaling Down Vision Transformers via Sparse
Mixture-of-Experts | Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due
to their ability to decouple model size from inference efficiency by only
activating a small subset of the model parameters for any given input token. As
such, sparse MoEs have enabled unprecedented scalability, resulting in
tremendous successes across domains such as natural language processing and
computer vision. In this work, we instead explore the use of sparse MoEs to
scale-down Vision Transformers (ViTs) to make them more attractive for
resource-constrained vision applications. To this end, we propose a simplified
and mobile-friendly MoE design where entire images rather than individual
patches are routed to the experts. We also propose a stable MoE training
procedure that uses super-class information to guide the router. We empirically
show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off
between performance and efficiency than the corresponding dense ViTs. For
example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense
counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only
54M FLOPs inference cost, our MoE achieves an improvement of 4.66%. | Erik Daxberger, Floris Weers, Bowen Zhang, Tom Gunter, Ruoming Pang, Marcin Eichner, Michael Emmersberger, Yinfei Yang, Alexander Toshev, Xianzhi Du | 2023-09-08T14:24:10 | http://arxiv.org/abs/2309.04354v1 | # Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts
###### Abstract
Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due to their ability to decouple model size from inference efficiency by only activating a small subset of the model parameters for any given input token. As such, sparse MoEs have enabled unprecedented scalability, resulting in tremendous successes across domains such as natural language processing and computer vision. In this work, we instead explore the use of sparse MoEs to scale-down Vision Transformers (ViTs) to make them more attractive for resource-constrained vision applications. To this end, we propose a simplified and mobile-friendly MoE design where entire images rather than individual patches are routed to the experts. We also propose a stable MoE training procedure that uses super-class information to guide the router. We empirically show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off between performance and efficiency than the corresponding dense ViTs. For example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only 54M FLOPs inference cost, our MoE achieves an improvement of 4.66%.
## 1 Introduction
The trade-off between performance and efficiency of neural networks (NNs) remains a challenge, especially in settings where computational resources are limited. Recently, sparsely-gated Mixture-of-Experts models (sparse MoEs) have gained popularity as they provide a promising solution to this problem by enabling the decoupling of model size from inference efficiency [3]. MoEs are NNs that are partitioned into "experts", which are trained jointly with a router to specialize on subsets of the data. In MoEs, each input is processed by only a small subset of model parameters (aka _conditional computation_). In contrast, traditional dense models activate all parameters for each input.
Sparse MoEs were popularized in deep learning by [16], which introduced sparse MoE-layers as drop-in replacements for standard NN layers. Most recent MoEs are based on the Transformer [19], which processes individual input tokens; in accordance, recent MoEs also route individual input tokens to experts, i.e., image patches in the case of Vision Transformers (ViTs) [2, 13] (see Fig. 2b). Conditional computation as implemented by sparse MoEs has enabled the training of Transformers of unprecedented size [4]. As a result, MoEs have achieved impressive successes across various domains including language [4, 10], vision [13], speech [20] and multi-modal learning [12], and currently hold state-of-the-art results on many benchmarks [21].
The ability to increase model capacity while keeping inference cost low is also appealing for resource-constrained vision problems. While Transformers are getting increasingly established as the de-facto standard architecture for large-scale visual modeling [2, 13], virtually all mobile-friendly models still leverage convolutions due to their efficiency [1, 5, 6, 11, 15, 18]. As such, conditional computation could potentially enable attention-based models to reduce the gap to convolutional models in the small-scale regime. However, Transformer-based MoEs have not yet been explored for resource-constrained settings; this might be due to two main weaknesses of recently-popularized MoEs [16].
Figure 1: **Accuracy vs. FLOPs** for ViTs of different sizes. Labels (e.g. 12\(\times\)192, which is ViT-Tiny) refer to the number of ViT layers (e.g. 12) and the hidden embedding dimension (e.g. 192). The sparse MoEs outperform their corresponding dense baselines across different model scales. Fig. 3a lists all numerical results.
Firstly, while per-token routing increases the flexibility to learn an optimal computation path through the model, it makes inference inefficient, as many (or even all) experts need to be loaded for a single input image. Secondly, recent MoEs train the routers jointly with the rest of the model in an end-to-end fashion. To avoid collapse to just a few experts while ignoring all others, one needs to use load balancing mechanisms [3] such as dedicated auxiliary losses [16]. However, the resulting complex optimization objectives often lead to training instabilities / divergence [4, 10, 21, 12].
In this work, we investigate the potential of sparse MoEs to scale-down ViTs for resource-constrained vision applications via an MoE design and training procedure that addresses the aforementioned issues. Our contributions are:
1. We propose a simplified, mobile-friendly sparse MoE design in which a single router assigns entire images (rather than image patches) to the experts (see Fig. 2c).
2. We develop a simple yet robust training procedure in which expert imbalance is avoided by leveraging semantic super-classes to guide the router training.
3. We empirically show that our proposed sparse MoE approach allows us to scale-down ViT models by improving their performance vs. efficiency trade-off.
## 2 Scaling down ViTs via sparse MoEs
### Conditional computation with sparse MoEs
An MoE implements conditional computation by activating different subsets of a NN (so-called experts) for different inputs. We consider an MoE layer with \(E\) experts as
\[\text{MoE}(\mathbf{x})=\sum_{i=1}^{E}g(\mathbf{x})_{i}e_{i}(\mathbf{x}), \tag{1}\]
where \(\mathbf{x}\in\mathbb{R}^{D}\) is the input to the layer, \(e_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) is the function computed by expert \(i\), and \(g:\mathbb{R}^{D}\rightarrow\mathbb{R}^{E}\) is the routing function which computes an input-dependent weight for each expert [16]. In a ViT-based MoE, each expert \(e_{i}\) is parameterized by a separate multi-layer perceptron (MLP) within the ViT layer, while the other parts are shared across experts (see Fig. 2d).
Figure 2: **Model architectures.** (a) The dense ViT baseline model uses dense ViT layers throughout. (b) Regular sparse V-MoE with layer-wise per-patch routers. (c) Our proposed sparse Mobile V-MoE design with a single per-image router. In both (b) and (c), dense ViT layers are followed by MoE-ViT layers (here, \(k=1\) out of \(E=3\) experts are activated per input). (d) In contrast to dense ViT layers [19], MoE-ViT layers have a separate MLP per expert (preceded by a router) while all other parts of the layer are shared across all experts [13].
We use the routing function
\[g(\mathbf{x})=\text{TOP}_{k}(\text{softmax}(\mathbf{W}\mathbf{x})), \tag{2}\]
where the operation \(\text{TOP}_{k}(\mathbf{x})\) sets all elements of \(\mathbf{x}\) to zero except those with the \(k\) largest values [13]. In a sparse MoE, we have \(k\ll E\), s.t. we only need to load and compute the \(k\) experts with the largest routing weights. This allows us to scale-up the overall model capacity (determined by \(E\)) without increasing the inference cost (determined by \(k\)).
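As a concrete illustration of Eqs. (1) and (2), the following is a minimal per-image MoE layer (our PyTorch sketch, not the authors' released implementation; the layer sizes are placeholders, and we use a single pooled vector per image to stand in for the router input):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoEMLP(nn.Module):
    """Sparse MoE layer of Eq. (1): a router (Eq. (2)) picks the top-k
    expert MLPs per input and mixes their outputs with the
    (non-renormalized) softmax routing weights."""
    def __init__(self, dim=192, hidden=768, num_experts=10, k=1):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts, bias=False)   # W in Eq. (2)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (batch, dim), one row per image
        gates = F.softmax(self.router(x), dim=-1)
        topv, topi = gates.topk(self.k, dim=-1)  # TOP_k zeroes out all other experts
        out = torch.zeros_like(x)
        for j in range(self.k):                  # only the selected experts are evaluated
            for e in topi[:, j].unique().tolist():
                m = topi[:, j] == e
                out[m] = out[m] + topv[m, j].unsqueeze(-1) * self.experts[e](x[m])
        return out

moe = SparseMoEMLP()
print(moe(torch.randn(8, 192)).shape)            # torch.Size([8, 192])
```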
### Efficient and robust MoEs for small-scale ViTs
**Per-image routing.** Recent large-scale sparse MoEs use per-patch routing (i.e. the inputs \(\mathbf{x}\) are individual image patches). This generally requires a larger number of experts to be activated for each image. For example, [13] show that in their MoE with per-patch routing, "most images use -on aggregate by pooling over all their patches- most of the experts" [13, Appendix E.3]. Thus, per-patch routing can increase the computational and memory overhead of the routing mechanism and reduce the overall model efficiency. We instead propose to use per-image routing (i.e., the inputs \(\mathbf{x}\) are entire images) to reduce the number of activated experts per image, as also done in early works on MoEs [7, 9].
**Super-class-based routing.** Previous works on sparse MoEs jointly train the router end-to-end together with the experts and the dense ViT backbone, to allow the model to learn the optimal assignment from inputs to experts based on the data [13]. While learning the optimal routing mechanism from scratch can result in improved performance, it often leads to training instabilities and expert collapse, where most inputs are routed to only a small subset of the experts, while all other experts get neglected during training [3]. Thus, an additional auxiliary loss is typically required to ensure load-balancing between the different experts, which can increase the complexity of the training process [3].
In contrast, we propose to group the classes of the dataset into super-classes and explicitly train the router to make each expert specialize on one super-class. To this end, we add an additional cross-entropy loss \(\mathcal{L}_{g}\) between the router output \(g(\mathbf{x})\) in Eq. (2) and the ground truth super-class labels to the regular classification loss \(\mathcal{L}_{C}\) to obtain the overall weighted loss \(\mathcal{L}=\mathcal{L}_{C}+\lambda\mathcal{L}_{g}\) (we use \(\lambda=0.3\) in our experiments, which we found to work well). Such a super-class division is often readily provided with the dataset (e.g. for CIFAR-10/100 or MS-COCO). If a dataset does not come with a super-class division, we can easily obtain one as follows: 1) we first train a dense baseline model on the dataset; 2) we then compute the model's confusion matrix over a held-out validation set; 3) we finally construct a confusion graph from the confusion matrix and apply a graph clustering algorithm to obtain the super-class division [8]. This approach encourages the super-classes to contain semantically similar images that the model often confuses. Intuitively, by allowing the different MoE experts to specialize on the different semantic data clusters, performance on the highly-confused classes should be improved. We use this approach in our experiments on ImageNet-1k, computing the confusion matrix via a dense ViT-S/16 model. The resulting super-class division for \(E=10\) experts is shown in Tab. 1; the super-classes contain semantically related classes.
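To make the training objective concrete, here is a minimal sketch of the combined loss (ours; `router_logits` denotes the pre-softmax router output \(\mathbf{W}\mathbf{x}\), and the tensor names are assumptions):

```python
import torch.nn.functional as F

def mobile_vmoe_loss(class_logits, router_logits, labels, super_labels, lam=0.3):
    """Overall loss L = L_C + lambda * L_g from Sec. 2.2: the usual
    classification loss plus a cross-entropy term that explicitly trains
    the router to predict each image's ground-truth super-class."""
    l_c = F.cross_entropy(class_logits, labels)         # e.g. 1000-way ImageNet loss
    l_g = F.cross_entropy(router_logits, super_labels)  # E-way router supervision
    return l_c + lam * l_g
```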
## 3 Experiments
We now present empirical results on the standard ImageNet-1k classification benchmark [14]. We train all models from scratch on the ImageNet-1k training set of 1.28M images, and then evaluate their top-1 accuracy on the held-out validation set of 50K images. In Sec. 3.1, we first evaluate our proposed sparse Mobile V-MoE across a range of model scales and show that they achieve better performance vs. efficiency trade-offs than the respective dense ViT baselines. In Sec. 3.2, we then conduct several ablation studies to get a better understanding of the properties of our proposed sparse MoE model design and training procedure.
### Accuracy vs. efficiency across ViT scales
We consider ViT models (both MoEs and corresponding dense baselines) of different sizes by scaling the total number of layers (we use 12, 9 or 6) and the hidden embedding size (we use 384, 192, 96 or 64). The number of multi-head self-attention heads is (6, 3, 3, 2) for the different hidden embedding sizes. The embedding size of the MLP is \(4\times\) the hidden embedding size, as is common practice. We use \(E=10\) experts in total for the MoE, out of which \(k=1\) is activated per input image. Our MoEs comprise \(L=2\) MoE-ViT layers preceded by (10, 7 or 4) dense ViT layers (see Fig. 2c). We use a patch size of \(32\times 32\) for all models. This is because the patch size effectively controls
\begin{table}
\begin{tabular}{l c c} \hline \hline
**ID** & **Classes** & **Super-class** \\ \hline
0 & boxer, pug, Rottweiler & dogs \\
1 & orangutan, weasel, panda & other mammals \\
2 & toucan, flamingo, ostrich & birds \\
3 & eel, scorpion, hammerhead & other animals \\
4 & minivan, ambulance, taxi & land vehicles \\
5 & submarine, canoe, pirate & sea vehicles \\
6 & guacamole, hotdog, banana & food \\
7 & backpack, pyjama, kimono & clothes \\
8 & monitor, iPod, photocopier & tech devices \\
9 & xylophone, harp, trumpet & instruments \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Super-class division for \(E=10\). For each super-class, we list three randomly chosen class names (which turn out to be semantically related) together with a possible super-class name.**
the trade-off between FLOPs and the number of model parameters: as we aim to optimize for FLOPs, a larger patch size (resulting in fewer patches) is beneficial. We also tried using a smaller patch size of \(16\times 16\), where the trends were essentially the same (but with a higher FLOP count relative to model capacity and thus accuracy). For the ViTs with hidden sizes 384 and 192, we use the DeiT training recipe [17], while for hidden sizes 96 and 64, we use the standard ViT training recipe [2] to avoid underfitting. Figs. 1 and 3a compare top-1 validation accuracy vs. FLOPs. Our Mobile V-MoEs outperform the corresponding dense ViT baselines across all model sizes.
### Ablation studies
We train DeiT-Tiny [17] (12 layers total, 192 embedding size, \(16\times 16\) patch size) with \(k=1\) out of \(E=10\) experts per input, and with \(L=2\) MoE layers (unless noted otherwise); the dense ViT baseline achieves 70.79% accuracy.
**Total number of experts \(E\).** We consider different widths of the MoE, i.e., different numbers of experts \(E\) (and thus super-classes), ranging between \(E=5\) and \(E=20\). We report both the accuracy of the entire MoE model (i.e., on the 1,000-way classification task), as well as the accuracy of the router (i.e., on the \(E\)-way super-classification task). Fig. 3b shows that overall performance improves until \(E=10\), from which point onwards it stagnates. The router accuracy also drops beyond \(E=10\) due to the increased difficulty of the \(E\)-way super-classification problem.
**Number of MoE layers \(L\).** We consider different depths of the MoE, i.e., different numbers of MoE layers \(L\), ranging between \(L=1\) and \(L=8\) (out of 12 ViT layers in total). We again report both the full MoE and router accuracies. Fig. 3(c) shows that overall performance peaks at \(L=2\), and rapidly decreases for larger \(L\). This is due to the router accuracy, which declines with increasing \(L\) as the router gets less information (from the \(12-L\) ViT layers).
**Number of experts \(k\) per image.** We vary the number of experts \(k\) activated per image. We compare against dense baselines that use an MLP with hidden dimension scaled by \(k\) to match the MoE's inference FLOPs. Fig. 3(d) shows that \(k=1\) and \(k=2\) perform best (relative to the dense baseline), with decreasing performance delta for larger \(k\).
**Routing strategies.** We compare our proposed semantic super-class per-image routing vs. end-to-end-learned routing (both per-image and per-token) and a baseline with random super-classes (for \(k=2\)). Fig. 3(e) shows that our method (Fig. 1(c)) is better, except for learned per-token routing (as in the regular V-MoE [13], Fig. 1(b)), which however needs to activate many more experts and thus model parameters for each input image (up to 11.05M, vs. 6.31M for ours).
## 4 Conclusions and future work
We showed that sparse MoEs can improve the performance vs. efficiency trade-off compared to dense ViTs, in an attempt to make ViTs more amenable to resource-constrained applications. In the future, we aim to apply our MoE design to models that are more mobile-friendly than ViTs, e.g., light-weight CNNs such as MobileNets [5, 6, 15] or ViT-CNN hybrids [1, 18, 11]. We also aim to consider other vision tasks, e.g., object detection. Finally, we aim to get actual on-device latency measurements for all models.
Figure 3: **Empirical results. (a) Our Mobile V-MoEs outperform the respective dense ViTs across model scales. Model names (e.g. 12\(\times\)192) refer to the number of layers (12) and the embedding size (192). (b-e) Ablation studies using DeiT-Ti/16 [17], with \(k=1\), \(E=10\), \(L=2\) by default. Best performance vs. efficiency trade-off is achieved with (b) \(E=10\) experts total, (c) \(L=2\) MoE layers (out of 12 layers total), (d) \(k=1\) or \(k=2\) experts activated per image, (e) our semantic super-class routing; the settings used in (a) are bolded.** | Sparse Mixture-of-Experts models (MoEs) have recently become popular for their ability to decouple model size from inference efficiency by activating only a subset of the model's parameters for a given input token. As a result, sparse MoEs have enabled unprecedented scalability and achieved great success in domains such as natural language processing and computer vision. In this work, we instead use sparse MoEs to scale down Vision Transformers (ViTs) and make them more attractive for resource-constrained vision applications. To this end, we propose a simplified, mobile-friendly MoE design in which entire images, rather than individual patches, are routed to the experts, together with a stable MoE training procedure that uses super-class information to guide the router. Empirically, our sparse Mobile Vision MoEs (V-MoEs) achieve a better trade-off between performance and efficiency than the corresponding dense ViTs.
2309.13628 | Semidefinite Programming Approximation for a Matrix Optimization Problem
over an Uncertain Linear System | A matrix optimization problem over an uncertain linear system on finite
horizon (abbreviated as MOPUL) is studied, in which the uncertain transition
matrix is regarded as a decision variable. This problem is in general NP-hard.
By using the given reference values of system outputs at each stage, we develop
a polynomial-time solvable semidefinite programming (SDP) approximation model
for the problem. The upper bound of the cumulative error between reference
outputs and the optimal outputs of the approximation model is theoretically
analyzed. Two special cases associated with specific applications are
considered. The quality of the SDP approximate solutions in terms of
feasibility and optimality is also analyzed. Results of numerical experiments
are presented to show the influences of perturbed noises at reference outputs
and control levels on the performance of SDP approximation. | Jintao Xu, Shu-Cherng Fang, Wenxun Xing | 2023-09-24T13:03:02 | http://arxiv.org/abs/2309.13628v2 | Semidefinite Programming Approximation for a Matrix Optimization Problem over an Uncertain Linear System +
###### Abstract
A matrix optimization problem over an uncertain linear system on finite horizon (abbreviated as MOPUL) is studied, in which the uncertain transition matrix is regarded as a decision variable. This problem is in general NP-hard. By using the given reference values of system outputs at each stage, we develop a polynomial-time solvable semidefinite programming (SDP) approximation model for the problem. The upper bound of the cumulative error between reference outputs and the optimal outputs of the approximation model is theoretically analyzed. Two special cases associated with specific applications are considered. The quality of the SDP approximate solutions in terms of feasibility and optimality is also analyzed. Results of numerical experiments are presented to show the influences of perturbed noises at reference outputs and control levels on the performance of SDP approximation.
**Keywords** Matrix optimization, Semidefinite programming, Uncertain linear system, NP-hard, Approximation model
**Mathematics Subject Classification (2020)** 90C22 90C26 90C30 90C59 93C05
## 1 Introduction
Discrete-time uncertain linear systems are widely studied in control theory [22, 12, 13, 14, 20, 9, 35]. The uncertainty may come from an uncertain parameter matrix lying in a given constraint set such as a convex hull [22, 13, 14, 20, 7] or other settings [12, 35]. Robust optimization models are often adopted to deal with the uncertain parameters [22, 36, 23, 31]. However, in many scenarios such as the linear model predictive control (MPC) for optimal tracking [10, 1], COVID-19 pandemic optimal control [38], Markov chain estimation, and enterprise input-output analysis [25] (to be described in Section 2), we face a new class of matrix optimization problems that regard the uncertain transition matrix as a decision variable. In this paper, we consider the following matrix optimization problem over an uncertain linear
system on finite horizon (MOPUL):
\[\begin{split}\min_{A,U,\omega}&\lambda_{1}f_{1}\left(A \right)+\lambda_{2}f_{2}\left(U\right)+\lambda_{3}f_{3}\left(\omega\right)\\ \mathrm{s.t.}& x_{t}=Ax_{t-1}+Bu_{t-1},\,\,\,t=1,2, \ldots,N,\\ & y_{t}=Cx_{t},\,\,\,t=0,1,\ldots,N,\\ &\sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\leq\omega,\\ &(A,U,\omega)\in\mathcal{S},\end{split}\] (MOPUL)
where \(A\in\mathbb{R}^{n\times n}\), \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{m\times N}\) and \(\omega\in\mathbb{R}_{+}\) are decision variables, \(\{\lambda_{i}\}_{i=1}^{3}\subseteq\mathbb{R}_{+}\), \(N\in\mathbb{N}_{+}\), \(B\in\mathbb{R}^{n\times m}\), \(C\in\mathbb{R}^{p\times n}\), \(\{r_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^{p}\) and \(\mathcal{S}\subseteq\mathbb{R}^{n\times n}\times\mathbb{R}^{m\times N}\times \mathbb{R}_{+}\) are given. \(\{f_{i}(\cdot)\}_{i=1}^{3}\) are assumed to be semidefinite representable (SD representable) functions [5, 37], and \(\mathcal{S}\) is assumed to be an SD representable set [5, 37]. In addition, \(C\) is assumed to be of full column rank.
In problem (MOPUL), \(f_{1}(A)\), \(f_{2}(U)\) and \(f_{3}(\omega)\) are the given objective functions of decision variables \(A\), \(U\) and \(\omega\) with given weights of \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\), respectively. For example, \(f_{1}(A)=\|A-A^{r}\|_{F}\), where \(A^{r}\) is a given reference matrix, \(f_{2}(U)=\sum_{t=1}^{N-1}\|u_{t}-u_{t-1}\|_{2}^{2}\) as in [1, 10], and \(f_{3}(\omega)=\omega\). Moreover, \(A\), \(B\), \(C\) and \(U\) are the transition matrix, fixed parameter matrices and control, respectively, of the discrete-time linear system on a finite horizon \(N\) described by the first two constraints. \(x_{t}\in\mathbb{R}^{n}\) is the \(n\) dimensional system state with a given initial state \(x_{0}\), and \(y_{t}\in\mathbb{R}^{p}\) is the \(p\) dimensional system output at the \(t\)th stage, \(t=1,2,\ldots,N\). \(r_{t}\) carries the reference value of the system output \(y_{t}\) for each \(t=1,2,\ldots,N\). \(\omega\) is the control level/threshold of the cumulative error between \(\{y_{t}\}_{t=1}^{N}\) and \(\{r_{t}\}_{t=1}^{N}\).
The first two constraints are commonly seen in the control theory of discrete-time finite horizon linear systems. The difference is that the transition matrix \(A\) is a decision variable in (MOPUL) and an uncertain parameter in control theory. The third constraint restricts the cumulative error between the system outputs and their reference values within a control level \(\omega\). When \(f_{3}(\omega)=\omega\), the cumulative error constraint can be lifted to the objective function. Other restrictions on the decision variables are contained in the set \(\mathcal{S}\) as the fourth constraint, and entanglement of decision variables is allowed in it. In Section 2, we show that the (MOPUL) model is widely applicable.
Notice that the first three constraints of (MOPUL) are multivariate polynomial constraints. Since a multivariate polynomial optimization problem in general is NP-hard [29], we know (MOPUL) is generally computationally intractable. On the other hand, semidefinite programs (SDP) are polynomial-time solvable [5, 30, 34] with many successful applications to the control theory [39, 2, 7, 33, 8], multiple-input multiple-output (MIMO) analysis [26, 28], combinatorial optimization problems [17, 15, 19, 18], and portfolio selection problems [16, 11]. SDP solvers such as SeDuMi ([https://sedumi.ie.lehigh.edu](https://sedumi.ie.lehigh.edu)), MOSEK ([https://www.mosek.com](https://www.mosek.com)) and DSDP ([https://www.mcs.anl.gov/hs/software/DSDP/](https://www.mcs.anl.gov/hs/software/DSDP/)) are readily available.
The first contribution of this paper is to construct an SDP approximation model for (MOPUL). Notice that \(r_{t}\) is regarded as the reference value of \(y_{t}\). We can use \(C^{\dagger}r_{t}\) to approximate \(x_{t}\), where \(C^{\dagger}\) denotes the Moore-Penrose inverse of matrix \(C\). Similar to [38], we can replace the first constraint by \(x_{t}=AC^{\dagger}r_{t-1}+Bu_{t-1},t=1,2,\ldots,N\), to reformulate (MOPUL) as an SDP approximation model.
The second contribution of this paper is to provide a theoretical analysis of the quality of SDP approximate solutions in terms of feasibility and optimality. For an SDP approximate solution \((A^{\mathrm{a}*},U^{\mathrm{a}*},\omega^{\mathrm{a}*})\) and consequently output values of \(y_{t}\) at stage \(t\) of the linear system, an upper bound of the cumulative error \(\sum_{t=1}^{N}\|y_{t}-r_{t}\|_{2}\) corresponding to \((A^{\mathrm{a}*},U^{\mathrm{a}*},\omega^{\mathrm{a}*})\) is provided in Theorem 3.2 for the general setting. Moreover, the feasibility of an SDP approximate solution to (MOPUL) with respect to a fixed control level is guaranteed in Theorem 3.3. Motivated by the application problems, two special cases of (MOPUL) with SDP approximations concerning two settings of \((\{\lambda_{i}\}_{i=1}^{3},\{f_{i}\}_{i=1}^{3},\mathcal{S})\) are considered in Subsection 3.3 for better theoretical estimations of the optimal objective values.

The third contribution of this paper is to show the influences of perturbed noise levels at reference outputs and control levels on the performance of the SDP approximation model through numerical experiments. Equipped with accurate reference outputs and proper control levels, the SDP approximation performs very well numerically.
The rest of the paper is organized as follows. Some specific applications of (MOPUL) are introduced in Section 2. In Section 3, an SDP approximation model is constructed, and theoretic analysis of its performance is provided. Numerical results are reported in Section 4 and some concluding remarks are made in Section 5.
**Notations.** Throughout the paper, \(\mathbb{R}^{n}\), \(\mathbb{R}^{n}_{+}\), \(\mathbb{R}^{m\times n}\), and \(\mathbb{N}_{+}\) denote the sets of real \(n\)-dimensional vectors, nonnegative vectors, \(m\times n\) matrices, and positive integers, respectively. \(\mathbf{S}^{n}\), \(\mathbf{S}^{n}_{+}\), and \(\mathbf{S}^{n}_{++}\) denote the sets of real \(n\times n\) symmetric, positive semidefinite (\(X\succeq 0\)), and positive definite matrices, respectively. \(X^{\dagger}\) denotes the Moore-Penrose inverse of \(X\). \(\|x\|_{2}=(\sum_{i=1}^{n}x_{i}^{2})^{\frac{1}{2}}\) and \(\|x\|_{Q}=\sqrt{x^{T}Qx}\), where \(Q\in\mathbf{S}^{n}_{++}\). \(\|X\|_{F}\), \(\|X\|_{2}\), and \(\|X\|_{*}\) denote the Frobenius norm, the spectral norm which is equal to the maximum singular value of \(X\), and the nuclear norm which is equal to the sum of all singular values of matrix \(X\), respectively. \(O\) and \(I\) denote the matrix of all zeros and the unit matrix whose sizes vary from the context, respectively. \(\mathbf{0}\) and \(\mathbf{1}\) denote the column vector of all zeros and ones whose sizes vary from the context, respectively.
## 2 Applications
In this section, we present four specific applications of problem (MOPUL). Their special structures and the quality of the corresponding SDP approximate solutions in terms of feasibility and optimality will be further investigated in Sections 3 and 4.
### Linear model predictive control for optimal tracking
Model predictive control (MPC) is a class of optimal control strategies, in which the optimizer determines control signals and the model predicts outputs [10]. Referring to equation (18) in [1], equation (2.5) in [10], and related discussions therein, an optimal tracking problem over an uncertain linear system takes the following form:
\[\min_{A,U} \sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}+\lambda\sum_{t=1}^{ N-1}\left\|u_{t}-u_{t-1}\right\|_{2}\] (O-MPC) \[\mathrm{s.t.} x_{t}=Ax_{t-1}+Bu_{t-1},\ \ t=1,2,\ldots,N,\] \[y_{t}=Cx_{t},\ \ t=0,1,\ldots,N,\] \[(A,U)\in\mathcal{S}_{\mathrm{MPC}},\]
where the transition matrix \(A\in\mathbb{R}^{n\times n}\) and control \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{m\times N}\) are decision variables, system horizon \(N\in\mathbb{N}_{+}\), parameter \(\lambda\geq 0\), \(\{x_{t}\}_{t=0}^{N}\subseteq\mathbb{R}^{n}\) are \(n\) dimensional system states with a given initial state \(x_{0}\), \(\{y_{t}\}_{t=0}^{N}\subseteq\mathbb{R}^{p}\) are \(p\) dimensional system outputs, \(B\in\mathbb{R}^{n\times m}\) and \(C\in\mathbb{R}^{p\times n}\) are given system parameter matrices, and \(\{r_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^{p}\) are given reference signals. The cumulative error \(\sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\) enforces system outputs to track the given reference signals, and control efforts \(\sum_{t=1}^{N-1}\left\|u_{t}-u_{t-1}\right\|_{2}\) are penalized for variations. SD representable set \(\mathcal{S}_{\mathrm{MPC}}=\mathcal{S}_{\mathrm{UC}}\cap\ \mathcal{S}_{\mathrm{AR}}\subseteq\mathbb{R}^{n \times n}\times\mathbb{R}^{m\times N}\), where \(\mathcal{S}_{\mathrm{UC}}\) is the uncertainty set of linear system and \(\mathcal{S}_{\mathrm{AR}}\) is composed of additional restrictions on \((A,U)\). Examples of \(\mathcal{S}_{\mathrm{UC}}\) include \(\{\mathrm{constant}\ A\}\times\mathbb{R}^{m\times N}\) for the linear system with a constant transition matrix [4] and \(\{\sum_{i=1}^{k}\theta^{i}A^{i},(\theta^{1},\theta^{2},\ldots,\theta^{k})^{ \mathrm{T}}\in\mathbb{R}^{k}_{+},\sum_{i=1}^{k}\theta^{i}=1\}\times\mathbb{R} ^{m\times N}\) with given
\(k\) matrices \(\{A^{i}\}_{i=1}^{k}\) for the uncertain linear system with transition matrix in the polytopic uncertainty set [9]. Examples of \(\mathcal{S}_{\mathrm{{\tiny AR}}}\) include \(\mathbb{R}^{n\times n}\times\{U|\alpha_{1}\leq u_{t}-u_{t-1}\leq\alpha_{2}\}\) with \(\alpha_{1},\alpha_{2}\in\mathbb{R}^{m}\) and component-wise inequalities [1]. Notice that (O-MPC) is a special case of (MOPUL) by setting \(\lambda_{1}=0\), \(\lambda_{2}=\lambda\), \(\lambda_{3}=1\), \(f_{2}(U)=\sum_{t=1}^{N-1}\left\|u_{t}-u_{t-1}\right\|_{2}\), \(f_{3}(\omega)=\omega\), and \(\mathcal{S}=\mathcal{S}_{\mathrm{{\tiny MPC}}}\times\mathbb{R}_{+}\).
### COVID-19 pandemic optimal control model
To realize an effective prevention and control for the COVID-19 pandemic, we can construct the so-called "susceptible-asymptomatic infected-symptomatic infected-removed optimal control" model as below by dividing the total population into 4 groups of susceptible (S), asymptomatic infected (I\({}_{\mathrm{a}}\)), symptomatic infected (I\({}_{\mathrm{s}}\)), and removed (R).
\[\begin{split}\min_{A,U}&\sum_{t=1}^{N}\left\|x_{t}-r _{t}\right\|_{2}\\ \mathrm{s.t.}& x_{t}=Ax_{t-1}+u_{t-1},\ \ t=1,2,\ldots,N, \\ &(A,U)\in\mathcal{S}_{\mathrm{{\tiny COVID}}},\end{split}\] (O-COVID)
where the transmission matrix \(A\in\mathbb{R}^{4\times 4}\) and the exit and entry control \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{4\times N}\) are decision variables, \(N\in\mathbb{N}_{+}\) is the duration of COVID-19 transmission studied in (O-COVID), \(\{x_{t}\coloneqq(x_{t}^{\mathrm{S}},x_{t}^{\mathrm{I_{a}}},x_{t}^{\mathrm{I_{s}}},x_{t}^{\mathrm{R}})^{\mathrm{T}}\}_{t=0}^{N}\subseteq\mathbb{R}^{4}\) and \(\{u_{t}\coloneqq(u_{t}^{\mathrm{S}},u_{t}^{\mathrm{I_{a}}},u_{t}^{\mathrm{I_{s}}},u_{t}^{\mathrm{R}})^{\mathrm{T}}\}_{t=0}^{N-1}\subseteq\mathbb{R}^{4}\) are the numbers of individuals in each group and their variations through exit and entry, respectively, and \(\{r_{t}\coloneqq(r_{t}^{\mathrm{S}},r_{t}^{\mathrm{I_{a}}},r_{t}^{\mathrm{I_{s}}},r_{t}^{\mathrm{R}})^{\mathrm{T}}\}_{t=1}^{N}\subseteq\mathbb{R}^{4}\) are the expected numbers of individuals in each group. The COVID-19 transmission dynamics are then \(x_{t}=Ax_{t-1}+u_{t-1}\) in (O-COVID). Additional constraints on \((A,U)\) are contained in the SD representable constraint set \(\mathcal{S}_{\mathrm{{\tiny COVID}}}\subseteq\mathbb{R}^{4\times 4}\times \mathbb{R}^{4\times N}\). To realize the target \(\{r_{t}\}_{t=1}^{N}\) estimated by the medical facilities, the transmission matrix \(A\) and the exit and entry control \(U\) are determined in (O-COVID) through the minimization of \(\sum_{t=1}^{N}\left\|x_{t}-r_{t}\right\|_{2}\). Notice that (O-COVID) is a special case of (MOPUL) by setting \(\lambda_{1}=\lambda_{2}=0,\lambda_{3}=1\), \(f_{3}(\omega)=\omega\), \(B=C=I\), and \(\mathcal{S}=\mathcal{S}_{\mathrm{{\tiny COVID}}}\times\mathbb{R}_{+}\).
### Markov chain estimation
Let \(\{X_{t}\}_{t\geq 0}\) be a homogeneous Markov chain on states \(\{s_{i}\}_{i=1}^{m}\) with an unknown low-rank transition matrix \(P=(p_{ij}\coloneqq\mathbb{P}\left(X_{t}=s_{j}|X_{t-1}=s_{i}\right))_{m\times m }\in\mathbb{R}^{m\times m}\), which implies a latent low-dimensionality structure [42]. We can construct an optimization model for the Markov chains estimation with a low-rank demand as the following:
\[\begin{split}\min_{P}&\sum_{t=1}^{N}\left\|\pi_{t}-r _{t}\right\|_{2}\\ \mathrm{s.t.}&\pi_{t}=P\pi_{t-1},\ \ t=1,2,\ldots,N,\\ & P\in\mathcal{S}_{\mathrm{{\tiny Markov}}},\end{split}\] (O-Markov)
where the transition matrix \(P\) is a decision variable, observation horizon \(N\in\mathbb{N}_{+}\), probability distributions
\[\pi_{t}\coloneqq\left(\mathbb{P}\left(X_{t}=s_{1}\right),\mathbb{P}\left(X_{t }=s_{2}\right),\ldots,\mathbb{P}\left(X_{t}=s_{m}\right)\right)^{\mathrm{T}} \in\mathbb{R}^{m},t=0,1,\ldots,N,\]
the \(i\)th component of \(r_{t}\), i.e. \((r_{t})_{i}\) is an observed frequency of the event \(\{X_{t}=s_{i}\}\), for \(i=1,2,\ldots,m\), \(t=1,2,\ldots,N\), and
\[\mathcal{S}_{\mathrm{Markov}}\coloneqq\left\{P=\left(p_{ij}\right)_{m\times m }\in\mathbb{R}^{m\times m}\left|\begin{array}{l}p_{ij}\geq 0,i,j=1,2, \ldots,m,\\ \sum_{i=1}^{m}p_{ij}=1,j=1,2,\ldots,m,\\ \left\|P\right\|_{\ast}\leq\alpha,\\ \text{and subject to a finite number of linear inequality}\\ \text{constraints on }P.\end{array}\right.\right\},\]
in which \(\|\cdot\|_{\ast}\) denotes the nuclear norm and \(0<\alpha<m\). Different from Zhang and Wang [41], Li et al. [24], and Zhu et al. [42], which use the information of the event \(\{X_{t-1}=s_{i},X_{t}=s_{j}\}\), (O-Markov) estimates the low-rank transition matrix through frequency approximation of the event \(\{X_{t}=s_{i}\}\). The low-rank demand is enforced by the nuclear norm constraint in \(\mathcal{S}_{\mathrm{Markov}}\). Notice that (O-Markov) is a special case of (MOPUL) by setting \(\lambda_{1}=\lambda_{2}=0,\lambda_{3}=1\), \(f_{3}(\omega)=\omega\), \(B=O,C=I\), and \(\mathcal{S}=\mathcal{S}_{\mathrm{Markov}}\times\mathbb{R}^{m\times N}\times \mathbb{R}_{+}\), where \(O\) is the matrix of all zeros.
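As a concrete illustration, the snippet below solves a small synthetic instance of (O-Markov) after the reference-based substitution developed later in Section 3 (each \(\pi_{t}\) is replaced by \(Pr_{t-1}\), which is affine in \(P\)). It is a minimal cvxpy sketch with invented data, not the authors' code; the column-stochasticity and the bound \(\alpha\) follow the convention of \(\mathcal{S}_{\mathrm{Markov}}\).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m, N, alpha = 8, 20, 4.0

W = rng.random((m, 2)) @ rng.random((2, m))   # rank-2 ground truth
P_true = W / W.sum(axis=0, keepdims=True)     # column-stochastic
r = [np.full(m, 1.0 / m)]                     # observed frequencies (noiseless here)
for t in range(N):
    r.append(P_true @ r[-1])

P = cp.Variable((m, m), nonneg=True)
obj = sum(cp.norm(P @ r[t - 1] - r[t], 2) for t in range(1, N + 1))
prob = cp.Problem(cp.Minimize(obj),
                  [cp.sum(P, axis=0) == 1,    # columns sum to one
                   cp.normNuc(P) <= alpha])   # low-rank surrogate via nuclear norm
prob.solve()
print(prob.value, np.linalg.matrix_rank(P.value, tol=1e-6))
```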
### Multi-stage enterprise input-output problems
Input-output analysis is a framework describing and analyzing input (consumption) and output (production) activities and their relations in an economy [27, 25]. Referring to [25], considering an enterprise production with \(m_{1}\) self-made and \(m_{2}\) out-sourced products, the following multi-stage enterprise input-output optimization problem can be constructed to realize the given expected output values of enterprise production by controlling production technologies and purchase-sale plans.
\[\min_{A,U} \sum_{t=1}^{N}\left\|x_{t}-r_{t}\right\|_{2}\] (O-IN/OUTPUT1) \[\mathrm{s.t.} x_{t}=Ax_{t-1}+u_{t-1},\;\;t=1,2,\ldots,N,\] \[A\in\mathcal{S}_{\mathrm{IO}},\]
where the production technology matrix \(A\in\mathbb{R}^{(m_{1}+m_{2})\times(m_{1}+m_{2})}\) with structure described in (1) and purchase-sale control \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{(m_{1}+m_{2})\times N}\) are decision variables, \(N\in\mathbb{N}_{+}\) is the duration of enterprise production, \(\{x_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^{m_{1}+m_{2}}\) are the production output values of \(m_{1}+m_{2}\) products in which \((x_{t})_{i}\) is the output value of the \(i\)th self-made product, \(i=1,2,\ldots,m_{1}\), and \((x_{t})_{m_{1}+j}\) is the output value of the \(j\)th out-sourced product at each stage, \(j=1,2,\ldots,m_{2}\), \(\{u_{t}\}_{t=0}^{N-1}\subseteq\mathbb{R}^{m_{1}+m_{2}}\) are the purchase-sale values of \(m_{1}+m_{2}\) products at each stage, \(\{r_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^{m_{1}+m_{2}}\) are given expected output values as the references for \(\{x_{t}\}_{t=1}^{N}\), and constraint set
\[\mathcal{S}_{\mathrm{IO}}\coloneqq\left\{\begin{pmatrix}I-G&O\\ -H&I\end{pmatrix}\in\mathbb{R}^{(m_{1}+m_{2})\times(m_{1}+m_{2})}\left|\begin{array} []{l}\text{subject to a finite number of linear inequality}\\ \text{constraints on }G\in\mathbb{R}^{m_{1}\times m_{1}}\text{ and }H\in\mathbb{R}^{m_{2} \times m_{1}}.\end{array}\right.\right\}, \tag{1}\]
in which \(G\) and \(H\) are composed of technical coefficients [25]. Then enterprise production is \(x_{t}=Ax_{t-1}+u_{t-1}\) in (O-IN/OUTPUT1). The production technology matrix \(A\) and purchase-sale control \(U\) are determined to realize the expected enterprise output values by minimizing the discrepancy between the system output and the expected output values \(\sum_{t=1}^{N}\left\|x_{t}-r_{t}\right\|_{2}\). Notice that (O-IN/OUTPUT1) is a special case of (MOPUL) by setting \(\lambda_{1}=\lambda_{2}=0,\lambda_{3}=1\), \(f_{3}(\omega)=\omega\), \(B=C=I\), and \(\mathcal{S}=\mathcal{S}_{\mathrm{IO}}\times\mathbb{R}^{(m_{1}+m_{2})\times N} \times\mathbb{R}_{+}\).
When a steady and controllable change of the production technology is preferred within a guaranteed
level of cumulative error, we may consider the following problem:
\[\min_{A,U} \left\|A-A^{\mathrm{r}}\right\|_{F}\] (O-IN/OUTPUT2) \[\mathrm{s.t.} x_{t}=Ax_{t-1}+u_{t-1},\;\;t=1,2,\ldots,N,\] \[\sum_{t=1}^{N}\left\|x_{t}-r_{t}\right\|_{2}\leq\omega,\] \[\|u_{t}-u_{t}^{\mathrm{r}}\|_{2}\leq\omega_{t},\;\;t=0,1,\ldots,N-1,\] \[A\in\mathcal{S}_{\mathrm{IO}},\]

where the production technology matrix \(A\in\mathbb{R}^{(m_{1}+m_{2})\times(m_{1}+m_{2})}\) and purchase-sale control \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{(m_{1}+m_{2})\times N}\) are decision variables, \(N\in\mathbb{N}_{+}\) is the duration of enterprise production, \(\{r_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^{m_{1}+m_{2}}\), \(A^{\mathrm{r}}\in\mathbb{R}^{(m_{1}+m_{2})\times(m_{1}+m_{2})}\), and \(\{u_{t}^{\mathrm{r}}\}_{t=0}^{N-1}\subseteq\mathbb{R}^{m_{1}+m_{2}}\) are given reference values of \(\{x_{t}\}_{t=1}^{N}\), \(A\), and \(\{u_{t}\}_{t=0}^{N-1}\), respectively, and \(\omega,\{\omega_{t}\}_{t=0}^{N-1}\subseteq\mathbb{R}_{+}\) are control levels. The second constraint guarantees a cumulative precision of the iteration within the control level \(\omega\), and the third constraint guarantees a controllable change of the purchase-sale values. Notice that (O-IN/OUTPUT2) is a special case of (MOPUL) by setting \(\lambda_{1}=1,\lambda_{2}=\lambda_{3}=0\), \(f_{1}(A)=\|A-A^{\mathrm{r}}\|_{F}\), \(B=C=I\), and \(\mathcal{S}=\mathcal{S}_{\mathrm{IO}}\times\{U|\|u_{t}-u_{t}^{\mathrm{r}}\|_{2}\leq\omega_{t},t=0,1,\ldots,N-1\}\times\{\mathrm{constant}\;\omega\}\).
## 3 Semidefinite approximation
In this section, we explore the SDP approximation of problem (MOPUL). In Subsection 3.1, we discuss the computational intractability of the problem, and in Subsection 3.2 we construct a polynomial-time solvable SDP approximation model. The quality of SDP approximate solutions under two specific settings, in terms of feasibility and optimality, is analyzed in Subsection 3.3.
### Computational intractability
When \(\{f_{i}\}_{i=1}^{3}\) and \(\mathcal{S}\) are SD representable, the computational intractability of (MOPUL) mainly comes from the entanglement of decision variables in the first three constraints. Specifically, combined with the first two constraints, the third constraint of (MOPUL) is equivalent to
\[\sum_{t=1}^{N}\xi_{t}\leq\omega,\] \[\|y_{t}-r_{t}\|_{2}^{2}\leq\xi_{t}^{2},0\leq\xi_{t},\;t=1,2,\ldots,N,\]
where
\[\left\|y_{t}-r_{t}\right\|_{2}^{2}= x_{0}^{\mathrm{T}}(A^{\mathrm{T}})^{t}C^{\mathrm{T}}CA^{t}x_{0}+2 \sum_{j=0}^{t-1}x_{0}^{\mathrm{T}}(A^{\mathrm{T}})^{t}C^{\mathrm{T}}CA^{j}Bu_{ t-1-j} \tag{2}\] \[+\sum_{i,j=0}^{t-1}u_{t-1-i}^{\mathrm{T}}B^{\mathrm{T}}(A^{ \mathrm{T}})^{i}C^{\mathrm{T}}CA^{j}Bu_{t-1-j}-2x_{0}^{\mathrm{T}}(A^{\mathrm{ T}})^{t}C^{\mathrm{T}}r_{t}\] \[-2\sum_{i=0}^{t-1}u_{t-1-i}^{\mathrm{T}}B^{\mathrm{T}}(A^{ \mathrm{T}})^{i}C^{\mathrm{T}}r_{t}+r_{t}^{\mathrm{T}}r_{t}.\]
This is a nonnegative multivariate polynomial of degree \(2t\), \(t=1,2,\ldots,N\) over \(A\). Thus (MOPUL) equivalently contains a series of multivariate polynomial constraints. Since the problem of minimizing a nonnegative multivariate polynomial of degree higher than or equal to 4 is in general NP-hard [29], we know (MOPUL) is NP-hard.
### Approximation model
Notice that the vector \(r_{t}\) in (MOPUL) can be viewed as the given reference value of the system output \(y_{t}\) at each stage. For (O-MPC) in Subsection 2.1, it represents the reference signal in the linear control system. For (O-COVID) in Subsection 2.2, it represents the expected number of individuals. For (O-Markov) in Subsection 2.3, it represents the observed frequency of a certain event. And for (O-IN/OUTPUT1) and (O-IN/OUTPUT2) in Subsection 2.4, it represents the expected output value of enterprise production. In the proposed approximation model, following a similar idea to [38], \(r_{t}\) is used to decouple the nested iteration of \(x_{t}\) and thus avoid the multivariate polynomial structures in (2). Specifically, we replace the constraint \(x_{t}=Ax_{t-1}+Bu_{t-1}\), \(t=1,2,\ldots,N\) in (MOPUL) by \(x_{t}=AC^{\dagger}r_{t-1}+Bu_{t-1}\), \(t=1,2,\ldots,N\). Then an approximate matrix optimization problem over an uncertain linear system on finite horizon (abbreviated as AMOPUL) can be constructed as the following:
\[\min_{A,U,\omega} \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+ \lambda_{3}f_{3}\left(\omega\right)\] (AMOPUL) \[\mathrm{s.t.} r_{0}=Cx_{0},\;\;x_{0}^{\mathrm{a}}=x_{0},\] \[x_{t}^{\mathrm{a}}=AC^{\dagger}r_{t-1}+Bu_{t-1},\;\;t=1,2,\ldots,N,\] \[y_{t}^{\mathrm{a}}=Cx_{t}^{\mathrm{a}},\;\;t=0,1,\ldots,N,\] \[\sum_{t=1}^{N}\left\|y_{t}^{\mathrm{a}}-r_{t}\right\|_{2}\leq\omega,\] \[(A,U,\omega)\in\mathcal{S},\]
where the transition matrix \(A\in\mathbb{R}^{n\times n}\), control \(U:=(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{m\times N}\), and control level/threshold \(\omega\in\mathbb{R}_{+}\) are decision variables. To avoid potential confusion in notation, \(x_{t}^{\mathrm{a}}\) and \(y_{t}^{\mathrm{a}}\) are used to denote the approximate values of \(x_{t}\) and \(y_{t}\) in (MOPUL), respectively. We call \(\sum_{t=1}^{N}\|y_{t}^{\mathrm{a}}-r_{t}\|_{2}\) the approximate cumulative error. The meanings of other notations are the same as in (MOPUL).
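The effect of the substitution is easy to see on a one-dimensional toy system: under the exact recursion the residual at stage \(t\) is a degree-\(t\) polynomial in the transition coefficient (so its squared norm has degree \(2t\), as in (2)), while under the decoupled recursion every residual is affine. The following SymPy check, with made-up numbers of our choosing, verifies this.

```python
import sympy as sp

a = sp.symbols('a')            # scalar system: A = (a), B = C = 1
N = 3
u = [0.2, -0.1, 0.3]           # made-up controls u_0, u_1, u_2
r = [1.0, 0.9, 1.1, 1.0]       # made-up references r_0, ..., r_3

x = r[0]                       # exact recursion x_t = a*x_{t-1} + u_{t-1}
for t in range(1, N + 1):
    x = a * x + u[t - 1]
    print('exact, stage', t, ': degree', sp.degree(sp.expand(x - r[t]), a))

for t in range(1, N + 1):      # decoupled recursion x_t^a = a*r_{t-1} + u_{t-1}
    xa = a * r[t - 1] + u[t - 1]
    print('approx, stage', t, ': degree', sp.degree(sp.expand(xa - r[t]), a))
```

The first loop prints degrees 1, 2, 3, while the second prints 1, 1, 1, which is exactly why (AMOPUL) is convex in \((A,U,\omega)\).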
Definitions and some properties of the SD representability are included below.
**Definition 3.1**.: _(Semidefinite representable set [5]) A convex set \(\mathcal{C}\subseteq\mathbb{R}^{n}\) is called semidefinite representable (SD representable) if there exist_
\[L_{i}^{j}\in\textbf{S}^{m_{j}},i=0,1,\ldots,n+d,j=1,2,\ldots,J\]
_such that_
\[\mathcal{C}=\left\{x=(x_{1},x_{2},\ldots,x_{n})^{\mathrm{T}}\in\mathbb{R}^{n} \left|\begin{array}{c}L_{0}^{j}+\sum_{i=1}^{n}x_{i}L_{i}^{j}+\sum_{i=1}^{d} u_{i}L_{n+i}^{j}\succeq 0,\;\;j=1,2,\ldots,J\\ \text{for some}\;u=(u_{1},u_{2},\ldots,u_{d})^{\mathrm{T}}\in\mathbb{R}^{d} \end{array}\right.\right\}.\]
**Definition 3.2**.: _(Semidefinite representable function [5]) A convex function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\cup\{+\infty\}\) is called semidefinite representable (SD representable) if the set \(\{(x,\lambda)\in\mathbb{R}^{n+1}|f(x)\leq\lambda\}\) is SD representable._
SD representability of sets is preserved through the set operations of intersection, direct product, affine mapping and its inverse [5]. Notice that the matrix norm \(\|\cdot\|_{F}\), \(\|\cdot\|_{Q}\), \(\|\cdot\|_{2}\), and \(\|\cdot\|_{*}\) used in this paper are all SD representable functions [5]. The next lemma discloses the connections between the SD representability and SDP problems.
**Lemma 3.1** ([5]).: _A minimization problem \(\min_{x}\{c^{\mathrm{T}}x|x\in\cap_{i=1}^{k}\mathcal{C}_{i}\subseteq\mathbb{R}^{n}\}\) can be equivalently formulated as an SDP problem if \(\mathcal{C}_{i}\) is SD representable for \(i=1,2,\ldots,k\)._
In order to construct an equivalent SDP reformulation, we need the next two results.
**Lemma 3.2**.: _Let \(x\in\mathbb{R}^{n}\) and \(X=\begin{pmatrix}O&x\\ x^{\mathrm{T}}&0\end{pmatrix}\in\mathbf{S}^{n+1}\). If \(X\in\mathbf{S}^{n+1}_{+}\), then \(x=\mathbf{0}\)._
Proof.: Let \(v_{i,n+1}=(0,\ldots,0,1,0,\ldots,0,1)^{\mathrm{T}}\in\mathbb{R}^{n+1}\), whose \(i\)th and \((n+1)\)th elements are 1, and \(\bar{v}_{i,n+1}=(0,\ldots,0,-1,0,\ldots,0,1)^{\mathrm{T}}\in\mathbb{R}^{n+1}\), whose \(i\)th element is \(-1\) and \((n+1)\)th element is 1, \(i=1,2,\ldots,n\). Since \(X\in\mathbf{S}^{n+1}_{+}\), we know that
\[2x_{i}=v_{i,n+1}^{\mathrm{T}}Xv_{i,n+1}\geq 0,\;\mathrm{and}\;-2x_{i}=\bar{v }_{i,n+1}^{\mathrm{T}}X\bar{v}_{i,n+1}\geq 0,i=1,2,\ldots,n.\]
Therefore \(x=\mathbf{0}\).
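A quick numeric companion to Lemma 3.2 (our illustration): for any nonzero \(x\), the bordered matrix has eigenvalues \(\pm\|x\|_{2}\) together with zeros, so it cannot be positive semidefinite.

```python
import numpy as np

x = np.array([0.3, -0.7, 0.0])   # any nonzero vector
n = x.size
X = np.zeros((n + 1, n + 1))     # X = [[O, x], [x^T, 0]]
X[:n, n], X[n, :n] = x, x
print(np.linalg.eigvalsh(X))     # contains -||x||_2 < 0, so X is not PSD
```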
**Lemma 3.3**.: _(Schur complement [21, Theorem 1.12(b)]) Let \(M\in\mathbf{S}^{n}\) be partitioned as_
\[M=\begin{pmatrix}E&F\\ F^{\mathrm{T}}&G\end{pmatrix},\]
_where \(E\in\mathbf{S}^{q}\) is nonsingular with \(q\leq n-1\). Then \(M\in\mathbf{S}^{n}_{+}\) if and only if \(E\in\mathbf{S}^{q}_{++}\) and \(G-F^{\mathrm{T}}E^{-1}F\in\mathbf{S}^{n-q}_{+}\)._
**Theorem 3.1**.: _Under the assumption that \(\{f_{i}\}_{i=1}^{3}\) and \(\mathcal{S}\) are SD representable, problem (AMOPUL) has the following SDP reformulation:_
\[\min_{A,U,\omega,\{\xi_{t}\}_{t=1}^{N}} \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+ \lambda_{3}f_{3}\left(\omega\right)\] \[\mathrm{s.t.} \begin{pmatrix}\xi_{t}I&CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}&\xi_{t}\end{pmatrix} \in\mathbf{S}^{p+1}_{+},\] \[t=1,2,\ldots,N,\] \[\sum_{t=1}^{N}\xi_{t}\leq\omega,\] \[(A,U,\omega)\in\mathcal{S},\]
_where \(A\in\mathbb{R}^{n\times n}\), \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{m\times N}\), \(\omega\in\mathbb{R}_{+}\), and \(\{\xi_{t}\}_{t=1}^{N}\subseteq\mathbb{R}_{+}\) are decision variables, and the remaining notations are defined the same as in (AMOPUL)._
Proof.: (AMOPUL) is equivalent to
\[\min_{A,U,\omega,\{\xi_{t}\}_{t=1}^{N}} \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+ \lambda_{3}f_{3}\left(\omega\right)\] \[\mathrm{s.t.} \left\|CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right\|_{2}^{2}\leq \xi_{t}^{2},\;\;\xi_{t}\geq 0,\;\;t=1,2,\ldots,N,\] \[\sum_{t=1}^{N}\xi_{t}\leq\omega,\] \[(A,U,\omega)\in\mathcal{S}.\]
We now prove that the first constraint in the above reformulation is equivalent to
\[\begin{pmatrix}\xi_{t}I&CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}&\xi_{t}\end{pmatrix} \in\mathbf{S}_{+}^{p+1},\;\;t=1,2,\ldots,N.\]
For each \(t=1,2,\ldots,N\), if \(\xi_{t}=0\), then
\[\left\|CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right\|_{2}^{2}\leq \xi_{t}^{2},\xi_{t}=0\] \[\Longleftrightarrow CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}= \mathbf{0}\] \[\Longleftrightarrow\begin{pmatrix}O&CAC^{\dagger}r_{t-1}+CBu_{t -1}-r_{t}\\ \left(CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}&0\end{pmatrix} \in\mathbf{S}_{+}^{p+1}\] \[\Longleftrightarrow\begin{pmatrix}\xi_{t}I&CAC^{\dagger}r_{t-1}+ CBu_{t-1}-r_{t}\\ \left(CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}&\xi_{t}\end{pmatrix} \in\mathbf{S}_{+}^{p+1},\]
where the second equivalency follows from Lemma 3.2.
If \(\xi_{t}>0\), then
\[\left\|CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right\|_{2}^{2}\leq \xi_{t}^{2},\;\;\xi_{t}>0\] \[\Longleftrightarrow\xi_{t}-\left(CAC^{\dagger}r_{t-1}+CBu_{t -1}-r_{t}\right)^{\mathrm{T}}\left(\xi_{t}I\right)^{-1}\left(CAC^{\dagger}r_{ t-1}+CBu_{t-1}-r_{t}\right)\geq 0,\;\;\xi_{t}I\in\mathbf{S}_{++}^{p}\] \[\Longleftrightarrow\begin{pmatrix}\xi_{t}I&CAC^{\dagger}r_{t-1 }+CBu_{t-1}-r_{t}\\ \left(CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}&\xi_{t}\end{pmatrix} \in\mathbf{S}_{+}^{p+1},\]
where the first equivalency follows from the fact that a symmetric matrix is positive definite if and only if all of its eigenvalues are positive [3] (so that \(\xi_{t}>0\) gives \(\xi_{t}I\in\mathbf{S}_{++}^{p}\)), and the second equivalency follows from Lemma 3.3. Therefore, we obtain the equivalent reformulation. According to the SD representability assumptions and Lemma 3.1, (AMOPUL) becomes an SDP problem.
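For readers who prefer code, here is a minimal cvxpy sketch of the constraint system of Theorem 3.1 on invented data (the paper's own experiments use MATLAB with CVX/MOSEK). Each positive semidefinite block above encodes the norm bound \(\|CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\|_{2}\leq\xi_{t}\); since that bound is SD representable, we may state it directly. The objective and the set \(\mathcal{S}\) (simple boxes) are illustrative choices of \(\{\lambda_{i},f_{i}\}\), not prescribed by the paper.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, m, p, N = 6, 3, 8, 10
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))            # full column rank almost surely
C_pinv = np.linalg.pinv(C)             # Moore-Penrose inverse C^+
r = [rng.normal(size=p) for _ in range(N + 1)]

A = cp.Variable((n, n))
U = cp.Variable((m, N))
xi = cp.Variable(N, nonneg=True)
omega = cp.Variable(nonneg=True)

cons = [cp.norm(C @ A @ C_pinv @ r[t - 1] + C @ B @ U[:, t - 1] - r[t], 2)
        <= xi[t - 1] for t in range(1, N + 1)]
cons += [cp.sum(xi) <= omega,               # cumulative-error constraint
         cp.abs(A) <= 0.4, cp.abs(U) <= 0.5]  # an SD representable S (boxes)

# Illustrative objective: lambda_1 = lambda_3 = 1, f_1 = ||A||_F, f_3 = omega.
prob = cp.Problem(cp.Minimize(cp.norm(A, 'fro') + omega), cons)
prob.solve()
print(prob.value)
```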
The computational tractability of SDP problems [5, 30, 34] and Theorem 3.1 assure that (AMOPUL) is polynomial-time solvable. Once an optimal solution \((A^{\mathrm{a}*},U^{\mathrm{a}*},\omega^{\mathrm{a}*})\) of (AMOPUL) is obtained, it is worth estimating the cumulative error between the system outputs and the references in (MOPUL). An upper bound for this gap is provided in the next theorem.
**Theorem 3.2**.: _If there exist \(\beta>0\) and \(\omega^{\mathrm{u}}>0\) such that \(\|CA^{\mathrm{a}}C^{\dagger}\|_{2}\leq\beta\) and \(\omega^{\mathrm{a}}\leq\omega^{\mathrm{u}}\) for any feasible \(A^{\mathrm{a}},\omega^{\mathrm{a}}\) of (AMOPUL), then for each optimal solution \((A^{\mathrm{a}*},U^{\mathrm{a}*},\omega^{\mathrm{a}*})\) of (AMOPUL), by letting \(x_{t}=A^{\mathrm{a}*}x_{t-1}+Bu_{t-1}^{\mathrm{a}*},y_{t}=Cx_{t},t=1,2,\ldots,N\), we have_
\[\sum_{t=1}^{N}\|y_{t}-r_{t}\|_{2}\leq\left(\sum_{i=0}^{N-1}\beta^{i}\right) \omega^{\mathrm{u}}.\]
Proof.: When \(C\) has full column rank, we have \(x_{0}=C^{\dagger}r_{0}\) and \(C^{\dagger}C=I\)[32]. Hence
\[\sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\] \[= \sum_{t=1}^{N}\left\|C(A^{\mathrm{a}*}x_{t-1}+Bu_{t-1}^{\mathrm{a }*})-CA^{\mathrm{a}*}C^{\dagger}r_{t-1}+CA^{\mathrm{a}*}C^{\dagger}r_{t-1}-r_{ t}\right\|_{2}\] \[\leq \sum_{t=1}^{N}\left\|CA^{\mathrm{a}*}C^{\dagger}\left(Cx_{t-1}-r _{t-1}\right)\right\|_{2}+\sum_{t=1}^{N}\left\|y_{t}^{\mathrm{a}}-r_{t}\right\| _{2}\] \[\leq \sum_{t=1}^{N}\left\|CA^{\mathrm{a}*}C^{\dagger}\left(y_{t-1}-r _{t-1}\right)\right\|_{2}+\omega^{\mathrm{a}*}\] \[\leq \left\|CA^{\mathrm{a}*}C^{\dagger}\right\|_{2}\left(\sum_{t=1}^ {N-1}\left\|y_{t}-r_{t}\right\|_{2}\right)+\omega^{\mathrm{a}*}.\]
With the same arguments, we have
\[\sum_{t=1}^{n}\left\|y_{t}-r_{t}\right\|_{2}\leq\left\|CA^{\mathrm{a}*}C^{ \dagger}\right\|_{2}\left(\sum_{t=1}^{n-1}\left\|y_{t}-r_{t}\right\|_{2}\right) +\omega^{\mathrm{a}*},\;\;n=2,3,\ldots,N-1.\]
Consequently, we have
\[\sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\] \[\leq \left\|CA^{\mathrm{a}*}C^{\dagger}\right\|_{2}^{N-1}\left\|y_{1}- r_{1}\right\|_{2}+\left(\sum_{i=0}^{N-2}\left\|CA^{\mathrm{a}*}C^{\dagger} \right\|_{2}^{i}\right)\omega^{\mathrm{a}*}\] \[= \left\|CA^{\mathrm{a}*}C^{\dagger}\right\|_{2}^{N-1}\left\|y_{1}^ {\mathrm{a}}-r_{1}\right\|_{2}+\left(\sum_{i=0}^{N-2}\left\|CA^{\mathrm{a}*}C^ {\dagger}\right\|_{2}^{i}\right)\omega^{\mathrm{a}*}\] \[\leq \left(\sum_{i=0}^{N-1}\left\|CA^{\mathrm{a}*}C^{\dagger}\right\|_ {2}^{i}\right)\omega^{\mathrm{a}*}\] \[\leq \left(\sum_{i=0}^{N-1}\beta^{i}\right)\omega^{\mathrm{u}}.\]
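The bound is easy to check numerically. Below is a small self-contained sanity check with made-up data (our sketch, taking \(B=C=I\) so that \(CA^{\mathrm{a}*}C^{\dagger}=A^{\mathrm{a}*}\)): we take a pair \((A,U)\), use its achieved approximate cumulative error as \(\omega^{\mathrm{u}}\) and \(\|A\|_{2}\) as \(\beta\), roll out the exact dynamics, and confirm that the exact cumulative error stays below \((\sum_{i=0}^{N-1}\beta^{i})\omega^{\mathrm{u}}\).

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 5, 12
A = 0.2 * rng.normal(size=(n, n))     # plays the role of A^{a*}; B = C = I
U = rng.normal(size=(n, N))
r = [rng.normal(size=n)]
for t in range(N):                    # references close to, but not on, the dynamics
    r.append(A @ r[-1] + U[:, t] + 0.01 * rng.normal(size=n))

beta = np.linalg.norm(A, 2)           # here ||C A C^+||_2 = ||A||_2
omega_u = sum(np.linalg.norm(A @ r[t - 1] + U[:, t - 1] - r[t])
              for t in range(1, N + 1))  # achieved approximate cumulative error

x, exact = r[0], 0.0
for t in range(1, N + 1):             # exact rollout x_t = A x_{t-1} + u_{t-1}
    x = A @ x + U[:, t - 1]
    exact += np.linalg.norm(x - r[t])
print(exact, sum(beta ** i for i in range(N)) * omega_u)  # exact <= bound
```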
**Remark 3.1**.: _For the application problems in Section 2, we may assume that the variables \(A\) and \(\omega\) are bounded. Then the assumptions \(\|CA^{\mathrm{a}}C^{\dagger}\|_{2}\leq\beta\) and \(\omega^{\mathrm{a}}\leq\omega^{\mathrm{u}}\) for any feasible \(A^{\mathrm{a}}\) and \(\omega^{\mathrm{a}}\) of (AMOPUL) in Theorem 3.2 are satisfied. In this case, additional constraints such as \(\|A\|_{2}\leq\alpha\) and \(\omega\leq\omega^{\mathrm{u}}\) with large enough \(\alpha\) and \(\omega^{\mathrm{u}}\) can be added to \(\mathcal{S}\) if necessary._
When the control level \(\omega\) in (MOPUL) is fixed as a constant, the feasibility of an (AMOPUL) solution to (MOPUL) can be established as below.
**Theorem 3.3**.: _Suppose that \(\mathcal{S}\subseteq\{(A,U,\omega)|\omega=\omega^{\mathrm{c}}\}\) in (MOPUL) with \(\omega^{\mathrm{c}}>0\) being given. Replace the constraint \(\sum_{t=1}^{N}\|y_{t}^{\mathrm{a}}-r_{t}\|_{2}\leq\omega^{\mathrm{c}}\) with \(\sum_{t=1}^{N}\|y_{t}^{\mathrm{a}}-r_{t}\|_{2}\leq\omega^{\mathrm{c}}/(\sum_{ i=0}^{N-1}\beta^{i})\) in the SDP approximation
(AMOPUL) for some \(\beta>0\). If \(\|CA^{\mathrm{a}}C^{\dagger}\|_{2}\leq\beta\) for any feasible \(A^{\mathrm{a}}\) of (AMOPUL), then any feasible solution \((A^{\mathrm{a}},U^{\mathrm{a}})\) of (AMOPUL) is feasible to (MOPUL), and the optimal objective value of (AMOPUL) becomes an upper bound for that of (MOPUL)._
Proof.: With similar arguments as in the proof of Theorem 3.2, we have
\[\sum_{t=1}^{n}\left\|y_{t}-r_{t}\right\|_{2}\leq\left\|CA^{\mathrm{a}}C^{\dagger }\right\|_{2}\left(\sum_{t=1}^{n-1}\left\|y_{t}-r_{t}\right\|_{2}\right)+\frac{ \omega^{\mathrm{c}}}{\sum_{i=0}^{N-1}\beta^{i}},\ \ n=2,3,\ldots,N,\]
for any feasible solution \((A^{\mathrm{a}},U^{\mathrm{a}})\) of (AMOPUL). Hence
\[\sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\leq \left(\sum_{i=0}^{N-1}\left\|CA^{\mathrm{a}}C^{\dagger}\right\|_{2 }^{i}\right)\frac{\omega^{\mathrm{c}}}{\sum_{i=0}^{N-1}\beta^{i}}\] \[\leq \left(\sum_{i=0}^{N-1}\beta^{i}\right)\frac{\omega^{\mathrm{c}}}{ \sum_{i=0}^{N-1}\beta^{i}}\] \[= \ \omega^{\mathrm{c}},\]
which implies the feasibility of \((A^{\mathrm{a}},U^{\mathrm{a}})\) to (MOPUL). Then the optimal objective value of (AMOPUL) becomes an upper bound for that of (MOPUL).
The above theorem shows that the control level plays an important role in (AMOPUL). Numerically, we will study this issue further in Section 4.
**Remark 3.2**.: _For Theorem 3.3, if \(\mathcal{S}\subseteq\{(A,U,\omega)|\|A\|_{2}\leq\alpha\}\), by taking \(\beta=\alpha\|C\|_{2}\|C^{\dagger}\|_{2}\), the assumption \(\|CA^{\mathrm{a}}C^{\dagger}\|_{2}\leq\beta\) is satisfied._
**Remark 3.3**.: _A weighted cumulative error \(\sum_{t=1}^{N}\|y_{t}-r_{t}\|_{Q}\) can also be used in (MOPUL) by replacing \(\|\cdot\|_{2}\) with \(\|\cdot\|_{Q}\). The corresponding approximation model can then be constructed by using \(\|\cdot\|_{Q}\) in (AMOPUL). With similar arguments as in the proof of Theorem 3.1, its approximation is an SDP problem. As \(\eta_{1}\|x\|_{Q}\leq\|x\|_{2}\leq\eta_{2}\|x\|_{Q}\) holds for all \(x\in\mathbb{R}^{p}\) and some \(\eta_{1},\eta_{2}>0\)[40, 1.12, Proposition 4], Theorem 3.3 follows when an upper bound of \(\sum_{t=1}^{N}\|y_{t}^{\mathrm{a}}-r_{t}\|_{Q}\) is given by_
\[\frac{\omega^{\mathrm{c}}}{(\frac{\eta_{2}\beta}{\eta_{1}})^{N-1}+\sum_{i=0}^ {N-2}\frac{\eta_{2}^{i+1}\beta^{i}}{\eta_{1}^{i+1}}}.\]
**Remark 3.4**.: _When the matrix \(A\) is time-varying in (MOPUL), i.e., \(x_{t}=A_{t-1}x_{t-1}+Bu_{t-1}\), we can also get a similar SDP approximation with similar discussions._
### Two special cases
We study two special cases of (MOPUL) associated with the specific application problems in Section 2, focusing on reference-output fitting and transition-matrix estimation, respectively.
#### 3.3.1 MOPUL1
To fit the given reference outputs \(\{r_{t}\}_{t=1}^{N}\), we consider the following (MOPUL1) problem to minimize the cumulative error of reference outputs:
\[\begin{split}\min_{A,U}&\ \sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\\ \mathrm{s.t.}&\ x_{t}=Ax_{t-1}+Bu_{t-1},\ \ t=1,2,\ldots,N,\\ &\ y_{t}=Cx_{t},\ \ t=0,1,\ldots,N,\\ &\ (A,U)\in\mathcal{S}_{\textsc{m}1},\end{split}\] (MOPUL1)
where the transition matrix \(A\in\mathbb{R}^{n\times n}\) and control \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{m\times N}\) are decision variables, \(\mathcal{S}_{\textsc{m}1}\) is SD representable. It is a special case of (MOPUL) by setting \(\lambda_{1}=\lambda_{2}=0\), \(\lambda_{3}=1\), \(f_{3}(\omega)=\omega\), and \(\mathcal{S}=\mathcal{S}_{\textsc{m}1}\times\mathbb{R}_{+}\). Notice that (O-MPC) with \(\lambda=0\) in Subsection 2.1, (O-COVID) in Subsection 2.2, (O-Markov) in Subsection 2.3, and (O-IN/OUTPUT1) in Subsection 2.4 are four examples of (MOPUL1). An SDP approximation model of (MOPUL1) becomes the following:
\[\begin{split}\min_{A,U}&\ \sum_{t=1}^{N}\left\|y_{t}^{\mathrm{a}}-r_{t}\right\|_{2}\\ \mathrm{s.t.}&\ r_{0}=Cx_{0},\ \ x_{0}^{\mathrm{a}}=x_{0},\\ &\ x_{t}^{\mathrm{a}}=AC^{\dagger}r_{t-1}+Bu_{t-1},\ \ t=1,2,\ldots,N,\\ &\ y_{t}^{\mathrm{a}}=Cx_{t}^{\mathrm{a}},\ \ t=0,1,\ldots,N,\\ &\ (A,U)\in\mathcal{S}_{\textsc{m}1},\end{split}\] (AMOPUL1)
where \(A\) and \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\) are decision variables.
The next theorem provides an upper bound for the optimal objective value \(v_{\mathrm{A1}}^{*}\) of (AMOPUL1), which vanishes when the given reference outputs are accurate.
**Theorem 3.4**.: _Let \(\{\epsilon_{t}\}_{t=1}^{N}\) be \(N\) nonnegative constants. For problem (AMOPUL1), if the set_
\[\mathcal{Z}_{\epsilon}\coloneqq\left\{(A,U)\in\mathcal{S}_{\textsc{m}1} \left|\begin{array}{ll}U=(u_{0},u_{1},\ldots,u_{N-1}),\\ \hat{y}_{0}=Cx_{0},\\ \hat{y}_{t}=CAC^{\dagger}\hat{y}_{t-1}+CBu_{t-1},\\ \|\hat{y}_{t}-r_{t}\|_{2}\leq\epsilon_{t},\ \ t=1,2,\ldots,N\end{array} \right.\right\}\neq\emptyset, \tag{3}\]
_then we have_
\[v_{\mathrm{A1}}^{*}\leq(1+\gamma)\sum_{t=1}^{N-1}\epsilon_{t}+ \epsilon_{N},\]
_where \(\gamma\coloneqq\inf_{(A,U)\in\mathcal{Z}_{\epsilon}}\|CAC^{\dagger}\|_{2}\). Moreover, if \(\{r_{t}\}_{t=1}^{N}\) are accurate system references, i.e., there exists an \((A,U)\in\mathcal{S}_{\textsc{m}1}\) such that \(\hat{y}_{t}=r_{t},t=1,2,\ldots,N\), then \(v_{\mathrm{A1}}^{*}=0\)._
Proof.: For each approximate iteration \(y_{t}^{\text{a}}=CAC^{\dagger}r_{t-1}+CBu_{t-1},t=1,2,\ldots,N\) with \((A,U)\in\mathcal{Z}_{\epsilon}\),
\[\sum_{t=1}^{N}\left\|y_{t}^{\text{a}}-r_{t}\right\|_{2}\] \[= \sum_{t=1}^{N}\left\|CAC^{\dagger}r_{t-1}+CBu_{t-1}-r_{t}\right\| _{2}\] \[= \sum_{t=1}^{N}\left\|CAC^{\dagger}(r_{t-1}-\hat{y}_{t-1}+\hat{y}_ {t-1})+CBu_{t-1}-(r_{t}-\hat{y}_{t}+\hat{y}_{t})\right\|_{2}\] \[\leq \sum_{t=1}^{N}\left\|CAC^{\dagger}\left(\hat{y}_{t-1}-r_{t-1} \right)\right\|_{2}+\sum_{t=1}^{N}\left\|\hat{y}_{t}-r_{t}\right\|_{2}\] \[\leq \left\|CAC^{\dagger}\right\|_{2}\sum_{t=1}^{N-1}\left\|\hat{y}_{ t}-r_{t}\right\|_{2}+\sum_{t=1}^{N}\left\|\hat{y}_{t}-r_{t}\right\|_{2}\] \[\leq \left(1+\|CAC^{\dagger}\|_{2}\right)\sum_{t=1}^{N-1}\epsilon_{t }+\epsilon_{N}.\]
Thus
\[v_{{}_{A1}}^{*}\leq\sum_{t=1}^{N}\left\|y_{t}^{\text{a}}-r_{t} \right\|_{2}\leq\left(1+\|CAC^{\dagger}\|_{2}\right)\sum_{t=1}^{N-1}\epsilon_{ t}+\epsilon_{N}\]
holds for each \((A,U)\in\mathcal{Z}_{\epsilon}\). Consequently, we have
\[v_{{}_{A1}}^{*}\leq\inf_{{}_{(A,U)\in\mathcal{Z}_{\epsilon}}} \left\{\left(1+\|CAC^{\dagger}\|_{2}\right)\sum_{t=1}^{N-1}\epsilon_{t}+ \epsilon_{N}\right\}=(1+\gamma)\sum_{t=1}^{N-1}\epsilon_{t}+\epsilon_{N},\]
and the rest of the theorem follows.
This theorem shows that the accuracy of reference outputs is important to (AMOPUL1). Given a sequence of accurate reference outputs, (AMOPUL1) can fit them without error. Numerically, we will study this issue further in Section 4.
Letting \(v_{{}_{\text{M1}}}^{*}\) and \(v_{{}_{A1}}^{*}\) be the optimal objective values of (MOPUL1) and (AMOPUL1), respectively, some bounds on the ratio \(v_{{}_{A1}}^{*}/v_{{}_{\text{M1}}}^{*}\) are provided in the next two results.
**Theorem 3.5**.: _If problem (MOPUL1) is attainable, then we have_
\[v_{{}_{A1}}^{*}\leq(1+\gamma^{*})\,v_{{}_{\text{M1}}}^{*},\]
_where \(\gamma^{*}\coloneqq\inf_{{}_{(A^{*},U^{*})\in\mathcal{F}_{\text{M1}}^{*}}} \inf_{{}_{(A,U)\in\mathcal{Z}_{\epsilon^{*}}}}\|CAC^{\dagger}\|_{2}\) with \(\mathcal{F}_{{}_{\text{M1}}}^{*}\) being the optimal solution set of (MOPUL1) and \(\mathcal{Z}_{\epsilon^{*}}\) being the set defined in (3) in which \(\epsilon_{t}=\|\hat{y}_{t}^{*}-r_{t}\|_{2}\), \(\hat{y}_{t}^{*}=CA^{*}C^{\dagger}\hat{y}_{t-1}^{*}+CBu_{t-1}^{*},t=1,2,\ldots,N\), and \(\hat{y}_{0}^{*}=Cx_{0}\) for each \((A^{*},U^{*})\in\mathcal{F}_{{}_{\text{M1}}}^{*}\)._
Proof.: When (MOPUL1) is attainable, we have \((A^{*},U^{*})\in\mathcal{Z}_{\epsilon^{*}}\neq\emptyset\) for each \((A^{*},U^{*})\in\mathcal{F}_{{}_{\text{M1}}}^{*}\). Theorem 3.4 says that
\[v_{{}_{A1}}^{*}\leq\left(1+\inf_{{}_{(A,U)\in\mathcal{Z}_{\epsilon^{*}}}}\|CAC ^{\dagger}\|_{2}\right)v_{{}_{\text{M1}}}^{*}\]
holds for each \((A^{*},U^{*})\in\mathcal{F}^{*}_{\textsc{M1}}\). Hence
\[v^{*}_{\textsc{A1}}\leq\inf_{(A^{*},U^{*})\in\mathcal{F}^{*}_{ \textsc{M1}}}\left(1+\inf_{(A,U)\in\mathcal{Z}_{\epsilon^{*}}}\|CAC^{\dagger}\|_{2} \right)v^{*}_{\textsc{M1}}=\left(1+\gamma^{*}\right)v^{*}_{\textsc{M1}}.\]
**Theorem 3.6**.: _If problem (AMOPUL1) is attainable, then we have_
\[v^{*}_{\textsc{M1}}\leq\left(\sum_{i=0}^{N-1}(\zeta^{*})^{i} \right)v^{*}_{\textsc{A1}},\]
_where \(\zeta^{*}\coloneqq\inf_{(A^{\mathrm{a}*},U^{\mathrm{a}*})\in\mathcal{F}^{*}_{ \textsc{A1}}}\|CA^{\mathrm{a}*}C^{\dagger}\|_{2}\) with \(\mathcal{F}^{*}_{\textsc{A1}}\) being the optimal solution set of (AMOPUL1)._
Proof.: When (AMOPUL1) is attainable, let \((A^{\mathrm{a}*},U^{\mathrm{a}*})\in\mathcal{F}^{*}_{\textsc{A1}}\) be an optimal solution of
(AMOPUL1), \(x_{t}=A^{\mathrm{a}*}x_{t-1}+Bu^{\mathrm{a}*}_{t-1}\), \(y_{t}=Cx_{t}\) in (MOPUL1), and \(x^{\mathrm{a}*}_{t}=A^{\mathrm{a}*}C^{\dagger}r_{t-1}+Bu^{\mathrm{a}*}_{t-1}\), \(y^{\mathrm{a}*}_{t}=Cx^{\mathrm{a}*}_{t}\) in (AMOPUL1), \(t=1,2,\ldots,N\). With similar arguments as in the proof of Theorem 3.2, we can obtain the following \(N-1\) inequalities:
\[\sum_{t=1}^{n}\|y_{t}-r_{t}\|_{2} \leq\|CA^{\mathrm{a}*}C^{\dagger}\|_{2}\sum_{t=1}^{n-1}\|y_{t}-r_ {t}\|_{2}+\sum_{t=1}^{N}\|y^{\mathrm{a}*}_{t}-r_{t}\|_{2},\ \ n=2,3,\ldots,N,\]
which imply that
\[\sum_{t=1}^{N}\|y_{t}-r_{t}\|_{2} \leq\left(\sum_{i=0}^{N-1}\|CA^{\mathrm{a}*}C^{\dagger}\|_{2}^{i} \right)\sum_{t=1}^{N}\|y^{\mathrm{a}*}_{t}-r_{t}\|_{2}\] \[=\left(\sum_{i=0}^{N-1}\|CA^{\mathrm{a}*}C^{\dagger}\|_{2}^{i} \right)v^{*}_{\textsc{A1}}.\]
Since any optimal solution of (AMOPUL1) is feasible to (MOPUL1), we know
\[v^{*}_{\textsc{M1}}\leq\left(\sum_{i=0}^{N-1}\|CA^{\mathrm{a}*} C^{\dagger}\|_{2}^{i}\right)v^{*}_{\textsc{A1}}\]
holds for each \((A^{\mathrm{a}*},U^{\mathrm{a}*})\in\mathcal{F}^{*}_{\textsc{A1}}\). Hence
\[v^{*}_{\textsc{M1}} \leq\inf_{(A^{\mathrm{a}*},U^{\mathrm{a}*})\in\mathcal{F}^{*}_{ \textsc{A1}}}\left(\sum_{i=0}^{N-1}\|CA^{\mathrm{a}*}C^{\dagger}\|_{2}^{i} \right)v^{*}_{\textsc{A1}}\] \[=\left(\sum_{i=0}^{N-1}(\zeta^{*})^{i}\right)v^{*}_{\textsc{A1}}.\]
**Remark 3.5**.: _When (MOPUL1) and (AMOPUL1) are both attainable with \(v^{*}_{\textsc{M1}}>0\), Theorems 3.5 and 3.6 imply that_
\[\frac{1}{\sum_{i=0}^{N-1}(\zeta^{*})^{i}}\leq\frac{v^{*}_{\textsc{A1}}}{v^{*} _{\textsc{M1}}}\leq 1+\gamma^{*}.\]
_Furthermore, if \(\zeta^{*}=\gamma^{*}=0\), then \(v^{*}_{\textsc{M1}}=v^{*}_{\textsc{A1}}\)._
#### 3.3.2 MOPUL2
In some scenarios such as (O-IN/OUTPUT2) in Subsection 2.4 and a COVID-19 pandemic optimal control model MCOM in [38], a small change of the transition matrix \(A\) is preferred within a guaranteed level of cumulative error. We may consider the following problem:
\[\min_{A,U} \left\|A-A^{\mathrm{r}}\right\|_{F}\] (MOPUL2) \[\mathrm{s.t.} x_{t}=Ax_{t-1}+Bu_{t-1},\ \ t=1,2,\ldots,N,\] \[y_{t}=Cx_{t},\ \ t=0,1,\ldots,N,\] \[\sum_{t=1}^{N}\left\|y_{t}-r_{t}\right\|_{2}\leq\omega,\] \[\left\|u_{t}-u_{t}^{\mathrm{r}}\right\|_{2}\leq\omega_{t},\ \ t=0,1, \ldots,N-1,\] \[(A,U)\in\mathcal{S}_{\mathrm{M2}},\]
where the transition matrix \(A\in\mathbb{R}^{n\times n}\) and control \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\in\mathbb{R}^{m\times N}\) are decision variables, \(\{r_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^{p}\), \(A^{\mathrm{r}}\in\mathbb{R}^{n\times n}\), and \(\{u_{t}^{\mathrm{r}}\}_{t=0}^{N-1}\subseteq\mathbb{R}^{m}\) are given references of \(\{y_{t}\}_{t=1}^{N}\), \(A\), and \(\{u_{t}\}_{t=0}^{N-1}\), respectively, \(\omega,\{\omega_{t}\}_{t=0}^{N-1}\subseteq\mathbb{R}_{+}\) are control levels, and the constraint set \(\mathcal{S}_{\mathrm{M2}}\subseteq\mathbb{R}^{n\times n}\times\mathbb{R}^{m \times N}\) is SD representable. Notice that this is a special case of (MOPUL) by setting \(\lambda_{1}=1\), \(\lambda_{2}=\lambda_{3}=0\), \(f_{1}(A)=\|A-A^{\mathrm{r}}\|_{F}\), and \(\mathcal{S}=(\mathbb{R}^{n\times n}\times\{U|\|u_{t}-u_{t}^{\mathrm{r}}\|_{2} \leq\omega_{t},t=0,1,\ldots,N-1\}\cap\mathcal{S}_{\mathrm{M2}})\times\{\mathrm{ constant}\,\omega\}\). The objective function enforces a steady and controllable change of the transition matrix \(A\) with respect to the reference value. The third constraint guarantees a cumulative precision of the iteration within the control level \(\omega\), and the fourth constraint guarantees a controllable change of the system inputs. Correspondingly, we can derive the following SDP approximation model:
\[\min_{A,U} \left\|A-A^{\mathrm{r}}\right\|_{F}\] (AMOPUL2) \[\mathrm{s.t.} r_{0}=Cx_{0},\ \ x_{0}^{\mathrm{a}}=x_{0},\] \[x_{t}^{\mathrm{a}}=AC^{\dagger}r_{t-1}+Bu_{t-1},\ \ t=1,2, \ldots,N,\] \[y_{t}^{\mathrm{a}}=Cx_{t}^{\mathrm{a}},\ \ t=0,1,\ldots,N,\] \[\sum_{t=1}^{N}\left\|y_{t}^{\mathrm{a}}-r_{t}\right\|_{2}\leq \tilde{\omega},\] \[\left\|u_{t}-u_{t}^{\mathrm{r}}\right\|_{2}\leq\omega_{t},\ \ t=0,1, \ldots,N-1,\] \[(A,U)\in\mathcal{S}_{\mathrm{M2}},\]
where \(A\) and \(U\coloneqq(u_{0},u_{1},\ldots,u_{N-1})\) are decision variables and \(\tilde{\omega}\in\mathbb{R}_{+}\) is the control level. Following Theorem 3.3, we have the relationship between (MOPUL2) and (AMOPUL2) in the next result.
**Theorem 3.7**.: _Take \(\tilde{\omega}=\omega/(\sum_{i=0}^{N-1}\beta^{i})\) in (AMOPUL2) with respect to \(\omega\) in (MOPUL2) and \(\beta>0\). If \(\|CA^{\mathrm{a}}C^{\dagger}\|_{2}\leq\beta\) for any feasible \(A^{\mathrm{a}}\) of (AMOPUL2), then any feasible solution of (AMOPUL2) is feasible to (MOPUL2), and the optimal objective value of (AMOPUL2) becomes an upper bound for that of (MOPUL2)._
**Remark 3.6**.: _The existence of an upper bound \(\beta\) of \(\|CA^{\mathrm{a}}C^{\dagger}\|_{2}\) in Theorem 3.7 is guaranteed as mentioned in Remark 3.1._
## 4 Numerical experiments
In this section, we numerically study the influences of perturbed noises at the reference outputs \(\{r_{t}\}_{t=1}^{N}\) and of the control level \(\omega\) on the performance of the proposed approximation model (AMOPUL). Theorem 3.4 shows that the noise levels of the given reference outputs \(\{r_{t}\}_{t=1}^{N}\) are key to the optimal objective value. We study the numerical performance of (AMOPUL1) and (AMOPUL2) with different noise levels of \(\{r_{t}\}_{t=1}^{N}\) in Subsection 4.1 and Subsubsection 4.2.1, respectively. In addition, Theorems 3.3 and 3.7 indicate that the size of the feasible set of (AMOPUL) is mainly determined by the control level \(\tilde{\omega}\). Related numerical results on the performance of (AMOPUL2) in terms of \(\tilde{\omega}\) are reported in Subsubsection 4.2.2.
All data are randomly generated in our experiments as follows:
* **An ideal instance.** Take \(n=m=p=100\), \(N=30\), and \(B=C=I\) in (MOPUL). Initial \(r_{0}\) is uniformly generated in \((-0.5,0.5)^{p}\). \(\hat{A}\) is an ideal value of \(A\) with each component being generated from the normal distribution \(\mathcal{N}(0,0.1^{2})\). \(\hat{u}_{t}\coloneqq\mathds{1}\hat{u}_{t}^{\text{s}}\) is an ideal value of \(u_{t}\) with \(\hat{u}_{t}^{\text{s}}\) being uniformly generated in \((-0.5,0.5)\), \(t=0,1,\ldots,N-1\). Define \(\hat{x}_{t}=\hat{A}\hat{x}_{t-1}+\hat{u}_{t-1},t=1,2,\ldots,N\) and \(\hat{x}_{0}=r_{0}\). Then \((\hat{A},\{\hat{u}_{t}\}_{t=0}^{N-1},\{\hat{x}_{t}\}_{t=0}^{N})\) forms an ideal instance.
* **Reference outputs.** Reference output \(r_{t}\coloneqq\hat{x}_{t}+e_{t}\) with the ideal value \(\hat{x}_{t}\) and a perturbed noise \(e_{t}\) with each component being generated from \(\mathcal{N}(\mu,\sigma^{2})\), \(t=1,2,\ldots,N\). A total of 20 random instances are generated for each \((\mu,\sigma)\).
The following 4 evaluation criteria are used to measure the performance of (AMOPUL1) and (AMOPUL2):
1. **Cumulative error (CE)**: \(\sum_{t=1}^{N}\|x_{t}^{*}-r_{t}\|_{2}\) measures the cumulative precision of (AMOPUL1), where \(\{r_{t}\}_{t=0}^{N}\) are the generated reference outputs, \(x_{t}^{*}=A^{\text{a*}}x_{t-1}^{*}+u_{t-1}^{\text{a*}},t=1,2,\ldots,N\), and \(x_{0}^{*}=r_{0}\) for an optimal solution \((A^{\text{a*}},(u_{0}^{\text{a*}},u_{1}^{\text{a*}},\ldots,u_{N-1}^{\text{a*}}))\) of (AMOPUL1).
2. **Approximate cumulative error (ACE)**: \(\sum_{t=1}^{N}\|x_{t}^{\text{a*}}-r_{t}\|_{2}\) measures the approximate cumulative precision of (AMOPUL2), where \(\{r_{t}\}_{t=0}^{N}\) are the generated reference outputs, \(x_{t}^{\text{a*}}=A^{\text{a*}}r_{t-1}+u_{t-1}^{\text{a*}}\), \(t=1,2,\ldots,N\) for an optimal solution \((A^{\text{a*}},(u_{0}^{\text{a*}},u_{1}^{\text{a*}},\ldots,u_{N-1}^{\text{a*}}))\) of (AMOPUL2).
3. **Relative error of \(A^{\text{a*}}\) (RE\(A^{\text{a*}}\))**: \(\frac{\|A^{\text{a*}}-\hat{A}\|_{F}}{\|\hat{A}\|_{F}}\) measures the difference between the true \(\hat{A}\) and an approximate solution \(A^{\text{a*}}\).
4. **Relative error of \(U^{\text{a*}}\) (RE\(U^{\text{a*}}\))**: \(\frac{\|U^{\text{a*}}-\hat{U}\|_{F}}{\|\hat{U}\|_{F}}\) measures the difference between the true \(\hat{U}\) and an approximate solution \(U^{\text{a*}}\), where \(\hat{U}\coloneqq(\hat{u}_{0},\hat{u}_{1},\ldots,\hat{u}_{N-1})\) and \(U^{\text{a*}}\coloneqq(u_{0}^{\text{a*}},u_{1}^{\text{a*}},\ldots,u_{N-1}^{ \text{a*}})\).

Numerical results using real data for the COVID-19 pandemic optimal control can be found in [38]. All experiments are implemented using MATLAB R2019b on a laptop equipped with 4.00 GB memory and an AMD Ryzen 3 2200U with Radeon Vega Mobile Gfx (2.50 GHz). We use MOSEK (version 9.1.9) ([https://www.mosek.com](https://www.mosek.com)) in CVX-w64 (version 2.2) ([http://cvxr.com/cvx/](http://cvxr.com/cvx/)) to solve all involved optimization problems. Five significant digits are taken for the numerical results shown in every table.
### Performance of AMOPUL1
We study the influences of noise levels at the reference outputs on the performance of (AMOPUL1). For simplicity, the constraint set \(\mathcal{S}_{\text{M1}}\) is set to be box constrained:
\[\{A|-0.4\leq a_{ij}\leq 0.4,i,j=1,2,\ldots,n\}\times\{U|-0.5\leq u_{t}^{i} \leq 0.5,i=1,2,\ldots,n,t=0,1,\ldots,N-1\},\]
which is bounded and hence (AMOPUL1) is attainable. Then (AMOPUL1) can be reformulated as
\[\min_{A,U} \sum_{t=1}^{N}\left\|Ar_{t-1}+u_{t-1}-r_{t}\right\|_{2}\] \[\mathrm{s.t.} r_{0}=x_{0},\] \[-0.4\leq a_{ij}\leq 0.4,\;\;i,j=1,2,\ldots,n,\] \[-0.5\leq u_{t}^{i}\leq 0.5,\;\;i=1,2,\ldots,n,t=0,1,\ldots,N-1.\]
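For reference, the following Python/cvxpy sketch reproduces this experiment at a reduced size (the paper uses MATLAB R2019b with CVX and MOSEK and \(n=m=p=100\), \(N=30\)); the data generation follows the recipe above, and the printed quantities are CE, RE\(A^{\text{a*}}\) and RE\(U^{\text{a*}}\).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n, N, mu, sigma = 20, 10, 0.0, 0.1

A_hat = rng.normal(0, 0.1, (n, n))                  # ideal A
U_hat = np.tile(rng.uniform(-0.5, 0.5, N), (n, 1))  # ideal u_t = 1 * u_t^s
x = [rng.uniform(-0.5, 0.5, n)]                     # x_0 = r_0
for t in range(N):
    x.append(A_hat @ x[-1] + U_hat[:, t])
r = [x[0]] + [x[t] + rng.normal(mu, sigma, n) for t in range(1, N + 1)]

A = cp.Variable((n, n))
U = cp.Variable((n, N))
obj = sum(cp.norm(A @ r[t - 1] + U[:, t - 1] - r[t], 2) for t in range(1, N + 1))
prob = cp.Problem(cp.Minimize(obj), [cp.abs(A) <= 0.4, cp.abs(U) <= 0.5])
prob.solve()

xs, CE = r[0], 0.0                                  # cumulative error (CE)
for t in range(1, N + 1):
    xs = A.value @ xs + U.value[:, t - 1]
    CE += np.linalg.norm(xs - r[t])
print(CE,
      np.linalg.norm(A.value - A_hat, 'fro') / np.linalg.norm(A_hat, 'fro'),
      np.linalg.norm(U.value - U_hat, 'fro') / np.linalg.norm(U_hat, 'fro'))
```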
A total of 20 instances of \(\{r_{t}\}_{t=1}^{N}\) with perturbed noises for each \((\mu,\sigma)\) are generated with respect to \(\mu=0,\sigma=\) 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8; and \(\mu=1,\sigma=\) 2.5, 3, respectively. The means and standard deviations of the cumulative errors (CE) and relative errors of \(A^{\mathrm{a*}}\) and \(U^{\mathrm{a*}}\) (RE\(A^{\mathrm{a*}}\) and RE\(U^{\mathrm{a*}}\)) are shown in Table 1, where \((A^{\mathrm{a*}},U^{\mathrm{a*}})\) is the optimal solution of (AMOPUL1).
Figures 1(a) and 1(b) plot the trends of CE, RE\(A^{\mathrm{a*}}\) and RE\(U^{\mathrm{a*}}\) shown in Table 1 with respect to the perturbed \((\mu,\sigma)\) pairs, respectively, in which the length of each error bar above and below the mean value reflects the corresponding standard deviation.
\begin{table}
\begin{tabular}{c c c c c c c} \hline & \multicolumn{2}{c}{CE} & \multicolumn{2}{c}{RE\(A^{\mathrm{a*}}\)} & \multicolumn{2}{c}{RE\(U^{\mathrm{a*}}\)} \\ \cline{2-7} (\(\mu\), \(\sigma\)) & mean & std & mean & std & mean & std \\ \hline (0, 0.05) & 1.9885e-08 & 4.9712e-08 & 1.0070e+00 & 4.8692e-03 & 7.7927e-01 & 9.3722e-03 \\ (0, 0.1) & 1.3192e-08 & 2.7314e-08 & 1.0190e+00 & 7.4878e-03 & 8.1443e-01 & 1.5714e-02 \\ (0, 0.2) & 1.8346e-08 & 2.5523e-08 & 1.0324e+00 & 8.0172e-03 & 8.8606e-01 & 1.2572e-02 \\ (0, 0.3) & 4.1105e-08 & 1.2360e-07 & 1.0485e+00 & 9.6251e-03 & 9.2916e-01 & 1.0614e-02 \\ (0, 0.4) & 7.6893e-09 & 2.0765e-08 & 1.0725e+00 & 1.1846e-02 & 9.5445e-01 & 1.1122e-02 \\ (0, 0.5) & 1.2974e-08 & 3.5385e-08 & 1.0949e+00 & 9.8607e-03 & 9.6895e-01 & 7.6003e-03 \\ (0, 0.6) & 3.6863e-08 & 5.0465e-08 & 1.1194e+00 & 1.7583e-02 & 9.7755e-01 & 7.0614e-03 \\ (0, 0.7) & 7.2976e-09 & 1.7096e-08 & 1.1439e+00 & 1.0911e-02 & 9.8106e-01 & 3.4137e-03 \\ (0, 0.8) & 1.5437e-08 & 2.9897e-08 & 1.1655e+00 & 2.1094e-02 & 9.8639e-01 & 3.6822e-03 \\ **(1, 2.5)** & **1.6904e+01** & **4.3973e+01** & **1.6462e+00** & **6.3211e-02** & **1.0176e+00** & **1.2296e-02** \\ **(1, 3.0)** & **8.6961e+01** & **9.3299e+01** & **1.8027e+00** & **7.9844e-02** & **1.0479e+00** & **2.4774e-02** \\ \hline \end{tabular}
\end{table}
Table 1: Means and standard deviations of CE, RE\(A^{\mathrm{a*}}\) and RE\(U^{\mathrm{a*}}\) for (AMOPUL1).
Observe that when the perturbed noise of \(\{r_{t}\}_{t=1}^{N}\) is small (say, \(\mu=0,\sigma=0.05\) - \(0.8\)), the means and standard deviations of CE are all less than \(10^{-6}\) as shown in Table 1 and Figure 1(a). This shows that (AMOPUL1) achieves accurate and robust solutions. When the perturbed noise becomes large (say, \(\mu=1,\sigma=2.5\), \(3\)), the reference outputs \(\{r_{t}\}_{t=1}^{N}\) become chaotic. In this case, the approximate output values fail to fit the given reference outputs, and CE becomes large and oscillating.
On the other hand, RE\(A^{\rm a*}\) and RE\(U^{\rm a*}\) are more sensitive to the perturbed noises. The means of RE\(A^{\rm a*}\) and RE\(U^{\rm a*}\) in Figure 1(b) increase as the perturbed noise becomes larger. In particular, when the noise is large enough (say, \(\mu=1,\sigma=2.5\), \(3\)), there is a significant increase in the mean and standard deviation of RE\(A^{\rm a*}\).
In summary, (AMOPUL1) handles the system outputs quite well, fitting the given reference outputs accurately when the perturbed noise is small.
### Performance of AMOPUL2
We now study the performance of (AMOPUL2) with perturbed noises at reference outputs \(\{r_{t}\}_{t=1}^{N}\) and different control levels \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\). The constraint set \(\mathcal{S}_{\rm M2}\) is set to be the trivial constraint \(\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times N}\) for simplicity. Then (AMOPUL2) can be reformulated as
\[\begin{split}\min_{A,U}&\left\|A-\hat{A}\right\|_{F} \\ {\rm s.t.}& r_{0}=x_{0},\\ &\sum_{t=1}^{N}\left\|Ar_{t-1}+u_{t-1}-r_{t}\right\|_{2}\leq \tilde{\omega},\\ &\left\|u_{t}-\hat{u}_{t}\right\|_{2}\leq\omega_{t},\ \ t=0,1,\ldots,N-1,\end{split} \tag{4}\]
where \(\hat{A}\) and \(\hat{U}=(\hat{u}_{0},\hat{u}_{1},\ldots,\hat{u}_{N-1})\) are taken to be the ideal data, and \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\) are the control levels.
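Formulation (4) is again a second-order cone program. A minimal cvxpy sketch (ours, illustrative only), assuming the ideal data `A_hat`, `U_hat` and the noisy references `r` are given as NumPy arrays:

```python
import cvxpy as cp

def solve_amopul2(r, A_hat, U_hat, omega_bar, omega):
    """Solve (4): find A closest to A_hat in Frobenius norm, subject to the
    cumulative-precision budget omega_bar and the control levels omega[t]."""
    N, n = r.shape[0] - 1, r.shape[1]
    A = cp.Variable((n, n))
    U = cp.Variable((n, N))
    ace = sum(cp.norm(A @ r[t - 1] + U[:, t - 1] - r[t], 2)
              for t in range(1, N + 1))
    constraints = [ace <= omega_bar]
    constraints += [cp.norm(U[:, t] - U_hat[:, t], 2) <= omega[t]
                    for t in range(N)]
    cp.Problem(cp.Minimize(cp.norm(A - A_hat, 'fro')), constraints).solve()
    return A.value, U.value
```

Whether the budget constraint is active at the optimum (ACE \(=\tilde{\omega}\)) or inactive (ACE \(<\tilde{\omega}\)) can be read off directly from the returned solution, which is exactly the behaviour analysed below.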
#### 4.2.1 Influence of noises
Fixing \(\tilde{\omega}=10\) and \(\omega_{t}=3\), \(t=0,1,\ldots,N-1\), we study the influence of the perturbed noise at the reference outputs on the performance of (AMOPUL2). For each \((\mu,\sigma)\) with \(\mu=0\) and \(\sigma=\) 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, a total of 20 noisy instances of \(\{r_{t}\}_{t=1}^{N}\) are employed. The means and standard deviations of the relative error of \(A^{\rm a*}\) (RE\(A^{\rm a*}\)) and the approximate cumulative error (ACE) are shown in Table 2, where \((A^{\rm a*},(u_{0}^{\rm a*},u_{1}^{\rm a*},\ldots,u_{N-1}^{\rm a*}))\) is an optimal solution of (AMOPUL2).
Table 2 shows that for the reference outputs with small noises (say, \(\mu=0,\sigma=0.05\) - \(0.2\)), the means and standard deviations of RE\(A^{\mathrm{a}*}\) are all less than \(10^{-10}\), while the means of ACE are less than 10 (\(=\tilde{\omega}\)) with small standard deviations, i.e., the strict inequality ACE \(<\tilde{\omega}\) is satisfied in the second constraint of (4), which means that this constraint is inactive. When the noise becomes larger (say, \(\mu=0\), \(\sigma=0.3\)-\(0.8\)), the mean of RE\(A^{\mathrm{a}*}\) increases drastically, while the means of ACE become 10 (\(=\tilde{\omega}\)) with standard deviations less than \(10^{-6}\), i.e., the equality ACE \(=\tilde{\omega}\) is almost binding in the second constraint of (4), which means that this constraint becomes active. Therefore, in (AMOPUL2) with fixed control levels \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\), the error in recovering the true \(\hat{A}\) mainly comes from the perturbed noise at the reference outputs. The smaller the perturbed noise, the higher the accuracy in recovering the true \(\hat{A}\).
#### 4.2.2 Influence of control levels
Fixing the noise level of \(\{r_{t}\}_{t=1}^{N}\) at \(\mu=0\) and \(\sigma=0.5\), we study the influence of the control levels \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\) on the performance of (AMOPUL2). Set \(\tilde{\omega}=\) 2, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, and \(\omega_{t}=\) 3, 4.5, 6, 8, \(t=0,1,\ldots,N-1\), respectively. The means and standard deviations of RE\(A^{\mathrm{a}*}\) and ACE are shown in Table 3, where \((A^{\mathrm{a}*},(u_{0}^{\mathrm{a}*},u_{1}^{\mathrm{a}*},\ldots,u_{N-1}^{ \mathrm{a}*}))\) is an optimal solution of (AMOPUL2).
Figures 2(a) and 2(b) plot the trends of RE\(A^{\text{a*}}\) and ACE shown in Table 3 with respect to different values of \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\), respectively, in which the length of each error bar above and below the mean value reflects the corresponding standard deviation.
Two major observations can be made as follows.
**A. Influence of \(\tilde{\mathbf{\omega}}\)**
For each fixed \(\omega_{t}=3\), 4.5, 6, \(t=0,1,\ldots,N-1\), the mean of RE\(A^{\rm a*}\) decreases to almost zero as \(\tilde{\omega}\) increases, as shown in Figure 2(a). This implies that we can obtain higher accuracy in recovering the transition matrix through a more relaxed constraint on the cumulative precision in the second constraint of (4). However, the means of ACE have an increasing trend with respect to \(\tilde{\omega}\), as shown in Figure 2(b) for each fixed \(\omega_{t}\). Observe that when \(\tilde{\omega}\) is small, the ACE curves almost coincide with the line \(y=x\), i.e., the equality \(\text{ACE}=\tilde{\omega}\) is almost binding in the second constraint of (4), which means that this constraint becomes active. When \(\tilde{\omega}\) becomes larger, the ACE curves lie below the line \(y=x\), i.e., the strict inequality \(\text{ACE}<\tilde{\omega}\) is satisfied in the second constraint of (4), which means that it becomes inactive. The trade-off between the accuracy of transition matrix recovery and the approximate cumulative error shows that a proper value of \(\tilde{\omega}\) plays an important role in the performance of (AMOPUL2). In an extreme case with sufficiently large \(\omega_{t}\) (say, \(\omega_{t}=8\), \(t=0,1,\ldots,N-1\)), since the third constraint of (4) is sufficiently relaxed, RE\(A^{\rm a*}\) is almost equal to zero for each \(\tilde{\omega}\) as shown in Figure 2(a), while its ACE curve lies below the line \(y=x\) as shown in Figure 2(b), i.e., the strict inequality \(\text{ACE}<\tilde{\omega}\) is satisfied in the second constraint of (4), which means the constraint becomes inactive.
**B. Influence of \(\{\omega_{t}\}_{t=0}^{N-1}\)**
For each fixed value of \(\tilde{\omega}\), the means of RE\(A^{\rm a*}\) and ACE both decrease as \(\omega_{t}\) increases, as shown in Figures 2(a) and 2(b), respectively, while the accuracy in recovering the true \(\hat{U}\) gradually deteriorates. In the extreme cases with sufficiently large \(\tilde{\omega}=\) 130, 140, 150, 160, i.e., when the second constraint of (4) is greatly relaxed, RE\(A^{\rm a*}\) is almost equal to zero as shown in Figure 2(a). In addition, the corresponding curves in Figure 2(b) lie below the line \(y=x\), i.e., the strict inequality \(\text{ACE}<\tilde{\omega}\) is satisfied in the second constraint of (4), which means this constraint becomes inactive.
Consequently, the values of the control levels \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\) play key roles in the performance of (AMOPUL2). On one hand, when the values of \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\) are small, since the second constraint of (4) becomes tight, ACE becomes small, i.e., the cumulative precision in the second constraint of (4) becomes high. However, the value of RE\(A^{\rm a*}\) becomes large, i.e., the recovery precision of the true \(\hat{A}\) becomes low. In the extreme cases with sufficiently small \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\), i.e., extremely high requirements of the cumulative precision in the second constraint of (4) and the recovery precision of the true \((\hat{u}_{0},\hat{u}_{1},\ldots,\hat{u}_{N-1})\) in the third constraint of (4), (AMOPUL2) may be infeasible, for example, setting \(\tilde{\omega}=\omega_{t}=0,t=0,1,\ldots,N-1\). On the other hand, when the values of \(\tilde{\omega}\) and \(\{\omega_{t}\}_{t=0}^{N-1}\) become sufficiently large, the second and third constraints of (4) may become inactive, while RE\(A^{\rm a*}\) becomes small, i.e., the recovery precision of the true \(\hat{A}\) becomes high.
Based on the results for (AMOPUL1) and (AMOPUL2) in Subsection 4.1 and Subsubsection 4.2.1, we can see that the accuracy of the given reference outputs \(\{r_{t}\}_{t=1}^{N}\) is the key to the performance of (AMOPUL). More accurate \(\{r_{t}\}_{t=1}^{N}\) leads to better performance of (AMOPUL).
In addition, the results of (AMOPUL2) in Subsubsection 4.2.2 indicate that the control level of approximate cumulative error plays an important role in the performance of (AMOPUL). With a proper value of the control level, the proposed SDP approximation model may perform very well.
## 5 Concluding remarks
This paper studies a matrix optimization problem over an uncertain linear system on a finite horizon, in which the uncertain transition matrix is regarded as a decision variable. To decouple the entanglement of decision variables caused by the corresponding multivariate polynomial constraints, and for computational efficiency, we construct a polynomial-time solvable SDP approximation model by taking the given reference values as system outputs at each stage. Theoretical and numerical results show that the accuracy of the given reference values is the key to the performance of the proposed approximation model.
2309.12522 | K-stability of Casagrande-Druel varieties | We introduce a new subclass of Fano varieties (Casagrande-Druel varieties),
that are $n$-dimensional varieties constructed from Fano double covers of
dimension $n-1$. We conjecture that a Casagrande-Druel variety is K-polystable
if the double cover and its base space are K-polystable. We prove this for
smoothable Casagrande-Druel threefolds, and for Casagrande-Druel varieties
constructed from double covers of $\mathbb{P}^{n-1}$ ramified over smooth
hypersurfaces of degree $2d$ with $n>d>\frac{n}{2}>1$. As an application, we
describe the connected components of the K-moduli space parametrizing
smoothable K-polystable Fano threefolds in the families 3.9 and 4.2 in the
Mori-Mukai classification. | Ivan Cheltsov, Tiago Duarte Guerreiro, Kento Fujita, Igor Krylov, Jesus Martinez-Garcia | 2023-09-21T22:49:49 | http://arxiv.org/abs/2309.12522v1 | # K-stability of Casagrande-Druel varieties
###### Abstract.
We introduce a new subclass of Fano varieties (Casagrande-Druel varieties), that are \(n\)-dimensional varieties constructed from Fano double covers of dimension \(n-1\). We conjecture that a Casagrande-Druel variety is K-polystable if the double cover and its base space are K-polystable. We prove this for smoothable Casagrande-Druel threefolds, and for Casagrande-Druel varieties constructed from double covers of \(\mathbb{P}^{n-1}\) ramified over smooth hypersurfaces of degree \(2d\) with \(n>d>\frac{n}{2}>1\). As an application, we describe the connected components of the K-moduli space parametrizing smoothable K-polystable Fano threefolds in the families \(\mathbb{N}3.9\) and \(\mathbb{N}4.2\) in the Mori-Mukai classification.
## 1. Introduction
Let \(V\) be a Fano variety with Kawamata log terminal singularities, and let \(L\) be a line bundle on \(V\) such that the divisor \(-(K_{V}+L)\) is ample, and \(|2L|\) contains a non-zero effective divisor. Let \(R\) be a divisor in \(|2L|\), and let \(\eta\colon B\to V\) be the double cover ramified over \(R\). Then \(B\) can be explicitly constructed as follows. Let
\[Y=\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(L)\big{)},\]
let \(\pi\colon Y\to V\) be the natural projection, and let \(\xi\) be the tautological line bundle on \(Y\). Set \(H=\pi^{*}(L)\). Then we have isomorphisms:
\[H^{0}\big{(}Y,\mathcal{O}_{Y}(\xi)\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(L)\big{)},\] \[H^{0}\big{(}Y,\mathcal{O}_{Y}(\xi-H)\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(-L)\big{)}.\]
Using these isomorphisms, fix sections \(u^{+}\in H^{0}(Y,\mathcal{O}_{Y}(\xi))\) and \(u^{-}\in H^{0}(Y,\mathcal{O}_{Y}(\xi-H))\) that correspond to \(1\in H^{0}(V,\mathcal{O}_{V})\) under the isomorphisms above. Set \(S^{\pm}=\{u^{\pm}=0\}\). Then we have \(S^{-}\cap S^{+}=\varnothing\) and \(S^{+}\sim S^{-}+H\). Take \(f\in H^{0}(V,\mathcal{O}_{V}(2L))\) that defines \(R\). Then we can identify \(B\) with the divisor
\[\big{\{}\pi^{*}(f)(u^{-})^{2}=(u^{+})^{2}\big{\}}\in|2S^{+}|,\]
where the double cover \(\eta\) is induced by \(\pi\).
_Remark 1.1_.: We allow \(R\) to be singular, so \(B\) can be very singular (and even reducible). However, if the log pair \((V,\frac{1}{2}R)\) has Kawamata log terminal singularities, then the double cover \(B\) is a Fano variety with Kawamata log terminal singularities [22]. So, for simplicity, we will always say that \(B\) is a Fano double cover (even if \(B\) is non-normal or reducible).
Let \(F=\pi^{*}(R)\), and let \(\phi\colon X\to Y\) be the blow up of the intersection \(S^{+}\cap F\). Then
\[X\text{ is smooth }\iff Y\text{ and }B\text{ are smooth }\iff V\text{ and }R\text{ are smooth.}\]
Moreover, the variety \(X\) is also a Fano variety (see Section 2).
**Definition 1.2**.: If the Fano variety \(X\) has at most Kawamata log terminal singularities, then \(X\) is called _the Casagrande-Druel variety_ constructed from \(\eta\colon B\to V\) (or, from the ramification divisor \(R\subset V\)). Note that \(L\in\operatorname{Pic}V\) is uniquely determined by \(R\).
The group \(\operatorname{Aut}(Y)\) contains a subgroup \(\Gamma\cong\mathbb{G}_{m}\) that fixes both \(S^{-}\) and \(S^{+}\) pointwise, and the action of \(\Gamma\) lifts to \(\operatorname{Aut}(X)\), so we can identify \(\Gamma\) with a subgroup in \(\operatorname{Aut}(X)\). In Section 2, we will show that \(\operatorname{Aut}(X)\) also contains an involution \(\iota\) such that
\[\langle\Gamma,\iota\rangle\cong\mathbb{G}_{m}\rtimes\boldsymbol{\mu}_{2},\]
and \(\iota\) swaps the proper transforms of the sections \(S^{-}\) and \(S^{+}\). Set \(G=\langle\Gamma,\iota\rangle\) and \(\theta=\pi\circ\phi\). Then we have commutative diagram:
(1.3)
and the composition \(\theta\) is a \(G\)-equivariant conic bundle such that \(G\) acts trivially on \(V\).
_Remark 1.4_.: Our construction of Casagrande-Druel varieties is inspired by the paper [8]. See [8, Lemma 3.1 (iii)]. But it goes back to the construction of de Jonquieres involutions using hyperelliptic curves instead of Fano double covers. See also [26, 7, 37, 15].
The del Pezzo surface of degree \(6\) (the blow up of \(\mathbb{P}^{2}\) at three general points) is the unique smooth Casagrande-Druel surface. Smooth Casagrande-Druel threefolds form \(3\) families. To present them, we use the labeling of smooth Fano threefolds from [6].
**Example 1.5**.: Let \(V=\mathbb{P}^{2}\), let \(L=\mathcal{O}_{\mathbb{P}^{2}}(1)\), let \(R\) be an arbitrary smooth conic in \(|2L|\). Then \(B\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), and \(X\) is the unique smooth Fano threefold in the family 3.19.
**Example 1.6**.: Let \(V=\mathbb{P}^{2}\), let \(L=\mathcal{O}_{\mathbb{P}^{2}}(2)\), let \(R\) be any smooth quartic curve in \(|2L|\). Then \(B\) is a del Pezzo surface of degree \(2\), and \(X\) is a Fano threefold in the family 3.9.
**Example 1.7**.: Let \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\), let \(L=\mathcal{O}_{V}(1,1)\), let \(R\) be any smooth curve in \(|2L|\). Then \(B\) is a del Pezzo surface of degree \(4\), and \(X\) is a Fano threefold in the family 4.2.
All smooth Casagrande-Druel threefolds are K-polystable, see [21, Theorem 6.1] and [6]. In fact, K-polystable Casagrande-Druel varieties exist in every dimension:
**Example 1.8** ([11, 12]).: Suppose that \(V=\mathbb{P}^{n-1}\), \(L=\mathcal{O}_{\mathbb{P}^{n-1}}(1)\), \(R\) is smooth, \(n\geqslant 2\). Then \(X\) can be obtained by blowing up the \(n\)-dimensional smooth quadric at two points. The variety \(X\) is spherical, and it is known that \(X\) is K-polystable [12, 4.4.2].
In this paper, we prove the following theorem:
**Theorem 1.9**.: _Suppose that \(V=\mathbb{P}^{n-1}\), \(L=\mathcal{O}_{\mathbb{P}^{n-1}}(r)\), \(R\) is smooth, and \(n>r>\frac{n}{2}>1\). Then \(X\) is K-polystable._
We obtain this result as an application of the following K-polystability criteria:
**Theorem 1.10**.: _Suppose that both \(V\) and \(R\) are smooth (or equivalently \(X\) is smooth), and \(-K_{V}\sim_{\mathbb{Q}}aL\), where \(a\in\mathbb{Q}_{>0}\) such that \(a>1\). Let \(\mu\) be the smallest rational number such that \(\mu L\) is very ample. Set \(n=\dim(X)\) (so \(\dim(V)=n-1\)), set \(d=L^{n-1}\), set_
\[k_{n}(a,d,\mu)=\frac{a^{n+1}-(a-1)^{n+1}}{(n+1)(a^{n}-(a-1)^{n})}d\mu^{n-2}+ \frac{a^{n+1}-(a+n)(a-1)^{n}}{2(n+1)(a^{n}-(a-1)^{n})}\]
_and set_
\[\gamma=\min\Biggl{\{}\frac{1}{k_{n}(a,d,\mu)},\frac{(n+1)(a^{n}-(a-1)^{n})}{( n+1-a)a^{n}+(a-1)^{n+1}},\frac{a\delta(V)(n+1)(a^{n}-(a-1)^{n})}{n(a^{n+1}-(a-1)^ {n+1})}\Biggr{\}},\]
_where \(\delta(V)\) is the \(\delta\)-invariant of the Fano variety \(V\). If \(n\geqslant 3\), \(d\mu^{n-2}\geqslant 2\) and \(\gamma>1\), then the Casagrande-Druel variety \(X\) is K-polystable._
_Remark 1.11_.: In the notations of Theorem 1.10, if \(n\geqslant 2\) and \(d\mu^{n-2}<2\), then \(d\mu^{n-2}=1\), which gives \(V=\mathbb{P}^{n-1}\) and \(L=\mathcal{O}_{\mathbb{P}^{n-1}}(1)\), so \(X\) is K-polystable, see Example 1.8.
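As a quick numerical illustration of how Theorem 1.10 yields Theorem 1.9, note that for \(V=\mathbb{P}^{n-1}\) and \(L=\mathcal{O}_{\mathbb{P}^{n-1}}(r)\) one has \(a=\frac{n}{r}\), \(d=r^{n-1}\), the smallest \(\mu\) with \(\mu L\) very ample is \(\frac{1}{r}\) (so \(d\mu^{n-2}=r\geqslant 2\)), and \(\delta(\mathbb{P}^{n-1})=1\). The following exact-arithmetic sketch (ours, not from the paper) evaluates \(\gamma\) for a few pairs \((n,r)\) with \(n>r>\frac{n}{2}\):

```python
from fractions import Fraction as F

def gamma(n, a, d, mu, delta_V):
    """Evaluate the three quantities in Theorem 1.10 and return their minimum."""
    p = a**n - (a - 1)**n
    q = a**(n + 1) - (a - 1)**(n + 1)
    k = q * d * mu**(n - 2) / ((n + 1) * p) \
        + (a**(n + 1) - (a + n) * (a - 1)**n) / (2 * (n + 1) * p)
    return min(1 / k,
               (n + 1) * p / ((n + 1 - a) * a**n + (a - 1)**(n + 1)),
               a * delta_V * (n + 1) * p / (n * q))

# Setting of Theorem 1.9: a = n/r, d = r^(n-1), mu = 1/r, delta(P^{n-1}) = 1.
for n, r in [(4, 3), (5, 3), (6, 4)]:
    g = gamma(n, F(n, r), F(r)**(n - 1), F(1, r), F(1))
    print((n, r), float(g), g > 1)   # gamma > 1 in each case
```

In each of these cases \(\gamma>1\), so Theorem 1.10 applies.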
In this paper, we also prove the following two theorems about K-polystability of several singular Casagrande-Druel \(3\)-folds:
**Theorem 1.12**.: _Suppose \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(L=\mathcal{O}_{V}(1,1)\), and \(R\) is one of the following curves:_
1. \(C_{1}+C_{2}\)_, where_ \(C_{1}\) _and_ \(C_{2}\) _are smooth curves in_ \(|L|\) _such that_ \(|C_{1}\cap C_{2}|=2\)_;_
2. \(\ell_{1}+\ell_{2}+\ell_{3}+\ell_{4}\)_, where_ \(\ell_{1}\) _and_ \(\ell_{2}\) _are two distinct smooth curves of degree_ \((1,0)\)_, and_ \(\ell_{3}\) _and_ \(\ell_{4}\) _are two distinct smooth curves of degree_ \((0,1)\)_;_
3. \(2C\)_, where_ \(C\) _is a smooth curve in_ \(|L|\)_._
_Then \(X\) is K-polystable._
**Theorem 1.13**.: _Suppose \(V=\mathbb{P}^{2}\), \(L=\mathcal{O}_{\mathbb{P}^{2}}(2)\), and \(R\) is one of the following curves:_
1. _a singular reduced curve in_ \(|2L|\) _with at most_ \(\mathbb{A}_{1}\) _or_ \(\mathbb{A}_{2}\) _singularities;_
2. \(C_{1}+C_{2}\)_, where_ \(C_{1}\) _and_ \(C_{2}\) _are smooth conics that are tangent at two points;_
3. \(C+\ell_{1}+\ell_{2}\)_, where_ \(C\) _is a smooth conic,_ \(\ell_{1}\) _and_ \(\ell_{2}\) _are distinct lines tangent to_ \(C\)_;_
4. \(2C\)_, where_ \(C\) _is a smooth conic._
_Then \(X\) is K-polystable._
To present their applications, let \(\mathcal{M}_{n,v}^{\rm Kss}\) be the K-moduli functor of Fano varieties that have dimension \(n\) and anticanonical volume \(v\in\mathbb{Q}_{>0}\) in the sense of [39, Theorem 2.17]. Then \(\mathcal{M}_{n,v}^{\rm Kss}\) is an Artin stack of finite type. Moreover, as in [23, Theorem 1.3], it admits a good moduli space \(\mathcal{M}_{n,v}^{\rm Kss}\longrightarrow M_{n,v}^{\rm Kps}\) in the sense of [4], where \(M_{n,v}^{\rm Kps}\) is a projective scheme whose points parametrize K-polystable Fano varieties of dimension \(n\) and anticanonical volume \(v\). Let \(M_{(3.9)}^{\rm Kps}\) and \(M_{(4.2)}^{\rm Kps}\) be the closed subvarieties of \(M_{3,26}^{\rm Kps}\) and \(M_{3,28}^{\rm Kps}\) whose general points parametrize smooth Fano threefolds in the families \(\mathcal{N}\)3.9 and \(\mathcal{N}\)4.2, respectively. Then Theorems 1.12 and 1.13 imply the following two results (see Section 6 and cf. [19]).
**Corollary 1.14**.: _Let \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\), let \(L=\mathcal{O}_{V}(1,1)\), let \(\Gamma=\left(\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{SL}_{2}(\mathbb{C}) \right)\rtimes\boldsymbol{\mu}_{2}\), let \(T=\mathbb{P}\left(H^{0}\left(V,\mathcal{O}_{V}(2,2)\right)^{\vee}\right)\), let \(T^{\rm ss}\subset T\) be the GIT semistable open subset with respect to the natural \(\Gamma\)-action, and let \(M\) be the GIT quotient \(T^{\rm ss}\,/\!\!/\,\Gamma\). Then there is a morphism_
\[\begin{array}{ccc}\Phi\colon M&\to&M_{3,28}^{\rm Kps}\\ \cup&&\cup\\ [f]&\mapsto&[X_{f}],\end{array}\]
_where \(X_{f}\) is the Casagrande-Druel threefold that is constructed from \(R=\{f=0\}\in|2L|\). Furthermore, the morphism \(\Phi\) is an isomorphism onto \(M_{(4.2)}^{\operatorname{Kps}}\), and \(M_{(4.2)}^{\operatorname{Kps}}\) is a connected component of the scheme \(M_{3,28}^{\operatorname{Kps}}\)._
**Corollary 1.15**.: _Let \(V=\mathbb{P}^{2}\), \(L=\mathcal{O}_{\mathbb{P}^{2}}(2)\), let \(\Gamma=\operatorname{SL}_{3}(\mathbb{C})\), let \(T=\mathbb{P}\left(H^{0}\left(\mathbb{P}^{2},\mathcal{O}_{\mathbb{P}^{2}}(4) \right)^{\vee}\right)\), let \(T^{\operatorname{ss}}\subset T\) be the GIT semistable open subset with respect to the natural \(\Gamma\)-action, and let \(M\) be the GIT quotient \(T^{\operatorname{ss}}\mathbin{/\!\!/}\Gamma\). Then there exists a morphism_
\[\begin{array}{ccc}\Phi\colon M&\to&M_{3,26}^{\operatorname{Kps}}\\ \cup&&\cup\\ [f]&\mapsto&[X_{f}],\end{array}\]
_where \(X_{f}\) is the Casagrande-Druel threefold that is constructed from \(R=\{f=0\}\in|2L|\). Furthermore, the morphism \(\Phi\) is an isomorphism onto \(M_{(3.9)}^{\operatorname{Kps}}\), and \(M_{(3.9)}^{\operatorname{Kps}}\) is a connected component of the scheme \(M_{3,26}^{\operatorname{Kps}}\)._
If \(B\) is the smooth del Pezzo surface from Examples 1.5, 1.6, 1.7, then \(B\) is K-polystable. If \(B\) is the Fano manifold from Theorem 1.9, then \(B\) is K-polystable [14, Theorem 1.1]. If \(B\) is the singular del Pezzo surface from Theorems 1.12 and 1.13 such that \(R\) is reduced, then \(B\) is also K-polystable [28]. Inspired by this, we pose
**Conjecture 1.16**.: _If \(V\) and \(B\) are K-polystable Fano varieties, then \(X\) is K-polystable._
If \(B\) is a K-polystable Fano variety, the log Fano pair \((V,\frac{1}{2}R)\) is also K-polystable [24]. Thus, our conjecture is closely related to the following recent result:
**Theorem 1.17** ([25]).: _Suppose that \(-K_{V}\sim_{\mathbb{Q}}aL\), where \(a\in\mathbb{Q}_{>0}\) such that \(a>1\). Set_
\[\lambda_{n}(a)=\frac{a^{n+1}-(a+n)(a-1)^{n}}{2(n+1)(a^{n}-(a-1)^{n})},\]
_where \(n=\dim X\). Then \(X\) is K-semistable \(\iff\)\((V,\lambda_{n}(a)R)\) is K-semistable._
The K-polystability of \(V\) in Conjecture 1.16 is necessary.
**Example 1.18** (Yuchen Liu).: Let \(V=\mathbb{P}(1,1,4)\), let \(L=\mathcal{O}_{V}(4)\), let \(R\) be a general curve in \(|2L|\), and let \(\lambda\in\left(0,\frac{3}{4}\right)\cap\mathbb{Q}\). Then \((V,\lambda R)\) is a log Fano pair. One can show that
\[\delta(V,\lambda R)\geqslant 1\ (\delta(V,\lambda R)>1,\,\text{respectively})\iff \ \lambda\geqslant\frac{3}{8}\ (\lambda>\frac{3}{8},\,\text{respectively}),\]
so that the singular del Pezzo surface \(B\) is K-polystable, but \(\left(V,\frac{9}{52}R\right)\) is not K-semistable. Hence, the threefold \(X\) is not K-semistable by Theorem 1.17.
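The threshold \(\frac{9}{52}\) in Example 1.18 is precisely \(\lambda_{3}(a)\) from Theorem 1.17: here \(-K_{V}=\mathcal{O}_{V}(6)\sim_{\mathbb{Q}}\frac{3}{2}L\), so \(a=\frac{3}{2}\) and \(n=3\). A one-line exact check (illustrative):

```python
from fractions import Fraction as F

def lam(n, a):
    """lambda_n(a) from Theorem 1.17."""
    return (a**(n + 1) - (a + n) * (a - 1)**n) / (2 * (n + 1) * (a**n - (a - 1)**n))

# V = P(1,1,4), L = O_V(4): -K_V = O_V(6) ~ (3/2) L, so a = 3/2, n = 3.
print(lam(3, F(3, 2)))   # -> 9/52
```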
Let us say a few words about the proofs of Theorems 1.10 and 1.13. In Section 2, we will show that \(X/\iota\cong Y\), and we have the following commutative diagram:
where \(\rho\) is the quotient map, which is a double cover ramified over our divisor \(B\in|2S^{+}|\). Thus, using [24], we see that
\[X\text{ is K-polystable}\iff\text{ the log Fano pair }\big{(}Y,\tfrac{1}{2}B\big{)} \text{ is K-polystable}.\]
In Section 3, we will prove the following result, which implies Theorem 1.10.
**Theorem 1.19**.: _Suppose that \(V\) and \(R\) are smooth (so \(B\) is smooth), and \(-K_{V}\sim_{\mathbb{Q}}aL\), where \(a\in\mathbb{Q}_{>0}\) such that \(a>1\). Let \(\mu\) be a rational number such that \(\mu L\) is very ample. Set \(n=\dim Y\) (so \(\dim V=n-1\)) and \(d=L^{n-1}\). Suppose \(n\geqslant 3\) and \(d\mu^{n-2}\geqslant 2\). Then_
\[\delta\Big{(}Y,\frac{1}{2}B\Big{)}\geqslant\min\Biggl{\{}\frac{1}{k_{n}(a,d, \mu)},\frac{(n+1)(a^{n}-(a-1)^{n})}{(n+1-a)a^{n}+(a-1)^{n+1}},\frac{a\delta(V)(n +1)(a^{n}-(a-1)^{n})}{n(a^{n+1}-(a-1)^{n+1})}\Biggr{\}},\]
_where \(k_{n}(a,d,\mu)\) is defined in Theorem 1.10._
Let us describe the structure of this paper. First, in Section 2, we will prove a few basic properties of Casagrande-Druel varieties. Then, in Section 3, we will prove Theorem 1.19. In Sections 4 and 5, we will give proofs of Theorem 1.12 and Theorem 1.13, respectively. Finally, in Section 6, we will prove Corollary 1.14, and we will show that \(M_{(4.2)}^{\mathrm{Kps}}\cong\mathbb{P}(1,2,3)\). We will omit the proof of Corollary 1.15, since it is similar to the proof of Corollary 1.14.
**Acknowledgements.** This paper was written during our visit to the Gokova Geometry Topology Institute in April 2023. We are very grateful to the institute for its hospitality. We would like to thank Yuchen Liu for his help with the proof of Corollary 1.14, and we would like to thank Noam Elkies for his proof of Proposition 6.3.
Ivan Cheltsov was supported by EPSRC grant EP/V054597/1, Tiago Duarte Guerreiro was supported by EPSRC grant EP/V055399/1, Kento Fujita was supported by JSPS KAKENHI Grant Number 22K03269, Igor Krylov was supported by IBS-R003-D1 grant, and Jesus Martinez-Garcia was supported by EPSRC grant EP/V055399/1.
## 2. Preliminaries
Let \(V\) be a (possibly non-projective) variety, let \(L_{1}\) and \(L_{2}\) be line bundles on \(V\) such that \(L_{1}+L_{2}\not\sim 0\) and \(|L_{1}+L_{2}|\neq\varnothing\), and let \(f\in H^{0}(V,\mathcal{O}_{V}(L_{1}+L_{2}))\) be a section that defines a nonzero effective divisor \(R\) on \(V\). Set
\[Y_{1} =\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}(L_{1})\big{)},\] \[Y_{2} =\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}(L_{2})\big{)}.\]
Now, let \(\pi_{1}\colon Y_{1}\to V\) and \(\pi_{2}\colon Y_{2}\to V\) be the natural projections, and let \(\xi_{1}\) and \(\xi_{2}\) be the tautological line bundles on \(Y_{1}\) and \(Y_{2}\), respectively. We have isomorphisms:
\[H^{0}\big{(}Y_{1},\mathcal{O}_{Y_{1}}(\xi_{1})\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(L_{1})\big{)},\] \[H^{0}\big{(}Y_{1},\mathcal{O}_{Y_{1}}(\xi_{1}-\pi_{1}^{*}(L_{1}) )\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(-L_{1})\big{)},\] \[H^{0}\big{(}Y_{2},\mathcal{O}_{Y_{2}}(\xi_{2})\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(L_{2})\big{)},\] \[H^{0}\big{(}Y_{2},\mathcal{O}_{Y_{2}}(\xi_{2}-\pi_{2}^{*}(L_{2}) )\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(-L_{2})\big{)}.\]
Using these isomorphisms, fix sections
\[u_{1}^{+} \in H^{0}\big{(}Y_{1},\mathcal{O}_{Y_{1}}(\xi_{1})\big{)},\] \[u_{1}^{-} \in H^{0}\big{(}Y_{1},\mathcal{O}_{Y_{1}}(\xi_{1}-\pi_{1}^{*}(L_{1} ))\big{)},\] \[u_{2}^{+} \in H^{0}\big{(}Y_{2},\mathcal{O}_{Y_{2}}(\xi_{2})\big{)},\] \[u_{2}^{-} \in H^{0}\big{(}Y_{2},\mathcal{O}_{Y_{2}}(\xi_{2}-\pi_{2}^{*}(L_{2} ))\big{)},\]
that correspond to the section \(1\in H^{0}(V,\mathcal{O}_{V})\). Let
\[S_{1}^{-} =\{u_{1}^{-}=0\}\subset Y_{1},\] \[S_{1}^{+} =\{u_{1}^{+}=0\}\subset Y_{1},\] \[S_{2}^{-} =\{u_{2}^{-}=0\}\subset Y_{2},\] \[S_{2}^{+} =\{u_{2}^{+}=0\}\subset Y_{2}.\]
For \(i\in\{1,2\}\), the divisors \(S_{i}^{-}\) and \(S_{i}^{+}\) are disjoint sections of the natural projection \(\pi_{i}\) such that \(S_{i}^{-}\big{|}_{S_{i}^{-}}\sim-L_{i}\sim-S_{i}^{+}\big{|}_{S_{i}^{+}}\), where we use isomorphisms \(S_{i}^{-}\cong V\cong S_{i}^{+}\) induced by \(\pi_{i}\).
Now, we set \(Q=Y_{1}\times_{V}Y_{2}\). Then we have canonical isomorphisms
\[\mathbb{P}\Big{(}\mathcal{O}_{Y_{1}}\oplus\mathcal{O}_{Y_{1}}\big{(}\pi_{1}^ {*}(L_{2})\big{)}\Big{)}\cong Q\cong\mathbb{P}\Big{(}\mathcal{O}_{Y_{2}}\oplus \mathcal{O}_{Y_{2}}\big{(}\pi_{2}^{*}(L_{1})\big{)}\Big{)},\]
so that we have commutative Cartesian diagram
where \(\rho_{1}\) and \(\rho_{2}\) are natural projections. Set \(\vartheta=\pi_{1}\circ\rho_{1}=\pi_{2}\circ\rho_{2}\).
Set \(F_{1}=\pi_{1}^{*}(R)\subset Y_{1}\). Let \(\phi_{1}\colon X\to Y_{1}\) be the blowup along the intersection \(F_{1}\cap S_{1}^{+}\), and let \(E_{1}\) be the \(\phi_{1}\)-exceptional divisor. Since \(F_{1}+S_{1}^{-}\) corresponds to
\[\pi_{1}^{*}(f)u_{1}^{-}\in H^{0}\Big{(}Y_{1},\mathcal{O}_{Y_{1}}\big{(}\xi_{1}+\pi_{1}^{*}(L_{2})\big{)}\Big{)},\]
there is a natural closed embedding \(X\hookrightarrow Q\) over \(V\) such that its image is the effective divisor defined by the zeroes of the section
\[\vartheta^{*}(f)u_{1}^{-}u_{2}^{-}-u_{1}^{+}u_{2}^{+}\in H^{0}\Big{(}Q, \mathcal{O}_{Q}\big{(}\rho_{1}^{*}(\xi_{1})+\rho_{2}^{*}(\xi_{2})\big{)} \Big{)},\]
where we identified \(H^{0}(Q,\mathcal{O}_{Q}(\rho_{i}^{*}(D)))=H^{0}(Y_{i},\mathcal{O}_{Y_{i}}(D))\) for every \(D\in\mathrm{Pic}(Y_{i})\).
Let us identify \(X\) with its image in \(Q\). Set \(\theta=\pi_{1}\circ\phi_{1}\). Then \(\theta\) is induced by \(\vartheta\), it is a conic bundle, and \(R\) is its discriminant divisor. Set
\[S_{1} =\phi_{1}^{*}(S_{1}^{-}),\] \[S_{2} =\phi_{1}^{*}(S_{1}^{+})-E_{1},\] \[E_{2} =\phi_{1}^{*}(F_{1})-E_{1}.\]
Then \(S_{1}\), \(S_{2}\), \(E_{2}\) are effective Cartier divisors on the variety \(X\) -- these are the proper transforms of the divisors \(S_{1}^{-}\), \(S_{1}^{+}\), \(F_{1}\), respectively. Moreover, the divisors \(S_{1}\) and \(S_{2}\) are mutually disjoint sections of the conic bundle \(\theta\). Furthermore, we have
\[S_{1}\big{|}_{S_{1}}\sim-L_{1}\text{ and }S_{2}\big{|}_{S_{2}}\sim-L_{2}\]
where we use isomorphisms \(S_{1}\cong V\) and \(S_{2}\cong V\) induced by \(\theta\). Similarly, we see that the divisor \(E_{1}+E_{2}\) is given by zeroes of the section
\[\theta^{*}(f)\in H^{0}\Big{(}X,\mathcal{O}_{X}\big{(}\theta^{*}(L_{1}+L_{2}) \big{)}\Big{)}\cong H^{0}\big{(}V,\mathcal{O}_{V}(L_{1}+L_{2})\big{)}.\]
Set \(F_{2}=\pi_{2}^{*}(R)\subset Y_{2}\), and let \(\phi_{2}\colon X\to Y_{2}\) be the morphism induced by \(\rho_{2}\colon Q\to Y_{2}\). Since the defining equation of \(X\subset Q\) is symmetric, we conclude that \(\phi_{2}\) is the blowup along the scheme-theoretic intersection \(F_{2}\cap S_{2}^{+}\), the \(\phi_{2}\)-exceptional divisor is \(E_{2}\), and there exists the following commutative diagram:
(2.1)
This is an elementary transformation of the \(\mathbb{P}^{1}\)-bundle \(\pi_{1}\) in the sense of Maruyama [26]. Now, using [26, Theorem 1.4] and [26, Proposition 1.6], we see that
\[S_{1} =\phi_{2}^{*}(S_{2}^{+})-E_{2},\] \[S_{2} =\phi_{2}^{*}(S_{2}^{-}),\] \[E_{1} =\phi_{2}^{*}(F_{1})-E_{2}.\]
_Remark 2.2_.: Let \(U=\mathbb{P}(\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-L_{1})\oplus\mathcal{O}_{ V}(-L_{2}))\), let \(\xi_{U}\) be the tautological line bundle on the variety \(U\), let \(\pi_{U}\colon U\to V\) be the natural projection. We have isomorphisms:
\[H^{0}\big{(}U,\mathcal{O}_{U}(\xi_{U})\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(-L_{1})\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(-L_{2})\big{)},\] \[H^{0}\big{(}U,\mathcal{O}_{U}(\xi_{U}+\pi_{U}^{*}(L_{1}))\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(L_{1})\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(L_{1}-L_{2})\big{)},\] \[H^{0}\big{(}U,\mathcal{O}_{U}(\xi_{U}+\pi_{U}^{*}(L_{2}))\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(L_{2})\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(L_{2}-L_{1})\big{)}.\]
Using these isomorphisms, fix sections
\[v_{0} \in H^{0}\big{(}U,\mathcal{O}_{U}(\xi_{U})\big{)},\] \[v_{1} \in H^{0}\big{(}U,\mathcal{O}_{U}(\xi_{U}+\pi_{U}^{*}(L_{1})) \big{)},\] \[v_{2} \in H^{0}\big{(}U,\mathcal{O}_{U}(\xi_{U}+\pi_{U}^{*}(L_{2})) \big{)},\]
which correspond to the section \(1\in H^{0}(V,\mathcal{O}_{V})\). One can show that there exists a closed embedding \(X\hookrightarrow U\) over \(V\) such that the image of \(X\) is defined by
\[\pi_{U}^{*}(f)v_{0}^{2}-v_{1}v_{2}=0,\]
so that we can identify \(X\) with a Cartier divisor on \(U\) such that \(X\sim 2\xi_{U}+\pi_{U}^{*}(L_{1}+L_{2})\).
Starting from now, we assume, in addition, that \(V\) is projective.
**Proposition 2.3**.: _Suppose that \(V\) is normal, and \(K_{V}\) is \(\mathbb{Q}\)-Cartier. Then \(X\) is normal, and \(K_{X}\) is \(\mathbb{Q}\)-Cartier. Moreover, the following assertion holds:_
\[-K_{X}\text{ is ample }\Longleftrightarrow\ -K_{V}\text{, }-K_{V}-L_{1}\text{, }-K_{V}-L_{2}\text{ are ample}.\]
Proof.: The normality of the variety \(X\) follows from Remark 2.2 and [36, Proposition 5.24]. Similarly, using notations introduced in Remark 2.2, we see that
\[K_{U}\sim_{\mathbb{Q}}-3\xi_{U}+\pi_{U}^{*}\big{(}K_{V}-L_{1}-L_{2}\big{)},\]
so \(K_{X}\) is \(\mathbb{Q}\)-Cartier by the adjunction formula, because \(X\) is a Cartier divisor on \(U\).
To prove the remaining assertion, suppose that \(-K_{V}\), \(-K_{V}-L_{1}\), \(-K_{V}-L_{2}\) are ample. Then \(\xi_{U}+\pi_{U}^{*}(-K_{V})\) in Remark 2.2 is ample. Then so is \(-K_{X}\sim_{\mathbb{Q}}(\xi_{U}+\pi_{U}^{*}(-K_{V}))\,|_{X}\). Alternatively, we can prove the ampleness of \(-K_{X}\) directly. Namely, observe that
\[-K_{X}\sim_{\mathbb{Q}}S_{1}+S_{2}+\theta^{*}(-K_{V}). \tag{2.4}\]
Moreover, applying the adjunction formula to the sections \(S_{1}\) and \(S_{2}\), we get
\[-K_{X}\big{|}_{S_{1}}\sim_{\mathbb{Q}}-K_{V}-L_{1},\] \[-K_{X}\big{|}_{S_{2}}\sim_{\mathbb{Q}}-K_{V}-L_{2},\]
where we used \(S_{1}\cong V\) and \(S_{2}\cong V\). Hence, if \(-K_{V}\), \(-K_{V}-L_{1}\), \(-K_{V}-L_{2}\) are ample, then the divisor \(-K_{X}\) is also ample by Kleiman's ampleness criterion.
This also shows that both divisors \(-K_{V}-L_{1}\) and \(-K_{V}-L_{2}\) are ample if \(-K_{X}\) is ample. Observe that \(E_{1}\cap E_{2}\cong R\). Using this isomorphism and (2.4), we get \(-K_{V}|_{R}\sim-K_{X}|_{R}\). On the other hand, we have
\[-2K_{V}\sim_{\mathbb{Q}}\big{(}-K_{V}-L_{1}\big{)}+\big{(}-K_{V}-L_{2}\big{)}+R.\]
Hence, using Kleiman's criterion again, we see that \(-K_{V}\) is ample if \(-K_{X}\) is ample.
**Example 2.5**.: Suppose \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\), and \(L_{1}\) and \(L_{2}\) are divisors of degrees \((1,0)\) and \((0,1)\), and \(R\) is a smooth divisor in \(|L_{1}+L_{2}|\). Then \(X\) is a smooth Fano \(3\)-fold by Proposition 2.3. One can show that \(X\) is the unique smooth Fano \(3\)-fold in the deformation family \(\mathcal{N}\)4.7. Note that \(X\) is K-polystable [6, SS3.3].
_Remark 2.6_ ([16, Lemma 9.8]).: Suppose that \(V\) is a smooth Fano variety, and \(-K_{V}\sim_{\mathbb{Q}}aL\), where \(L\) is an ample divisor in \(\operatorname{Pic}(V)\), and \(a\in\mathbb{Q}_{>0}\). Suppose \(R\) and \(X\) are smooth, and
\[L_{1}\sim_{\mathbb{Q}}a_{1}L,\] \[L_{2}\sim_{\mathbb{Q}}a_{2}L,\]
where \(a_{1}\) and \(a_{2}\) are rational numbers such that \(a_{1}\geqslant a_{2}\). It follows from Proposition 2.3 that \(X\) is a Fano variety \(\iff\,a>a_{1}\). Further, if \(X\) is a Fano variety, then it follows from the proof of [16, Lemma 9.8] that
\[\beta(S_{2})<0\iff\,a_{1}>a_{2}.\]
Therefore, if \(a>a_{1}>a_{2}\), then \(X\) is a K-unstable Fano variety.
From now on, we also assume that \(L_{1}=L_{2}\). Set \(L=L_{1}\). Then \(R\in|2L|\). Set
\[Y=\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(L)\big{)},\]
let \(\pi\colon Y\to V\) be the natural projection, and let \(\xi\) be the tautological line bundle on \(Y\). Note that \(Y\cong Y_{1}\cong Y_{2}\). Using the isomorphisms
\[H^{0}\big{(}Y,\mathcal{O}_{Y}(\xi)\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(L)\big{)},\] \[H^{0}\big{(}Y,\mathcal{O}_{Y}(\xi-\pi^{*}(L))\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(-L)\big{)},\]
fix \(u^{+}\in H^{0}(Y,\mathcal{O}_{Y}(\xi))\) and \(u^{-}\in H^{0}(Y,\mathcal{O}_{Y}(\xi-\pi^{*}(L)))\) that correspond to \(1\in H^{0}(V,\mathcal{O}_{V})\). Let \(S^{-}=\{u^{-}=0\}\) and \(S^{+}=\{u^{+}=0\}\). Then \(S^{+}\sim S^{-}+\pi^{*}(L)\).
**Proposition 2.7**.: _There is a double cover \(X\to Y\) ramified in a divisor \(B\in|2S^{+}|\) such that the projection \(\pi\) induces a double cover \(B\to V\) that is ramified in \(R\)._
Proof.: Let \(T=\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-L)\oplus\mathcal{O}_{V}(-2L)\big{)}\), let \(\varpi\colon T\to V\) be the natural projection, and let \(\xi_{T}\) be the tautological line bundle on \(T\). Observe that
\[H^{0}\big{(}T,\mathcal{O}_{T}(\xi_{T})\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(-L)\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(-2L)\big{)},\] \[H^{0}\big{(}T,\mathcal{O}_{T}(\xi_{T}+\varpi^{*}(L))\big{)} \cong H^{0}\big{(}V,\mathcal{O}_{V}\big{)}\oplus H^{0}\big{(}V, \mathcal{O}_{V}(L)\big{)}\oplus H^{0}\big{(}V,\mathcal{O}_{V}(L)\big{)}.\]
Using these isomorphisms, fix sections
\[t_{0} \in H^{0}\big{(}T,\mathcal{O}_{T}(\xi_{T})\big{)},\] \[t_{1} \in H^{0}\big{(}T,\mathcal{O}_{T}(\xi_{T}+\varpi^{*}(L))\big{)},\] \[t_{2} \in H^{0}\big{(}T,\mathcal{O}_{T}(\xi_{T}+\varpi^{*}(2L))\big{)}\]
that correspond to \(1\in H^{0}(V,\mathcal{O}_{V})\). Then
\[\{t_{0}=0\} \cong\mathbb{P}\big{(}\mathcal{O}_{V}(-L)\oplus\mathcal{O}_{V}(-2L)\big{)},\] \[\{t_{1}=0\} \cong\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-2L)\big{)},\] \[\{t_{2}=0\} \cong\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-L)\big{)}.\]
Now, we consider the homomorphism
\[\mathcal{O}_{Q}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(L)\big{)}\oplus \mathcal{O}_{Q}\big{(}\vartheta^{*}(2L)\big{)}\to\mathcal{O}_{Q}\big{(} \rho_{1}^{*}(\xi_{1})+\rho_{2}^{*}(\xi_{2})\big{)} \tag{2.8}\]
defined by the composition of
\[\begin{pmatrix}1&0&0\\ 0&\frac{1}{2}&0\\ 0&\frac{1}{2}&0\\ 0&0&1\end{pmatrix}:\mathcal{O}_{Q}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(L)\big{)}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(2L)\big{)}\to\mathcal{O}_{Q}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(L)\big{)}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(L)\big{)}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(2L)\big{)}\]
and the surjection
\[\mathcal{O}_{Q}\oplus\mathcal{O}_{Q}\big{(}\vartheta^{*}(L)\big{)}\oplus \mathcal{O}_{Q}\big{(}\vartheta^{*}(L)\big{)}\oplus\mathcal{O}_{Q}\big{(} \vartheta^{*}(2L)\big{)}\twoheadrightarrow\mathcal{O}_{Q}\big{(}\rho_{1}^{*}( \xi_{1})+\rho_{2}^{*}(\xi_{2})\big{)}\]
obtained by the tensor product of the pullbacks of the following natural surjections
\[\mathcal{O}_{Y_{1}}\oplus\mathcal{O}_{Y_{1}}\big{(}\pi_{1}^{*}(L_{1})\big{)} \twoheadrightarrow\mathcal{O}_{Y_{1}}(\xi_{1}),\]
\[\mathcal{O}_{Y_{2}}\oplus\mathcal{O}_{Y_{2}}\big{(}\pi_{2}^{*}(L_{2})\big{)} \twoheadrightarrow\mathcal{O}_{Y_{2}}(\xi_{2}).\]
Then (2.8) is surjective. This gives the morphism \(\rho\colon Q\to T\) over \(V\) with
\[\rho^{*}(t_{0}) =u_{1}^{-}u_{2}^{-},\] \[\rho^{*}(t_{1}) =\frac{1}{2}\left(u_{1}^{+}u_{2}^{-}+u_{1}^{-}u_{2}^{+}\right),\] \[\rho^{*}(t_{2}) =u_{1}^{+}u_{2}^{+},\]
where we identified \(H^{0}(Q,\mathcal{O}_{Q}(\rho_{1}^{*}(D)))=H^{0}(Y_{i},\mathcal{O}_{Y_{i}}(D))\) for \(D\in\operatorname{Pic}(Y_{i})\).
Using the local criterion for flatness, we see that \(\rho\) is flat. Further, \(\rho\) is finite of degree \(2\). Now, using [18, I (6.11)] and [18, I (6.12)], we see that the morphism \(\rho\) is branched over the divisor \(B_{T}\in|2(\xi_{T}+\varpi^{*}(L))|\) that is given by \(t_{1}^{2}-t_{0}t_{2}=0\).
Let \(Y_{0}\) be the divisor in \(|\xi_{T}+\varpi^{*}(2L)|\) that is given by
\[\varpi^{*}(f)t_{0}-t_{2}=0,\]
and let \(\pi_{0}\colon Y_{0}\to V\) be the morphism induced by \(\varpi\). Then \(X=\rho^{*}(Y_{0})\) as Cartier divisors, so that the restriction \(X\to Y_{0}\) is a double cover branched over \(B_{T}|_{Y_{0}}\). Moreover, using the exact sequence
\[0\to\mathcal{O}_{V}(-2L)\xrightarrow{\left(\begin{matrix}f\\ 0\\ -1\end{matrix}\right)}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-L)\oplus\mathcal{O }_{V}(-2L)\xrightarrow{\left(\begin{matrix}1&0&f\\ 0&1&0\end{matrix}\right)}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-L)\to 0,\]
we get an isomorphism \(Y_{0}\cong Y\) over \(V\). Hence, we identify \(Y=Y_{0}\).
Set \(B=B_{T}|_{Y}\). Then \(B\) is defined by
\[(u^{+})^{2}-\pi^{*}(f)(u^{-})^{2}=0,\]
which implies the remaining assertions of the proposition.
Let \(\iota\in\operatorname{Aut}(X)\) be the Galois involution of the double cover \(X\to Y\) in Proposition 2.7. Then \(\iota(S_{1})=S_{2}\) and \(\iota(E_{1})=E_{2}\), and it follows from the proof of Proposition 2.7 that the conic bundle \(\theta\colon X\to V\) is \(\langle\iota\rangle\)-equivariant with \(\iota\) acting trivially on \(V\).
**Proposition 2.9**.: _Suppose that \(V\) is smooth, \(L\) is nef, \(X\) has Kawamata log terminal singularities, and \(-K_{X}\) is ample. Then the deformations of \(X\) are unobstructed._
Proof.: By Remark 2.2, \(X\) can be embedded into \(U=\mathbb{P}_{V}\left(\mathcal{O}_{V}\oplus\mathcal{O}_{V}(-L)\oplus\mathcal{ O}_{V}(-L)\right)\) such that \(X\in|2\xi_{U}+2\pi_{U}^{*}(L)|\), where \(\xi_{U}\) is the tautological line bundle and \(\pi_{U}\) is the natural projection. Therefore, since \(U\) is smooth, the variety \(X\) has at worst canonical singularities, and \(X\) has at worst local complete intersection singularities. Hence, it follows from [35, Theorem 2.3.2], [35, Theorem 2.4.1], [35, Corollary 2.4.2], [34, Proposition 2.4], [34, Proposition 2.6] that the deformations of \(X\) are unobstructed if \(\operatorname{Ext}^{2}_{\mathcal{O}_{X}}\left(\Omega^{1}_{X},\mathcal{O}_{X} \right)=0\).
Let us show that \(\operatorname{Ext}^{2}_{\mathcal{O}_{X}}\left(\Omega^{1}_{X},\mathcal{O}_{X}\right)=0\). Set \(n=\dim(X)\). As in [34, §1.2], we have
\[\operatorname{Ext}^{2}_{\mathcal{O}_{X}}\left(\Omega^{1}_{X},\mathcal{O}_{X} \right)\simeq\operatorname{Ext}^{2}_{\mathcal{O}_{X}}\left(\Omega^{1}_{X} \otimes\omega_{X},\omega_{X}\right)\simeq H^{n-2}\left(X,\Omega^{1}_{X} \otimes\omega_{X}\right)^{\vee}.\]
Since \(-K_{V}\) and \(-K_{V}-L\) are ample and \(L\) is nef, we see that \(\xi_{U}+\pi_{U}^{*}(-K_{V})\) is ample, and \(\xi_{U}+\pi_{U}^{*}(L)\) is nef. In particular, both divisors
\[-K_{U} \sim 3\xi_{U}+\pi_{U}^{*}(-K_{V}+2L),\] \[-K_{U}-X \sim \xi_{U}+\pi_{U}^{*}(-K_{V})\]
are ample. On the other hand, using the exact sequence of sheaves
\[0\longrightarrow\mathcal{O}_{U}(-X)\big{|}_{X}\longrightarrow\Omega^{1}_{U} \big{|}_{X}\longrightarrow\Omega^{1}_{X}\longrightarrow 0,\]
we get the following exact sequence:
\[H^{n-2}\left(X,\Omega^{1}_{U}\big{|}_{X}\otimes\omega_{X}\right)\longrightarrow H ^{n-2}\left(X,\Omega^{1}_{X}\otimes\omega_{X}\right)\longrightarrow H^{n-1} \left(X,\mathcal{O}_{U}(-X)\big{|}_{X}\otimes\omega_{X}\right).\]
Moreover, using the Kodaira-type vanishing theorem, we get
\[H^{n-1}\left(X,\mathcal{O}_{U}(-X)\big{|}_{X}\otimes\omega_{X}\right)\simeq H ^{1}\left(X,K_{X}+(-K_{U})\big{|}_{X}\right)^{\vee}=0.\]
Furthermore, using the exact sequence of sheaves
\[0\longrightarrow\Omega^{1}_{U}\otimes\omega_{U}\longrightarrow\Omega^{1}_{U} \otimes\omega_{U}(X)\longrightarrow\Omega^{1}_{U}\big{|}_{X}\otimes\omega_{X} \longrightarrow 0,\]
we get the exact sequence
\[H^{n-2}\left(U,\Omega^{1}_{U}\otimes\omega_{U}(X)\right)\longrightarrow H^{n-2 }\left(X,\Omega^{1}_{U}|_{X}\otimes\omega_{X}\right)\longrightarrow H^{n-1} \left(U,\Omega^{1}_{U}\otimes\omega_{U}\right).\]
Since both \(\omega_{U}\) and \(\omega_{U}(X)\) are anti-ample, the Akizuki-Nakano vanishing theorem gives
\[H^{n-2}\left(U,\Omega^{1}_{U}\otimes\omega_{U}(X)\right)=H^{n-1}\left(U,\Omega^{ 1}_{U}\otimes\omega_{U}\right)=0.\]
This gives \(\operatorname{Ext}^{2}_{\mathcal{O}_{X}}\left(\Omega^{1}_{X},\mathcal{O}_{X} \right)=0\), which completes the proof.
## 3. K-polystability criteria
The goal of this section is to prove Theorem 1.19. To do this, fix a positive integer \(n\geqslant 3\). Let \(V\) be a smooth projective variety of dimension \(n-1\), and let \(L\) be an ample Cartier divisor on \(V\). Set \(d=L^{n-1}\). Fix \(\mu\in\mathbb{Q}_{>0}\) such that \(\mu L\) is very ample. Let
\[Y=\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(L)\big{)},\]
and let \(\pi\colon Y\to V\) be the natural projection. Set \(H=\pi^{*}(L)\). Let \(S^{-}\) and \(S^{+}\) be disjoint sections of the projection \(\pi\) such that \(S^{+}\sim S^{-}+H\).
_Remark 3.1_.: Unlike Section 1, we do not assume that \(V\) is a Fano variety.
Fix a positive rational number \(a\geqslant 1\). Let \(D(a)=S^{-}+aH\). Then \(D(a)\) is nef and big. Moreover, if \(a>1\), then \(D(a)\) is ample.
**Lemma 3.2** (cf. [40]).: _Let \(P\) be a point in \(S^{-}\). Then_
\[\delta_{P}(Y;D(a))\geqslant\min\Biggl{\{}\frac{(n+1)(a^{n}-(a-1)^{n})}{(n+1-a )a^{n}+(a-1)^{n+1}},\frac{\delta(V;L)(n+1)(a^{n}-(a-1)^{n})}{n(a^{n+1}-(a-1)^{n +1})}\Biggr{\}},\]
_where \(\delta_{P}(Y;D(a))\) is the (local) \(\delta\)-invariant of the variety \(Y\) polarized by the divisor \(D(a)\), and \(\delta(V;L)\) is the \(\delta\)-invariant of \(V\) polarized by \(L\). Further, if \(\delta(V;L)\leqslant a\), then_
\[\delta_{P}(Y;D(a))\geqslant\frac{\delta(V;L)(n+1)(a^{n}-(a-1)^{n})}{n(a^{n+1} -(a-1)^{n+1})}.\]
Proof.: It follows from [2, 6] that
\[\delta_{P}(Y;D(a))\geqslant\min\left\{\frac{1}{S_{D(a)}(S^{-})},\inf_{ \begin{subarray}{c}F/S^{-}\\ P\in C_{S^{-}}(F)\end{subarray}}\frac{A_{S^{-}}(F)}{S(W^{S^{-}}_{\bullet, \bullet}\,;F)}\right\},\]
where \(S(W^{S^{-}}_{\bullet,\bullet}\,;F)\) is defined in [6, Section 1.7], and the infimum is taken over all prime divisors over \(S^{-}\) whose centers on \(S^{-}\) contain \(P\). This easily implies the required assertion.
Indeed, take \(u\in\mathbb{R}_{\geqslant 0}\). Then \(D(a)-uS^{-}\sim_{\mathbb{R}}(1-u)S^{-}+aH\), so that
\[D(a)-uS^{-}\text{ is nef }\Longleftrightarrow\ D(a)-uS^{-}\text{ is pseudo- effective }\Longleftrightarrow\ u\leqslant 1.\]
Thus, since \(\operatorname{vol}(D(a))=D(a)^{n}=d(a^{n}-(a-1)^{n})\), we have
\[S_{D(a)}(S^{-})=\frac{1}{D(a)^{n}}\int_{0}^{\infty}\operatorname {vol}(D(a)-uS^{-})du=\\ =\frac{1}{d(a^{n}-(a-1)^{n})}\int_{0}^{1}((1-u-a)^{n}(-1)^{n+1}d +a^{n}d)du=\frac{(n+1-a)a^{n}+(a-1)^{n+1}}{(n+1)(a^{n}-(a-1)^{n})}.\]
Using \(S^{-}\cong V\), we get \((D(a)-uS^{-})|_{S^{-}}\sim_{\mathbb{R}}(a+u-1)H|_{S^{-}}\sim_{\mathbb{R}}(a+u -1)L\).
Let \(F\) be any prime divisor over \(S^{-}\). Then it follows from [6, Section 1.7] that
\[S(W^{S^{-}}_{\bullet,\bullet};F) =\frac{n}{D(a)^{n}}\int_{0}^{1}\int_{0}^{\infty}\operatorname{vol }((D(a)-uS^{-})|_{S^{-}}-vF)dvdu\] \[=\frac{n}{D(a)^{n}}\int_{0}^{1}\int_{0}^{\infty}\operatorname{vol }((a+u-1)L-vF)dvdu\] \[=\frac{n}{D(a)^{n}}\int_{0}^{1}(a+u-1)^{n}\int_{0}^{\infty} \operatorname{vol}(L-vF)dvdu\] \[=\frac{n}{d(a^{n}-(a-1)^{n})}\cdot\frac{a^{n+1}-(a-1)^{n+1}}{n+1} \int_{0}^{\infty}\operatorname{vol}(L-vF)dv\] \[=\frac{n}{n+1}\frac{a^{n+1}-(a-1)^{n+1}}{d(a^{n}-(a-1)^{n})}\cdot L ^{n-1}S_{L}(F)\] \[=\frac{n}{n+1}\frac{a^{n+1}-(a-1)^{n+1}}{a^{n}-(a-1)^{n}}S_{L}(F).\]
This gives
\[\frac{A_{S^{-}}(F)}{S(W^{S^{-}}_{\bullet,\bullet};F)}=\frac{A_{S^{-}}(F)}{S_{ L}(F)}\cdot\frac{n+1}{n}\cdot\frac{a^{n}-(a-1)^{n}}{a^{n+1}-(a-1)^{n+1}}\leqslant \delta_{P}(V;L)\cdot\frac{n+1}{n}\cdot\frac{a^{n}-(a-1)^{n}}{a^{n+1}-(a-1)^{n +1}},\]
which implies the first part of the assertion.
We now assume \(\delta(V;L)\leqslant a\) and we want to show
\[\frac{(n+1)(a^{n}-(a-1)^{n})}{(n+1-a)a^{n}+(a-1)^{n+1}}\geqslant\frac{\delta (V;L)(n+1)(a^{n}-(a-1)^{n})}{n(a^{n+1}-(a-1)^{n+1})}.\]
This inequality is equivalent to
\[\delta(V;L)\leqslant\frac{n(a^{n+1}-(a-1)^{n+1})}{(n+1-a)a^{n}+(a-1)^{n+1}}.\]
We must show that the right hand side of the inequality above is at least \(a\). But
\[\frac{n(a^{n+1}-(a-1)^{n+1})}{(n+1-a)a^{n}+(a-1)^{n+1}}\geqslant a\iff a^{n+1}(a-1)-(a-1)^{n+1}(a+n)\geqslant 0,\]
which holds for every \(a\geqslant 1\): for \(a>1\) it follows from the AM-GM inequality applied to \(n\) copies of \(a-1\) and one copy of \(a+n\), which gives \((a-1)^{n}(a+n)\leqslant a^{n+1}\), and for \(a=1\) both sides vanish.
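The closed form for \(S_{D(a)}(S^{-})\) obtained in the proof can also be checked symbolically. A short sympy sketch (ours), using \(\operatorname{vol}(D(a)-uS^{-})=d\,(a^{n}-(a+u-1)^{n})\) for \(u\in[0,1]\) as computed above:

```python
import sympy as sp

a, u = sp.symbols('a u', positive=True)
for n in range(2, 8):
    # vol(D(a) - u*S^-)/d on [0, 1]; the factor d cancels in S_{D(a)}(S^-).
    vol = a**n - (a + u - 1)**n
    S = sp.integrate(vol, (u, 0, 1)) / (a**n - (a - 1)**n)
    target = ((n + 1 - a) * a**n + (a - 1)**(n + 1)) \
             / ((n + 1) * (a**n - (a - 1)**n))
    assert sp.simplify(S - target) == 0
print('S_{D(a)}(S^-) formula verified for n = 2, ..., 7')
```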
Now, fix a smooth divisor \(B\in|2S^{+}|\). Let \(\eta\colon B\to V\) be the morphism induced by \(\pi\). Suppose that \(\eta\) is the double cover ramified over a smooth divisor \(R\in|2L|\). Set
\[\Delta=\frac{1}{2}B.\]
Note that \(B\cap S^{-}=\varnothing\). Let \(k_{n}(a,d,\mu)\) be the number defined in Theorem 1.10.
**Proposition 3.3**.: _Let \(P\) be a point in \(Y\setminus S^{-}\). Suppose that \(d\mu^{n-2}\geqslant 2\). Then_
\[\delta_{P}(Y,\Delta;D(a))\geqslant\frac{1}{k_{n}(a,d,\mu)},\]
_where \(\delta_{P}(Y,\Delta;D(a))\) is the (local) \(\delta\)-invariants of the pair \((Y,\Delta)\) polarized by \(D(a)\)._
This result together with Lemma 3.2 implies Theorem 1.19.
Proof of Theorem 1.19.: Note that \(V\) is a Fano variety and \(-K_{V}\sim_{\mathbb{Q}}aL\). Then
\[-K_{Y}\sim 2S^{+}-\pi^{*}(K_{V}+L)\sim_{\mathbb{Q}}2S^{+}+(a-1)H,\]
which gives
\[-(K_{Y}+\Delta)\sim_{\mathbb{Q}}S^{+}+(a-1)H\sim_{\mathbb{Q}}S^{-}+aH=D(a),\]
so that \((Y,\Delta)\) is the log Fano pair and
\[\delta(Y,\Delta)=\delta(Y,\Delta;D(a)),\]
where \(\delta(Y,\Delta)\) is the \(\delta\)-invariant of the log Fano pair \((Y,\Delta)\). Now, we can apply Lemma 3.2 and Proposition 3.3 to get the required assertion.
In the remaining part of the section, we will prove Proposition 3.3 by induction on \(n\).
### Base of induction
Let \(V\) be a smooth projective surface, let \(L\) be an ample Cartier divisor on \(V\), let \(\mu\) be the smallest rational number such that \(\mu L\) is very ample, let
\[Y=\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(L)\big{)},\]
and let \(\pi\colon Y\to V\) be the natural projection. Set \(H=\pi^{*}(L)\). Let \(S^{-}\) and \(S^{+}\) be disjoint sections of the projection \(\pi\) such that \(S^{+}\sim S^{-}+H\), and let \(B\) be an irreducible normal surface in \(|2S^{+}|\) such that \(\pi\) induces a double cover \(B\to V\) which is ramified in a reduced curve \(R\in|2L|\). Fix \(a\in\mathbb{Q}\) such that \(a\geqslant 1\). Let
\[D(a)=S^{-}+aH.\]
Then \(D(a)\) is nef and big, and \(D(a)\) is ample for \(a>1\). Set \(\Delta=\frac{1}{2}B\) and \(d=L^{2}\).
_Remark 3.4_.: Since \(\mu L\) is very ample and \(L\) is Cartier, we have \(d\mu=(\mu L)\cdot L\in\mathbb{Z}_{>0}\) and
\[d\mu^{2}=(\mu L)^{2}\in\mathbb{Z}_{>0}.\]
Moreover, if \(d\mu=1\), then \(\mu=1\), \(d=L^{2}=1\), \(V=\mathbb{P}^{2}\) and \(L=\mathcal{O}_{\mathbb{P}^{2}}(1)\).
Suppose, in addition, that \(d\mu\geqslant 2\). Set
\[k_{3}(a,d,\mu)=\frac{8d\mu a^{3}+6(1-2d\mu)a^{2}+8(d\mu-1)a-2d\mu+3}{8(3a^{2}- 3a+1)}.\]
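Note that \(k_{3}(a,d,\mu)\) is just the specialization of \(k_{n}(a,d,\mu)\) from Theorem 1.10 at \(n=3\), which can be verified symbolically (an illustrative check, ours):

```python
import sympy as sp

a, d, mu = sp.symbols('a d mu', positive=True)
n = 3
k_n = (a**(n+1) - (a-1)**(n+1)) * d * mu**(n-2) / ((n+1) * (a**n - (a-1)**n)) \
      + (a**(n+1) - (a+n) * (a-1)**n) / (2 * (n+1) * (a**n - (a-1)**n))
k_3 = (8*d*mu*a**3 + 6*(1 - 2*d*mu)*a**2 + 8*(d*mu - 1)*a - 2*d*mu + 3) \
      / (8 * (3*a**2 - 3*a + 1))
print(sp.simplify(k_n - k_3))   # -> 0
```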
Let \(P\) be a point in \(Y\) such that \(P\not\in S^{-}\) and \(P\not\in\operatorname{Sing}(B)\).
**Proposition 3.5**.: _One has \(\delta_{P}(Y,\Delta;D(a))\geqslant\frac{1}{k_{3}(a,d,\mu)}\)._
In the remaining part of this subsection, we will prove this result. We will only consider the case \(P\in B\), because the case \(P\not\in B\) is much simpler.
Let \(V_{1}\) be a general curve in \(|\mu L|\) that contains the point \(\pi(P)\), and let \(Y_{1}=\pi^{*}(V_{1})\). Then \(V_{1}\) is a smooth curve, and \(Y_{1}\) is a smooth surface. For simplicity, we set \(D=D(a)\). Take \(u\in\mathbb{R}_{\geqslant 0}\). Then
\[D-uY_{1}\sim_{\mathbb{R}}S^{-}+(a-\mu u)H,\]
so that \(D-uY_{1}\) is pseudo-effective \(\iff\)\(u\leqslant\frac{a}{\mu}\). We have
\[(D-uY_{1})\big{|}_{S^{-}}\sim_{\mathbb{R}}(S^{-}+(a-\mu u)H)\big{|}_{S^{-}}\sim _{\mathbb{R}}(a-1-\mu u)L,\]
where we use the isomorphism \(S^{-}\cong V\) induced by \(\pi\). Hence, the divisor \(D-uY_{1}\) is nef if and only if \(u\leqslant\frac{a-1}{\mu}\). Moreover, the Zariski decomposition of \(D-uY_{1}\) is
\[P(u)\equiv\begin{cases}S^{-}+(a-\mu u)H&\text{if }u\in[0,\frac{a-1}{\mu}],\\ (a-\mu u)(S^{-}+H)=(a-\mu u)S^{+}&\text{if }u\in[\frac{a-1}{\mu},\frac{a}{\mu}], \end{cases}\]
and
\[N(u)=\begin{cases}0&\text{if }u\in[0,\frac{a-1}{\mu}],\\ (\mu u+1-a)S^{-}&\text{if }u\in[\frac{a-1}{\mu},\frac{a}{\mu}],\end{cases}\]
where \(P(u)\) is the positive part, and \(N(u)\) is the negative part.
Note that \(H^{3}=0\), \(H^{2}\cdot S^{-}=d\), \(H\cdot(S^{-})^{2}=-d\), \((S^{-})^{3}=d\). Then
\[S_{D}(Y_{1}) =\frac{1}{D^{3}}\int_{0}^{\frac{a}{\mu}}\text{vol}(D-uY_{1})du\] \[=\frac{1}{(S^{-}+aH)^{3}}\Bigg{(}\int_{0}^{\frac{a-1}{\mu}}(S^{-} +(a-\mu u)H)^{3}du+\int_{\frac{a-1}{\mu}}^{\frac{a}{\mu}}((a-\mu u)(S^{-}+H))^ {3}du\Bigg{)}\] \[=\frac{(2a-1)(2a^{2}-2a+1)}{4\mu(3a^{2}-3a+1)}.\]
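This value of \(S_{D}(Y_{1})\) can be double-checked symbolically from the Zariski decomposition above, using \((S^{+})^{3}=d\) (a short sympy sketch, ours):

```python
import sympy as sp

a, u, d, mu = sp.symbols('a u d mu', positive=True)
vol_nef = d * ((a - mu*u)**3 - (a - mu*u - 1)**3)   # u in [0, (a-1)/mu]
vol_big = d * (a - mu*u)**3                         # u in [(a-1)/mu, a/mu]
S = (sp.integrate(vol_nef, (u, 0, (a - 1)/mu))
     + sp.integrate(vol_big, (u, (a - 1)/mu, a/mu))) / (d * (3*a**2 - 3*a + 1))
target = (2*a - 1) * (2*a**2 - 2*a + 1) / (4*mu*(3*a**2 - 3*a + 1))
print(sp.simplify(S - target))   # -> 0
```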
Let \(f\) be the fiber of the \(\mathbb{P}^{1}\)-bundle \(\pi\) that contains \(P\). Then there are two cases to consider: either \(B\) intersects \(f\) transversely at \(P\) or tangentially. For each case, we consider an appropriate plt blow up \(h\colon\widetilde{Y}_{1}\to Y_{1}\) at the point \(P\) with smooth exceptional curve \(E\). We let \(\Delta_{1}=\Delta|_{Y_{1}}\), and we denote by \(\widetilde{\Delta}_{1}\) the proper transform on \(\widetilde{Y}_{1}\) of the divisor \(\Delta_{1}\). Then it follows from [2, 6, 17] that
\[\delta_{P}(Y,\Delta)\geqslant\min\bigg{\{}\frac{1}{S_{D}(Y_{1})},\frac{A_{Y_{ 1},\Delta_{1}}(E)}{S(V_{\bullet,\bullet}^{Y_{1}};E)},\inf_{Q\in E}\frac{A_{E, \Delta_{E}}(Q)}{S(V_{\bullet,\bullet,\bullet}^{\widetilde{Y}_{1},E};Q)} \bigg{\}}.\]
where \(S(V_{\bullet,\bullet}^{Y_{1}};E)\) and \(S(V_{\bullet,\bullet}^{\widetilde{Y}_{1},E};Q)\) are defined in [6, Section 1.7], and \(\Delta_{E}\) is the different computed via the adjunction formula
\[K_{E}+\Delta_{E}=\big{(}K_{\widetilde{Y}_{1}}+\widetilde{\Delta}_{1}\big{)}|_ {E}.\]
For instance, if \(h\) is the ordinary blow up at the point \(P\), then \(\Delta_{E}=\widetilde{\Delta}_{1}|_{E}\). For simplicity, we rewrite the last inequality as
\[\frac{1}{\delta_{P}(Y,\Delta)}\leqslant\max\bigg{\{}S_{D}(Y_{1}),\frac{S(V_{ \bullet,\bullet}^{Y_{1}};E)}{A_{Y_{1},\Delta_{1}}(E)},\sup_{Q\in E}\frac{S(V_{ \bullet,\bullet,\bullet}^{\widetilde{Y}_{1},E};Q)}{A_{E,\Delta_{E}}(Q)} \bigg{\}}. \tag{3.6}\]
Thus, to prove Proposition 3.5, it is enough to bound each term in (3.6) by \(k_{3}(a,d,\mu)\).
We set \(S_{1}^{-}=S^{-}|_{Y_{1}}\), \(H_{1}:=H|_{Y_{1}}\), \(B_{1}:=B|_{Y_{1}}\), \(D_{1}=P(u)|_{Y_{1}}\). Note that \(H_{1}\equiv d\mu f\) and
\[D_{1}\equiv\begin{cases}S_{1}^{-}+(a-\mu u)d\mu f&\text{if }u\in[0,\frac{a-1}{\mu}],\\ (a-\mu u)(S_{1}^{-}+d\mu f)&\text{if }u\in[\frac{a-1}{\mu},\frac{a}{\mu}]. \end{cases}\]
We denote by \(\widetilde{S}_{1}^{-}\), \(\widetilde{B}_{1}\), \(\widetilde{f}\) the proper transforms on \(\widetilde{Y}_{1}\) of the curves \(S_{1}^{-}\), \(B_{1}\), \(f\), respectively.
**Lemma 3.7**.: _Suppose \(B\) intersects \(f\) transversally. Then \(\delta_{P}(Y,\Delta;D(a))\geqslant\frac{1}{k_{3}(a,d,\mu)}\)._
Proof.: Let \(h\colon\widetilde{Y}_{1}\to Y_{1}\) be the ordinary blow up at \(P\). Recall that \(E\) is the \(h\)-exceptional curve. We have \(\widetilde{S}_{1}^{-}\sim h^{*}(S_{1}^{-})\) and \(\widetilde{f}\sim h^{*}(f)-E\). Take \(v\in\mathbb{R}_{\geqslant 0}\). Then
\[h^{*}(D_{1})-vE\equiv\begin{cases}\widetilde{S}_{1}^{-}+(a-\mu u)d\mu\widetilde{ f}+((a-\mu u)d\mu-v)E&\text{if }u\in[0,\frac{a-1}{\mu}],\\ (a-\mu u)(\widetilde{S}_{1}^{-}+d\mu\widetilde{f})+((a-\mu u)d\mu-v)E&\text{if } u\in[\frac{a-1}{\mu},\frac{a}{\mu}].\end{cases}\]
We have the following intersection numbers:
\[\begin{array}{|c||c|c|c|}\hline\bullet&\widetilde{S}_{1}^{-}&\widetilde{f}&E \\ \hline\hline\widetilde{S}_{1}^{-}&-d\mu&1&0\\ \hline\widetilde{f}&1&-1&1\\ \hline E&0&1&-1\\ \hline\end{array}\]
This shows that \(h^{*}(D_{1})-vE\) is pseudo-effective \(\iff v\leqslant(a-\mu u)d\mu\).
If \(u\in[0,\frac{a-1}{\mu}]\), the positive part of the Zariski decomposition of \(h^{*}(D_{1})-vE\) is
\[\widetilde{P}(u,v)\equiv\begin{cases}\widetilde{S}_{1}^{-}+(a-\mu u)d\mu \widetilde{f}+((a-\mu u)d\mu-v)E&\text{if }v\in[0,1]\\ \widetilde{S}_{1}^{-}+((a-\mu u)d\mu+1-v)\widetilde{f}+((a-\mu u)d\mu-v)E& \text{if }v\in[1,1-d\mu^{2}u+ad\mu-d\mu]\\ \frac{-d\mu^{2}u+ad\mu-v}{d\mu-1}(\widetilde{S}_{1}^{-}+d\mu\widetilde{f}+(d \mu-1)E)&\text{if }v\in[1-d\mu^{2}u+ad\mu-d\mu,(a-\mu u)d\mu],\end{cases}\]
and the negative part is
\[\widetilde{N}(u,v)=\begin{cases}0&\text{if }v\in[0,1]\\ (v-1)\widetilde{f}&\text{if }v\in[1,1-d\mu^{2}u+ad\mu-d\mu]\\ \frac{d\mu(\mu u-a+v)}{d\mu-1}\widetilde{f}+\frac{d\mu^{2}u-ad\mu+d\mu+v-1}{d \mu-1}\widetilde{S}_{1}^{-}&\text{if }v\in[1-d\mu^{2}u+ad\mu-d\mu,(a-\mu u)d\mu].\end{cases}\]
Similarly, if \(u\in[\frac{a-1}{\mu},\frac{a}{\mu}]\), the positive part of the Zariski decomposition of \(h^{*}(D_{1})-vE\) is
\[\widetilde{P}(u,v)\equiv\begin{cases}(a-\mu u)(\widetilde{S}_{1}^{-}+d\mu \widetilde{f})+((a-\mu u)d\mu-v)E&\text{if }v\in[0,a-\mu u]\\ \frac{1}{d\mu-1}(-d\mu^{2}u+ad\mu-v)(\widetilde{S}_{1}^{-}+d\mu\widetilde{f}+( d\mu-1)E)&\text{if }v\in[a-\mu u,(a-\mu u)d\mu].\end{cases}\]
and the negative part is
\[\widetilde{N}(u,v)=\begin{cases}0&\text{if }v\in[0,a-\mu u]\\ \frac{1}{d\mu-1}(d\mu(\mu u-a+v)\widetilde{f}+(\mu u-a+v)\widetilde{S}_{1}^{ -})&\text{if }v\in[a-\mu u,(a-\mu u)d\mu].\end{cases}\]
Now, using results from [6, Section 1.7], we compute
\[S(W_{\bullet,\bullet}^{Y_{1}};E)=\frac{3}{D^{3}}\int \limits_{0}^{\frac{a}{\mu}}\int\limits_{0}^{(a-\mu u)d\mu}\text{vol}(D_{1}-vE) dvdu =\frac{3}{(S^{-}+aH)^{3}}\int\limits_{0}^{\frac{a}{\mu}}\int\limits _{0}^{(a-\mu u)d\mu}\widetilde{P}(u,v)^{2}dvdu\] \[=\frac{4a^{3}d\mu+6(1-d\mu)a^{2}+4(d\mu-2)a-d\mu+3}{4(3a^{2}-3a+1)}.\]
Moreover, we have \(A_{Y_{1},\Delta_{1}}(E)=2-\frac{1}{2}=\frac{3}{2}\), so that
\[\frac{S(W_{\bullet,\bullet}^{Y_{1}};E)}{A_{Y_{1},\Delta_{1}}(E)}=\frac{4a^{3}d \mu+6(1-d\mu)a^{2}+4(d\mu-2)a-d\mu+3}{6(3a^{2}-3a+1)}.\]
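The double integral above can also be checked symbolically. In the sketch below (an addition for verification, assuming SymPy), the chamber-by-chamber values of \(\widetilde{P}(u,v)^{2}\) were expanded by hand from the intersection table above, and we substitute \(t=a-\mu u\):

```python
# A sketch checking the double integral for S(W^{Y_1};E), assuming SymPy.
# The chamber values of P~(u,v)^2 were expanded by hand from the intersection
# table (S~^2 = -d*mu, S~.f~ = 1, S~.E = 0, f~^2 = -1, f~.E = 1, E^2 = -1),
# and we substitute t = a - mu*u, so du = dt/mu up to orientation.
from sympy import symbols, integrate, simplify

a, d, mu, t, v = symbols('a d mu t v', positive=True)
m = d*mu  # shorthand

P1 = 2*t*m - m - v**2        # u in [0, (a-1)/mu], v in [0, 1]
P2 = 2*(t*m - v) - m + 1     # v in [1, t*m - m + 1]
P3 = (t*m - v)**2/(m - 1)    # v in [t*m - m + 1, t*m]
Q1 = t**2*m - v**2           # u in [(a-1)/mu, a/mu], v in [0, t]
Q2 = (t*m - v)**2/(m - 1)    # v in [t, t*m]

inner1 = (integrate(P1, (v, 0, 1)) + integrate(P2, (v, 1, t*m - m + 1))
          + integrate(P3, (v, t*m - m + 1, t*m)))
inner2 = integrate(Q1, (v, 0, t)) + integrate(Q2, (v, t, t*m))

S = 3/(d*(3*a**2 - 3*a + 1))*(integrate(inner1, (t, 1, a))
                              + integrate(inner2, (t, 0, 1)))/mu
expected = (4*a**3*m + 6*(1 - m)*a**2 + 4*(m - 2)*a - m + 3)/(4*(3*a**2 - 3*a + 1))
print(simplify(S - expected))  # expected output: 0
```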
Let \(Q\) be a point in \(E\). Then, using results from [6, Section 1.7], we compute
\[S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q) =\frac{3}{(S^{-}+aH)^{3}}\int\limits_{0}^{\frac{a}{\mu}}\int\limits _{0}^{(a-\mu u)d\mu}\big{(}\widetilde{P}(u,v)\cdot E\big{)}^{2}dvdu+F_{q}(W^{ \widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet})\] \[=\frac{6a^{2}-8a+3}{4(3a^{2}-3a+1)}+F_{Q}\big{(}W^{\widetilde{Y_{ 1}},E}_{\bullet,\bullet,\bullet}\big{)},\]
where
\[F_{Q}\big{(}W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet}\big{)}=\frac{6}{ (S^{-}+aH)^{3}}\int\limits_{0}^{\frac{a}{\mu}}\int\limits_{0}^{(a-\mu u)d\mu} \big{(}\widetilde{P}(u,v)\cdot E\big{)}\cdot\mathrm{ord}_{Q}\big{(}\widetilde{ N}(u,v)|_{E}\big{)}dvdu,\]
because \(P\not\in\mathrm{Supp}(N(u))\) for \(u\in[0,\frac{a}{\mu}]\). Notice that \(F_{Q}(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet})\neq 0\) only when \(Q\in\widetilde{f}\). Thus, there are three cases to consider.
* \(Q=E\cap\widetilde{f}\). Then \[F_{Q}\big{(}W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet}\big{)}=\frac{4a^{3}d\mu-6(d\mu+1)a^{2}+4(d\mu+2)a-d\mu-3}{4(3a^{2}-3a+1)}\] and \(A_{E,\Delta_{E}}(Q)=1\) since \(Q\not\in\widetilde{B}_{1}\). Hence, we have \[\frac{S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q)}{A_{E,\Delta_{E} }(Q)}=\frac{d\mu(2a-1)(2a^{2}-2a+1)}{4(3a^{2}-3a+1)}.\]
* \(Q\in E\cap\widetilde{B}_{1}\). Then \(A_{E,\Delta_{E}}(Q)=\frac{1}{2}\), so that \[\frac{S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q)}{A_{E,\Delta_{E} }(Q)}=\frac{6a^{2}-8a+3}{2(3a^{2}-3a+1)}.\]
* \(Q\in E\) away from \(\widetilde{f}\) and \(\widetilde{B}_{1}\). Then \(A_{E,\Delta_{E}}(Q)=1\), so that \[\frac{S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q)}{A_{E,\Delta_{E} }(Q)}=\frac{6a^{2}-8a+3}{4(3a^{2}-3a+1)}.\]
The value in the third case is exactly half of the value in the second case, so it does not affect the maximum. So, using (3.6), we obtain the inequality
\[\frac{1}{\delta_{P}(Y,\Delta)}\leqslant\max\bigg{\{}\frac{(2a-1 )(2a^{2}-2a+1)}{4\mu(3a^{2}-3a+1)},\\ \frac{4a^{3}d\mu+6(1-d\mu)a^{2}+4(d\mu-2)a-d\mu+3}{6(3a^{2}-3a+1)},\\ \frac{d\mu(2a-1)(2a^{2}-2a+1)}{4(3a^{2}-3a+1)},\frac{6a^{2}-8a+3} {2(3a^{2}-3a+1)}\bigg{\}}. \tag{3.8}\]
Recall from Remark 3.4 that \(d\mu^{2}\geqslant 1\). This allows us to conclude
\[\frac{d\mu(2a-1)(2a^{2}-2a+1)}{4(3a^{2}-3a+1)}\geqslant\frac{(2a-1)(2a^{2}-2a +1)}{4\mu(3a^{2}-3a+1)}\]
so we can discard the first term in (3.8). Moreover, since \(d\mu\geqslant 2\), we have
\[\frac{4a^{3}d\mu+6(1-d\mu)a^{2}+4(d\mu-2)a-d\mu+3}{6(3a^{2}-3a+1)} \leqslant k_{3}(a,d,\mu),\] \[\frac{d\mu(2a-1)(2a^{2}-2a+1)}{4(3a^{2}-3a+1)} \leqslant k_{3}(a,d,\mu),\] \[\frac{6a^{2}-8a+3}{2(3a^{2}-3a+1)} \leqslant k_{3}(a,d,\mu),\]
which gives \(\delta_{P}(Y,\Delta;D(a))\geqslant\frac{1}{k_{3}(a,d,\mu)}\).
Now, we deal with the case when \(f\) is tangent to \(B\) at the point \(P\).
**Lemma 3.9**.: _Suppose \(B\) and \(f\) are tangent at \(P\). Then \(\delta_{P}(Y,\Delta;D(a))\geqslant\frac{1}{k_{3}(a,d,\mu)}\)._
Proof.: Now, we let \(h\colon\widetilde{Y}_{1}\to Y_{1}\) be the \((1,2)\)-weighted blowup of the point \(P\) such that the curves \(\widetilde{B}_{1}\) and \(\widetilde{f}\) are disjoint. Then \(\widetilde{f}=h^{*}(f)-2E\). Take \(v\in\mathbb{R}_{\geqslant 0}\). Then
\[h^{*}(D_{1})-vE\equiv\begin{cases}\widetilde{S}_{1}^{-}+(a-\mu u)d\mu\widetilde {f}+(2(a-\mu u)d\mu-v)E&\text{if }u\in[0,\frac{a-1}{\mu}],\\ (a-\mu u)(\widetilde{S}_{1}^{-}+d\mu\widetilde{f})+(2(a-\mu u)d\mu-v)E&\text{ if }u\in[\frac{a-1}{\mu},\frac{a}{\mu}].\end{cases}\]
Moreover, we have the following intersection numbers:
\[\begin{array}{|c||c|c|c|}\hline\bullet&\widetilde{S}_{1}^{-}&\widetilde{f}&E \\ \hline\hline\widetilde{S}_{1}^{-}&-d\mu&1&0\\ \hline\widetilde{f}&1&-2&1\\ \hline E&0&1&-\frac{1}{2}\\ \hline\end{array}\]
Thus, the divisor \(h^{*}(D_{1})-vE\) is pseudo-effective \(\iff\)\(v\leqslant 2(a-\mu u)d\mu\).
If \(u\in[0,\frac{a-1}{\mu}]\), the positive part of the Zariski decomposition of \(h^{*}(D_{1})-vE\) is
\[\widetilde{P}(u,v)\equiv\begin{cases}\widetilde{S}_{1}^{-}+(a-\mu u)d\mu \widetilde{f}+(2(a-\mu u)d\mu-v)E&\text{if }v\in[0,1]\\ \widetilde{S}_{1}^{-}+((a-\mu u)d\mu+\frac{1-v}{2})\widetilde{f}+(2(a-\mu u) d\mu-v)E&\text{if }v\in[1,-2d\mu^{2}u+2ad\mu-2d\mu+1]\\ \frac{-2d\mu^{2}u+2ad\mu-v}{2d\mu-1}(\widetilde{S}_{1}^{-}+d\mu\widetilde{f} +(2d\mu-1)E)&\text{if }v\in[-2d\mu^{2}u+2ad\mu-2d\mu+1,2(a-\mu u)d\mu],\end{cases}\]
and the negative part is
\[\widetilde{N}(u,v)=\begin{cases}0&\text{if }v\in[0,1]\\ \frac{v-1}{2}\widetilde{f}&\text{if }v\in[1,-2d\mu^{2}u+2ad\mu-2d\mu+1]\\ \frac{d\mu(\mu u-a+v)}{2d\mu-1}\widetilde{f}+\frac{2d\mu^{2}u-2ad\mu+2d\mu+v-1}{2d\mu-1}\widetilde{S}_{1}^{-}&\text{if }v\in[-2d\mu^{2}u+2ad\mu-2d\mu+1,2(a-\mu u)d\mu].\end{cases}\]
Similarly, if \(u\in[\frac{a-1}{\mu},\frac{a}{\mu}]\), the positive part of the Zariski decomposition of \(h^{*}(D_{1})-vE\) is
\[\widetilde{P}(u,v)\equiv\begin{cases}(a-\mu u)(\widetilde{S}_{1}^{-}+d\mu \widetilde{f})+(2(a-\mu u)d\mu-v)E&\text{if }v\in[0,a-\mu u]\\ \frac{-2d\mu^{2}u+2ad\mu-v}{2d\mu-1}(\widetilde{S}_{1}^{-}+d\mu\widetilde{f} +(2d\mu-1)E)&\text{if }v\in[a-\mu u,2(a-\mu u)d\mu],\end{cases}\]
and the negative part is
\[\widetilde{N}(u,v)=\begin{cases}0&\text{if }v\in[0,a-\mu u]\\ \frac{d\mu(\mu u-a+v)}{2d\mu-1}\widetilde{f}+\frac{\mu u-a+v}{2d\mu-1} \widetilde{S}_{1}^{-}&\text{if }v\in[a-\mu u,2(a-\mu u)d\mu].\end{cases}\]
Now, using results from [6, Section 1.7], we compute
\[S(W^{Y_{1}}_{\bullet,\bullet};E)=\frac{3}{D^{3}}\int\limits_{0} ^{\frac{a}{\mu}}\int\limits_{0}^{2(a-\mu u)d\mu}\text{vol}(D_{1}-vE)dvdu =\frac{3}{(S^{-}+aH)^{3}}\int\limits_{0}^{\frac{a}{\mu}}\int \limits_{0}^{2(a-\mu u)d\mu}\widetilde{P}(u,v)^{2}dvdu\] \[=\frac{1}{4}\cdot\frac{8a^{3}d\mu+6(1-2d\mu)a^{2}+8(d\mu-1)a-2d \mu+3}{3a^{2}-3a+1}.\]
Moreover, since \(A_{Y_{1},\Delta_{1}}(E)=2\), we have
\[\frac{S(W^{Y_{1}}_{\bullet,\bullet};E)}{A_{Y_{1},\Delta_{1}}(E)}=\frac{1}{8} \cdot\frac{8a^{3}d\mu+6(1-2d\mu)a^{2}+8(d\mu-1)a-2d\mu+3}{3a^{2}-3a+1}.\]
Let \(Q\) be a point in \(E\). Using results from [6, Section 1.7], we get
\[S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q) =\frac{3}{(S^{-}+aH)^{3}}\int\limits_{0}^{\frac{a}{\mu}}\int \limits_{0}^{2(a-\mu u)d\mu}\big{(}\widetilde{P}(u,v)\cdot E\big{)}^{2}dvdu+F_ {Q}\big{(}W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet}\big{)}\] \[=\frac{1}{8}\cdot\frac{6a^{2}-8a+3}{3a^{2}-3a+1}+F_{Q}\big{(}W^{ \widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet}\big{)}\]
where
\[F_{Q}\big{(}W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet} \big{)}=\frac{6}{(S^{-}+aH)^{3}}\int\limits_{0}^{\frac{a}{\mu}}\int\limits_{ 0}^{2(a-\mu u)d\mu}(\widetilde{P}(u,v)\cdot E)\cdot\text{ord}_{Q}(\widetilde{ N}(u,v)|_{E})dvdu.\]
There are three cases to consider.
* \(Q=E\cap\widetilde{f}\). Then \[F_{Q}(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet})=\frac{1}{8}\cdot\frac{8 a^{3}d\mu-6(2d\mu+1)a^{2}+8(d\mu+1)a-2d\mu-3}{3a^{2}-3a+1}\] and \(A_{E,\Delta_{E}}(Q)=1\) since \(Q\not\in\widetilde{B}_{1}\). Hence, we have \[\frac{S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q)}{A_{E,\Delta_{E}}(Q)}= \frac{d\mu}{4}\cdot\frac{(2a-1)(2a^{2}-2a+1)}{3a^{2}-3a+1}.\]
* \(Q\in E\cap\widetilde{B}_{1}\). Then \(A_{E,\Delta_{E}}(Q)=\frac{1}{2}\), so that \[\frac{S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q)}{A_{E,\Delta_{E}} (Q)}=\frac{1}{4}\cdot\frac{6a^{2}-8a+3}{3a^{2}-3a+1}.\]
* \(Q\in E\) is the \(\mathbb{A}_{1}\) singularity. Then \(A_{E,\Delta_{E}}(Q)=\frac{1}{2}\) and so \[\frac{S(W^{\widetilde{Y_{1}},E}_{\bullet,\bullet,\bullet};Q)}{A_{E,\Delta_{E} }(Q)}=\frac{1}{4}\cdot\frac{6a^{2}-8a+3}{3a^{2}-3a+1}.\]
Now, using (3.6), we obtain the inequality
\[\frac{1}{\delta_{P}(Y,\Delta)}\leqslant\max\bigg{\{}\frac{(2a-1)(2a^ {2}-2a+1)}{4\mu(3a^{2}-3a+1)},\\ \frac{1}{8}\cdot\frac{8a^{3}d\mu+6(1-2d\mu)a^{2}+8(d\mu-1)a-2d\mu+3 }{3a^{2}-3a+1},\\ \frac{d\mu}{4}\cdot\frac{(2a-1)(2a^{2}-2a+1)}{3a^{2}-3a+1},\frac {1}{4}\cdot\frac{6a^{2}-8a+3}{3a^{2}-3a+1}\bigg{\}}. \tag{3.10}\]
Now, arguing as at the end of the proof of Lemma 3.7, we find
\[\frac{1}{\delta_{P}(Y,\Delta)}\leqslant\frac{1}{8}\cdot\frac{8a^{3}d\mu+6(1-2 d\mu)a^{2}+8(d\mu-1)a-2d\mu+3}{3a^{2}-3a+1},\]
and the result follows.
Proposition 3.5 is proved.
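As a consistency check, the bound obtained at the end of the proof of Lemma 3.9 agrees with the closed formula for \(k_{n}(a,d,\mu)\) recalled in the proof of Lemma 3.13 below, specialized to \(n=3\); here is a short symbolic sketch (an addition for verification, assuming SymPy):

```python
# A consistency check, assuming SymPy: the bound from Lemma 3.9 matches the
# closed formula for k_n(a, d, mu) (see the proof of Lemma 3.13 below) at n = 3.
from sympy import symbols, simplify

a, d, mu = symbols('a d mu', positive=True)
m = d*mu
k3_closed = ((2*m + 1)*a**4 - (a + 3)*(a - 1)**3 - 2*m*(a - 1)**4)/(8*(a**3 - (a - 1)**3))
k3_bound = (8*a**3*m + 6*(1 - 2*m)*a**2 + 8*(m - 1)*a - 2*m + 3)/(8*(3*a**2 - 3*a + 1))
print(simplify(k3_closed - k3_bound))  # expected output: 0
```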
### The induction
Let us use all assumptions and notation introduced in Section 3. Recall that \(\mu\) is the smallest rational number for which \(\mu L\) is a very ample Cartier divisor on the variety \(V\), and \(d=L^{n-1}\). Then
\[\mu^{n-1}d=(\mu L)^{n-1}\geqslant 1.\]
Let us prove Proposition 3.3 by induction on \(\dim(Y)=n\geqslant 3\) -- the base of induction (the case \(n=3\)) was established above.
Therefore, we suppose that Proposition 3.3 holds for varieties of dimension \(n-1\geqslant 3\). Let \(P\) be a point in \(Y\) such that \(P\not\in S^{-}\). We must prove that
\[\delta_{P}(Y,\Delta;D(a))\geqslant\frac{1}{k_{n}(a,d,\mu)},\]
where \(k_{n}(a,d,\mu)\) is presented in Theorem 1.10. We will only consider the case when \(P\in B\), since the case \(P\not\in B\) is simpler and similar. Thus, we suppose that \(P\in B\).
Let \(V_{n-1}\) be a general divisor in \(|\mu L|\) that contains the point \(\pi(P)\). Set \(Y_{n-1}=\pi^{*}(V_{n-1})\). For simplicity, set \(D=D(a)\). First, let us compute \(S_{D}(Y_{n-1})\). Take \(u\in\mathbb{R}_{\geqslant 0}\). Then
\[D(a)-uY_{n-1}\sim_{\mathbb{R}}S^{-}+(a-\mu u)H,\]
so \(D(a)-uY_{n-1}\) is pseudo-effective \(\iff u\leqslant\frac{a}{\mu}\). For \(u\in[0,\frac{a}{\mu}]\), let \(P(u)\) be the positive part of the Zariski decomposition of \(D(a)-uY_{n-1}\), and let \(N(u)\) be its negative part. Then
\[P(u)\equiv\begin{cases}S^{-}+(a-\mu u)H=D(a-\mu u)&\text{if }u\in[0,\frac{a-1}{ \mu}],\\ (a-\mu u)(S^{-}+H)=(a-\mu u)D(1)&\text{if }u\in[\frac{a-1}{\mu},\frac{a}{\mu}], \end{cases}\]
and
\[N(u)=\begin{cases}0&\text{if }u\in[0,\frac{a-1}{\mu}],\\ (\mu u+1-a)S^{-}&\text{if }u\in[\frac{a-1}{\mu},\frac{a}{\mu}].\end{cases}\]
Recall that \(S^{-}\cap S^{+}=\varnothing\). Note that \((S^{-})^{n}=(-1)^{n+1}d\) and \((S^{+})^{n}=d\). Hence, we have
\[D(a)^{n}=(S^{-}+aH)^{n}=((1-a)S^{-}+aS^{+})^{n}=d(a^{n}-(a-1)^{n}).\]
Now, we compute
\[S_{D}(Y_{n-1}) =\frac{1}{D(a)^{n}}\int_{0}^{\infty}\operatorname{vol}(D(a)-uY_{n-1 })du\] \[=\frac{1}{D(a)^{n}}\int_{0}^{\frac{a-1}{\mu}}(S^{-}+(a-\mu u)H)^{n} du+\frac{1}{D(a)^{n}}\int_{\frac{a-1}{\mu}}^{\frac{a}{\mu}}((a-\mu u)(S^{-}+H))^{n}du\] \[=\frac{1}{D(a)^{n}}\int_{0}^{\frac{a-1}{\mu}}d((-1)^{n+1}(1-a+\mu u )^{n}+(a-\mu u)^{n})du+\frac{1}{D(a)^{n}}\int_{\frac{a-1}{\mu}}^{\frac{a}{\mu} }d(a-\mu u)^{n}du\] \[=\frac{a^{n+1}-(a-1)^{n+1}}{\mu(n+1)(a^{n}-(a-1)^{n})}.\]
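The formula for \(S_{D}(Y_{n-1})\) can be re-checked symbolically for small values of \(n\); a minimal sketch (assuming SymPy):

```python
# Re-checking the formula for S_D(Y_{n-1}) for small n (a sketch, assuming SymPy).
from sympy import symbols, integrate, simplify

a, d, mu, u = symbols('a d mu u', positive=True)
for n in range(3, 7):
    Dn = d*(a**n - (a - 1)**n)                                   # D(a)^n
    vol1 = d*((-1)**(n + 1)*(1 - a + mu*u)**n + (a - mu*u)**n)   # (S^- + (a - mu*u)H)^n
    vol2 = d*(a - mu*u)**n                                       # ((a - mu*u)(S^- + H))^n
    S = (integrate(vol1, (u, 0, (a - 1)/mu))
         + integrate(vol2, (u, (a - 1)/mu, a/mu)))/Dn
    expected = (a**(n + 1) - (a - 1)**(n + 1))/(mu*(n + 1)*(a**n - (a - 1)**n))
    assert simplify(S - expected) == 0
print("S_D(Y_{n-1}) formula verified for n = 3, ..., 6")
```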
Set
\[\operatorname{Res}_{n}(a)=\frac{a^{n+1}-(a+n)(a-1)^{n}}{2(n+1)(a^{n}-(a-1)^{n })}.\]
**Lemma 3.11**.: _One has \(k_{n}(a,d,\mu)=S_{D(a)}(Y_{n-1})d\mu^{n-1}+\operatorname{Res}_{n}(a)\) and \(\operatorname{Res}_{n}(a)>0\)._
Proof.: The equality follows from the formulas for \(k_{n}(a,d,\mu)\) and \(S_{D(a)}(Y_{n-1})\).
Let us show that \(\operatorname{Res}_{n}(a)>0\). We may assume that \(a>1\). The denominator is clearly positive. Hence, we only need to verify that \(a^{n+1}-(a+n)(a-1)^{n}>0\). But
\[\left(\frac{a}{a-1}\right)^{n}=\left(1+\frac{1}{a-1}\right)^{n}=\sum_{i=0}^{ n}\binom{n}{i}\bigg{(}\frac{1}{a-1}\bigg{)}^{i}>1+\frac{n}{a-1}>1+\frac{n}{a}= \frac{a+n}{a},\]
which gives \(a^{n+1}-(a+n)(a-1)^{n}>0\). This shows that \(\operatorname{Res}_{n}(a)>0\).
Set \(\Delta_{n-1}=\Delta|_{Y_{n-1}}\). Then \(S_{D}(Y_{n-1})\leqslant k_{n}(a,d,\mu)\) by Lemma 3.11, since \(d\mu^{n-1}\geqslant 1\). Therefore, using [2], we see that \(\delta_{P}(Y,\Delta;D)\geqslant\frac{1}{k_{n}(a,d,\mu)}\) provided that
\[S(V_{\bullet,\bullet}^{Y_{n-1}};E)\leqslant k_{n}(a,d,\mu)A_{Y_{n-1},\Delta_{ n-1}}(E), \tag{3.12}\]
for every prime divisor \(E\) over the variety \(Y_{n-1}\) such that its center on \(Y_{n-1}\) contains \(P\), where \(A_{Y_{n-1},\Delta_{n-1}}(E)\) is the log discrepancy, and \(S(V_{\bullet,\bullet}^{Y_{n-1}};E)\) is defined in [6, Section 1.7].
Suppose that \(n\geqslant 4\). Let us prove (3.12) using Proposition 3.3 applied to \((Y_{n-1},\Delta_{n-1})\).
Let \(E\) be a prime divisor over \(Y_{n-1}\) whose center in \(Y_{n-1}\) contains \(P\). Since \(P\not\in S^{-}\), it follows from [6, Corollary 1.108] that
\[S(V_{\bullet,\bullet}^{Y_{n-1}};E)=\frac{n}{D^{n}}\int\limits_{0 }^{\frac{a}{\mu}}\Bigg{(}\int\limits_{0}^{\infty}\operatorname{vol}(P(u)|_{Y_{ n-1}}-vE)dv\Bigg{)}du=\] \[=\frac{n}{D^{n}}\int\limits_{0}^{\frac{a-1}{\mu}}\int\limits_{0} ^{\infty}\operatorname{vol}(S^{-}+(a-\mu u)H-vE)dvdu+\frac{n}{D^{n}}\int \limits_{\frac{a-1}{\mu}}^{\frac{a}{\mu}}\int\limits_{0}^{\infty}\operatorname {vol}((a-\mu u)(S^{-}+H)-vE)dvdu=\] \[=\frac{n}{D^{n}}\int\limits_{0}^{\frac{a-1}{\mu}}\int\limits_{0} ^{\infty}\operatorname{vol}(S^{-}+(a-\mu u)H-vE)dvdu+\frac{n}{D^{n}}\int \limits_{\frac{a-1}{\mu}}^{\frac{a}{\mu}}(a-\mu u)^{n}\int\limits_{0}^{ \infty}\operatorname{vol}(S^{-}+H-vE)dvdu.\]
Now, applying Proposition 3.3 in dimension \(n-1\) (the induction hypothesis), we get
\[\int\limits_{0}^{\infty}\operatorname{vol}(S^{-}+(a-\mu u)H-vE)dv\leqslant k_{n-1 }(a-\mu u,d\mu,\mu)(S^{-}+(a-\mu u)H)^{n-1}A_{Y_{n-1},\Delta_{n-1}}(E)\]
and
\[\int\limits_{0}^{\infty}\operatorname{vol}(S^{-}+H-vE)dv\leqslant k_{n-1}(1,d \mu,\mu)(S^{-}+H)^{n-1}A_{Y_{n-1},\Delta_{n-1}}(E).\]
Hence, combining, we obtain
\[S(V_{\bullet,\bullet}^{Y_{n-1}};E)\leqslant\frac{n}{D^{n}}\int \limits_{0}^{\frac{a-1}{\mu}}k_{n-1}(a-\mu u,d\mu,\mu)(S^{-}+(a-\mu u)H)^{n-1} A_{Y_{n-1},\Delta_{n-1}}(E)du+\\ +\frac{n}{D^{n}}\int\limits_{\frac{a-1}{\mu}}^{\frac{a}{\mu}}(a- \mu u)^{n}k_{n-1}(1,d\mu,\mu)(S^{-}+H)^{n-1}A_{Y_{n-1},\Delta_{n-1}}(E)du=\\ =A_{Y_{n-1},\Delta_{n-1}}(E)\frac{n}{D^{n}}\int\limits_{0}^{ \frac{a-1}{\mu}}k_{n-1}(a-\mu u,d\mu,\mu)(S^{-}+(a-\mu u)H)^{n-1}du+\\ +A_{Y_{n-1},\Delta_{n-1}}(E)\frac{n}{D^{n}}\int\limits_{\frac{a-1 }{\mu}}^{\frac{a}{\mu}}(a-\mu u)^{n}k_{n-1}(1,d\mu,\mu)(S^{-}+H)^{n-1}du.\]
Let us compute these two integrals separately. We have
\[A_{1}:=\int\limits_{0}^{\frac{a-1}{\mu}}k_{n-1}(a-\mu u,d\mu,\mu)(S^{-}+(a- \mu u)H)^{n-1}du=\\ =d\mu^{n-1}\int\limits_{0}^{\frac{a-1}{\mu}}\frac{d\mu((-1)^{n-1 }(1-a+\mu u)^{n}+(a-\mu u)^{n})}{\mu n}du+\\ +\int\limits_{0}^{\frac{a-1}{\mu}}\frac{d\mu((a-\mu u)^{n}-(a- \mu u+n-1)(a-\mu u-1)^{n-1})}{2n}du=\\ =\frac{d^{2}\mu^{n-1}}{\mu n(n+1)}(a^{n+1}-(a-1)^{n+1}-1)+\frac{ d}{2n(n+1)}(a^{n+1}-(a+n)(a-1)^{n}-1)\]
and
\[A_{2}:=\int\limits_{\frac{a-1}{\mu}}^{\frac{a}{\mu}}(a-\mu u)^{n}k_{n-1}( 1,d\mu,\mu)(S^{-}+H)^{n-1}du=\frac{d(2d\mu^{n-2}+1)}{2n(n+1)}=\frac{d^{2}\mu^ {n-1}}{\mu n(n+1)}+\frac{d}{2n(n+1)}.\]
Adding these two integrals we get
\[\frac{n}{D(a)^{n}}(A_{1}+A_{2}) =\frac{d\mu^{n-1}}{\mu(n+1)}\frac{a^{n+1}-(a-1)^{n+1}}{a^{n}-(a-1)^ {n}}+\frac{1}{2(n+1)}\frac{a^{n+1}-(a+n)(a-1)^{n}}{a^{n}-(a-1)^{n}}\] \[=S_{D(a)}(Y_{n-1})d\mu^{n-1}+\operatorname{Res}_{n}(a).\]
This gives \(S(V_{\bullet,\bullet}^{Y_{n-1}};E)\leqslant k_{n}(a,d,\mu)A_{Y_{n-1},\Delta_{ n-1}}(E)\) by Lemma 3.11, which proves (3.12) and completes the proof of Proposition 3.3.
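The identity used in the last step is pure algebra; the following symbolic check (a sketch, assuming SymPy, with \(n\) kept as a free symbol) confirms it:

```python
# The identity n/D^n (A_1 + A_2) = S_D(Y_{n-1})*d*mu^(n-1) + Res_n(a) is pure
# algebra; a symbolic check keeping n free, assuming SymPy.
from sympy import symbols, simplify

a, d, mu, n = symbols('a d mu n', positive=True)
A1 = (d**2*mu**(n - 1)/(mu*n*(n + 1))*(a**(n + 1) - (a - 1)**(n + 1) - 1)
      + d/(2*n*(n + 1))*(a**(n + 1) - (a + n)*(a - 1)**n - 1))
A2 = d**2*mu**(n - 1)/(mu*n*(n + 1)) + d/(2*n*(n + 1))
lhs = n/(d*(a**n - (a - 1)**n))*(A1 + A2)

S = (a**(n + 1) - (a - 1)**(n + 1))/(mu*(n + 1)*(a**n - (a - 1)**n))
Res = (a**(n + 1) - (a + n)*(a - 1)**n)/(2*(n + 1)*(a**n - (a - 1)**n))
print(simplify(lhs - (S*d*mu**(n - 1) + Res)))  # expected output: 0
```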
### Applications
The only application of Theorem 1.10 we could find is Theorem 1.9. Let us use assumptions and notations of Theorem 1.10. Let \(V=\mathbb{P}^{n-1}\) and \(L=\mathcal{O}_{\mathbb{P}^{n-1}}(r)\). Suppose that \(1<\frac{n}{2}<r<n\). Then \(\mu=\frac{1}{r}\), \(d=r^{n-1}\) and \(a=\frac{n}{r}\).
**Lemma 3.13**.: _One has \(k_{n}(a,d,\mu)<1\)._
Proof.: One has
\[k_{n}(a,d,\mu)=\frac{(2d\mu^{n-2}+1)a^{n+1}-(a+n)(a-1)^{n}-2d\mu^{n-2}(a-1)^{n +1}}{2(n+1)(a^{n}-(a-1)^{n})}.\]
Thus, it is enough to show that
\[2(n+1)(a^{n}-(a-1)^{n})-\left((2d\mu^{n-2}+1)a^{n+1}-(a+n)(a-1)^{n}-2d\mu^{n-2 }(a-1)^{n+1}\right)>0.\]
Substituting \(\mu=\frac{1}{r}\), \(d=r^{n-1}\), \(a=\frac{n}{r}\), and multiplying by \(r^{n+1}\), we get the inequality
\[(n^{n}-(n-r)^{n}(r+1))(2r-n)>0,\]
which holds since \(2r-n>0\) and \(n^{n}=\big{(}(n-r)+r\big{)}^{n}\geqslant(n-r)^{n}+nr(n-r)^{n-1}>(r+1)(n-r)^{n}\), where the last inequality follows from \(n>n-r\).
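A quick numeric sanity check of Lemma 3.13 on sample values of \(n\) and \(r\), using exact rational arithmetic (an illustrative sketch; the helper `k` is ours, encoding the closed formula above):

```python
# A numeric sanity check of Lemma 3.13 on sample values of n and r, using exact
# rational arithmetic; the helper k encodes the closed formula for k_n(a, d, mu).
from fractions import Fraction

def k(n, a, d, mu):
    c = 2*d*mu**(n - 2)
    num = (c + 1)*a**(n + 1) - (a + n)*(a - 1)**n - c*(a - 1)**(n + 1)
    den = 2*(n + 1)*(a**n - (a - 1)**n)
    return num/den

for n in range(3, 9):
    for twice_r in range(n + 1, 2*n):          # n/2 < r < n, sampled at half-integers
        r = Fraction(twice_r, 2)
        val = k(n, Fraction(n)/r, r**(n - 1), 1/r)
        assert val < 1, (n, r, val)
print("k_n(a, d, mu) < 1 on all sampled (n, r)")
```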
**Lemma 3.14**.: _One has_
\[\frac{(n+1)(a^{n}-(a-1)^{n})}{(n+1-a)a^{n}+(a-1)^{n+1}}>1.\]
Proof.: The inequality is equivalent to
\[(n+1)(a^{n}-(a-1)^{n})>(n+1-a)a^{n}+(a-1)^{n+1}.\]
Substituting \(a=\frac{n}{r}\), multiplying by \(r^{n}\), and dividing by \(n\), we get \(n^{n}-(r+1)(n-r)^{n}>0\), which holds since \(1<\frac{n}{2}<r<n\).
**Lemma 3.15**.: _One has_
\[\frac{a\delta(V)(n+1)(a^{n}-(a-1)^{n})}{n(a^{n+1}-(a-1)^{n+1})}>1.\]
Proof.: We have \(\delta(V)=\delta(\mathbb{P}^{n-1})=1\). Thus, the required inequality is equivalent to
\[n(a^{n+1}-(a-1)^{n+1})-a(n+1)(a^{n}-(a-1)^{n})<0.\]
Substituting \(a=\frac{n}{r}\), multiplying by \(r^{n+1}\), and dividing by \(n\), we get \(n^{n}-(r+1)(n-r)^{n}>0\), which was established in the proof of Lemma 3.13.
Theorem 1.9 follows from Lemmas 3.13, 3.14, 3.15 and Theorem 1.10.
## 4. Proof of Theorem 1.12
The goal of this section is to prove Theorem 1.12 and describe singular K-polystable limits of smooth Fano \(3\)-folds in the deformation family №4.2. We start with the following (probably well-known) result, which we could not find in the literature.
**Proposition 4.1**.: _Let \(C\) be a \((2,2)\)-curve in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). Then \(C\) is_
* _GIT stable for_ \(\operatorname{PGL}_{2}(\mathbb{C})\times\operatorname{PGL}_{2}(\mathbb{C})\)_-action_ \(\iff\) _it is smooth,_
* _GIT strictly polystable_ \(\iff\) _it is one of the curves in Theorem_ 1.12_._
Proof.: Choose homogeneous coordinates \(x,\,y\) of degree \((1,0)\) on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), and choose homogeneous coordinates \(u\), \(v\) of degree \((0,1)\). Then \(C\) is given by
\[\sum_{i=0}^{2}\sum_{j=0}^{2}a_{ij}x^{2-i}y^{i}u^{2-j}v^{j}=0.\]
Observe that any one parameter subgroup \(\lambda\colon\mathbb{C}^{*}\to\operatorname{PSL}_{2}(\mathbb{C})\times \operatorname{PSL}_{2}(\mathbb{C})\) is conjugate to a diagonal one of the form
\[t\longmapsto\left(\begin{pmatrix}t^{r_{0}}&0\\ 0&t^{-r_{0}}\end{pmatrix},\begin{pmatrix}t^{r_{1}}&0\\ 0&t^{-r_{1}}\end{pmatrix}\right)\]
for some integers \(r_{1}\geqslant r_{0}\geqslant 0\) and \(r_{1}>0\), which we will write as \(\lambda=(r_{0},-r_{0},r_{1},-r_{1})\). Then the Hilbert-Mumford function is
\[\mu(f,\lambda)=\max\big{\{}r_{0}(2-2i)+r_{1}(2-2j)\ :\ a_{ij}\neq 0\big{\}}.\]
Clearly, if \(\mu(f,\lambda)\leqslant 0\), then \(a_{00}=a_{10}=a_{01}=0\). Moreover, if this inequality is strict, then we additionally have \(a_{11}=0\). Furthermore, we have
\[\mu(x^{2}v^{2},\lambda)=-\mu(y^{2}u^{2},\lambda).\]
So, at least one of \(a_{20}\) and \(a_{02}\) is zero. Without loss of generality, we assume that \(a_{20}=0\). Therefore, if \(\mu(f,\lambda)<0\), then \(a_{00}=a_{10}=a_{01}=a_{11}=a_{20}=0\).
Suppose that \(C\) is singular at the point \(([1:0],[1:0])\), so that \(a_{00}=a_{10}=a_{01}=0\), and consider the one parameter subgroup \(\lambda=(1,-1,1,-1)\). Then
\[\mu(f,\lambda)=4-2(i+j),\]
which is non-positive if and only if \(i+j\geqslant 2\). But, since \(a_{ij}=0\) whenever \(i+j<2\), we conclude that \(\mu(f,\lambda)\leqslant 0\) and \(C\) is not stable.
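The weight computation above is easy to verify by direct enumeration; a tiny sketch (the helper `weight` is hypothetical, introduced only for this sanity check):

```python
# A tiny enumeration double-checking the weight computation above; the helper
# `weight` is hypothetical, introduced only for this sanity check.
def weight(i, j, r0, r1):
    # weight of the monomial x^(2-i) y^i u^(2-j) v^j under lambda = (r0, -r0, r1, -r1)
    return r0*(2 - 2*i) + r1*(2 - 2*j)

# For lambda = (1, -1, 1, -1) the weight equals 4 - 2(i + j) ...
assert all(weight(i, j, 1, 1) == 4 - 2*(i + j) for i in range(3) for j in range(3))
# ... and is non-positive exactly when i + j >= 2:
assert all((weight(i, j, 1, 1) <= 0) == (i + j >= 2) for i in range(3) for j in range(3))
print("ok")
```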
Conversely, suppose there exists a one parameter subgroup \(\lambda\) for which \(\mu(f,\lambda)\leqslant 0\). Note that
\[\mu(x^{2-i}y^{i}u^{2-j}v^{j},\lambda)>0\]
for any one parameter subgroup \(\lambda\) provided that \(i+j<2\). This gives \(a_{00}=a_{10}=a_{01}=0\), so that the curve \(C\) is singular at \(([1:0],[1:0])\).
Now, let us describe the unstable locus. Suppose that \(a_{00}=a_{10}=a_{01}=a_{11}=a_{20}=0\). Consider the one parameter subgroup \(\lambda=(1,-1,2,-2)\). Then
\[\mu(f,\lambda)=6-2(i+2j)\]
which is negative if and only if \(i+2j>3\). But since \(a_{ij}=0\) whenever \(i+2j\leqslant 3\), it follows that \(\mu(f,\lambda)<0\). Similarly, one can show that \(C\) is GIT-unstable if it can be given by
\[a_{02}x^{2}v^{2}+a_{12}xyv^{2}+a_{21}y^{2}uv+a_{22}y^{2}v^{2}=0.\]
This describes all possibilities for the curve \(C\) to be GIT-semistable, which easily implies the description of GIT-polystable \((2,2)\)-curves.
Now, we set \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\). Let \(L=\mathcal{O}_{V}(1,1)\), let \(R\) be a curve in \(|2L|\), set
\[Y=\mathbb{P}\big{(}\mathcal{O}_{V}\oplus\mathcal{O}_{V}(L)\big{)},\]
let \(\pi\colon Y\to V\) be the natural projection, let \(S^{-}\) and \(S^{+}\) be disjoint sections of \(\pi\) such that
\[S^{+}\sim S^{-}+\pi^{*}(L).\]
Finally, we set \(F=\pi^{*}(R)\), and let \(\phi\colon X\to Y\) be the blow up at the intersection \(S^{+}\cap F\). If \(R\) is smooth, then \(X\) is K-polystable [6]. Theorem 1.12 says that \(X\) is also K-polystable in the case when \(R\) is one of the following singular curves:
1. \(C_{1}+C_{2}\), where \(C_{1}\) and \(C_{2}\) are smooth curves in \(|L|\) such that \(|C_{1}\cap C_{2}|=2\);
2. \(\ell_{1}+\ell_{2}+\ell_{3}+\ell_{4}\), where \(\ell_{1}\) and \(\ell_{2}\) are two distinct smooth curves of degree \((1,0)\), and \(\ell_{3}\) and \(\ell_{4}\) are two distinct smooth curves of degree \((0,1)\);
3. \(2C\), where \(C\) is a smooth curve in \(|L|\).
Now, let us prove Theorem 1.12. We start with
_Remark 4.2_.: Suppose that \(R=\ell_{1}+\ell_{2}+\ell_{3}+\ell_{4}\), where \(\ell_{1}\) and \(\ell_{2}\) are two distinct smooth curves in \(V\) of degree \((1,0)\), and \(\ell_{3}\) and \(\ell_{4}\) are two distinct smooth curves of degree \((0,1)\). Then \(X\) is toric, and it corresponds to the moment polytope in \(M_{\mathbb{R}}\) whose vertices are
\[\begin{array}{cccc}(0,0,1),&(1,0,1),&(1,1,1),&(0,1,1),\\ (1,1,0),&(-1,1,0),&(-1,-1,0),&(1,-1,0),\\ (0,0,-1),&(-1,0,-1),&(-1,-1,-1),&(0,-1,-1).\end{array}\]
The vertex set is invariant under \((x,y,z)\mapsto(-x,-y,-z)\), so the barycenter of the moment polytope is the origin, and \(X\) is K-polystable.
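A numerical confirmation of the barycenter claim (a sketch, assuming NumPy and SciPy are available): triangulate the polytope and average the volume-weighted centroids of the tetrahedra.

```python
# A numerical check of the barycenter claim, assuming NumPy and SciPy:
# triangulate the polytope and average volume-weighted tetrahedron centroids.
import numpy as np
from scipy.spatial import Delaunay

verts = np.array([
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
    (1, 1, 0), (-1, 1, 0), (-1, -1, 0), (1, -1, 0),
    (0, 0, -1), (-1, 0, -1), (-1, -1, -1), (0, -1, -1),
], dtype=float)

centroid, volume = np.zeros(3), 0.0
for simplex in Delaunay(verts).simplices:
    p = verts[simplex]
    v = abs(np.linalg.det(p[1:] - p[0]))/6.0  # volume of one tetrahedron
    centroid += v*p.mean(axis=0)
    volume += v
print(centroid/volume)  # approximately [0. 0. 0.]
```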
Our next step is the following simple lemma:
**Lemma 4.3**.: _Suppose \(R=2C\) for a smooth curve \(C\in|L|\). Then \(X\) is K-polystable._
Proof.: In this case, the morphism \(\phi\) is a weighted blow up at the intersection \(\pi^{*}(C)\cap S^{+}\), and \(X\) has non-isolated singularities along a smooth curve, which we will denote by \(\overline{C}\). The threefold \(X\) can be obtained in a slightly different way. Let us describe it.
Set \(W=V\times\mathbb{P}^{1}\), let \(\varpi\colon W\to V\) be the natural projection, let \(\widetilde{S}^{-}\) and \(\widetilde{S}^{+}\) be its disjoint sections, and let \(\widetilde{E}=\varpi^{*}(C)\). Then there exists a commutative diagram
where \(\alpha\) blows up the intersection curves \(\widetilde{E}\cap\widetilde{S}^{-}\) and \(\widetilde{E}\cap\widetilde{S}^{+}\), and \(\psi\) contracts the proper transform of the surface \(\widetilde{E}\) to the curve \(\overline{C}\). Moreover, we may assume that \(\phi\circ\psi\) maps the proper transforms of the surfaces \(\widetilde{S}^{-}\) and \(\widetilde{S}^{+}\) to the surfaces \(S^{-}\) and \(S^{+}\), respectively.
Let \(\widehat{E}\) be the proper transform on the threefold \(U\) of the surface \(\widetilde{E}\). We may assume that the curve \(C\) is the diagonal curve in \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\). Using this, we see that
\[\operatorname{Aut}(X)\cong\operatorname{Aut}(U)\cong\operatorname{Aut}\bigl{(} W,\widetilde{E}+\widetilde{S}^{-}+\widetilde{S}^{+}\bigr{)}\cong\operatorname{ PGL}_{2}(\mathbb{C})\times(\mathbb{G}_{m}\rtimes\boldsymbol{\mu}_{2})\times \boldsymbol{\mu}_{2},\]
and \(\widehat{E}\) is the only \(\operatorname{Aut}(X)\)-invariant prime divisor over \(X\). Thus, using [41], we conclude that the threefold \(X\) is K-polystable if \(\beta(\widehat{E})>0\). Let us compute \(\beta(\widehat{E})\).
We let \(F^{-}\) and \(F^{+}\) be \(\alpha\)-exceptional surfaces such that \(\alpha(F^{-})\subset\widetilde{S}^{-}\) and \(\alpha(F^{+})\subset\widetilde{S}^{+}\), let \(\widehat{S}^{-}\) and \(\widehat{S}^{+}\) be the proper transforms on \(U\) of the surfaces \(S^{-}\) and \(S^{+}\), respectively. Further, set \(H_{1}=(\operatorname{pr}_{1}\circ\alpha)^{*}(\mathcal{O}_{\mathbb{P}^{1}}(1))\), \(H_{2}=(\operatorname{pr}_{2}\circ\alpha)^{*}(\mathcal{O}_{\mathbb{P}^{1}}(1))\), \(H_{3}=(\operatorname{pr}_{3}\circ\alpha)^{*}(\mathcal{O}_{\mathbb{P}^{1}}(1))\), where \(\operatorname{pr}_{1}\), \(\operatorname{pr}_{2}\), \(\operatorname{pr}_{3}\) are the projections \(W\to\mathbb{P}^{1}\) such that \(\operatorname{pr}_{1}\) and \(\operatorname{pr}_{2}\) factor through \(\varpi\). Then
\[\psi^{*}(-K_{X})\sim-K_{U}\sim 2(H_{1}+H_{2}+H_{3})-F^{-}-F^{+}\sim 2\widehat{E }+\widehat{S}^{-}+\widehat{S}^{+}+2(F^{-}+F^{+}).\]
Now, we take \(u\in\mathbb{R}_{\geqslant 0}\). Then the divisor \(\psi^{*}(-K_{X})-u\widehat{E}\) is \(\mathbb{R}\)-rationally equivalent to
\[(2-u)(H_{1}+H_{2})+2H_{3}+(u-1)(F^{-}+F^{+})\sim_{\mathbb{R}}(2-u)\widehat{E}+ \widehat{S}^{-}+\widehat{S}^{+}+2(F^{-}+F^{+}),\]
and \(\widehat{S}^{-}+\widehat{S}^{+}+2(F^{-}+F^{+})\) is not big, so \(\psi^{*}(-K_{X})-u\widehat{E}\) is pseudoeffective \(\iff u\leqslant 2\). Moreover, if \(u\in[0,1]\), then the divisor \(\psi^{*}(-K_{X})-u\widehat{E}\) is nef. Furthermore, if \(u\in[1,2]\), then the Zariski decomposition of the divisor \(\psi^{*}(-K_{X})-u\widehat{E}\) is given by
\[\psi^{*}(-K_{X})-u\widehat{E}\sim_{\mathbb{R}}\underbrace{(2-u)(H_{1}+H_{2}) +2H_{3}}_{\text{positive part}}+\underbrace{(u-1)(F^{-}+F^{+})}_{\text{ negative part}}.\]
Hence, we have
\[\beta(\widehat{E})=1-\frac{1}{(-K_{X})^{3}}\int_{0}^{2}\operatorname {vol}\Bigl{(}\psi^{*}(-K_{X})-u\widehat{E}\Bigr{)}du=\\ =1-\frac{1}{28}\int_{0}^{1}\Big{(}(2-u)(H_{1}+H_{2})+2H_{3}+(u-1) (F^{-}+F^{+})\Big{)}^{3}du-\frac{1}{28}\int_{1}^{2}\Big{(}(2-u)(H_{1}+H_{2}) +2H_{3}\Big{)}^{3}du=\\ =1-\frac{1}{28}\int_{0}^{1}\big{(}8u^{3}-24u^{2}+28\big{)}du-\frac{1}{28}\int_{1}^{2}12(2-u)^{2}du=\frac{1}{14}>0,\]
which implies that \(X\) is K-polystable.
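The integral above is straightforward to confirm symbolically (a sketch, assuming SymPy):

```python
# Confirming beta(E^) = 1/14 from the integrands computed above (assuming SymPy).
from sympy import symbols, integrate, Rational

u = symbols('u')
beta = 1 - Rational(1, 28)*(integrate(8*u**3 - 24*u**2 + 28, (u, 0, 1))
                            + integrate(12*(2 - u)**2, (u, 1, 2)))
print(beta)  # expected output: 1/14
```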
To complete the proof of Theorem 1.12, let us present \(X\) as a codimension two complete intersection in a toric variety. Let \(T=(\mathbb{C}^{7}\setminus Z(I))/\mathbb{G}_{m}^{2}\), where the \(\mathbb{G}_{m}^{2}\)-action is given by
\[\left(\begin{array}{ccccccc}x&y&z&w&u&v&s\\ 1&1&1&1&1&0&2\\ 0&0&0&0&1&1&0\end{array}\right),\]
and \(I\) is the irrelevant ideal \(\langle x,y,z,w,s\rangle\cap\langle u,v\rangle\). Let \(\widetilde{\mathbb{P}}=\operatorname{Proj}(\mathcal{O}_{\mathbb{P}^{3}}\oplus \mathcal{O}_{\mathbb{P}^{3}}(1))\). Then we can identify \(\widetilde{\mathbb{P}}\) with the hypersurface in \(T\) given by
\[s=f(x,y,z,w),\]
where \(f(x,y,z,w)\) is any non-zero homogeneous polynomial of degree \(2\). Since \(Y\) can be obtained by blowing up the quadric cone over the surface \(\{xy=zw\}\subset\mathbb{P}^{3}\) at the vertex, we can identify \(Y\) with the complete intersection in \(T\) given by
\[\begin{cases}xy=zw,\\ s=f(x,y,z,w).\end{cases}\]
Then the projection \(\pi\colon Y\to V\) is induced by the map
\[(x,y,z,w,u,v,s)\mapsto(x,y,z,w),\]
where we identify \(V\) with \(\{xy=zw\}\subset\mathbb{P}^{3}\). Then the surface \(S^{-}\) is cut out on \(Y\) by \(v=0\). Moreover, we can assume that \(S^{+}\) is cut out on \(Y\) by \(u=0\), and we can identify \(R\) with the curve in \(S^{+}\) that is cut out by \(s=0\).
Let \(\varphi\colon\overline{T}\to T\) be the blow up of \(T\) along \(u=s=0\). Then \(\overline{T}=(\mathbb{C}^{8}\setminus Z(\overline{I}))/\mathbb{G}_{m}^{3}\), where the torus action is given by the matrix
\[\left(\begin{array}{cccccccc}x&y&z&w&u&v&s&t\\ 1&1&1&1&1&0&2&0\\ 0&0&0&0&1&1&0&0\\ 0&0&0&0&1&0&1&-1\end{array}\right),\]
and the irrelevant ideal
\[\overline{I}=\langle x,y,z,w,s\rangle\cap\langle x,y,z,w,t\rangle\cap\langle u,v\rangle\cap\langle u,s\rangle\cap\langle v,t\rangle.\]
Then \(\varphi\) induces the blow up of \(Y\) along \(R\). Thus, we can identify \(X\) with the complete intersection in the toric variety \(\overline{T}\) given by
\[\begin{cases}xy=zw,\\ st=f(x,y,z,w).\end{cases}\]
Now, the subgroup \(\Gamma\cong\mathbb{G}_{m}\) of the group \(\operatorname{Aut}(X)\) mentioned in Section 1 can be explicitly seen -- it consists of all automorphisms
\[(x,y,z,w,u,v,s,t)\mapsto(x,y,z,w,\lambda u,v,s,t),\]
where \(\lambda\in\mathbb{C}^{*}\). Similarly, we can choose the involution \(\iota\in\operatorname{Aut}(X)\) to be the involution
\[(x,y,z,w,u,v,s,t)\mapsto(x,y,z,w,v,u,t,s).\]
Note that \(\iota\) is not canonically defined, since we can conjugate it by an element of \(\Gamma\).
Suppose that \(R=C_{1}+C_{2}\), where \(C_{1}\) and \(C_{2}\) are smooth curves in \(|L|\) that meet transversally at two points. Then, up to a change of coordinates, we may assume that
\[f(x,y,z,w)=xy-\lambda(z^{2}+w^{2}),\]
where \(\lambda\in\mathbb{C}\) is such that \(\lambda\not\in\{0,2,-2\}\). Then \(X\) is the complete intersection in \(\overline{T}\) given by
\[\begin{cases}xy=zw,\\ st=xy-\lambda(z^{2}+w^{2}).\end{cases}\]
Note that \(\operatorname{Aut}(X)\) contains automorphisms
\[(x,y,z,w,u,v,s,t)\mapsto\Big{(}\mu x,\frac{y}{\mu},z,w,u,v,s,t\Big{)},\]
where \(\mu\in\mathbb{C}^{*}\). Similarly, the group \(\operatorname{Aut}(X)\) contains two involutions:
\[(x,y,z,w,u,v,s,t)\mapsto(y,x,z,w,u,v,s,t)\]
and
\[(x,y,z,w,u,v,s,t)\mapsto(x,y,w,z,u,v,s,t).\]
Let \(G\) be the subgroup in \(\operatorname{Aut}(X)\) that is generated by all automorphisms described above. Then \(G\cong\mathbb{G}_{m}^{2}\rtimes\boldsymbol{\mu}_{2}\), and we have the following result:
**Lemma 4.4**.: _The following assertions hold:_
1. \(X\) _does not contain_ \(G\)_-fixed points,_
2. \(X\) _does not contain_ \(G\)_-invariant irreducible curves,_
3. \(X\) _contains two_ \(G\)_-invariant irreducible surfaces -- they are cut out by_ \(z\pm w=0\)_._
Proof.: Left to the reader.
Now, we can complete the proof of Theorem 1.12. Suppose that \(X\) is not K-polystable. Using [41], we see that there is a \(G\)-invariant prime divisor \(\mathbf{F}\) over \(X\) such that \(\beta(\mathbf{F})\leqslant 0\). Let \(Z\) be the center of this divisor on \(X\). By Lemma 4.4, \(Z\) is a surface and
\[Z\sim(\pi\circ\phi)^{*}(L).\]
Then, as in [16], we compute \(\beta(\mathbf{F})=\beta(Z)>0\), which is a contradiction. This shows that \(X\) is K-polystable.
## 5. Proof of Theorem 1.13
In this section, we prove Theorem 1.13. This result describes all singular K-polystable limits of smooth Fano \(3\)-folds in the deformation family №3.9. To show this, we need
**Theorem 5.1** ([20, Theorem 2], [27, Example 7.13], [3]).: _Let \(C\) be a quartic curve in \(\mathbb{P}^{2}\). Then the curve \(C\) is_
* _GIT stable for_ \(\operatorname{PGL}_{3}(\mathbb{C})\)_-action_ \(\iff\) _it is smooth or has_ \(\mathbb{A}_{1}\) _or_ \(\mathbb{A}_{2}\)_-singularities,_
* _GIT strictly polystable_ \(\iff\) _it is one of the remaining curves in Theorem_ 1.13_._
Let us prove Theorem 1.13. Set \(V=\mathbb{P}^{2}\), set \(L=\mathcal{O}_{\mathbb{P}^{2}}(2)\), and set \(Y=\mathbb{P}(\mathcal{O}_{V}\oplus\mathcal{O}_{V}(L))\). Let \(\pi\colon Y\to V\) be the natural projection, set \(H=\pi^{*}(L)\), let \(S^{-}\) and \(S^{+}\) be disjoint sections of \(\pi\) such that \(S^{+}\sim S^{-}+H\), and let \(R\) be one of the following curves:
1. a reduced quartic curve with at most \(\mathbb{A}_{1}\) or \(\mathbb{A}_{2}\) singularities;
2. \(C_{1}+C_{2}\), where \(C_{1}\) and \(C_{2}\) are smooth conics that are tangent at two points;
3. \(C+\ell_{1}+\ell_{2}\), where \(C\) is a smooth conic, \(\ell_{1}\) and \(\ell_{2}\) are distinct lines tangent to \(C\);
4. \(2C\), where \(C\) is a smooth conic in \(|L|\).
Set \(F=\pi^{*}(R)\), and let \(\phi\colon X\to Y\) be the blow up at the complete intersection \(S^{+}\cap F\). Then \(X\) is a singular Fano threefold, and our Theorem 1.13 claims that \(X\) is K-polystable. To prove this, we start with the most singular (and the most symmetric case).
**Lemma 5.2**.: _Suppose that \(R=2C\) for a smooth conic \(C\subset\mathbb{P}^{2}\). Then \(X\) is K-polystable._
Proof.: In this case, the threefold \(X\) has non-isolated singularities along a smooth curve, and the proof is very similar to the proof of Lemma 4.3. Namely, we have
\[\operatorname{Aut}(X)\cong\operatorname{PGL}_{2}(\mathbb{C})\times(\mathbb{G}_ {m}\rtimes\boldsymbol{\mu}_{2}), \tag{5.3}\]
and there exists exactly one \(\operatorname{Aut}(X)\)-invariant prime divisor over \(X\) -- the exceptional divisor of the blow up of \(X\) along the curve \(\operatorname{Sing}(X)\). So, to check that \(X\) is K-polystable, it is enough to compute the \(\beta\)-invariant of this prime divisor. Let us give details.
As in the proof of Lemma 4.3, we set \(W=V\times\mathbb{P}^{1}\). Let \(\varpi\colon W\to V\) be the natural projection, let \(\widetilde{S}^{-}\) and \(\widetilde{S}^{+}\) be its disjoint sections, and let \(\widetilde{E}=\varpi^{*}(C)\). Then there exists the following commutative diagram:
(5.4)
such that
* \(\alpha\) is a blow up along the curves \(\widetilde{E}\cap\widetilde{S}^{-}\) and \(\widetilde{E}\cap\widetilde{S}^{+}\),
* \(\psi\) is a contraction of the proper transform of \(\widetilde{E}\) to the curve \(\operatorname{Sing}(X)\),
* \(\phi\circ\psi\) maps the proper transforms of \(\widetilde{S}^{-}\) and \(\widetilde{S}^{+}\) to \(S^{-}\) and \(S^{+}\), respectively.
This easily implies (5.3). Similarly, we see that (5.4) is \(\operatorname{Aut}(X)\)-equivariant.
Let \(\widehat{E}\) be the \(\psi\)-exceptional divisor. Then \(\widehat{E}\) is the only \(\operatorname{Aut}(X)\)-invariant prime divisor over the threefold \(X\). Thus, if \(\beta(\widehat{E})>0\), then \(X\) is K-polystable [41].
We let \(F^{-}\) and \(F^{+}\) be \(\alpha\)-exceptional surfaces such that \(\alpha(F^{-})\subset\widetilde{S}^{-}\) and \(\alpha(F^{+})\subset\widetilde{S}^{+}\), let \(\widehat{S}^{-}\) and \(\widehat{S}^{+}\) be the proper transforms on \(U\) of the surfaces \(S^{-}\) and \(S^{+}\), respectively. Set \(H_{1}=(\operatorname{pr}_{1}\circ\alpha)^{*}(\mathcal{O}_{\mathbb{P}^{1}}(1))\) for the projection \(\operatorname{pr}_{1}\colon W\to\mathbb{P}^{1}\), set \(H_{2}=(\varpi\circ\alpha)^{*}(\mathcal{O}_{V}(1))\). Then \(\widehat{E}\sim 2H_{2}-F^{-}-F^{+}\), which gives
\[\psi^{*}(-K_{X})\sim-K_{U}\sim 2H_{1}+3H_{2}-F^{-}-F^{+}\sim_{\mathbb{Q}}2H_{1 }+\frac{3}{2}\widehat{E}+\frac{1}{2}(F^{-}+F^{+}).\]
Take \(u\in\mathbb{R}_{\geqslant 0}\). Then
\[\psi^{*}(-K_{X})-u\widehat{E}\sim_{\mathbb{R}}2H_{1}+(3-2u)H_{2}+(u-1)(F^{-}+ F^{+})\sim_{\mathbb{R}}2H_{1}+\frac{3-2u}{2}\widehat{E}+\frac{1}{2}(F^{-}+F^{+}).\]
This shows that \(\psi^{*}(-K_{X})-u\widehat{E}\) is pseudoeffective \(\iff\)\(u\leqslant\frac{3}{2}\). Moreover, if \(u\in[0,1]\), then the divisor \(\psi^{*}(-K_{X})-u\widehat{E}\) is nef. If \(1<u\leqslant\frac{3}{2}\), its Zariski decomposition is
\[\psi^{*}(-K_{X})-u\widehat{E}\sim_{\mathbb{R}}\underbrace{2H_{1}+(3-2u)H_{2}} _{\text{positive part}}+\underbrace{(u-1)(F^{-}+F^{+})}_{\text{negative part}}.\]
Hence, we have
\[\beta(\widehat{E})=1-\frac{1}{(-K_{X})^{3}}\int_{0}^{\frac{3}{2} }\operatorname{vol}\Bigl{(}\psi^{*}(-K_{X})-u\widehat{E}\Bigr{)}du=\\ =1-\frac{1}{26}\int_{0}^{1}\Big{(}2H_{1}+(3-2u)H_{2}+(u-1)(F^{-}+ F^{+})\Big{)}^{3}du-\frac{1}{26}\int_{1}^{\frac{3}{2}}\Big{(}2H_{1}+(3-2u)H_{2} \Big{)}^{3}du=\\ =1-\frac{1}{26}\int_{0}^{1}16u^{3}-36u^{2}+26du-\frac{1}{26}\int _{1}^{\frac{3}{2}}24u^{2}-72u+54du=\frac{7}{26}>0,\]
which implies that \(X\) is K-polystable.
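Again, the integral is easy to confirm symbolically (a sketch, assuming SymPy):

```python
# Confirming beta(E^) = 7/26 from the integrands computed above (assuming SymPy).
from sympy import symbols, integrate, Rational

u = symbols('u')
beta = 1 - Rational(1, 26)*(integrate(16*u**3 - 36*u**2 + 26, (u, 0, 1))
                            + integrate(24*u**2 - 72*u + 54, (u, 1, Rational(3, 2))))
print(beta)  # expected output: 7/26
```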
Similarly, we can show that \(X\) is K-polystable if \(R=C_{1}+C_{2}\), where \(C_{1}\) and \(C_{2}\) are smooth conics that are tangent at two points. Indeed, in this case, the full automorphism group \(\operatorname{Aut}(X)\) contains a subgroup \(G\) such that
\[G\cong\bigl{(}\mathbb{G}_{m}\bigr{)}^{2}\rtimes\boldsymbol{\mu}_{2}^{2},\]
the threefold \(X\) does not contain \(G\)-fixed points, and the only \(G\)-invariant irreducible curve in \(X\) is a smooth fiber of the conic bundle \(\pi\circ\phi\). Therefore, arguing exactly as in the proofs of [6, Lemma 4.64] and [6, Lemma 4.66], we see that \(X\) is K-polystable.
However, this approach fails in the case when \(R\) has a singular point of type \(\mathbb{A}_{1}\) or \(\mathbb{A}_{2}\). To overcome this difficulty, we will use another approach described at the end of Section 1.
Namely, we proved in Section 2 that \(\operatorname{Aut}(X)\) contains an involution \(\iota\) such that \(\iota\) swaps the proper transforms of \(S^{-}\) and \(S^{+}\), \(X/\iota\cong Y\), and the following diagram commutes:
where \(\rho\) is the quotient map. Moreover, we also proved that the double cover \(\rho\) is ramified over a divisor \(B\in|2S^{+}|\) such that the morphism \(B\to V\) induced by \(\pi\) is a double cover ramified in the curve \(R\). Set \(\Delta=\frac{1}{2}B\). Then
\[-K_{X}\sim_{\mathbb{Q}}\rho^{*}\big{(}-(K_{Y}+\Delta)\big{)},\]
and \((Y,\Delta)\) has Kawamata log terminal singularities. Therefore, \((Y,\Delta)\) is a log Fano pair. Moreover, it follows from [24] that
\[X\text{ is K-polystable }\Longleftrightarrow\ \big{(}Y,\tfrac{1}{2}B\big{)} \text{ is K-polystable}.\]
However, everything in life comes with a price: the action of the group \(\Gamma\cong\mathbb{G}_{m}\) described earlier in Section 1 does not descend to \(Y\) via \(\rho\), because \(\Gamma\) does not commute with \(\iota\). Thus, the group \(\operatorname{Aut}(Y,\Delta)\) is much smaller than the group \(\operatorname{Aut}(X)\).
To explicitly describe \(B\subset Y\), consider \(Y\) as the toric variety \((\mathbb{C}^{5}\setminus Z(I))/\mathbb{G}_{m}^{2}\) such that the torus action is given by the matrix
\[\left(\begin{array}{ccccc}x_{1}&x_{2}&x_{3}&x_{4}&x_{5}\\ 1&1&1&2&0\\ 0&0&0&1&1\end{array}\right),\]
with irrelevant ideal \(I=\langle x_{1},x_{2},x_{3}\rangle\cap\langle x_{4},x_{5}\rangle\). Let us also consider \(x_{1},x_{2},x_{3}\) as coordinates on \(V=\mathbb{P}^{2}\), so that the projection \(\pi\) is given by
\[(x_{1},x_{2},x_{3},x_{4},x_{5})\mapsto(x_{1},x_{2},x_{3}).\]
Then \(S^{-}=\{x_{5}=0\}\). Moreover, we may assume that \(S^{+}=\{x_{4}=0\}\), and \(B\) is given by
\[x_{4}^{2}-f_{4}(x_{1},x_{2},x_{3})x_{5}^{2}=0,\]
where \(f_{4}(x_{1},x_{2},x_{3})\) is a quartic polynomial such that \(R=\{f_{4}(x_{1},x_{2},x_{3})=0\}\).
In the remaining part of the section, we will prove that the pair \((Y,\Delta)\) is K-polystable. Recall that \(H=\pi^{*}(L)\). Note also that
\[-(K_{Y}+\Delta)\sim_{\mathbb{Q}}S^{-}+\frac{3}{2}H.\]
We will split the proof in several lemmas and propositions. We start with
**Lemma 5.5**.: _Let \(P\) be a point in \(S^{-}\). Then \(\delta_{P}(Y,\Delta)>1\)._
Proof.: Let us apply Lemma 3.2. We have
\[\delta_{P}(Y,\Delta)=\delta_{P}(Y;D(a))\geqslant\min\Bigl{\{}\frac{4(a^{3}-(a -1)^{3})}{(4-a)a^{3}+(a-1)^{4}},\frac{4(a^{3}-(a-1)^{3})}{3(a^{4}-(a-1)^{4})} \delta(V;L)\Bigr{\}},\]
where \(D(a)=-(K_{Y}+\Delta)\) and \(a=\frac{3}{2}\). Thus, we have
\[\delta_{P}(Y,\Delta)\geqslant\min\Bigl{\{}\frac{26}{17},\frac{13}{15}\delta(V;L) \Bigr{\}}.\]
But
\[\delta(V;L)=\delta\Bigl{(}V;\frac{2}{3}(-K_{V})\Bigr{)}=\frac{3}{2}\delta(V;-K_ {V})=\frac{3}{2}\delta(V)=\frac{3}{2}\delta(\mathbb{P}^{2})=\frac{3}{2},\]
so that \(\delta_{P}(Y,\Delta)\geqslant\frac{13}{10}\).
Similarly, applying Proposition 3.5, we obtain
**Lemma 5.6**.: _Let \(P\) be a point in \(Y\) such that \(P\not\in\operatorname{Sing}(B)\). Then \(\delta_{P}(Y,\Delta)>1\)._
Proof.: By Lemma 5.5, we may assume that \(P\not\in S^{-}\). Then Proposition 3.5 gives
\[\delta_{P}(Y,\Delta)=\delta_{P}(Y;D(a))\geqslant\frac{8(3a^{2}-3a+1)}{8d\mu a ^{3}+6(1-2d\mu)a^{2}+8(d\mu-1)a-2d\mu+3},\]
where \(D(a)=-(K_{Y}+\Delta)\), \(a=\frac{3}{2}\), \(d=L^{2}=4\), \(\mu=\frac{1}{2}\). This gives \(\delta_{P}(Y,\Delta)\geqslant\frac{52}{49}\).
The two most difficult parts of the proof that \((Y,\Delta)\) is K-polystable are the following two propositions, which will be proved in Subsections 5.1 and 5.2 later.
**Proposition 5.7**.: _Let \(P\) be a point in \(B\) such that \(B\) has a singular point of type \(\mathbb{A}_{1}\) at \(P\), and let \(\mathbf{F}\) be a prime divisor over \(Y\) such that \(P=C_{Y}(\mathbf{F})\). Then \(\beta_{Y,\Delta}(\mathbf{F})>0\)._
**Proposition 5.8**.: _Let \(P\) be a point in \(B\) such that \(B\) has a singular point of type \(\mathbb{A}_{2}\) at \(P\), and let \(\mathbf{F}\) be a prime divisor over \(Y\) such that \(P=C_{Y}(\mathbf{F})\). Then \(\beta_{Y,\Delta}(\mathbf{F})>0\)._
By Lemmas 5.5 and 5.6 and Propositions 5.7 and 5.8, the log pair \((Y,\Delta)\) is K-stable in the case when \(R\) is a reduced plane quartic curve that has at most \(\mathbb{A}_{1}\) or \(\mathbb{A}_{2}\) singularities. Therefore, to complete the proof, we may assume that \(R\) is one of the following curves:
1. \(C_{1}+C_{2}\), where \(C_{1}\) and \(C_{2}\) are smooth conics that are tangent at two points;
2. \(C+\ell_{1}+\ell_{2}\), where \(C\) is a smooth conic, \(\ell_{1}\) and \(\ell_{2}\) are distinct lines tangent to \(C\);
3. \(2C\), where \(C\) is a smooth conic in \(|L|\).
Hence, appropriately changing coordinates \(x_{1},x_{2},x_{3}\), we may assume that
\[f_{4}(x_{1},x_{2},x_{3})=(x_{1}x_{2}-x_{3}^{2})(x_{1}x_{2}-\lambda x_{3}^{2}),\]
where one of the following three cases holds:
1. \(\lambda\not\in\{0,1\}\), \(R=C_{1}+C_{2}\), where \(C_{1}=\{x_{1}x_{2}=x_{3}^{2}\}\) and \(C_{2}=\{x_{1}x_{2}=\lambda x_{3}^{2}\}\);
2. \(\lambda=0\), \(R=C+\ell_{1}+\ell_{2}\), where \(C=\{x_{1}x_{2}=x_{3}^{2}\}\), \(\ell_{1}=\{x_{1}=0\}\) and \(\ell_{2}=\{x_{2}=0\}\);
3. \(\lambda=1\), \(R=2C\), where \(C=\{x_{1}x_{2}=x_{3}^{2}\}\).
In each case, the group \(\operatorname{Aut}(Y,\Delta)\) contains an involution \(\tau\) such that
\[\tau(x_{1},x_{2},x_{3},x_{4},x_{5})=(x_{2},x_{1},x_{3},x_{4},x_{5}).\]
**Lemma 5.9**.: _Suppose that \(\lambda\not\in\{0,1\}\). Then \((Y,\Delta)\) is K-polystable._
Proof.: Suppose \((Y,\Delta)\) is not K-polystable. It follows from [41] that there is a \(\langle\tau\rangle\)-invariant prime divisor \(\mathbf{F}\) over \(Y\) such that \(\beta_{Y,\Delta}(\mathbf{F})\leqslant 0\). Let \(P\) be a general point in \(C_{Y}(\mathbf{F})\). Then
\[\delta_{P}\bigl{(}Y,\Delta\bigr{)}\leqslant 1.\]
But \(P\not\in\operatorname{Sing}(B)\), since \(\operatorname{Sing}(B)\) consists of two singular points that are swapped by \(\tau\). Then \(\delta_{P}(Y,\Delta)>1\) by Lemmas 5.5 and 5.6, which is a contradiction.
**Lemma 5.10**.: _Suppose \(\lambda=0\). Then \((Y,\Delta)\) is K-polystable._
Proof.: The surface \(B\) has a singular point of type \(\mathbb{A}_{1}\), and two singular points of type \(\mathbb{A}_{3}\) that are swapped by \(\tau\). Arguing as in the proof of Lemma 5.9 and using Proposition 5.7, we see that \((Y,\Delta)\) is K-polystable.
**Lemma 5.11** (cf. Lemma 5.2).: _Suppose \(\lambda=1\). Then \((Y,\Delta)\) is K-polystable._
Proof.: In this case, we have \(R=2C\), where \(C\) is an irreducible conic. Then \(B=B_{1}+B_{2}\), where \(B_{1}\) and \(B_{2}\) are smooth surfaces in \(|S^{+}|\) that intersect transversally along a smooth curve such that \(\pi(B_{1}\cap B_{2})=C\).
We already know from Lemma 5.2 that the threefold \(X\) is K-polystable in this case, so that \((Y,\Delta)\) is also K-polystable [24]. Let us prove this directly for consistency.
Let \(W=V\times\mathbb{P}^{1}\), let \(\varpi\colon W\to V\) be the natural projection, let \(\widetilde{S}^{-}\), \(\widetilde{B}_{1}\), \(\widetilde{B}_{2}\) be its disjoint sections, and let \(\widetilde{E}=\varpi^{*}(C)\). Then there exists the following commutative diagram:
such that \(\alpha\) is a blow up along the curve \(\widetilde{E}\cap\widetilde{S}^{-}\), the morphism \(\psi\) is a contraction of the proper transform of the surface \(\widetilde{E}\) to the intersection curve \(B_{1}\cap B_{2}\) such that \(\psi\) maps the proper transforms of the surfaces \(\widetilde{S}^{-}\), \(\widetilde{B}_{1}\), \(\widetilde{B}_{2}\) to the surfaces \(S^{-}\), \(B_{1}\), \(B_{2}\), respectively. Then
\[\operatorname{Aut}(Y,\Delta)\cong\operatorname{Aut}(U)\cong\operatorname{Aut }\bigl{(}W,\widetilde{B}_{1}+\widetilde{B}_{2}+\widetilde{E}+\widetilde{S}^{ -}\bigr{)}\cong\operatorname{PGL}_{2}(\mathbb{C})\times\boldsymbol{\mu}_{2}.\]
Note that the commutative diagram above is \(\operatorname{Aut}(Y,\Delta)\)-equivariant.
Let \(F\) be \(\alpha\)-exceptional surface, let \(\widehat{E}\) be the \(\psi\)-exceptional surface, let \(\widehat{B}_{1}\) and \(\widehat{B}_{2}\) be the proper transforms on \(U\) of the surfaces \(B_{1}\) and \(B_{2}\), respectively. Set \(\widehat{\Delta}=\frac{1}{2}(\widehat{B}_{1}+\widehat{B}_{2})\). Then \(K_{U}+\widehat{\Delta}\sim_{\mathbb{Q}}\psi^{*}(K_{Y}+\Delta)\), so that \(\psi\) is log crepant for \((U,\widehat{\Delta})\). Then \(A_{Y,\Delta}(\widehat{E})=1\).
First, we compute \(\beta_{Y,\Delta}(\widehat{E})\). Set \(H_{1}=(\operatorname{pr}_{1}\circ\alpha)^{*}(\mathcal{O}_{\mathbb{P}^{1}}(1))\) and \(H_{2}=(\varpi\circ\alpha)^{*}(\mathcal{O}_{V}(1))\), where \(\operatorname{pr}_{1}\) is the natural projection \(W\to\mathbb{P}^{1}\). Then \(\widehat{\Delta}\sim_{\mathbb{Q}}H_{1}\) and \(\widehat{E}\sim 2H_{2}-F\), so that
\[-\psi^{*}\bigl{(}K_{Y}+\Delta\bigr{)}\sim_{\mathbb{Q}}-\bigl{(}K_{U}+\widehat{\Delta}\bigr{)} \sim_{\mathbb{Q}}H_{1}+3H_{2}-F\sim_{\mathbb{Q}}H_{1}+\frac{3}{2}\widehat{E} +\frac{1}{2}F.\]
Let \(u\) be a non-negative real number. Then
\[-\psi^{*}\bigl{(}K_{Y}+\Delta\bigr{)}-u\widehat{E}\sim_{\mathbb{R}}H_{1}+(3-2u )H_{2}+(u-1)F\sim_{\mathbb{R}}H_{1}+\frac{3-2u}{2}\widehat{E}+\frac{1}{2}F,\]
and this divisor is pseudoeffective \(\iff u\leqslant\frac{3}{2}\). For \(u\in[0,\frac{3}{2}]\), let \(P(u)\) be the positive part of the Zariski decomposition of \(-\psi^{*}(K_{Y}+\Delta)-u\widehat{E}\), and let \(N(u)\) be the negative part. Then
\[P(u)\sim_{\mathbb{R}}\begin{cases}H_{1}+(3-2u)H_{2}+(u-1)F\text{ if }0\leqslant u \leqslant 1,\\ H_{1}+(3-2u)H_{2}\text{ if }1\leqslant u\leqslant\frac{3}{2},\end{cases}\]
\[N(u)=\begin{cases}0\text{ if }0\leqslant u\leqslant 1,\\ (u-1)F\text{ if }1\leqslant u\leqslant\frac{3}{2}.\end{cases}\]
This gives
\[\beta_{Y,\Delta}(\widehat{E})=A_{Y,\Delta}(\widehat{E})-\frac{1}{( -K_{Y}-\Delta)^{3}}\int_{0}^{\frac{3}{2}}\big{(}P(u)\big{)}^{3}du=\\ =1-\frac{1}{13}\int_{0}^{1}\Big{(}H_{1}+(3-2u)H_{2}+(u-1)F\Big{)}^{3} du-\frac{1}{13}\int_{1}^{\frac{3}{2}}\Big{(}H_{1}+(3-2u)H_{2}\Big{)}^{3}du=\\ =1-\frac{1}{13}\int_{0}^{1}\big{(}8u^{3}-18u^{2}+13\big{)}du-\frac{1}{13}\int_{1}^{\frac{3}{2}}\big{(}12u^{2}-36u+27\big{)}du=\frac{7}{26}>0.\]
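The same computation gives \(S_{A}(\widehat{E})=\frac{19}{26}\), which is used below; a symbolic confirmation (a sketch, assuming SymPy):

```python
# Confirming S_A(E^) = 19/26 and beta = 7/26 from the integrands above (assuming SymPy).
from sympy import symbols, integrate, Rational

u = symbols('u')
S = Rational(1, 13)*(integrate(8*u**3 - 18*u**2 + 13, (u, 0, 1))
                     + integrate(12*u**2 - 36*u + 27, (u, 1, Rational(3, 2))))
print(S, 1 - S)  # expected output: 19/26 7/26
```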
Suppose that \((Y,\Delta)\) is not K-polystable. By [41], there exists an \(\operatorname{Aut}(Y,\Delta)\)-invariant prime divisor \(\mathbf{F}\) over \(Y\) such that \(\beta_{Y,\Delta}(\mathbf{F})\leqslant 0\). Let \(Z\) be its center on \(Y\). Then
\[\delta_{P}(Y,\Delta)\leqslant 1\]
for every point \(P\in Z\). Hence, it follows from Lemmas 5.5 and 5.6 that \(Z\subset B_{1}\cap B_{2}\). Thus, since \(Z\) is an \(\operatorname{Aut}(Y,\Delta)\)-invariant irreducible subvariety, we see that \(Z=B_{1}\cap B_{2}\).
Let \(\widehat{Z}\) be the center of the divisor \(\mathbf{F}\) on the threefold \(U\). Then \(\widehat{Z}\neq\widehat{E}\), since \(\beta(\widehat{E})>0\). Moreover, since \(\widehat{Z}\subset\widehat{E}\) and \(\widehat{Z}\) is \(\operatorname{Aut}(U)\)-invariant, we see that \(\widehat{Z}\) is an \(\operatorname{Aut}(U)\)-invariant section of the natural projection \(\widehat{E}\to Z\). Set \(A=-\bigl{(}K_{U}+\widehat{\Delta}\bigr{)}\). Then
\[0\geqslant\beta_{Y,\Delta}(\mathbf{F})=A_{Y,\Delta}(\mathbf{F})-S_{A}( \mathbf{F})=A_{U,\widehat{\Delta}}(\mathbf{F})-S_{A}(\mathbf{F}),\]
because \(K_{U}+\widehat{\Delta}\sim_{\mathbb{Q}}\psi^{*}(K_{Y}+\Delta)\). Moreover, it follows from [2, 6, 17] that
\[1\geqslant\frac{A_{U,\widehat{\Delta}}(\mathbf{F})}{S_{A}(\mathbf{F})} \geqslant\min\left\{\frac{1}{S_{A}(\widehat{E})},\frac{1}{S_{A}\big{(}W_{ \bullet,\bullet}^{\widehat{E}};\widehat{Z}\big{)}}\right\},\]
where \(S_{A}(W_{\bullet,\bullet}^{\widehat{E}};\widehat{Z})\) is defined in [6, Section 1.7]. But \(S_{A}(\widehat{E})=\frac{19}{26}\), so \(S_{A}(W_{\bullet,\bullet}^{\widehat{E}};\widehat{Z})\geqslant 1\).
Let us compute \(S_{A}(W_{\bullet,\bullet}^{\widehat{E}};\widehat{Z})\). Using [6, Corollary 1.109], we see that
\[S_{A}\big{(}W_{\bullet,\bullet}^{\widehat{E}};\widehat{Z}\big{)}=\frac{3}{A^ {3}}\int_{0}^{\frac{3}{2}}\Big{(}P(u)\big{|}_{\widehat{E}}\Big{)}^{2} \mathrm{ord}_{\widehat{Z}}\Big{(}N(u)\big{|}_{\widehat{E}}\Big{)}du+\frac{3}{A^ {3}}\int_{0}^{\frac{3}{2}}\int_{0}^{\infty}\mathrm{vol}\big{(}P(u)\big{|}_{ \widehat{E}}-v\widehat{Z}\big{)}dvdu,\]
which is easy to compute, because \(\widehat{E}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\). Let us do this.
Let \(\mathbf{s}=F\cap\widehat{E}\). Then \(\mathbf{s}\) is a section of the projection \(\widehat{E}\to Z\). Let \(\mathbf{f}\) be a fiber of this projection. Then
\[P(u)\big{|}_{\widehat{E}}=\begin{cases}(6-4u)\mathbf{f}+u\mathbf{s}\text{ if }0\leqslant u\leqslant 1,\\ (6-4u)\mathbf{f}+\mathbf{s}\text{ if }1\leqslant u\leqslant\frac{3}{2},\end{cases}\]
and
\[N(u)\big{|}_{\widehat{E}}=\begin{cases}0\text{ if }0\leqslant u\leqslant 1,\\ (u-1)\mathbf{s}\text{ if }1\leqslant u\leqslant\frac{3}{2}.\end{cases}\]
Thus, we see that \(S_{A}(W^{\widehat{E}}_{\bullet,\bullet};\widehat{Z})\leqslant S_{A}(W^{\widehat{E}}_ {\bullet,\bullet};\mathbf{s})\) and
\[S_{A}\big{(}W^{\widehat{E}}_{\bullet,\bullet};\mathbf{s}\big{)}= \frac{3}{13}\int_{1}^{\frac{3}{2}}\big{(}(6-4u)\mathbf{f}+\mathbf{s}\big{)}^{2} (u-1)du+\frac{3}{13}\int_{0}^{1}\int_{0}^{u}\big{(}(6-4u)\mathbf{f}+(u-v) \mathbf{s}\big{)}^{2}dvdu+\\ +\frac{3}{13}\int_{1}^{\frac{3}{2}}\int_{0}^{1}\big{(}(6-4u) \mathbf{f}+(1-v)\mathbf{s}\big{)}^{2}dvdu=\frac{3}{13}\int_{1}^{\frac{3}{2}}2(6 -4u)(u-1)du+\\ +\frac{3}{13}\int_{0}^{1}\int_{0}^{u}2(6-4u)(u-v)dvdu+\frac{3}{13 }\int_{1}^{\frac{3}{2}}\int_{0}^{1}2(6-4u)(1-v)dvdu=\frac{5}{13}<1,\]
which is a contradiction.
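The three integrals above are easy to confirm symbolically (a sketch, assuming SymPy):

```python
# Confirming S_A(W;s) = 5/13 from the three integrals computed above (assuming SymPy).
from sympy import symbols, integrate, Rational

u, v = symbols('u v')
I1 = integrate(2*(6 - 4*u)*(u - 1), (u, 1, Rational(3, 2)))
I2 = integrate(integrate(2*(6 - 4*u)*(u - v), (v, 0, u)), (u, 0, 1))
I3 = integrate(integrate(2*(6 - 4*u)*(1 - v), (v, 0, 1)), (u, 1, Rational(3, 2)))
print(Rational(3, 13)*(I1 + I2 + I3))  # expected output: 5/13
```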
In the remaining part of this section, we will prove Propositions 5.7 and 5.8.
### Proof of Proposition 5.7
Let us use the notation introduced earlier in this section before Proposition 5.7, and let \(P\) be an isolated ordinary double point of the surface \(B\). Then, up to a change of coordinates, we may assume that \(P=(0,0,1,0,1)\) and
\[f_{4}(x_{1},x_{2},1)=x_{1}^{2}+x_{2}^{2}+\text{higher order terms}.\]
Let \(\rho\colon Y_{0}\to Y\) be the blow up at \(P\). Then \(Y_{0}\) is the toric variety \((\mathbb{C}^{6}\setminus Z(I_{0}))/\mathbb{G}_{m}^{3}\) for the torus action given by
\[M=\left(\begin{array}{cccccc}x_{0}&x_{1}&x_{2}&x_{3}&x_{4}&x_{5}\\ 0&1&1&1&2&0\\ 0&0&0&0&1&1\\ 1&0&0&1&1&0\end{array}\right)\]
with irrelevant ideal \(I_{0}=\langle x_{1},x_{2},x_{3}\rangle\cap\langle x_{1},x_{2},x_{4}\rangle \cap\langle x_{4},x_{5}\rangle\cap\langle x_{0},x_{3}\rangle\cap\langle x_{0},x_{5}\rangle\). To describe its fan, denote the vector generating the ray corresponding to \(x_{i}\) by \(v_{i}\). Then
\[v_{0} =(1,1,1), v_{1} =(1,0,0), v_{2} =(0,1,0),\] \[v_{3} =(-1,-1,-2), v_{4} =(0,0,1), v_{5} =(0,0,-1).\]
The cone structure can be derived from the irrelevant ideal \(I_{0}\), and it can be visualized via the following diagram:
Let \(F_{i}=\{x_{i}=0\}\subset Y_{0}\), and let \(C_{ij}=F_{i}\cap F_{j}\) for \(i\neq j\) such that \(\dim(F_{i}\cap F_{j})=1\). Consider the \(\mathbb{Z}^{3}\)-grading of \(\operatorname{Pic}(Y_{0})\) given by \(M\). If \(D_{1}\) and \(D_{2}\) are two divisors in \(\operatorname{Pic}(Y_{0})\), then it follows from [10, Chapter 5] that
\[D_{1}\sim D_{2}\iff\deg_{M}(D_{1})=\deg_{M}(D_{2}).\]
Moreover, we have
\[\overline{\operatorname{Eff}(Y_{0})}=\langle F_{0},F_{1},F_{5}\rangle\]
and
\[\overline{\operatorname{NE}(Y_{0})}=\langle C_{12},C_{15},C_{01}\rangle.\]
In particular, a divisor \(D\) with \(\deg_{M}(D)=(a,b,c)\) is effective \(\iff\) all \(a,b,c\geqslant 0\).
**Lemma 5.12**.: _Intersections of divisors \(F_{0}\), \(F_{1}\), \(F_{5}\) are given in the following table:_
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline F_{0}^{3}&F_{0}^{2}F_{1}&F_{0}^{ 2}F_{5}&F_{0}F_{1}^{2}&F_{0}F_{1}F_{5}&F_{0}F_{5}^{2}&F_{1}^{3}&F_{1}^{2}F_{5} &F_{1}F_{5}^{2}&F_{5}^{3}\\ \hline 1&-1&0&1&0&0&-1&1&-2&4\\ \hline\end{array}\]
Proof.: Recall that for distinct torus-invariant divisors \(F_{i},F_{j},F_{k}\) we may compute their intersection using the fan and the cone structure (or the irrelevant ideal)
\[F_{i}F_{j}F_{k}=\begin{cases}0&\text{if }x_{i}x_{j}x_{k}\in I_{0},\\ \frac{1}{\left|\det(v_{i},v_{j},v_{k})\right|}&\text{otherwise}.\end{cases}\]
This fact together with the linear equivalences implies the required assertion.
Using Lemma 5.12, we obtain the following intersection table:
\[\begin{array}{|c||c|c|c|}\hline\bullet&F_{0}&F_{1}&F_{5}\\ \hline\hline C_{12}&1&-1&1\\ \hline C_{15}&0&1&-2\\ \hline C_{01}&-1&1&0\\ \hline\end{array}\]
Now, we set \(A=-(K_{Y}+\Delta)\). Take \(u\in\mathbb{R}_{\geqslant 0}\). Set
\[L(u)=\rho^{*}(A)-uF_{0}.\]
Then \(L(u)\sim_{\mathbb{R}}(3-u)F_{0}+3F_{1}+F_{5}\). So, the divisor \(L(u)\) is pseudo-effective \(\iff\)\(u\leqslant 3\). Let us find the Zariski decomposition of the divisor \(L(u)\) for \(u\in[0,3]\).
The divisor \(L(u)\) is nef for \(u\in[0,1]\). We have \(L(1)\cdot C_{12}=0\). Since \(C_{12}\) is a flopping curve, we have to consider a small \(\mathbb{Q}\)-factorial modification \(Y_{0}\dashrightarrow Y_{1}\) such that
\[Y_{1}=\big{(}\mathbb{C}^{6}\setminus Z(I_{1})\big{)}/\mathbb{G}_{m}^{3},\]
where the torus-action is the same (given by the matrix \(M\)) and the irrelevant ideal
\[I_{1}=\langle x_{1},x_{2}\rangle\cap\langle x_{4},x_{5}\rangle\cap\langle x_{ 0},x_{3}\rangle,\]
which is obtained from \(I_{0}\) by replacing \(\langle x_{0},x_{5}\rangle\) with \(\langle x_{1},x_{2}\rangle\). The fan of \(Y_{1}\) is generated by the same vectors, but the cone structure is different:
Abusing our previous notations, we denote the divisor \(\{x_{i}=0\}\subset Y_{1}\) also by \(F_{i}\), and we let \(C_{ij}=F_{i}\cap F_{j}\) for \(i\neq j\) such that \(F_{i}\cap F_{j}\) is a curve. As above, we see that
\[\overline{\operatorname{NE}(Y_{1})}=\langle C_{01},C_{15},C_{05}\rangle.\]
Moreover, intersections of divisors on \(Y_{1}\) are described in the following table:
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline F_{0}^{3}&F_{0}^{2}F_{1}&F_{0}^ {2}F_{5}&F_{0}F_{1}^{2}&F_{0}F_{1}F_{5}&F_{0}F_{5}^{2}&F_{1}^{3}&F_{1}^{2}F_{5 }&F_{1}F_{5}^{2}&F_{5}^{3}\\ \hline 0&0&-1&0&1&-1&0&0&-1&3\\ \hline\end{array}\]
Using these intersections, we obtain the following intersection table:
\[\begin{array}{|c||c|c|c|}\hline\bullet&F_{0}&F_{1}&F_{5}\\ \hline\hline C_{05}&-1&1&-1\\ \hline C_{15}&1&0&-1\\ \hline C_{01}&0&0&1\\ \hline\end{array}\]
The proper transform on \(Y_{1}\) of the divisor \(L(u)\) is nef for \(u\in[1,2]\), and it intersects the curve \(C_{15}\) trivially for \(u=2\). Note that \(C_{15}\sim C_{25}\) on the surface \(F_{5}\), which implies that the divisor \(F_{5}\) is contained in the negative part of the Zariski decomposition of the proper transform of the divisor \(L(u)\). In fact, we have \(N(u)=(u-2)F_{5}\) and
\[P(u)=(3-u)(F_{0}+F_{5})+3F_{1},\]
where \(N(u)\) is the negative part of the decomposition, and \(P(u)\) is the positive part.
**Lemma 5.13**.: _One has \(A_{Y,\Delta}(F_{0})=2\) and \(S_{A}(F_{0})=\frac{49}{26}\), so that_
\[\frac{A_{Y,\Delta}(F_{0})}{S_{A}(F_{0})}=\frac{52}{49}.\]
Proof.: The equality \(A_{Y,\Delta}(F_{0})=2\) is obvious. Moreover, we have
\[\operatorname{vol}(L(u))=\begin{cases}-u^{3}+13&u\in[0,1]\\ -3u^{2}+3u+12&u\in[1,2]\\ 3u^{3}-18u^{2}+27u&u\in[2,3].\end{cases}\]
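For the reader's convenience: \(A^{3}=\operatorname{vol}(L(0))=13\), and the piecewise integration is elementary:
\[\int_{0}^{3}\operatorname{vol}\bigl(L(u)\bigr)du=\int_{0}^{1}\bigl(13-u^{3}\bigr)du+\int_{1}^{2}\bigl(-3u^{2}+3u+12\bigr)du+\int_{2}^{3}\bigl(3u^{3}-18u^{2}+27u\bigr)du=\frac{51}{4}+\frac{19}{2}+\frac{9}{4}=\frac{49}{2}.\]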
Thus, we compute
\[S_{A}(F_{0})=\frac{1}{A^{3}}\int_{0}^{3}\operatorname{vol}(L(u))du=\frac{49}{26}\]
as claimed.
Now, we construct a common toric resolution \(\widetilde{Y}\) for \(Y_{0}\) and \(Y_{1}\). Such a variety is easy to see from the fans of \(Y_{0}\) and \(Y_{1}\): we add the following ray:
\[v_{6}=(1,1,0)\in\langle v_{1},v_{2}\rangle\cap\langle v_{0},v_{5}\rangle.\]
Set \(\widetilde{Y}\) to be the toric variety corresponding to \(v_{0},\dots,v_{6}\) with the following cone structure:
Let \(\varphi_{0}\colon\widetilde{Y}\to Y_{0}\) and \(\varphi_{1}\colon\widetilde{Y}\to Y_{1}\) be the corresponding toric birational maps. Then
* \(\varphi_{0}\) is the blow up of \(Y_{0}\) along the curve \(C_{12}\),
* \(\varphi_{1}\) is the blow up of \(Y_{1}\) along the curve \(C_{05}\).
Set \(\widetilde{F}_{i}=\{x_{i}=0\}\subset\widetilde{Y}\). Then \(\widetilde{F}_{6}\) is the exceptional divisor of \(\varphi_{0}\) and \(\varphi_{1}\).
The Zariski decomposition of the divisor \(\varphi_{0}^{*}(L(u))\) can be described as follows:
\[\widetilde{P}(u)\sim_{\mathbb{R}}\begin{cases}(3-u)\widetilde{F}_{0}+3 \widetilde{F}_{1}+\widetilde{F}_{5}+3\widetilde{F}_{6}&u\in[0,1],\\ (3-u)\widetilde{F}_{0}+3\widetilde{F}_{1}+\widetilde{F}_{5}+(4-u)\widetilde{F }_{6}&u\in[1,2],\\ (3-u)(\widetilde{F}_{0}+\widetilde{F}_{5})+3\widetilde{F}_{1}+(6-2u) \widetilde{F}_{6}&u\in[2,3],\end{cases}\]
and
\[\widetilde{N}(u)=\begin{cases}0&u\in[0,1],\\ (u-1)\widetilde{F}_{6}&u\in[1,2],\\ (u-2)\widetilde{F}_{5}+(2u-3)\widetilde{F}_{6}&u\in[2,3],\end{cases}\]
where \(\widetilde{P}(u)\) is the positive part, and \(\widetilde{N}(u)\) is the negative part.
Let \(\sigma\colon\widetilde{F}_{0}\to F_{0}\) be the morphism induced by \(\varphi_{0}\). Then \(\sigma\) is a blow up at one point. So, we have \(\widetilde{F}_{0}\cong\mathbb{F}_{1}\). Let \(\mathbf{e}\) be the \(\sigma\)-exceptional curve, and let \(\mathbf{f}\) be a fiber of the natural projection \(\widetilde{F}_{0}\to\mathbb{P}^{1}\). Then \(\widetilde{F}_{0}|_{\widetilde{F}_{0}}\sim-\mathbf{e}-\mathbf{f}\), \(\widetilde{F}_{1}|_{\widetilde{F}_{0}}\sim\mathbf{f}\), \(\widetilde{F}_{5}|_{\widetilde{F}_{0}}\sim 0\), \(\widetilde{F}_{6}|_{\widetilde{F}_{0}}=\mathbf{e}\), which gives
\[\widetilde{P}(u)\big{|}_{\widetilde{F}_{0}}=\begin{cases}u(\mathbf{f}+\mathbf{ e})&u\in[0,1],\\ u\mathbf{f}+\mathbf{e}&u\in[1,2],\\ u\mathbf{f}+(3-u)\mathbf{e}&u\in[2,3],\end{cases}\]
\[\widetilde{N}(u)\big{|}_{\widetilde{F}_{0}}=\begin{cases}0&u\in[0,1],\\ (u-1)\mathbf{e}&u\in[1,2],\\ (2u-3)\mathbf{e}&u\in[2,3].\end{cases}\]
We are ready to apply [2, 6, 17]. Set \(B_{F_{0}}=\rho_{*}^{-1}(B)|_{F_{0}}\) and \(\Delta_{F_{0}}=\frac{1}{2}B_{F_{0}}\). Set
\[\delta\big{(}F_{0},\Delta_{F_{0}};V_{\bullet,\bullet}^{\widetilde{F}_{0}}\big{)} =\inf_{E/\widetilde{F}_{0}}\frac{A_{F_{0},\Delta_{F_{0}}}(E)}{S(W_{\bullet, \bullet}^{\widetilde{F}_{0}};E)}\]
where the infimum is taken over all prime divisors \(E\) over \(\widetilde{F}_{0}\), and
\[S(W_{\bullet,\bullet}^{\widetilde{F}_{0}};E)=\frac{3}{A^{3}}\int\limits_{0}^{ 3}\big{(}\widetilde{P}(u)\big{|}_{\widetilde{F}_{0}}\big{)}^{2}\mathrm{ord}_{ E}\big{(}\widetilde{N}(u)\big{|}_{\widetilde{F}_{0}}\big{)}du+\frac{3}{A^{3}} \int\limits_{0}^{3}\int\limits_{0}^{\infty}\mathrm{vol}\big{(}\widetilde{P}(u )\big{|}_{\widetilde{F}_{0}}-vE\big{)}dvdu.\]
Let \(\mathbf{F}\) be a prime divisor over \(Y\) such that \(P=C_{Y}(\mathbf{F})\). Recall that
\[\beta_{Y,\Delta}(\mathbf{F})=A_{Y,\Delta}(\mathbf{F})-S_{A}(\mathbf{F})=A_{Y,\Delta}(\mathbf{F})-\frac{1}{A^{3}}\int\limits_{0}^{\infty}\mathrm{vol}\big{(} A-u\mathbf{F}\big{)}du.\]
It follows from [17, Theorem 4.8] and [17, Corollary 4.9] that
\[\frac{A_{Y,\Delta}(\mathbf{F})}{S_{A}(\mathbf{F})}\geqslant\delta_{P}(Y, \Delta)\geqslant\min\left\{\frac{A_{Y,\Delta}(F_{0})}{S_{A}(F_{0})},\delta \big{(}F_{0},\Delta_{F_{0}};V_{\bullet,\bullet}^{\widetilde{F}_{0}}\big{)} \right\}. \tag{5.14}\]
Suppose \(\beta_{Y,\Delta}(\mathbf{F})\leqslant 0\). Then it follows from (5.14) and Lemma 5.13 that there is a prime divisor \(E\) over \(\widetilde{F}_{0}\) such that
\[S(W_{\bullet,\bullet}^{\widetilde{F}_{0}};E)\geqslant A_{F_{0},\Delta_{F_{0 }}}(E). \tag{5.15}\]
Let \(Z\) be the center of the divisor \(E\) on the surface \(\widetilde{F}_{0}\). Note that \(\sigma(\mathbf{e})\not\in B_{F_{0}}\).
**Lemma 5.16**.: _One has \(Z\cap\mathbf{e}=\varnothing\)._
Proof.: Note that \(A_{F_{0},\Delta_{F_{0}}}(\mathbf{e})=2\). Let us compute \(S(W_{\bullet,\bullet}^{\widetilde{F}_{0}};\mathbf{e})\). For \(u\in[0,3]\), let
\[t(u)=\sup\Big{\{}v\in\mathbb{R}_{\geqslant 0}\bigm{|}\widetilde{P}(u)|_{ \widetilde{F}_{0}}-v\mathbf{e}\text{ is pseudoeffective}\Big{\}}.\]
For every \(v\in[0,t(u)]\), let us denote by \(P(u,v)\) and \(N(u,v)\) the positive and the negative parts of the Zariski decompositions of the divisor \(\widetilde{P}(u)|_{\widetilde{F}_{0}}-v\mathbf{e}\), respectively. Then
\[S(W_{\bullet,\bullet}^{\widetilde{F}_{0}};\mathbf{e})=\frac{3}{A^{3}}\int \limits_{0}^{3}\big{(}P(u,0)\big{)}^{2}\mathrm{ord}_{\mathbf{e}}\big{(} \widetilde{N}(u)\big{|}_{\widetilde{F}_{0}}\big{)}du+\frac{3}{A^{3}}\int \limits_{0}^{3}\int\limits_{0}^{t(u)}\big{(}P(u,v)\big{)}^{2}dvdu.\]
Observe that
\[\mathrm{ord}_{\mathbf{e}}\big{(}\widetilde{N}(u)\big{|}_{\widetilde{F}_{0}} \big{)}=\begin{cases}0&u\in[0,1],\\ u-1&u\in[1,2],\\ 2u-3&u\in[2,3].\end{cases}\]
Moreover, we have
\[t(u)=\begin{cases}u&u\in[0,1],\\ 1&u\in[1,2],\\ 3-u&u\in[2,3].\end{cases}\]
Furthermore, we have \(N(u,v)=0\) for every \(u\in[0,3]\) and \(v\in[0,t(u)]\). Finally, we have
\[P(u,v)=\begin{cases}u\mathbf{f}+(u-v)\mathbf{e}&u\in[0,1],v\in[0,u],\\ u\mathbf{f}+(1-v)\mathbf{e}&u\in[1,2],v\in[0,1],\\ u\mathbf{f}+(3-u-v)\mathbf{e}&u\in[2,3],v\in[0,3-u],\end{cases}\]
which gives
\[\big{(}P(u,v)\big{)}^{2}=\begin{cases}u^{2}-v^{2}&u\in[0,1],v\in[0,u],\\ u^{2}-(1-v-u)^{2}&u\in[1,2],v\in[0,1],\\ u^{2}-(3-2u-v)^{2}&u\in[2,3],v\in[0,3-u].\end{cases}\]
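A routine piecewise integration gives
\[\int\limits_{0}^{3}\big(P(u,0)\big)^{2}\mathrm{ord}_{\mathbf{e}}\big(\widetilde{N}(u)\big|_{\widetilde{F}_{0}}\big)du=\frac{7}{6}+\frac{7}{2}=\frac{14}{3}\qquad\text{and}\qquad\int\limits_{0}^{3}\int\limits_{0}^{t(u)}\big(P(u,v)\big)^{2}dvdu=\frac{1}{6}+\frac{7}{6}+\frac{2}{3}=2.\]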
Integrating, we get \(S(W^{\widetilde{F}_{0}}_{\bullet,\bullet};\mathbf{e})=\frac{3}{13}\bigl(\frac{14}{3}+2\bigr)=\frac{20}{13}<2=A_{F_{0},\Delta_{F_{0}}}(\mathbf{e})\), so that \(Z\neq\mathbf{e}\) by (5.15).
Suppose that \(Z\cap\mathbf{e}\neq\varnothing\). Let \(O\) be a point of the intersection \(Z\cap\mathbf{e}\). Then it follows from [17, Theorem 4.17] and [17, Corollary 4.18] that
\[\frac{A_{F_{0},\Delta_{F_{0}}}(E)}{S(W^{\widetilde{F}_{0}}_{\bullet,\bullet}; E)}\geqslant\min\left\{\frac{2}{S(W^{\widetilde{F}_{0}}_{\bullet,\bullet}; \mathbf{e})},\frac{1}{S(W^{\widetilde{F}_{0},\mathbf{e}}_{\bullet,\bullet, \bullet};O)}\right\}=\min\left\{\frac{13}{10},\frac{1}{S(W^{\widetilde{F}_{0},\mathbf{e}}_{\bullet,\bullet,\bullet};O)}\right\},\]
where
\[S\big{(}W^{\widetilde{F}_{0},\mathbf{e}}_{\bullet,\bullet,\bullet};O\big{)}= \frac{3}{A^{3}}\int\limits_{0}^{3}\int\limits_{0}^{t(u)}\big{(}P(u,v)\cdot \mathbf{e}\big{)}^{2}dvdu.\]
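Note that \(P(u,v)\cdot\mathbf{e}\) equals \(v\), \(u+v-1\), \(2u+v-3\) on the three pieces above, so that
\[\int\limits_{0}^{3}\int\limits_{0}^{t(u)}\big(P(u,v)\cdot\mathbf{e}\big)^{2}dvdu=\frac{1}{12}+\frac{7}{6}+\frac{25}{12}=\frac{10}{3}.\]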
Integrating, we get \(S(W^{\widetilde{F}_{0},\mathbf{e}}_{\bullet,\bullet,\bullet};O)=\frac{3}{13}\cdot\frac{10}{3}=\frac{10}{13}\), so that both terms of the minimum above equal \(\frac{13}{10}>1\), which contradicts (5.15).
Thus, we see that \(Z\) is disjoint from \(\mathbf{e}\). In particular, we see that
\[Z\cap\operatorname{Supp}\bigl{(}\widetilde{N}(u)\bigr{|}_{\widetilde{F}_{0} }\bigr{)}=\varnothing\]
for every \(u\in[0,3]\). This will simplify some formulas in the following.
Let \(B_{\widetilde{F}_{0}}\) be the strict transform on \(\widetilde{F}_{0}\) of the curve \(B_{F_{0}}\). Then \(B_{\widetilde{F}_{0}}\) is a smooth irreducible curve in \(|2(\mathbf{e}+\mathbf{f})|\). Set \(\Delta_{\widetilde{F}_{0}}=\frac{1}{2}B_{\widetilde{F}_{0}}\). Let \(O\) be a point in \(Z\). We may assume that \(O\in\mathbf{f}\). Then there are three cases to consider:
1. \(O\not\in B_{\widetilde{F}_{0}}\),
2. \(O\in B_{\widetilde{F}_{0}}\cap\mathbf{f}\), and \(\mathbf{f}\) intersects \(B_{\widetilde{F}_{0}}\) transversely at the point \(O\),
3. \(O=B_{\widetilde{F}_{0}}\cap\mathbf{f}\), and \(\mathbf{f}\) is tangent to \(B_{\widetilde{F}_{0}}\) at the point \(O\).
Let \(\theta\colon\widehat{F}_{0}\to\widetilde{F}_{0}\) be a plt blow up of the point \(O\) defined as follows:
* the map \(\theta\) is an ordinary blow up in the case when \(O\not\in B_{\widetilde{F}_{0}}\), or when \(O\in B_{\widetilde{F}_{0}}\cap\mathbf{f}\), and the fiber \(\mathbf{f}\) intersects the curve \(B_{\widetilde{F}_{0}}\) transversely at the point \(O\),
* the map \(\theta\) is a weighted blow up at the point \(O=B_{\widetilde{F}_{0}}\cap\mathbf{f}\) with weights \((1,2)\) such that the proper transforms on \(\widehat{F}_{0}\) of the curves \(B_{\widetilde{F}_{0}}\) and \(\mathbf{f}\) are disjoint in the case when the fiber \(\mathbf{f}\) is tangent to the curve \(B_{\widetilde{F}_{0}}\) at the point \(O\).
Let \(C\) be the \(\theta\)-exceptional curve. We have \(C\cong\mathbb{P}^{1}\). Let \(B_{\widehat{F}_{0}}\) be the proper transform on the surface \(\widehat{F}_{0}\) of the curve \(B_{\widetilde{F}_{0}}\). Set \(\Delta_{\widehat{F}_{0}}=\frac{1}{2}B_{\widehat{F}_{0}}\). Let \(\Delta_{C}\) be the effective \(\mathbb{Q}\)-divisor on the curve \(C\) known as the different, which can be defined via the adjunction formula:
\[K_{C}+\Delta_{C}=\bigl{(}K_{\widehat{F}_{0}}+\Delta_{\widehat{F}_{0}}\bigr{)} \bigr{|}_{C}.\]
If \(\theta\) is a usual blow up, then \(\Delta_{C}=\Delta_{\widehat{F}_{0}}|_{C}\). Similarly, if \(\theta\) is a weighted blow up, then
\[\Delta_{C}=\Delta_{\widehat{F}_{0}}\big{|}_{C}+\frac{1}{2}\mathbf{o},\]
where \(\mathbf{o}\) is the singular point of the surface \(\widehat{F}_{0}\) contained in \(C\) -- \(\mathbf{o}\) is an ordinary double point, which is not contained in the proper transforms of the curves \(B_{\widetilde{F}_{0}}\) and \(\mathbf{f}\).
Now, for \(u\in[0,3]\), we let
\[\hat{t}(u)=\sup\Big\{v\in\mathbb{R}_{\geqslant 0}\ \big|\ \theta^{*}\big(\widetilde{P}(u)|_{\widetilde{F}_{0}}\big)-vC\text{ is pseudoeffective}\Big\}.\]
For every \(v\in[0,\hat{t}(u)]\), let us denote by \(\widehat{P}(u,v)\) and \(\widehat{N}(u,v)\) the positive and the negative parts of the Zariski decompositions of the divisor \(\theta^{*}(\widetilde{P}(u)|_{\widetilde{F}_{0}})-vC\), respectively. Then
\[1\geqslant\frac{A_{F_{0},\Delta_{F_{0}}}(E)}{S(W_{\bullet,\bullet}^{\widehat {F}_{0}},E)}\geqslant\min\left\{\frac{A_{F_{0},\Delta_{F_{0}}}(C)}{S(W_{ \bullet,\bullet}^{\widehat{F}_{0}},C)},\inf_{Q\in C}\frac{A_{C,\Delta_{C}}(Q )}{S\big{(}W_{\bullet,\bullet,\bullet}^{\widehat{F}_{0},C},Q\big{)}}\right\} \tag{5.17}\]
by (5.15) and [17, Corollary 4.18], where the infimum is taken by all points \(Q\in C\), and
\[S\big{(}W_{\bullet,\bullet,\bullet}^{\widehat{F}_{0},C},Q\big{)}=\frac{3}{A^ {3}}\int\limits_{0}^{3}\int\limits_{0}^{\hat{t}(u)}\big{(}\widehat{P}(u,v) \cdot C\big{)}^{2}dvdu+F_{Q}\big{(}W_{\bullet,\bullet,\bullet}^{\widehat{F}_{ 0},C}\big{)}\]
for
\[F_{Q}\big{(}W_{\bullet,\bullet,\bullet}^{\widehat{F}_{0},C}\big{)}=\frac{6}{ A^{3}}\int\limits_{0}^{3}\int\limits_{0}^{\hat{t}(u)}\big{(}\widehat{P}(u,v) \cdot C\big{)}\text{ord}_{Q}\big{(}\widehat{N}(u,v)\big{|}_{C}\big{)}dvdu.\]
Denote by \(\widehat{\mathbf{e}}\) and \(\widehat{\mathbf{f}}\) the proper transforms of the curves \(\mathbf{e}\) and \(\mathbf{f}\), respectively.
**Lemma 5.18**.: _Suppose that \(\theta\) is an ordinary blow up. Let \(Q\) be a point in \(C\). Then_
\[\frac{A_{F_{0},\Delta_{F_{0}}}(C)}{S(W_{\bullet,\bullet}^{\widehat{F}_{0}},C )}\geqslant\frac{39}{29}\]
_and_
\[\frac{A_{C,\Delta_{C}}(Q)}{S\big{(}W_{\bullet,\bullet,\bullet}^{\widehat{F}_{ 0},C},Q\big{)}}\geqslant\frac{13}{10}.\]
Proof.: One has
\[\theta^{*}\big(\widetilde{P}(u)|_{\widetilde{F}_{0}}\big)\sim_{\mathbb{R}}\begin{cases}u(\widehat{\mathbf{f}}+\widehat{\mathbf{e}}+C)&u\in[0,1],\\ u(\widehat{\mathbf{f}}+C)+\widehat{\mathbf{e}}&u\in[1,2],\\ u(\widehat{\mathbf{f}}+C)+(3-u)\widehat{\mathbf{e}}&u\in[2,3].\end{cases}\]
This easily implies that \(\hat{t}(u)=u\) and
\[\widehat{N}(u,v)=\begin{cases}0&u\in[0,1],v\in[0,u],\\ 0&u\in[1,2],v\in[0,1],\\ (v-1)\widehat{\mathbf{f}}&u\in[1,2],v\in[1,u],\\ 0&u\in[2,3],v\in[0,3-u],\\ (v+u-3)\widehat{\mathbf{f}}&u\in[2,3],v\in[3-u,u],\end{cases}\]
so that
\[\widehat{P}(u,v)=\begin{cases}u(\widehat{\mathbf{f}}+\widehat{\mathbf{e}})+(u-v)C&u \in[0,1],v\in[0,u],\\ u\widehat{\mathbf{f}}+(u-v)C+\widehat{\mathbf{e}}&u\in[1,2],v\in[0,1],\\ (u-v+1)\widehat{\mathbf{f}}+(u-v)C+\widehat{\mathbf{e}}&u\in[1,2],v\in[1,u],\\ u\widehat{\mathbf{f}}+(u-v)C+\widehat{\mathbf{e}}&u\in[2,3],v\in[0,3-u],\\ (3-v)\widehat{\mathbf{f}}+(u-v)C+(3-u)\widehat{\mathbf{e}}&u\in[2,3],v\in[3-u, u],\end{cases}\]
which gives
\[\left(\widehat{P}(u,v)\right)^{2}=\begin{cases}u^{2}-v^{2}&u\in[0,1],v\in[0,u],\\ -v^{2}+2u-1&u\in[1,2],v\in[0,1],\\ 2u-2v&u\in[1,2],v\in[1,u],\\ -3u^{2}-v^{2}+12u-9&u\in[2,3],v\in[0,3-u],\\ -2u^{2}+2uv+6u-6v&u\in[2,3],v\in[3-u,u].\end{cases}\]
Thus, integrating, we get \(S(W^{\widehat{F}_{0}}_{\bullet,\bullet};C)=\frac{29}{26}\). Note that
\[A_{F_{0},\Delta_{F_{0}}}(C)=\begin{cases}\frac{3}{2}&O\in B_{\widetilde{F}_{0 }},\\ 2&O\not\in B_{\widetilde{F}_{0}}.\end{cases}\]
This gives the first required inequality. Similarly, we compute
\[S\big{(}W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet};Q\big{)}=\frac{9}{26} +F_{Q}\big{(}W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet}\big{)}\]
where
\[F_{Q}\big{(}W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet}\big{)}=\begin{cases} \frac{11}{26}&Q=\widehat{\mathbf{f}}\cap C,\\ 0&\text{otherwise}.\end{cases}\]
Observe that
\[A_{C,\Delta_{C}}(Q)=\begin{cases}\frac{1}{2}&Q\in B_{\widetilde{F}_{0}},\\ 1&Q\not\in B_{\widetilde{F}_{0}}.\end{cases}\]
Moreover, if \(O\in B_{\widetilde{F}_{0}}\cap\mathbf{f}\), the intersection \(C\cap\widehat{\mathbf{f}}\) consists of a single point, which is not contained in \(B_{\widetilde{F}_{0}}\). Thus, we have
\[\frac{A_{C,\Delta_{C}}(Q)}{S\big(W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet};Q\big)}=\begin{cases}\frac{13}{10}&Q=C\cap\widehat{\mathbf{f}},\\ \frac{13}{9}&Q=C\cap B_{\widetilde{F}_{0}},\\ \frac{26}{9}&\text{otherwise},\end{cases}\]
which implies the second required inequality.
Thus, it follows from (5.17) and Lemma 5.18 that \(O=B_{\widetilde{F}_{0}}\cap\mathbf{f}\), so \(\mathbf{f}\) and \(B_{\widetilde{F}_{0}}\) are tangent at the point \(O\). Then \(\theta\) is a weighted blow up with weights \((1,2)\). We have
\[\theta^{*}\big{(}\widetilde{P}(u)\big{|}_{\widetilde{F}_{0}}\big{)}\sim_{ \mathbb{R}}\begin{cases}u(\widehat{\mathbf{f}}+\widehat{\mathbf{e}}+2C)&u\in[ 0,1],\\ u(\widehat{\mathbf{f}}+2C)+\widehat{\mathbf{e}}&u\in[1,2],\\ u(\widehat{\mathbf{f}}+2C)+(3-u)\widehat{\mathbf{e}}&u\in[2,3].\end{cases}\]
This gives \(\hat{t}(u)=2u\). Moreover, we have
\[\widehat{N}(u,v)=\begin{cases}0&u\in[0,1],v\in[0,u],\\ (v-u)(\widehat{\mathbf{f}}+\widehat{\mathbf{e}})&u\in[0,1],v\in[u,2u],\\ 0&u\in[1,2],v\in[0,1],\\ \frac{v-1}{2}\widehat{\mathbf{f}}&u\in[1,2],v\in[1,2u-1],\\ (v-u)\widehat{\mathbf{f}}+(v-2u+1)\widehat{\mathbf{e}}&u\in[1,2],v\in[2u-1,2u],\\ 0&u\in[2,3],v\in[0,3-u],\\ \frac{v+u-3}{2}\widehat{\mathbf{f}}&u\in[2,3],v\in[3-u,3u-3],\\ (v-u)\widehat{\mathbf{f}}+(v+3-3u)\widehat{\mathbf{e}}&u\in[2,3],v\in[3u-3,2u],\end{cases}\]
and
\[\widehat{P}(u,v)=\begin{cases}(2u-v)C+u\widehat{\mathbf{f}}+u\widehat{\mathbf{e}}&u\in[0,1],v\in[0,u],\\ (2u-v)(C+\widehat{\mathbf{f}}+\widehat{\mathbf{e}})&u\in[0,1],v\in[u,2u],\\ (2u-v)C+u\widehat{\mathbf{f}}+\widehat{\mathbf{e}}&u\in[1,2],v\in[0,1],\\ (2u-v)C+\frac{2u-v+1}{2}\widehat{\mathbf{f}}+\widehat{\mathbf{e}}&u\in[1,2],v\in[1,2u-1],\\ (2u-v)(C+\widehat{\mathbf{f}}+\widehat{\mathbf{e}})&u\in[1,2],v\in[2u-1,2u],\\ (2u-v)C+u\widehat{\mathbf{f}}+(3-u)\widehat{\mathbf{e}}&u\in[2,3],v\in[0,3-u],\\ (2u-v)C+\frac{u-v+3}{2}\widehat{\mathbf{f}}+(3-u)\widehat{\mathbf{e}}&u\in[2,3],v\in[3-u,3u-3],\\ (2u-v)(C+\widehat{\mathbf{f}}+\widehat{\mathbf{e}})&u\in[2,3],v\in[3u-3,2u].\end{cases}\]
Then
\[\big(\widehat{P}(u,v)\big)^{2}=\begin{cases}u^{2}-\frac{v^{2}}{2}&u\in[0,1],v\in[0,u],\\ \frac{(2u-v)^{2}}{2}&u\in[0,1],v\in[u,2u],\\ 2u-1-\frac{v^{2}}{2}&u\in[1,2],v\in[0,1],\\ 2u-v-\frac{1}{2}&u\in[1,2],v\in[1,2u-1],\\ \frac{(2u-v)^{2}}{2}&u\in[1,2],v\in[2u-1,2u],\\ 12u-9-3u^{2}-\frac{v^{2}}{2}&u\in[2,3],v\in[0,3-u],\\ \frac{(5u-2v-3)(3-u)}{2}&u\in[2,3],v\in[3-u,3u-3],\\ \frac{(2u-v)^{2}}{2}&u\in[2,3],v\in[3u-3,2u].\end{cases}\]
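Explicitly, the eight regions contribute
\[\int\limits_{0}^{3}\int\limits_{0}^{\hat{t}(u)}\big(\widehat{P}(u,v)\big)^{2}dvdu=\frac{5}{24}+\frac{1}{24}+\frac{11}{6}+\frac{7}{6}+\frac{1}{6}+\frac{29}{24}+\frac{7}{2}+\frac{1}{24}=\frac{49}{6}.\]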
Now, integrating, we get \(S(W^{\widehat{F}_{0}}_{\bullet,\bullet};C)=\frac{49}{26}\). Thus, since \(A_{F_{0},\Delta_{F_{0}}}(C)=2\), we get
\[\frac{A_{F_{0},\Delta_{F_{0}}}(C)}{S(W^{\widehat{F}_{0}}_{\bullet,\bullet};C )}=\frac{52}{49},\]
so it follows from (5.17) that there is a point \(Q\in C\) such that \(S(W^{\widehat{F}_{0},C}_{\bullet,\bullet};Q)\geqslant A_{C,\Delta_{C}}(Q)\). On the other hand, we compute
\[S\big{(}W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet};Q\big{)}=\frac{9}{5 2}+F_{Q}\big{(}W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet}\big{)}\]
where
\[F_{Q}\big{(}W^{\widehat{F}_{0},C}_{\bullet,\bullet,\bullet}\big{)}=\begin{cases} \frac{3}{4}&Q=C\cap\widehat{\mathbf{f}},\\ 0&\text{otherwise}.\end{cases}\]
Recall that \(B_{\widehat{F}_{0}}\) and \(\widehat{\mathbf{f}}\) are disjoint and do not contain the singular point of the surface \(\widehat{F}_{0}\). Moreover, we have
\[A_{C,\Delta_{C}}(Q)=\begin{cases}\frac{1}{2}&Q=C\cap B_{\widehat{F}_{0}},\\ \frac{1}{2}&Q=\operatorname{Sing}(\widehat{F}_{0}),\\ 1&\text{otherwise}.\end{cases}\]
Thus, summarizing, we get
\[\frac{A_{C,\Delta_{C}}(Q)}{S\big{(}W_{\bullet,\bullet,\bullet}^{\widehat{F}_{ 0},C};Q\big{)}}=\begin{cases}\frac{13}{12}&Q=C\cap\widehat{\mathbf{f}},\\ \frac{26}{9}&Q=C\cap B_{\widehat{F}_{0}},\\ \frac{26}{9}&Q=\operatorname{Sing}(\widehat{F}_{0}),\\ \frac{52}{9}&\text{otherwise}.\end{cases}\]
In particular, we see that \(S(W_{\bullet,\bullet,\bullet}^{\widehat{F}_{0},C};Q)<A_{C,\Delta_{C}}(Q)\) in every possible case. The obtained contradiction completes the proof of Proposition 5.7.
### Proof of Proposition 5.8
Let us use notations introduced earlier in this section before Proposition 5.8, and let \(P\) be a singular point of type \(\mathbb{A}_{2}\) of the surface \(B\in|2S^{+}|\). Then, up to a change of coordinates, we may assume that \(P=(0,0,1,0,1)\) and
\[f_{4}(x_{1},x_{2},1)=x_{1}^{2}+x_{2}^{3}+\text{higher order terms}.\]
Let \(\rho\colon Y_{0}\to Y\) be the blow up of the point \(P\) with weights \((3,2,3)\) with respect to variables \((x_{1},x_{2},x_{4})\). We may describe \(Y_{0}\) as a toric variety given as \((\mathbb{C}^{6}\setminus Z(I_{0}))/\mathbb{G}_{m}^{3}\), where the action is given by the matrix
\[M=\left(\begin{array}{cccccc}x_{0}&x_{1}&x_{2}&x_{3}&x_{4}&x_{5}\\ 0&1&1&1&2&0\\ 0&0&0&0&1&1\\ 1&0&1&3&3&0\end{array}\right),\]
where the irrelevant ideal \(I_{0}=\langle x_{1},x_{2},x_{3}\rangle\cap\langle x_{1},x_{2},x_{4}\rangle \cap\langle x_{4},x_{5}\rangle\cap\langle x_{0},x_{3}\rangle\cap\langle x_{ 0},x_{5}\rangle\). To describe the fan of the toric threefold \(Y_{0}\), we denote by \(v_{i}\) the vector generating the ray corresponding to \(x_{i}\). Then
\[v_{0} =(3,2,3), v_{1} =(1,0,0), v_{2} =(0,1,0),\] \[v_{3} =(-1,-1,-2), v_{4} =(0,0,1), v_{5} =(0,0,-1),\]
and the cone structure can be visualized with the following diagram:
Let \(F_{i}=\{x_{i}=0\}\subset Y_{0}\) and \(C_{ij}=F_{i}\cap F_{j}\) for \(i\neq j\) such that \(\dim(F_{i}\cap F_{j})=1\). Then
\[\overline{\operatorname{Eff}(Y_{0})}=\langle F_{0},F_{1},F_{5}\rangle\]
and
\[\overline{\operatorname{NE}(Y_{0})}=\langle C_{12},C_{15},C_{01}\rangle.\]
Intersections of divisors \(F_{0}\), \(F_{1}\), \(F_{5}\) are described in the following table:
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline F_{0}^{3}&F_{0}^{2}F_{1}&F_{0}^{ 2}F_{5}&F_{0}F_{1}^{2}&F_{0}F_{1}F_{5}&F_{0}F_{5}^{2}&F_{1}^{3}&F_{1}^{2}F_{5}& F_{1}F_{5}^{2}&F_{5}^{3}\\ \hline\frac{1}{18}&-\frac{1}{6}&0&\frac{1}{2}&0&0&-\frac{3}{2}&1&-2&4\\ \hline\end{array}\]
This gives the following intersection table:
\[\begin{array}{|c||c|c|c|c|c|}\hline\bullet&F_{0}&F_{1}&F_{5}\\ \hline\hline C_{12}&\frac{1}{3}&-1&1\\ \hline C_{15}&0&1&-2\\ \hline C_{01}&-\frac{1}{6}&\frac{1}{2}&0\\ \hline\end{array}\]
Now, we set \(A=-(K_{Y}+\Delta)\). Take \(u\in\mathbb{R}_{\geqslant 0}\). Set \(L(u)=\rho^{*}(A)-uF_{0}\). Then
\[L(u)\sim_{\mathbb{R}}(9-u)F_{0}+3F_{1}+F_{5},\]
so \(L(u)\) is pseudo-effective \(\iff\)\(u\leqslant 9\). Let us find the Zariski decomposition for \(L(u)\).
Observe that \(L(u)\) is nef for \(u\in[0,3]\). Since \(L(3)\cdot C_{12}=0\) and \(C_{12}\) is unique in its numerical equivalence class, we consider a small \(\mathbb{Q}\)-factorial modification \(Y_{0}\dashrightarrow Y_{1}\) along the curve \(C_{12}\) such that
\[Y_{1}=\big{(}\mathbb{C}^{6}\setminus Z(I_{1})\big{)}/\mathbb{G}_{m}^{3},\]
where the torus-action is the same, and the irrelevant ideal
\[I_{1}=\langle x_{1},x_{2}\rangle\cap\langle x_{4},x_{5}\rangle\cap\langle x_{ 0},x_{3}\rangle.\]
The fan of \(Y_{1}\) is generated by the same vectors, but the cone structure is different; intersections of divisors on \(Y_{1}\) are described in the following table:
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline F_{0}^{3}&F_{0}^{2}F_{1}&F_{0}^{2}F_{5}&F_{0}F_{1}^{2}&F_{0}F_{1}F_{5}&F_{0}F_{5}^{2}&F_{1}^{3}&F_{1}^{2}F_{5}&F_{1}F_{5}^{2}&F_{5}^{3}\\ \hline 0&0&-\frac{1}{6}&0&\frac{1}{2}&-\frac{1}{2}&0&-\frac{1}{2}&-\frac{1}{2}&\frac{5}{2}\\ \hline\end{array}\]
This gives the following intersection table:
\[\begin{array}{|c||c|c|c|}\hline\bullet&F_{0}&F_{1}&F_{5}\\ \hline\hline C_{05}&-\frac{1}{6}&\frac{1}{2}&-\frac{1}{2}\\ \hline C_{15}&\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}\\ \hline C_{01}&0&0&\frac{1}{2}\\ \hline\end{array}\]
Thus, we see that the proper transform on \(Y_{1}\) of the divisor \(L(u)\) is nef for \(u\in[3,5]\), and it intersects the curve \(C_{15}\) trivially for \(u=5\). Since \(C_{15}\) is unique in its numerical equivalence class, we consider another small \(\mathbb{Q}\)-factorial modification \(Y_{1}\dashrightarrow Y_{2}\) such that
\[Y_{2}=\big{(}\mathbb{C}^{6}\setminus Z(I_{2})\big{)}/\mathbb{G}_{m}^{3},\]
where the torus-action is again given by the matrix \(M\) and the irrelevant ideal
\[I_{2}=\langle x_{1},x_{2}\rangle\cap\langle x_{4},x_{5}\rangle\cap\langle x_{ 1},x_{5}\rangle\cap\langle x_{0},x_{2},x_{3}\rangle\cap\langle x_{0},x_{3},x_ {4}\rangle.\]
Then the fan of \(Y_{2}\) is generated by the same vectors, but the cone structure is different; intersections of divisors on \(Y_{2}\) are described in the following tables:
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline F_{0}^{3}&F_{0}^{2}F_{1}&F_{0}^{2}F_{5}&F_{0}F_{1}^{2}&F_{0}F_{1}F_{5}&F_{0}F_{5}^{2}&F_{1}^{3}&F_{1}^{2}F_{5}&F_{1}F_{5}^{2}&F_{5}^{3}\\ \hline -\frac{1}{2}&\frac{1}{2}&\frac{1}{3}&-\frac{1}{2}&0&-1&\frac{1}{2}&0&0&3\\ \hline\end{array}\]
\[\begin{array}{|c||c|c|c|}\hline\bullet&F_{0}&F_{1}&F_{5}\\ \hline\hline C_{05}&\frac{1}{3}&0&-1\\ \hline C_{03}&-\frac{2}{3}&1&1\\ \hline C_{01}&\frac{1}{2}&-\frac{1}{2}&0\\ \hline\end{array}\]
The proper transform on \(Y_{2}\) of the divisor \(L(u)\) is nef for \(u\in[5,6]\), and it intersects both curves \(C_{01}\) and \(C_{05}\) trivially for \(u=6\). Furthermore, if \(u\in[6,9]\), then the negative part of the Zariski decomposition of the divisor \(L(u)\) on the threefold \(Y_{2}\) is
\[N(u)=(u-6)F_{1}+\frac{u-6}{3}F_{5},\]
while the positive part is \(P(u)\sim_{\mathbb{R}}(9-u)(F_{0}+F_{1}+\frac{1}{3}F_{5})\). This gives
\[\operatorname{vol}\bigl(L(u)\bigr)=\begin{cases}13-\frac{u^{3}}{18}&u\in[0,3],\\ \frac{-u^{2}+3u+23}{2}&u\in[3,5],\\ \frac{u^{3}}{2}-8u^{2}+39u-51&u\in[5,6],\\ -\frac{u^{3}}{9}+3u^{2}-27u+81&u\in[6,9].\end{cases}\]
Integrating, we get \(S_{A}(F_{0})=\frac{127}{26}\). Since \(A_{Y,\Delta}(F_{0})=5\), we get \(\frac{A_{Y,\Delta}(F_{0})}{S_{A}(F_{0})}=\frac{130}{127}>1\).
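In detail, \(A_{Y,\Delta}(F_{0})=(3+2+3)-\frac{1}{2}\operatorname{ord}_{F_{0}}\bigl(\rho^{*}(B)\bigr)=8-\frac{1}{2}\cdot 6=5\), since the local equation of \(B\) at \(P\) has weighted multiplicity \(6\) with respect to the weights \((3,2,3)\), while the piecewise integration gives
\[\int_{0}^{9}\operatorname{vol}\bigl(L(u)\bigr)du=\frac{303}{8}+\frac{56}{3}+\frac{113}{24}+\frac{9}{4}=\frac{127}{2},\]
the four summands being the integrals over \([0,3]\), \([3,5]\), \([5,6]\) and \([6,9]\), respectively.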
Next, we construct a partial common toric resolution for \(Y_{0}\), \(Y_{1}\), \(Y_{2}\), which is easy to see from the fans: we add the following rays:
\[v_{6} =(3,2,0)\in\langle v_{1},v_{2}\rangle\cap\langle v_{0},v_{5}\rangle,\] \[v_{7} =(1,0,-1)\in\langle v_{1},v_{5}\rangle\cap\langle v_{0},v_{3}\rangle,\] \[v_{8} =(3,1,0)\in\langle v_{1},v_{2}\rangle\cap\langle v_{0},v_{3}\rangle.\]
Set \(\widetilde{Y}\) to be the toric variety corresponding to \(v_{0},\dots,v_{8}\) with the following cone structure:
Then \(Y_{0}\), \(Y_{1}\), \(Y_{2}\) and \(\widetilde{Y}\) fit into a commutative diagram of toric maps, which can be described as follows:
\begin{tabular}{|c|c|c|c|c|} \hline map & center & weights & exceptional divisor & relation \\ \hline \hline \(\psi_{0}\) & \(x_{1}=x_{2}=0\) & \((3,2)\) & \(\{x_{6}=0\}\) & \(3v_{1}+2v_{2}=v_{6}\) \\ \hline \(\psi_{1}\) & \(x_{0}=x_{5}=0\) & \((1,3)\) & \(\{x_{6}=0\}\) & \(v_{0}+3v_{5}=v_{6}\) \\ \hline \(\sigma_{1}\) & \(x_{1}=x_{5}=0\) & \((1,1)\) & \(\{x_{7}=0\}\) & \(v_{1}+v_{5}=v_{7}\) \\ \hline \(\sigma_{2}\) & \(x_{0}=x_{3}=0\) & \((1,2)\) & \(\{x_{7}=0\}\) & \(v_{0}+2v_{3}=v_{7}\) \\ \hline \(\psi^{\prime}\) & \(x_{1}=x_{5}=0\) & \((1,1)\) & \(\{x_{7}=0\}\) & \(v_{1}+v_{5}=v_{7}\) \\ \hline \(\sigma^{\prime}\) & \(x_{0}=x_{5}=0\) & \((1,3)\) & \(\{x_{6}=0\}\) & \(v_{0}+3v_{5}=v_{6}\) \\ \hline \(\psi_{01}\) & \(x_{1}=x_{6}=0\) & \(\frac{1}{2}(3,1)\) & \(\{x_{8}=0\}\) & \(3v_{1}+v_{6}=2v_{8}\) \\ \hline \(\sigma_{12}\) & \(x_{0}=x_{7}=0\) & \(\frac{1}{2}(1,3)\) & \(\{x_{8}=0\}\) & \(v_{0}+3v_{7}=2v_{8}\) \\ \hline \end{tabular}

Here, \(\frac{1}{2}(a,b)\) indicates that the variety has an \(\mathbb{A}_{1}\)-singularity along the center of the blow up.
Now, we set \(\varphi_{0}=\psi_{01}\circ\psi^{\prime}\circ\psi_{0}\), \(\varphi_{1}=\psi_{01}\circ\psi^{\prime}\circ\psi_{1}\), \(\varphi_{2}=\sigma_{12}\circ\sigma^{\prime}\circ\sigma_{2}\). Let \(\widetilde{F}_{i}\) be the toric divisor \(\{x_{i}=0\}\subset\widetilde{Y}\). Then
\[\varphi_{0}^{*}(F_{0})\sim_{\mathbb{Q}}\widetilde{F}_{0},\] \[\varphi_{0}^{*}(F_{1})\sim_{\mathbb{Q}}\widetilde{F}_{1}+3 \widetilde{F}_{6}+\widetilde{F}_{7}+3\widetilde{F}_{8},\] \[\varphi_{0}^{*}(F_{5})\sim_{\mathbb{Q}}\widetilde{F}_{5}+ \widetilde{F}_{7},\] \[\varphi_{1}^{*}(F_{0})\sim_{\mathbb{Q}}\widetilde{F}_{0}+ \widetilde{F}_{6}+\frac{1}{2}\widetilde{F}_{8},\] \[\varphi_{1}^{*}(F_{1})\sim_{\mathbb{Q}}\widetilde{F}_{1}+ \widetilde{F}_{7}+\frac{3}{2}\widetilde{F}_{8},\] \[\varphi_{1}^{*}(F_{5})\sim_{\mathbb{Q}}\widetilde{F}_{5}+3 \widetilde{F}_{6}+\widetilde{F}_{7}+\frac{3}{2}\widetilde{F}_{8},\] \[\varphi_{2}^{*}(F_{0})\sim_{\mathbb{Q}}\widetilde{F}_{0}+ \widetilde{F}_{6}+\widetilde{F}_{7}+2\widetilde{F}_{8},\] \[\varphi_{2}^{*}(F_{1})\sim_{\mathbb{Q}}\widetilde{F}_{1},\] \[\varphi_{2}^{*}(F_{5})\sim_{\mathbb{Q}}\widetilde{F}_{5}+3 \widetilde{F}_{6}.\]
Using this, we describe the Zariski decomposition of the divisor \(\varphi_{0}^{*}(L(u))\) as follows:
\[\widetilde{P}(u)\sim_{\mathbb{R}}\begin{cases}(9-u)\widetilde{F}_{0}+3 \widetilde{F}_{1}+\widetilde{F}_{5}+9\widetilde{F}_{6}+4\widetilde{F}_{7}+9 \widetilde{F}_{8}&u\in[0,3],\\ (9-u)\widetilde{F}_{0}+3\widetilde{F}_{1}+\widetilde{F}_{5}+(12-u)\widetilde{ F}_{6}+4\widetilde{F}_{7}+\frac{21-u}{2}\widetilde{F}_{8}&u\in[3,5],\\ (9-u)\widetilde{F}_{0}+3\widetilde{F}_{1}+\widetilde{F}_{5}+(12-u)\widetilde{ F}_{6}+(9-u)\widetilde{F}_{7}+2(9-u)\widetilde{F}_{8}&u\in[5,6],\\ (9-u)(\widetilde{F}_{0}+\widetilde{F}_{1}+\frac{1}{3}\widetilde{F}_{5}+2 \widetilde{F}_{6}+\widetilde{F}_{7}+2\widetilde{F}_{8})&u\in[6,9],\end{cases}\]
and
\[\widetilde{N}(u)=\begin{cases}0&u\in[0,3],\\ (u-3)\widetilde{F}_{6}+\frac{u-3}{2}\widetilde{F}_{8}&u\in[3,5],\\ (u-3)\widetilde{F}_{6}+(u-5)\widetilde{F}_{7}+(2u-9)\widetilde{F}_{8}&u\in[5,6],\\ (u-6)\widetilde{F}_{1}+\frac{u-6}{3}\widetilde{F}_{5}+(2u-9)\widetilde{F}_{6}+(u-5)\widetilde{F}_{7}+(2u-9)\widetilde{F}_{8}&u\in[6,9],\end{cases}\]
where \(\widetilde{P}(u)\) is the positive part, and \(\widetilde{N}(u)\) is the negative part.
Now, we describe \(\widetilde{P}(u)|_{\widetilde{F}_{0}}\) and \(\widetilde{N}(u)|_{\widetilde{F}_{0}}\) for every \(u\in[0,9]\). We have \(\widetilde{Y}=(\mathbb{C}^{9}\setminus\widetilde{I})/\mathbb{G}_{m}^{6}\), where the torus action is given by the matrix
\[\widetilde{M}=\left(\begin{array}{ccccccccc}x_{0}&x_{1}&x_{2}&x_{3}&x_{4}&x_ {5}&x_{6}&x_{7}&x_{8}\\ 0&1&1&1&2&0&0&0&0\\ 0&0&0&0&1&1&0&0&0\\ 1&0&1&3&3&0&0&0&0\\ 0&0&1&3&6&0&1&0&0\\ 0&0&1&1&3&0&0&1&0\\ 0&0&2&3&6&0&0&0&1\end{array}\right),\]
and the irrelevant ideal
\[\widetilde{I}= \langle x_{0},x_{3}\rangle\cap\langle x_{0},x_{5}\rangle\cap \langle x_{0},x_{7}\rangle\cap\langle x_{1},x_{2}\rangle\cap\langle x_{1},x_ {5}\rangle\cap\langle x_{1},x_{6}\rangle\cap\langle x_{2},x_{7}\rangle\cap \langle x_{2},x_{8}\rangle\] \[\cap\langle x_{3},x_{6}\rangle\cap\langle x_{3},x_{8}\rangle \cap\langle x_{4},x_{5}\rangle\cap\langle x_{4},x_{6}\rangle\cap\langle x_{4},x_{7}\rangle\cap\langle x_{4},x_{8}\rangle\cap\langle x_{5},x_{8}\rangle.\]
To obtain a similar description of the surface \(\widetilde{F}_{0}\), set \(x_{0}=0\), eliminate the first row in \(\widetilde{M}\), and set \(x_{3}=x_{5}=x_{7}=1\), since \(\widetilde{I}\subset\langle x_{0},x_{3}\rangle\cap\langle x_{0},x_{5}\rangle \cap\langle x_{0},x_{7}\rangle\). The resulting matrix is
\[\left(\begin{array}{ccccc}x_{1}&x_{2}&x_{4}&x_{6}&x_{8}\\ 3&2&3&0&0\\ 0&0&3&1&0\\ 0&1&3&0&1\end{array}\right).\]
Using this, we see that \(\widetilde{F}_{0}=(\mathbb{C}^{5}\setminus Z(I_{\widetilde{F}_{0}}))/\mathbb{ G}_{m}^{3}\), where the torus action is given by
\[\left(\begin{array}{ccccc}z_{1}&z_{2}&z_{3}&z_{4}&z_{5}\\ 1&1&2&0&0\\ 0&1&0&1&0\\ 0&1&1&0&1\end{array}\right),\]
and \(I_{\widetilde{F}_{0}}=\langle z_{1},z_{3}\rangle\cap\langle z_{1},z_{4} \rangle\cap\langle z_{2},z_{4}\rangle\cap\langle z_{2},z_{5}\rangle\cap \langle z_{3},z_{5}\rangle\). We can see from the matrices that
\[x_{1}\big{|}_{\widetilde{F}_{0}}=z_{1},\quad x_{2}^{3}\big{|}_{\widetilde{F}_ {0}}=z_{3},\quad x_{4}\big{|}_{\widetilde{F}_{0}}=z_{2},\quad x_{6}^{3}\big{|}_ {\widetilde{F}_{0}}=z_{4},\quad x_{8}^{3}\big{|}_{\widetilde{F}_{0}}=z_{5}.\]
The fan of the toric surface \(\widetilde{F}_{0}\) is given by
\[w_{1}=(1,0),\quad w_{2}=(-1,-2),\quad w_{3}=(0,1),\quad w_{4}=(1,2),\quad w_{5 }=(1,1)\]
with the obvious cone structure. For \(i\in\{1,2,3,4,5\}\), let \(C_{i}\) be the curve in \(\widetilde{F}_{0}\) given by \(z_{i}=0\). The cone of effective divisors of the surface \(\widetilde{F}_{0}\) is generated by the curves \(C_{1}\), \(C_{4}\), \(C_{5}\), and their intersection form is given in the following table:
\[\begin{array}{|c||c|c|c|}\hline\bullet&C_{1}&C_{4}&C_{5}\\ \hline\hline C_{1}&-\frac{1}{2}&0&1\\ \hline C_{4}&0&-1&1\\ \hline C_{5}&1&1&-2\end{array}\]
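These entries can be read off the fan in the standard toric fashion: for consecutive rays \(w_{i-1}\), \(w_{i}\), \(w_{i+1}\) (in counterclockwise order) one has
\[C_{i}^{2}=-\frac{\det(w_{i-1},w_{i+1})}{\det(w_{i-1},w_{i})\det(w_{i},w_{i+1})},\qquad C_{i}\cdot C_{i+1}=\frac{1}{\det(w_{i},w_{i+1})}.\]
For example, \(\det(w_{2},w_{5})=1\), \(\det(w_{2},w_{1})=2\) and \(\det(w_{1},w_{5})=1\) give \(C_{1}^{2}=-\frac{1}{2}\), while \(w_{1}+w_{4}=2w_{5}\) gives \(C_{5}^{2}=-2\).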
Further, we compute
\[\widetilde{P}(u)\big{|}_{\widetilde{F}_{0}}\sim_{\mathbb{R}}\begin{cases}\frac{u}{3 }C_{1}+\frac{u}{3}C_{4}+\frac{u}{3}C_{5}&u\in[0,3]\\ \frac{u}{3}C_{1}+C_{4}+(\frac{1}{2}+\frac{u}{6})C_{5}&u\in[3,5]\\ \frac{u}{3}C_{1}+C_{4}+(3-\frac{u}{3})C_{5}&u\in[5,6]\\ (6-\frac{2u}{3})C_{1}+(3-\frac{u}{3})C_{4}+(3-\frac{u}{3})C_{5}&u\in[6,9],\end{cases}\]
and
\[\widetilde{N}(u)\big|_{\widetilde{F}_{0}}=\begin{cases}0&u\in[0,3],\\ \frac{u-3}{6}(2C_{4}+C_{5})&u\in[3,5],\\ \frac{u-3}{3}C_{4}+\frac{2u-9}{3}C_{5}&u\in[5,6],\\ (u-6)C_{1}+\frac{2u-9}{3}(C_{4}+C_{5})&u\in[6,9].\end{cases}\]
Let \(\theta\colon\widetilde{F}_{0}\to F_{0}\) be the morphism induced by \(\varphi_{0}\). Then \(\theta\) is a birational morphism that contracts \(C_{4}\) and \(C_{5}\). Set \(\overline{C}_{1}=\theta(C_{1})\), \(\overline{C}_{2}=\theta(C_{2})\), \(\overline{C}_{3}=\theta(C_{3})\), identify \(F_{0}=\mathbb{P}(1,1,2)\) with coordinates \(\bar{z}_{1}\), \(\bar{z}_{2}\), \(\bar{z}_{3}\) such that \(\overline{C}_{1}=\{\bar{z}_{1}=0\}\), \(\overline{C}_{2}=\{\bar{z}_{2}=0\}\), \(\overline{C}_{3}=\{\bar{z}_{3}=0\}\), where \(\bar{z}_{1}\) and \(\bar{z}_{2}\) are coordinates of weight \(1\), and \(\bar{z}_{3}\) is a coordinate of weight \(2\). Then
\[\theta\big{(}C_{4}\big{)}=\theta\big{(}C_{5}\big{)}=\overline{C}_{1}\cap \overline{C}_{3}=[0:1:0],\]
and \(\theta\) is a composition of the ordinary blow up at the point \([0:1:0]\) with the consecutive blow up at the point on the proper transform of the curve \(\overline{C}_{3}\). Note that \(C_{5}\) is the proper transform of the exceptional curve for the first blow up and \(C_{4}\) is the exceptional curve for the second blow up.
Let \(B_{0}\) be the proper transform on \(Y_{0}\) of the surface \(B\). Set \(\Delta_{0}=\frac{1}{2}B_{0}\) and \(B_{F_{0}}=B_{0}|_{F_{0}}\). Then, changing the coordinates \(\bar{z}_{1}\), \(\bar{z}_{2}\), \(\bar{z}_{3}\), we may also assume that
\[B_{F_{0}}=\big{\{}\overline{z}_{1}^{2}+\overline{z}_{2}^{2}=\overline{z}_{3} \big{\}}\subset F_{0}.\]
This curve is smooth, it does not contain the singular point of \(F_{0}\), and \([0:1:0]\not\in B_{F_{0}}\). The geometry of the surface \(F_{0}\) can be illustrated by the following picture:
Note that the surface \(Y_{0}\) is singular along the curve \(\overline{C}_{3}\). We set
\[\Delta_{F_{0}}=\frac{1}{2}B_{F_{0}}+\frac{2}{3}\overline{C}_{3}.\]
Then \(K_{F_{0}}+\Delta_{F_{0}}\sim_{\mathbb{Q}}(K_{Y_{0}}+\Delta_{0})|_{F_{0}}\), and \(\Delta_{F_{0}}\) is the corresponding different [33].
Now, we are ready to apply [2, 6, 17]. Let \(Q\) be a point in \(F_{0}\), let \(C\) be a smooth curve in the surface \(F_{0}\) that contains \(Q\), let \(\widetilde{C}\) be its proper transform on \(\widetilde{F}_{0}\). For \(u\in[0,9]\), let
\[t(u)=\sup\Big\{v\in\mathbb{R}_{\geqslant 0}\ \big|\ \text{the divisor}\ \widetilde{P}(u)\big|_{\widetilde{F}_{0}}-v\widetilde{C}\ \text{is pseudo-effective}\Big\}.\]
For real number \(v\in[0,t(u)]\), let \(P(u,v)\) and \(N(u,v)\) be the positive part and the negative part of the Zariski decomposition of the divisor \(\widetilde{P}(u)|_{\widetilde{F}_{0}}-v\widetilde{C}\), respectively. Set
\[S_{L}\big{(}W^{F_{0}}_{\bullet,\bullet};C\big{)}=\frac{3}{A^{3}}\int\limits_{ 0}^{9}\big{(}\widetilde{P}(u)\big{|}_{\widetilde{F}_{0}}\big{)}^{2}\text{ord} _{\widetilde{C}}\big{(}\widetilde{N}(u)\big{|}_{\widetilde{F}_{0}}\big{)}du+ \frac{3}{A^{3}}\int\limits_{0}^{9}\int\limits_{0}^{t(u)}\big{(}P(u,v)\big{)}^ {2}dvdu.\]
Write \(\theta^{*}(C)=\widetilde{C}+\Sigma\) for an effective divisor \(\Sigma\) on the surface \(\widetilde{F}_{0}\). For \(u\in[0,9]\), write
\[\widetilde{N}(u)\big{|}_{\widetilde{F}_{0}}=d(u)\widetilde{C}+N^{\prime}(u),\]
where \(d(u)=\text{ord}_{\widetilde{C}}(\widetilde{N}(u)|_{\widetilde{F}_{0}})\), and \(N^{\prime}(u)\) is an effective divisor on \(\widetilde{F}_{0}.\) Set
\[S\big{(}W^{F_{0},C}_{\bullet,\bullet};Q\big{)}=\frac{3}{A^{3}}\int\limits_{0} ^{9}\int\limits_{0}^{t(u)}\big{(}P(u,v)\cdot\widetilde{C}\big{)}^{2}dvdu+F_{Q }\big{(}W^{F_{0},C}_{\bullet,\bullet}\big{)}\]
for
\[F_{Q}\big{(}W^{F_{0},C}_{\bullet,\bullet}\big{)}=\frac{6}{A^{3}}\int\limits_{ 0}^{9}\int\limits_{0}^{t(u)}\big{(}P(u,v)\cdot\widetilde{C}\big{)}\cdot\text{ ord}_{Q}\Big{(}\big{(}N^{\prime}(u)+N(u,v)-(v+d(u))\Sigma\big{)}\big{|}_{ \widetilde{C}}\Big{)}dvdu,\]
where we consider \(Q\) as a point in \(\widetilde{C}\) using the isomorphism \(\widetilde{C}\cong C\) induced by \(\theta\).
We will choose \(C\) such that the pair \((F_{0},C+\Delta_{F_{0}}-\text{ord}_{C}(\Delta_{F_{0}})C)\) has purely log terminal singularities. In this case, the curve \(C\) is equipped with an effective divisor \(\Delta_{C}\) such that
\[K_{C}+\Delta_{C}\sim_{\mathbb{Q}}\big{(}K_{F_{0}}+C+\Delta_{F_{0}}-\text{ord} _{C}(\Delta_{F_{0}})C\big{)}\big{|}_{C},\]
and the pair \((C,\Delta_{C})\) has Kawamata log terminal singularities. The \(\mathbb{Q}\)-divisor \(\Delta_{C}\) is known as the different, and it can be computed locally near any point in \(C\), see [33] for details.
Let \(\mathbf{F}\) be a prime divisor over \(Y\) such that \(P=C_{Y}(\mathbf{F})\). Recall that
\[\beta_{Y,\Delta}(\mathbf{F})=A_{Y,\Delta}(\mathbf{F})-S_{A}(\mathbf{F})=A_{Y, \Delta}(\mathbf{F})-\frac{1}{A^{3}}\int_{0}^{\infty}\text{vol}\big{(}A-u \mathbf{F}\big{)}du.\]
Suppose \(\beta_{Y,\Delta}(\mathbf{F})\leqslant 0\). Then, using [17, Corollary 4.18], we obtain
\[1\geqslant\frac{A_{Y,\Delta}(\mathbf{F})}{S_{A}(\mathbf{F})}\geqslant\delta_{P}(Y,\Delta)\geqslant\min\left\{\frac{A_{Y,\Delta}(F_{0})}{S_{A}(F_{0})},\inf_{Q\in F_{0}}\min\left\{\frac{A_{F_{0},\Delta_{F_{0}}}(C)}{S_{A}(W^{F_{0}}_{\bullet,\bullet};C)},\frac{A_{C,\Delta_{C}}(Q)}{S(W^{F_{0},C}_{\bullet,\bullet,\bullet};Q)}\right\}\right\},\]
where the choice of \(C\) in the infimum depends on \(Q\). Thus, since \(\frac{A_{Y,\Delta}(F_{0})}{S_{A}(F_{0})}\geqslant 1\), we have
\[\inf_{Q\in F_{0}}\min\left\{\frac{A_{F_{0},\Delta_{F_{0}}}(C)}{S_{A}(W^{F_{0}}_{\bullet,\bullet};C)},\frac{A_{C,\Delta_{C}}(Q)}{S(W^{F_{0},C}_{\bullet,\bullet,\bullet};Q)}\right\}\leqslant 1.\]
In fact, since \(\frac{A_{Y,\Delta}(F_{0})}{S_{A}(F_{0})}=\frac{130}{127}>1\), it follows from [17, Corollary 4.18] and [2, Theorem 3.3] that we have a strict inequality:
\[\inf_{Q\in F_{0}}\min\left\{\frac{A_{F_{0},\Delta_{F_{0}}}(C)}{S_{A}(W_{\bullet,\bullet}^{F_{0}};C)},\frac{A_{C,\Delta_{C}}(Q)}{S(W_{\bullet,\bullet,\bullet}^ {F_{0},C};Q)}\right\}<1.\]
Let us use this to obtain a contradiction, which would finish the proof of Proposition 5.8.
Namely, we will show that for every point \(Q\in F_{0}\), there exists a smooth irreducible curve \(C\subset F_{0}\) such that \(Q\in C\), the log pair \((F_{0},C+\Delta_{F_{0}}-\mathrm{ord}_{C}(\Delta_{F_{0}})C)\) has purely log terminal singularities, and the following two inequalities hold:
\[S_{A}\big{(}W_{\bullet,\bullet}^{F_{0}};C\big{)}\leqslant A_{F_{0},\Delta_{F_ {0}}}(C) \tag{5.19}\]
and
\[S(W_{\bullet,\bullet,\bullet}^{F_{0},C};Q)\leqslant A_{C,\Delta_{C}}(Q). \tag{5.20}\]
To be precise, we will choose the curve \(C\) as follows:
* if \(Q\in\overline{C}_{1}\), we let \(C=\overline{C}_{1}\),
* if \(Q\not\in\overline{C}_{1}\) and \(Q\in\overline{C}_{3}\), we let \(C=\overline{C}_{3}\),
* if \(Q\not\in\overline{C}_{1}\cup\overline{C}_{3}\), we let \(C\) be the unique curve in \(|\overline{C}_{1}|\) such that \(Q\in C\).
**Lemma 5.21**.: _Let \(Q\) be a point in \(\overline{C}_{1}\). Set \(C=\overline{C}_{1}\). Then (5.19) and (5.20) hold._
Proof.: Note that \(A_{F_{0},\Delta_{F_{0}}}(C)=1\) and \(\Sigma=C_{4}+C_{5}\). We have
\[d(u)=\begin{cases}0&u\in[0,6]\\ u-6&u\in[6,9],\end{cases}\]
and
\[t(u)=\begin{cases}\frac{u}{3}&u\in[0,6],\\ 6-\frac{2u}{3}&u\in[6,9].\end{cases}\]
Moreover we have
\[N(u,v)=\begin{cases}v(C_{4}+C_{5})&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ \frac{v}{2}C_{5}&u\in[3,5],\ v\in[0,\frac{u}{3}-1],\\ \frac{3v+3-u}{3}C_{4}+\frac{6v+3-u}{6}C_{5}&u\in[3,5],\ v\in[\frac{u}{3}-1, \frac{u}{3}],\\ 0&u\in[5,6],\ v\in[0,u-5],\\ \frac{v+5-u}{2}C_{5}&u\in[5,6],\ v\in[u-5,\frac{u}{3}-1],\\ \frac{3v+3-u}{3}C_{4}+\frac{3v+9-2u}{3}C_{5}&u\in[5,6],\ v\in[\frac{u}{3}-1, \frac{u}{3}],\\ 0&u\in[6,9],\ v\in[0,3-\frac{u}{3}],\\ \frac{3v+u-9}{3}(C_{4}+C_{5})&u\in[6,9],\ v\in[3-\frac{u}{3},6-\frac{2u}{3}], \end{cases}\]
\[P(u,v)\sim_{\mathbb{R}}\begin{cases}\frac{u-3v}{3}(C_{1}+C_{4}+C_{5})&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ \frac{u-3v}{3}C_{1}+C_{4}+\frac{3+u-3v}{6}C_{5}&u\in[3,5],\ v\in[0,\frac{u}{3}-1],\\ \frac{u-3v}{3}(C_{1}+C_{4}+C_{5})&u\in[3,5],\ v\in[\frac{u}{3}-1,\frac{u}{3}],\\ \frac{u-3v}{3}C_{1}+C_{4}+\frac{9-u}{3}C_{5}&u\in[5,6],\ v\in[0,u-5],\\ \frac{u-3v}{3}C_{1}+C_{4}+\frac{3+u-3v}{6}C_{5}&u\in[5,6],\ v\in[u-5,\frac{u}{3}-1],\\ \frac{u-3v}{3}(C_{1}+C_{4}+C_{5})&u\in[5,6],\ v\in[\frac{u}{3}-1,\frac{u}{3}],\\ \frac{18-2u-3v}{3}C_{1}+\frac{9-u}{3}(C_{4}+C_{5})&u\in[6,9],\ v\in[0,3-\frac{u}{3}],\\ \frac{18-2u-3v}{3}(C_{1}+C_{4}+C_{5})&u\in[6,9],\ v\in[3-\frac{u}{3},6-\frac{2u}{3}],\end{cases}\]
which gives
\[\left(P(u,v)\right)^{2}=\begin{cases}\frac{(u-3v)^{2}}{18}&u\in[0,3],\ v\in[0, \frac{u}{3}],\\ \frac{u}{3}-v-\frac{1}{2}&u\in[3,5],\ v\in[0,\frac{u}{3}-1],\\ \frac{(u-3v)^{2}}{18}&u\in[3,5],\ v\in[\frac{u}{3}-1,\frac{u}{3}],\\ -\frac{u^{2}}{2}+uv-\frac{v^{2}}{2}-13+\frac{16}{3}u-6v&u\in[5,6],\ v\in[0,u-5 ],\\ \frac{u}{3}-v-\frac{1}{2}&u\in[5,6],\ v\in[u-5,\frac{u}{3}-1],\\ \frac{(u-3v)^{2}}{18}&u\in[5,6],\ v\in[\frac{u}{3}-1,\frac{u}{3}],\\ -2u+9+\frac{u^{2}}{9}-\frac{v^{2}}{2}&u\in[6,9],\ v\in[0,3-\frac{u}{3}],\\ \frac{(18-2u-3v)^{2}}{18}&u\in[6,9],\ v\in[3-\frac{u}{3},6-\frac{2u}{3}],\end{cases}\]
and
\[P(u,v)\cdot C=\begin{cases}\frac{u-3v}{6}&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ \frac{1}{2}&u\in[3,5],\ v\in[0,\frac{u}{3}-1],\\ \frac{u-3v}{6}&u\in[3,5],\ v\in[\frac{u}{3}-1,\frac{u}{3}],\\ \frac{6-u+v}{2}&u\in[5,6],\ v\in[0,u-5],\\ \frac{1}{2}&u\in[5,6],\ v\in[u-5,\frac{u}{3}-1],\\ \frac{u-3v}{6}&u\in[5,6],\ v\in[\frac{u}{3}-1,\frac{u}{3}],\\ \frac{v}{2}&u\in[6,9],\ v\in[0,3-\frac{u}{3}],\\ \frac{18-2u-3v}{6}&u\in[6,9],\ v\in[3-\frac{u}{3},6-\frac{2u}{3}].\end{cases}\]
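A routine (if tedious) integration over the regions above yields
\[\int\limits_{0}^{9}\big(P(u,0)\big)^{2}\mathrm{ord}_{\widetilde{C}}\big(\widetilde{N}(u)\big|_{\widetilde{F}_{0}}\big)du=\frac{3}{4}\qquad\text{and}\qquad\int\limits_{0}^{9}\int\limits_{0}^{t(u)}\big(P(u,v)\big)^{2}dvdu=\frac{31}{12}.\]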
Integrating, we get \(S(W^{F_{0}}_{\bullet,\bullet};C)=\frac{10}{13}<1=A_{F_{0},\Delta_{F_{0}}}(C)\), so (5.19) holds.
Similarly, we compute \(S(W^{F_{0},C}_{\bullet,\bullet,\bullet};Q)=\frac{9}{52}+F_{Q}(W^{F_{0},C}_{ \bullet,\bullet,\bullet})\), where
\[F_{Q}\big{(}W^{F_{0},C}_{\bullet,\bullet,\bullet}\big{)}=\begin{cases}\frac{1 }{12}&Q=\overline{C}_{1}\cap\overline{C}_{3},\\ 0&\text{otherwise}.\end{cases}\]
Observe that
\[A_{C,\Delta_{C}}(Q)=\begin{cases}\frac{1}{2}&Q=\overline{C}_{1}\cap B_{F_{0}},\\ \frac{1}{2}&Q=\overline{C}_{1}\cap\overline{C}_{2},\\ \frac{1}{3}&Q=\overline{C}_{1}\cap\overline{C}_{3},\\ 1&\text{otherwise}.\end{cases}\]
Thus, we have
\[\frac{A_{C,\Delta_{C}}(Q)}{S(W_{\bullet,\bullet,\bullet}^{F_{0},C},Q)}=\begin{cases} \frac{13}{10}&Q=\overline{C}_{1}\cap\overline{C}_{3},\\ \frac{26}{9}&Q=\overline{C}_{1}\cap\overline{C}_{2},\\ \frac{26}{9}&Q=\overline{C}_{1}\cap B_{F_{0}},\\ \frac{52}{9}&\text{otherwise}.\end{cases}\]
which implies (5.20).
**Lemma 5.22**.: _Let \(Q\) be a point in \(\overline{C}_{3}\setminus\overline{C}_{1}\). Set \(C=\overline{C}_{3}\). Then (5.19) and (5.20) hold._
Proof.: For \(u\in[0,9]\), we have \(d(u)=0\) and \(N^{\prime}(u)=\widetilde{N}(u)|_{\widetilde{F}_{0}}\). Since \(\theta^{*}(C)=C_{3}+2C_{4}+C_{5}\), so that \(\Sigma=2C_{4}+C_{5}\), we have
\[t(u)=\begin{cases}\frac{u}{6}&u\in[0,6],\\ \frac{9-u}{3}&u\in[6,9].\end{cases}\]
We compute
\[N(u,v)=\begin{cases}2vC_{4}+vC_{5}&u\in[0,3],\ v\in[0,\frac{u}{6}],\\ 0&u\in[3,5],\ v\in[0,\frac{u-3}{6}],\\ \frac{6v+3-u}{6}(2C_{4}+C_{5})&u\in[3,5],\ v\in[\frac{u-3}{6},\frac{u}{6}],\\ 0&u\in[5,6],\ v\in[0,\frac{6-u}{3}],\\ \frac{3v+u-6}{3}C_{4}&u\in[5,6],\ v\in[\frac{6-u}{3},\frac{2u-9}{3}],\\ \frac{6v+3-u}{3}C_{4}+\frac{3v+9-2u}{3}C_{5}&u\in[5,6],\ v\in[\frac{2u-9}{3},\frac{u}{6}],\\ vC_{4}&u\in[6,9],\ v\in[0,\frac{9-u}{3}],\end{cases}\]
and
\[P(u,v)\sim\begin{cases}\frac{u-6v}{3}(C_{1}+C_{4}+C_{5})&u\in[0,3],\ v\in[0, \frac{u}{6}],\\ \frac{u-6v}{3}C_{1}+\frac{3+u-6v}{6}C_{5}+C_{4}&u\in[3,5],\ v\in[0,\frac{u-3}{6 }],\\ \frac{u-6v}{3}(C_{1}+C_{4}+C_{5})&u\in[3,5],\ v\in[\frac{u-3}{6},\frac{u}{6}], \\ \frac{u-6v}{3}C_{1}+\frac{9-u-3v}{3}C_{5}+C_{4}&u\in[5,6],\ v\in[0,\frac{6-u}{3 }],\\ \frac{u-6v}{3}C_{1}+\frac{9-u-3v}{3}(C_{5}+C_{4})&u\in[5,6],\ v\in[\frac{6-u}{3 },\frac{2u-9}{3}],\\ \frac{u-6v}{3}(C_{1}+C_{4}+C_{5})&u\in[5,6],\ v\in[\frac{2u-9}{3},\frac{u}{6}], \\ \frac{9-u-3v}{3}(2C_{1}+C_{4}+C_{5})&u\in[6,9],\ v\in[0,\frac{9-u}{3}],\end{cases}\]
which gives
\[\left(P(u,v)\right)^{2}=\begin{cases}\frac{u^{2}}{18}+2v^{2}-\frac{2}{3}uv&u\in[0,3],\ v\in[0,\frac{u}{6}],\\ \frac{u}{3}-2v-\frac{1}{2}&u\in[3,5],\ v\in[0,\frac{u-3}{6}],\\ \frac{u^{2}}{18}+2v^{2}-\frac{2}{3}uv&u\in[3,5],\ v\in[\frac{u-3}{6},\frac{u}{6}],\\ \frac{16}{3}u-2v-13-\frac{u^{2}}{2}&u\in[5,6],\ v\in[0,\frac{6-u}{3}],\\ 4u-6v-9-\frac{7}{18}u^{2}+v^{2}+\frac{2}{3}uv&u\in[5,6],\ v\in[\frac{6-u}{3},\frac{2u-9}{3}],\\ \frac{u^{2}}{18}+2v^{2}-\frac{2}{3}uv&u\in[5,6],\ v\in[\frac{2u-9}{3},\frac{u}{6}],\\ 9-6v-2u+v^{2}+\frac{u^{2}}{9}+\frac{2}{3}uv&u\in[6,9],\ v\in[0,\frac{9-u}{3}],\end{cases}\]
\[P(u,v)\cdot\widetilde{C}=\begin{cases}\frac{u}{3}-2v&u\in[0,3],\ v\in[0,\frac{u}{6}],\\ 1&u\in[3,5],\ v\in[0,\frac{u-3}{6}],\\ \frac{u}{3}-2v&u\in[3,5],\ v\in[\frac{u-3}{6},\frac{u}{6}],\\ 1&u\in[5,6],\ v\in[0,\frac{6-u}{3}],\\ 3-v-\frac{u}{3}&u\in[5,6],\ v\in[\frac{6-u}{3},\frac{2u-9}{3}],\\ \frac{u}{3}-2v&u\in[5,6],\ v\in[\frac{2u-9}{3},\frac{u}{6}],\\ 3-\frac{u}{3}-v&u\in[6,9],\ v\in[0,\frac{9-u}{3}].\end{cases}\]
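A routine integration over these regions gives
\[\int\limits_{0}^{9}\int\limits_{0}^{t(u)}\big(P(u,v)\big)^{2}dvdu=\frac{10}{9}\qquad\text{and}\qquad\int\limits_{0}^{9}\int\limits_{0}^{t(u)}\big(P(u,v)\cdot\widetilde{C}\big)^{2}dvdu=\frac{3}{2}.\]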
Thus, integrating we get \(S(W^{F_{0}}_{\bullet,\bullet};C)=\frac{10}{39}<\frac{1}{3}=A_{F_{0},\Delta_{F _{0}}}(C)\), so (5.19) holds.
Since \(Q\neq\overline{C}_{1}\cap\overline{C}_{3}\), we have \(F_{Q}(W^{F_{0},C}_{\bullet,\bullet,\bullet})=0\), which gives \(S(W^{F_{0},\overline{C}_{3}}_{\bullet,\bullet,\bullet};Q)=\frac{9}{26}\). But
\[A_{C,\Delta_{C}}(Q)=\begin{cases}\frac{1}{2}&Q\in B_{F_{0}},\\ 1&Q\not\in B_{F_{0}}.\end{cases}\]
Thus, we have
\[\frac{A_{C,\Delta_{C}}(Q)}{S\big(W^{F_{0},C}_{\bullet,\bullet,\bullet};Q\big)}=\begin{cases}\frac{13}{9}&Q\in B_{F_{0}},\\ \frac{26}{9}&Q\not\in B_{F_{0}},\end{cases}\]
which implies (5.20).
**Lemma 5.23**.: _Let \(Q\) be a point in \(F_{0}\) such that \(Q\not\in\overline{C}_{1}\cup\overline{C}_{3}\), and let \(C\) be the unique curve in the pencil \(|\overline{C}_{1}|\) that contains \(Q\). Then (5.19) and (5.20) hold._
Proof.: Note that \(A_{F_{0},\Delta_{F_{0}}}(C)=1\), and \(\widetilde{C}\sim C_{1}+C_{4}+C_{5}\). We have
\[t(u)=\begin{cases}\frac{u}{3}&u\in[0,3],\\ 1&u\in[3,6],\\ \frac{9-u}{3}&u\in[6,9].\end{cases}\]
For every \(u\in[0,9]\), we have \(d(u)=0\) and \(N^{\prime}(u)=\widetilde{N}(u)|_{\widetilde{F}_{0}}\). We compute
\[N(u,v)=\begin{cases}0&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ 0&u\in[3,5],\ v\in[0,1],\\ 0&u\in[5,6],\ v\in[0,6-u],\\ (v+u-6)C_{1}&u\in[5,6],\ v\in[6-u,1],\\ vC_{1}&u\in[6,9],\ v\in[0,3-\frac{u}{3}],\end{cases}\]
and
\[P(u,v)\sim\begin{cases}\frac{u-3v}{3}(C_{1}+C_{4}+C_{5})&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ \frac{u-3v}{3}C_{1}+(1-v)C_{4}+\frac{3+u-6v}{6}C_{5}&u\in[3,5],\ v\in[0,1],\\ \frac{u-3v}{3}C_{1}+(1-v)C_{4}+\frac{9-u-3v}{3}C_{5}&u\in[5,6],\ v\in[0,6-u],\\ \frac{18-2u-6v}{3}C_{1}+(1-v)C_{4}+\frac{9-u-3v}{3}C_{5}&u\in[5,6],\ v\in[6-u,1],\\ \frac{9-u-3v}{3}(2C_{1}+C_{4}+C_{5})&u\in[6,9],\ v\in[0,3-\frac{u}{3}].\end{cases}\]
which gives
\[\left(P(u,v)\right)^{2}=\begin{cases}\frac{(u-3v)^{2}}{18}&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ -\frac{1}{2}+\frac{u}{3}-\frac{1}{3}uv+\frac{1}{2}v^{2}&u\in[3,5],\ v\in[0,1],\\ -\frac{u^{2}}{2}-\frac{uv}{3}+\frac{v^{2}}{2}-13+\frac{16}{3}u&u\in[5,6],\ v\in[0,6-u],\\ 5+\frac{2uv}{3}+v^{2}-\frac{2u}{3}-6v&u\in[5,6],\ v\in[6-u,1],\\ \Big(3-\frac{u}{3}-v\Big)^{2}&u\in[6,9],\ v\in[0,3-\frac{u}{3}],\end{cases}\]
and
\[P(u,v)\cdot\widetilde{C}=\begin{cases}\frac{u-3v}{6}&u\in[0,3],\ v\in[0,\frac{u}{3}],\\ \frac{u-3v}{6}&u\in[3,5],\ v\in[0,1],\\ \frac{u-3v}{6}&u\in[5,6],\ v\in[0,6-u],\\ \frac{9-u-3v}{3}&u\in[5,6],\ v\in[6-u,1],\\ \frac{9-u-3v}{3}&u\in[6,9],\ v\in[0,3-\frac{u}{3}].\end{cases}\]
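In this case, the two double integrals swap their values compared with Lemma 5.22:
\[\int\limits_{0}^{9}\int\limits_{0}^{t(u)}\big(P(u,v)\big)^{2}dvdu=\frac{3}{2}\qquad\text{and}\qquad\int\limits_{0}^{9}\int\limits_{0}^{t(u)}\big(P(u,v)\cdot\widetilde{C}\big)^{2}dvdu=\frac{10}{9}.\]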
Thus, integrating we get \(S(W^{F_{0}}_{\bullet,\bullet};C)=\frac{9}{26}<1=A_{F_{0},\Delta_{F_{0}}}(C)\), so (5.19) holds.
Since \(Q\not\in\overline{C}_{1}\cup\overline{C}_{3}\), we have \(F_{Q}(W^{F_{0},C}_{\bullet,\bullet,\bullet})=0\) and
\[A_{C,\Delta_{C}}(Q)=\begin{cases}\frac{1}{2}&Q\in B_{F_{0}},\\ 1&Q\not\in B_{F_{0}}.\end{cases}\]
Integrating, we get \(S(W^{F_{0},C}_{\bullet,\bullet,\bullet};Q)=\frac{10}{39}\), so that
\[\frac{A_{C,\Delta_{C}}(Q)}{S\big(W^{F_{0},C}_{\bullet,\bullet,\bullet};Q\big)}=\begin{cases}\frac{39}{20}&Q\in B_{F_{0}},\\ \frac{39}{10}&Q\not\in B_{F_{0}},\end{cases}\]
which implies (5.20).
Lemmas 5.21, 5.22 and 5.23 complete the proof of Proposition 5.8.
## 6. On the K-moduli spaces
In this section, we prove Corollary 1.14. The proof of Corollary 1.15 is almost identical, so we omit it. To start with, let us present the following well known assertion.
**Lemma 6.1**.: _Let \(X\) be a smooth Fano threefold. Then_
\[h^{0}\left(X,T_{X}\right)-h^{1}\left(X,T_{X}\right)=\chi(X,T_{X})=\frac{-K_{X} ^{3}}{2}-18+b_{2}(X)-\frac{b_{3}(X)}{2},\]
_where \(b_{2}(X)\) and \(b_{3}(X)\) are the second and the third Betti numbers of \(X\), respectively._
Proof.: The required assertion immediately follows from the Akizuki-Nakano vanishing theorem and the Hirzebruch-Riemann-Roch theorem, since \(-K_{X}\cdot c_{2}(X)=24\).
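Concretely, for \(E=T_{X}\) the Hirzebruch-Riemann-Roch theorem gives
\[\chi(X,T_{X})=\frac{c_{1}^{3}(X)}{2}-\frac{19}{24}\,c_{1}(X)c_{2}(X)+\frac{c_{3}(X)}{2},\]
and substituting \(c_{1}^{3}(X)=-K_{X}^{3}\), \(c_{1}(X)c_{2}(X)=24\) and \(c_{3}(X)=\chi_{\operatorname{top}}(X)=2+2b_{2}(X)-b_{3}(X)\) yields the required formula, while the Akizuki-Nakano vanishing theorem gives \(H^{i}(X,T_{X})=0\) for \(i\geqslant 2\).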
Now, let us use notations and assumptions introduced in Corollary 1.14.
**Lemma 6.2**.: _Let \(f\in T\) and \(X_{f}\) be the Casagrande-Druel 3-fold constructed from \(\{f=0\}\). Suppose that \(f\) is GIT semistable with respect to the \(\Gamma\)-action. Then \(X_{f}\) is K-semistable._
Proof.: There exists a one-parameter subgroup \(\lambda\colon\mathbb{G}_{m}\to\Gamma\) such that
\[[f_{0}]=\lim_{t\to 0}\lambda(t)\cdot[f]\]
is a GIT polystable point in \(T\). Let \(X_{0}\) be the corresponding Casagrande-Druel threefold constructed from \(\{f_{0}=0\}\). Then it follows from Theorem 1.12 that \(X_{0}\) is K-polystable. On the other hand, the subgroup \(\lambda\) gives an isotrivial flat degeneration of \(X_{f}\) to \(X_{0}\), which implies that \(X_{f}\) is K-semistable, because K-semistability is an open condition.
Now, we are ready to prove Corollary 1.14.
Proof of Corollary 1.14.: Since the construction of Casagrande-Druel 3-folds is functorial, there exists a \(\Gamma\)-equivariant flat morphism \(\pi_{T}\colon X_{T}\to T\) such that
\[\pi_{T}^{-1}([f])\cong X_{f}.\]
We set \(X_{T^{\rm ss}}=\pi_{T}^{-1}(T^{\rm ss})\). Then the restriction morphism \(X_{T^{\rm ss}}\to T^{\rm ss}\) is a \(\Gamma\)-equivariant flat family of K-semistable Fano 3-folds by Lemma 6.2.
Let \(\{T^{\rm ss}/\Gamma\}\) be the fibered category over \((\operatorname{Sch}/\mathbb{C})_{\rm fppf}\) in the sense of [29, Example 4.6.7]. Then the family \(X_{T^{\rm ss}}\to T^{\rm ss}\) gives a morphism \(\{T^{\rm ss}/\Gamma\}\to\mathcal{M}_{3,28}^{\rm Kss}\) of fibered categories. This induces the morphism
\[\big{[}T^{\rm ss}/\Gamma\big{]}\to\mathcal{M}_{3,28}^{\rm Kss}\]
between Artin stacks, since \([T^{\rm ss}/\Gamma]\) is the stackification of \(\{T^{\rm ss}/\Gamma\}\) (see [29, Remark 4.6.8]).
Since \(M\) is the good moduli space of \([T^{\rm ss}/\Gamma]\), it follows from [4, Theorem 6.6] that there exists a natural morphism
\[\Phi\colon M\to M_{3,28}^{\rm Kps}\]
that maps \([f]\) to \([X_{f}]\). This morphism is injective. Indeed, if \(f_{1}\) and \(f_{2}\) are points in \(T\), then the corresponding Casagrande-Druel 3-folds \(X_{f_{1}}\) and \(X_{f_{2}}\) are isomorphic if and only if the points \(f_{1}\) and \(f_{2}\) are contained in one \(\Gamma\)-orbit.
Observe that \(M\) is normal. Take \([f]\in M\). Since the deformations of the 3-fold \(X_{f}\) are unobstructed by Proposition 2.9, the variety \(M_{3,28}^{\rm Kps}\) is also normal at \([X_{f}]\) by Luna's étale slice theorem [5, Theorem 1.2]. Moreover, if \(X_{f}\) is smooth, then
\[\dim_{[X_{f}]}\big{(}M_{3,28}^{\rm Kps}\big{)}\leqslant h^{1}\big{(}X_{f},T_ {X_{f}}\big{)}=\dim(M)\]
by Lemma 6.1, since \(h^{0}(X,T_{X})=\dim(\operatorname{Aut}(X))=1\). Therefore, using the injectivity of \(\Phi\), we see that the image \(\Phi(M)\subset M_{3,28}^{\rm Kps}\) is a connected component, and \(\Phi\) is an isomorphism onto this connected component by Zariski's main theorem.
The variety \(M_{(3.9)}^{\rm Kps}\) is well-studied [20]. Let us describe \(M_{(4.2)}^{\rm Kps}\cong T^{\rm ss}\,/\!\!/\,\Gamma\). Recall that
\[T=\mathbb{P}\left(H^{0}\left(V,\mathcal{O}_{V}(2,2)\right)^{\vee}\right)\]
and \(\Gamma=(\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{ C}))\rtimes\boldsymbol{\mu}_{2}\), where \(V=\mathbb{P}^{1}\times\mathbb{P}^{1}\). Set \(\Gamma_{0}=\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{SL}_{2}( \mathbb{C})\).
**Proposition 6.3** (Noam Elkies).: _One has \(T^{\rm ss}\,/\!\!/\,\Gamma_{0}\cong T^{\rm ss}\,/\!\!/\,\Gamma\cong\mathbb{P}( 1,2,3)\)._
Proof.: Let \(W=H^{0}\left(V,\mathcal{O}_{V}(2,2)\right)\), let \(S\) be the symmetric algebra of \(W^{\vee}\), let \(S^{\Gamma_{0}}\) be its subalgebra of invariants for the natural \(\Gamma_{0}\)-action, and let \(H(t)\) be its Hilbert series:
\[H(t)=\sum_{k\geqslant 0}\dim\Big{(}\big{(}\mathrm{Sym}^{k}(W^{\vee})\big{)}^{ \Gamma_{0}}\Big{)}t^{k}.\]
Then it follows from [32, §11.9] or [13, §4.6] that
\[H(t)=\int_{0}^{1}\int_{0}^{1}\frac{2-z_{1}^{2}-z_{1}^{-2}}{2}\cdot\frac{2-z_{2}^{2}-z_{2}^{-2}}{2}\cdot\prod_{j_{1},j_{2}\in\{-1,0,1\}}\frac{1}{1-t\cdot z_{1}^{2j_{1}}z_{2}^{2j_{2}}}\,d\phi_{1}d\phi_{2}\]
with \(|t|<1\), where \(z_{1}=e^{2\pi\sqrt{-1}\phi_{1}}\) and \(z_{2}=e^{2\pi\sqrt{-1}\phi_{2}}\). This gives
\[H(t)=\frac{1}{(1-t^{2})(1-t^{3})(1-t^{4})}.\]
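The evaluation of this integral can also be checked numerically. The following short script (our own illustrative check, not part of the original argument; the grid size \(N\) and the value of \(t\) are arbitrary choices) evaluates the Molien-Weyl integral by quadrature over the torus and compares it with the closed form above.

```python
import numpy as np

# Quadrature check of the Molien-Weyl integral for the Hilbert series of
# SL2 x SL2 invariants acting on (2,2)-forms, against the closed form
# 1/((1 - t^2)(1 - t^3)(1 - t^4)). The midpoint rule on the torus converges
# rapidly here since the integrand is smooth and periodic.
N = 400                       # quadrature points per circle (illustrative)
t = 0.3                       # any |t| < 1 works
phi = (np.arange(N) + 0.5) / N
z1 = np.exp(2j * np.pi * phi)[:, None]
z2 = np.exp(2j * np.pi * phi)[None, :]

weyl = (2 - z1**2 - z1**(-2)) / 2 * (2 - z2**2 - z2**(-2)) / 2
integrand = weyl
for j1 in (-1, 0, 1):
    for j2 in (-1, 0, 1):
        integrand = integrand / (1 - t * z1**(2 * j1) * z2**(2 * j2))

H_numeric = integrand.mean().real          # mean over the torus = integral
H_closed = 1 / ((1 - t**2) * (1 - t**3) * (1 - t**4))
print(H_numeric, H_closed)                 # agree to quadrature accuracy
```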
Let us find generators of \(S^{\Gamma_{0}}\). Consider the standard basis
\[x_{0}^{2}y_{0}^{2},x_{0}^{2}y_{0}y_{1},x_{0}^{2}y_{1}^{2},x_{0}x_{1}y_{0}^{2},x _{0}x_{1}y_{0}y_{1},x_{0}x_{1}y_{1}^{2},x_{1}^{2}y_{0}^{2},x_{1}^{2}y_{0}y_{1},x_{1}^{2}y_{1}^{2}\]
of the space \(W\), let \(a_{00},a_{01},a_{02},a_{10},a_{11},a_{12},a_{20},a_{21},a_{22}\) be the dual basis of the space \(W^{\vee}\), and let \(J_{2}\), \(J_{3}\), \(J_{4}\) be the coefficients of the characteristic polynomial of the matrix
\[\begin{pmatrix}\frac{1}{2}a_{11}&-a_{10}&-a_{01}&2a_{00}\\ a_{12}&-\frac{1}{2}a_{11}&-2a_{02}&a_{01}\\ a_{21}&-2a_{20}&-\frac{1}{2}a_{11}&a_{10}\\ 2a_{22}&-a_{21}&-a_{12}&\frac{1}{2}a_{11}\end{pmatrix}\]
such that \(J_{k}\in\operatorname{Sym}^{k}(W^{\vee})\) for \(k\in\{2,3,4\}\). Then \(J_{2}\), \(J_{3}\), \(J_{4}\) are \(\Gamma_{0}\)-invariant, and these polynomials are algebraically independent, which gives \(S^{\Gamma_{0}}=\mathbb{C}[J_{2},J_{3},J_{4}]\), so that
\[T^{\operatorname{ss}}\,/\!\!/\,\Gamma_{0}\cong\mathbb{P}(2,3,4)\cong\mathbb{ P}(1,2,3).\]
Since the polynomials \(J_{2}\), \(J_{3}\), \(J_{4}\) are also \(\Gamma\)-invariant, we also get \(T^{\operatorname{ss}}\,/\!\!/\,\Gamma_{0}\cong T^{\operatorname{ss}}\,/\!\!/\,\Gamma\).
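As a consistency check, these invariants are easy to generate symbolically. The following sketch (our own illustration; the overall normalizations and signs of the \(J_{k}\) are convention-dependent and not fixed here) computes the characteristic polynomial of the matrix above and verifies that its coefficients are homogeneous of degrees 2, 3 and 4 in the \(a_{ij}\).

```python
import sympy as sp

# Extract J2, J3, J4 as coefficients of the characteristic polynomial of the
# 4x4 matrix above; the matrix is traceless, so the cubic term vanishes.
a00, a01, a02, a10, a11, a12, a20, a21, a22 = sp.symbols(
    'a00 a01 a02 a10 a11 a12 a20 a21 a22')
gens = [a00, a01, a02, a10, a11, a12, a20, a21, a22]
half = sp.Rational(1, 2)
M = sp.Matrix([
    [half * a11, -a10, -a01, 2 * a00],
    [a12, -half * a11, -2 * a02, a01],
    [a21, -2 * a20, -half * a11, a10],
    [2 * a22, -a21, -a12, half * a11],
])
x, t = sp.symbols('x t')
coeffs = sp.Poly(M.charpoly(x).as_expr(), x).all_coeffs()  # [1, 0, J2, J3, J4]
assert sp.expand(coeffs[1]) == 0                           # traceless
for k, J in [(2, coeffs[2]), (3, coeffs[3]), (4, coeffs[4])]:
    scaled = J.subs({s: t * s for s in gens})
    assert sp.expand(scaled - t**k * J) == 0               # homogeneous of degree k
    print(f'J{k}: {len(sp.Poly(J, *gens).terms())} monomials')
```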
_Remark 6.4_.: In fact, Proposition 6.3 is a classical result -- Peano [31] and Turnbull [38] showed that \(S^{\Gamma_{0}}\) is generated by \(J_{2}\), \(J_{3}\), \(J_{4}\), see [38, §12] and [30, Pages 242-246].
The surface \(M^{\operatorname{Kps}}_{(4.2)}\) is a component of the K-moduli space of smoothable Fano threefolds. Another two-dimensional component of this K-moduli space has been described in [9], and all its one-dimensional components have been described in [1].
| ```
We introduce a new subclass of Fano varieties (Casagrande-Druel varieties): n-dimensional varieties built from Fano double covers. We conjecture when Casagrande-Druel varieties are K-polystable, namely that this holds when the corresponding double cover and its base space are K-polystable. We prove this conjecture for smooth Casagrande-Druel threefolds, and for Casagrande-Druel varieties constructed from double covers ramified in degree 2d with n > d > n/2 > 1. As a consequence, we describe connected components of the K-moduli space of smoothable K-polystable Fano threefolds containing the members of the families 3.9 and 4.2 of the Mori-Mukai classification.
```
2310.00045 | Phases of Wilson Lines: Conformality and Screening | We study the rich dynamics resulting from introducing static charged
particles (Wilson lines) in 2+1 and 3+1 dimensional gauge theories. Depending
on the charges of the external particles, there may be multiple defect fixed
points with interesting renormalization group flows connecting them, or an
exponentially large screening cloud can develop (defining a new emergent length
scale), screening the bare charge entirely or partially. We investigate several
examples where the dynamics can be solved in various weak coupling or double
scaling limits. Sometimes even the elementary Wilson lines, corresponding to
the lowest nontrivial charge, are screened. We consider Wilson lines in 3+1
dimensional gauge theories including massless scalar and fermionic QED$_4$, and
also in the ${\mathcal N}=4$ supersymmetric Yang-Mills theory. We also consider
Wilson lines in 2+1 dimensional conformal gauge theories such as QED$_3$ with
bosons or fermions, Chern-Simons-Matter theories, and the effective theory of
graphene. Our results in 2+1 dimensions have potential implications for
graphene, second-order superconducting phase transitions, etc. Finally, we
comment on magnetic line operators in 3+1 dimensions ('t Hooft lines) and argue
that our results for the infrared dynamics of electric and magnetic lines are
consistent with non-Abelian electric-magnetic duality. | Ofer Aharony, Gabriel Cuomo, Zohar Komargodski, Márk Mezei, Avia Raviv-Moshe | 2023-09-29T18:00:02 | http://arxiv.org/abs/2310.00045v1 | # Phases of Wilson Lines: Conformality and Screening
###### Abstract
We study the rich dynamics resulting from introducing static charged particles (Wilson lines) in 2+1 and 3+1 dimensional gauge theories. Depending on the charges of the external particles, there may be multiple defect fixed points with interesting renormalization group flows connecting them, or an exponentially large screening cloud can develop (defining a new emergent length scale), screening the bare charge entirely or partially. We investigate several examples where the dynamics can be solved in various weak coupling or double scaling limits. Sometimes even the elementary Wilson lines, corresponding to the lowest nontrivial charge, are screened. We consider Wilson lines in 3+1 dimensional gauge theories including massless scalar and fermionic QED\({}_{4}\), and also in the \(\mathcal{N}=4\) supersymmetric Yang-Mills theory. We also consider Wilson lines in 2+1 dimensional conformal gauge theories such as QED\({}_{3}\) with bosons or fermions, Chern-Simons-Matter theories, and the effective theory of graphene. Our results in 2+1 dimensions have potential implications for graphene, second-order superconducting phase transitions, etc. Finally, we comment on magnetic line operators in 3+1 dimensions ('t Hooft lines) and argue that our results for the infrared dynamics of electric and magnetic lines are consistent with non-Abelian electric-magnetic duality.
## 1 Introduction and summary
* 1.1 Scalar and fermionic QED\({}_{4}\)
* 1.2 Non-Abelian gauge theories and \({\cal N}=4\) SYM
* 1.3 2+1 dimensional critical points
* 1.4 Comments on 't Hooft lines
* 2 Scalar QED\({}_{4}\)
* 2.1 Two DCFT fixed points
* 2.2 Stability in the subcritical regime
* 2.3 Screening in the supercritical regime
* 2.4 Effective defect field theory for screened Wilson lines
* 2.5 Constraints from 0-Form symmetry and multi-flavor scalar QED\({}_{4}\)
* 2.6 Constraints from 1-form symmetry and QED\({}_{4}\) with charge \(q_{\phi}\) particles
* 3 Fermionic QED\({}_{4}\)
* 3.1 Dirac fermion on AdS\({}_{2}\times S^{d-2}\)
* 3.2 Conformal Wilson lines: old and new fixed points
* 3.2.1 Dirac fermion in AdS\({}_{2}\) and boundary RG flows
* 3.2.2 Defect fixed points in four dimensions
* 3.3 Partial charge screening from double-trace perturbation
* 3.4 Supercritical Wilson lines
* 4 Non-Abelian gauge theory
* 4.1 Non-Abelian saddle point
* 4.2 Example: \({\cal N}=4\) SYM
* 4.3 Holographic description
* 5 2+1 dimensional CFTs
* 5.1 Chern-Simons theories with and without matter
* 5.2 Large \(N_{f}\) critical points
* 5.2.1 QED\({}_{3}\) with \(2N_{f}\) fermions
* 5.2.2 Comments on scalar QED\({}_{3}\)
* 5.2.3 \(U(1)_{k}\) with \(2N_{f}\) fermions
* 5.3 Graphene
* 6 't Hooft line operators
* 6.1 't Hooft lines in Abelian gauge theories
* 6.2 't Hooft lines in non-Abelian gauge theories and S-duality
* 7
* A Details on scalar QED\({}_{4}\)
* A.1 Defect propagator and double-trace deformation for subcritical charge
* A.2 Tachyons for a supercritical Coulomb potential
* A.3 The effective potential from the soliton solution
* A.4 Quantization of the screening cloud for a light massive scalar
* B Details on fermionic QED\({}_{4}\)
* B.1 Defect propagator and double-trace deformation for subcritical charge
* B.2 Massive Dirac-Coulomb equation for subcritical charge
* B.3 Massive Dirac-Coulomb equation for supercritical charge
* B.4 Massless Dirac equation for supercritical charge
* C Vector boson instability
* D Details on Wilson lines in large \(N_{f}\) QED\({}_{3}\)
* D.1 The saddle-point equations
* D.2 The fluctuation determinant via zeta function regularization
* D.3 Solving the saddle-point equations
## 1 Introduction and summary
Conformal Field Theory is by now a mature subject in some ways. A great deal is understood about the space of local operators and their correlation functions, see [1] for a review.
Yet, relatively little is understood about extended operators. The simplest class of extended operators are line operators. For a line operator that is stretching in the time direction at a point \(\vec{x}=0\) in space, we say that the line operator is conformal if it preserves an
\[SL(2,\mathbb{R}) \tag{1}\]
subgroup of the conformal group. The \(SL(2,\mathbb{R})\) subgroup acts at \(\vec{x}=0\) by \(t\to\frac{\alpha t+\beta}{\gamma t+\delta}\), with \(\alpha\delta-\beta\gamma=1\). A conformal line operator admits local defect operators \(\hat{O}_{i}(t)\) transforming under \(SL(2,\mathbb{R})\). The defect operators have scaling dimensions \(\hat{\Delta}_{i}\geq 0\) and one can perform an Operator Product Expansion (OPE) among them. Very importantly, bulk local operators can be expanded as \(|\vec{x}|\to 0\) in terms of defect operators in the following schematic form
\[O(x,t)=\sum a_{i}\,x^{-\Delta_{O}+\hat{\Delta}_{i}}\hat{O}_{i}(t). \tag{2}\]
In particular, in the presence of a line operator, bulk operators can have a nonzero one-point function due to the unit operator appearing on the right hand side of (2). The \(a_{i}\) on the right hand side of (2), along with the \(\hat{\Delta}_{i}\) and also the defect OPE coefficients, are observables
associated to conformal line defects. This whole structure is referred to as a Defect Conformal Field Theory (DCFT), see [2] for a review. A special defect operator which always appears in nontrivial DCFTs is the displacement operator [3; 4], whose scaling dimension is \(\hat{\Delta}=2\). Integrating this operator on the defect is equivalent to changing the shape of the defect.
When one defines line operators in a conformal field theory, it is not guaranteed that they are conformal line operators preserving \(SL(2,\mathbb{R})\). While the bulk theory remains conformal, there can be renormalization group flows between line operators, and one typically expects the deep infrared limit of line operators to be a DCFT. The space of distinct DCFTs in a given CFT is far from understood. Line operators are of great interest also in the context of condensed matter, since they represent localized impurities/defects. The long distance limit of various defects in condensed matter is a subject that goes back to the Kondo problem [5].
We would like to make three general remarks on the general theory of line defects in CFTs:
* Renormalization group (RG) flows on line defects are constrained by the so-called defect entropy [6]. For the connection between the defect entropy and entanglement entropy see [7; 8; 9]. This allows one to make consistency checks on various proposals, and sometimes to prove that a DCFT cannot be trivial (screened).
* Local bulk operators transform in a representation of the ordinary (0-form) symmetry of the CFT, \(G\). The interplay of line operators with the 0-form symmetry of the theory is more complicated. First of all, it is possible for a conformal line operator to break the symmetry \(G\) altogether. That means that the intersections of the line with the co-dimension 1 \(G\)-surfaces are not topological. If we wrap the line operator with the co-dimension 1 \(G\)-surfaces, we must obtain a new line operator in which the bulk VEVs of \(G\)-charged operators change accordingly. For a continuous symmetry \(G\) this means that there are tilt operators on the defect which have \(\hat{\Delta}=1\) exactly. These are analogous to the displacement operator. See [10; 11] for a review and some examples of tilt operators. If the line operator preserves \(G\),1 defect operators that appear on the right hand side of (2) are in representations of \(G\). However, defect-changing operators, and in particular, end-point operators, do not have to be in representations of \(G\). Physically, line operators can be viewed as capturing the response of the CFT to heavy/external objects. The full symmetry in the presence of the heavy objects could be an extension of \(G\). Some simple examples (without gauge fields) where the end-point operators indeed only transform in projective representations of \(G\) were studied in [13; 14; 15; 16] along with many examples in 1+1 dimensions in the context of the Kondo defect (see [17] for a review with an emphasis on the screening cloud, a theme we will discuss below in higher dimensions). We will see examples of this phenomenon in gauge theories in this paper.
If the end-point operators transform in a projective representation of \(G\), this has consequences for RG flows on such lines, since they cannot become trivial lines with no degeneracy (i.e. the unit operator). Indeed, if they were completely trivial lines with no degeneracy in the infrared, we would not be able to attach operators in projective representations to them. This argument is presented in more detail in the body of the paper.
* If the theory admits a one-form symmetry \(\Gamma\) then the line operators transform under \(\Gamma\)[18]. For a line operator to furnish a nontrivial DCFT it is not necessary for it to be charged under \(\Gamma\). However, under additional assumptions that we will discuss later, it is possible to prove that a line defect transforming under \(\Gamma\) must have a nonzero displacement operator at long distances.
A special class of line operators, that exist in any gauge theory in any dimension, are Wilson lines
\[W_{R}(\gamma)=\text{tr}_{R}\left(P\exp\!\left(i\int_{\gamma}A_{ \mu}dx^{\mu}\right)\right), \tag{3}\]
labelled by a representation \(R\) of the gauge group and by a closed, or infinite, contour \(\gamma\).
Historically, Wilson lines have been introduced to diagnose the confinement/deconfinement transition in gapped theories. Then, the interesting Wilson lines are those charged under the one-form symmetry \(\Gamma\), since such Wilson lines serve as order parameters for the confinement/deconfinement transition.
Here our interest is in conformal field theories. Then, as discussed above, Wilson lines are interesting observables whether or not the Wilson line transforms nontrivially under \(\Gamma\). In fact, Wilson lines are interesting line operators even in theories with trivial \(\Gamma\).
A peculiarity of (3) that makes them into intriguing line operators is that there is no free continuous parameter in the definition (3). As we will see, that does not mean that no RG flow takes place!
This paper is an extended version of [19]. The central goal of this paper is to determine the long distance limit of (3) as a function of the representation \(R\). We will investigate this question in various examples of conformal gauge theories in four and three space-time dimensions. There are two complementary ways to analyze this question.
* In the "bulk approach", we view the insertion of the Wilson line (3) as setting a boundary condition for the dynamical fields of the gauge theory at \(\vec{x}=0\), which includes an electric field emanating from there; as usual we need to regularize this by putting the boundary and the boundary condition at some \(|\vec{x}|=r_{0}\), and asking what the theory behaves like for \(|\vec{x}|\gg r_{0}\). This corresponds to the infrared limit of the defect. The simplest possible answer, is that the lowest energy state just involves the electric field going as \(1/|\vec{x}|^{2}\) (recall that in Abelian gauge theories the dimension of \(F_{\mu\nu}\) is always \(\Delta=2\)). In other cases we will find that the dynamical fields react non-trivially to
the Wilson line source, and screen it, either partially or completely. The infrared can then be trivial or partially screened. Another important comment is that specifying the electric field at \(|\vec{x}|=r_{0}\) is not sufficient. The boundary conditions (and possible boundary interactions) of the charged bulk fields at the insertion should be specified too, and this leads to new coupling constants that must be added to (3) to define the problem properly. These coupling constants inevitably run and lead to much of the rich dynamics that we will encounter here and review soon.
* In the "defect approach", we view the insertion of the Wilson line (3) as modifying the action of the gauge theory by some extra terms that are localized on a "defect" at \(\vec{x}=0\). One can then discuss a renormalization group flow of the conformal gauge theory in the presence of these extra defect terms (with an ultraviolet cutoff \(\mu\), which is inversely related to \(r_{0}\) discussed above); for simplicity we can assume that the bulk theory has already flowed to its low-energy fixed point, and then the non-trivial flow involves only the action of the defect. This approach is convenient since it utilizes the ideas behind the renormalization group more directly. While the charge of the Wilson line is quantized and does not flow under the renormalization group, we will see that in many cases other couplings (localized on the defect) related to the additional fields in the theory do flow non-trivially (and, as we remarked above, ignoring them is inconsistent), reproducing in a different language the bulk physics discussed above.
In the rest of this section we briefly summarize our results.
### Scalar and fermionic QED\({}_{4}\)
Scalar and Fermionic massless QED\({}_{4}\) are not strictly conformal theories (due to the Landau pole) and hence ideas of DCFT do not rigorously apply. However, at weak enough gauge coupling the running coupling constant is an insignificant perturbation (and furthermore there is a double scaling limit in which it is truly a subleading effect, as will be explained in what follows), and there the physics of these models does lend itself to the language of DCFT. Also, understanding these examples will be a valuable springboard towards more complicated 3+1 dimensional gauge theories which are truly conformal. Needless to say, understanding Wilson lines in QED is of great interest in and of itself. It is perhaps surprising that there is much new to say on this subject.
We will consider Wilson lines of charge \(q\) in either scalar or fermion QED\({}_{4}\). These are given by the "naive" expression:
\[W_{q}(\gamma)=\exp\biggl{(}iq\int_{\gamma}A_{\mu}dx^{\mu}\biggr{)}\, \tag{4}\]
and we take the contour \(\gamma\) to be localized at \(\vec{x}=0\).
For concreteness let us start from scalar QED\({}_{4}\) with a single complex charge 1 scalar field \(\phi\). If one tries to interpret (4) as a conformal defect, one can compute the scaling
dimension of the gauge-invariant bilinear \(\phi^{\dagger}\phi\). In the bulk at sufficiently weak coupling the scaling dimension is of course \(\Delta=2\). But we can also ask about the scaling dimension of \(\phi^{\dagger}\phi\) as a defect operator. As we will show in section 2, one finds
\[\hat{\Delta}_{\phi^{\dagger}\phi}=1+\sqrt{1-\frac{e^{4}q^{2}}{4\pi^{2}}}. \tag{5}\]
This formula is reliable as long as the fine structure constant is small enough (more precisely, it is _exact_ in the limit of \(e\to 0\) and \(e^{2}|q|\) fixed). A consistency check is that for \(q=0\) the bulk and defect scaling dimensions coincide.
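To put the numbers in perspective, in terms of the fine-structure constant \(\alpha=e^{2}/(4\pi)\) the combination under the square root in (5) is \((e^{2}|q|/2\pi)^{2}=(2\alpha|q|)^{2}\); for the physical value \(\alpha\approx 1/137\) it reaches one only at \(|q|=1/(2\alpha)\approx 68.5\).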
Interestingly, (5) implies that for \(\frac{e^{2}|q|}{2\pi}=1\) the operator becomes marginal on the defect, while for \(\frac{e^{2}|q|}{2\pi}>1\) there is a disease with our defect. Since the operator is marginal at \(\frac{e^{2}|q|}{2\pi}=1\) and slightly irrelevant as we approach \(\frac{e^{2}|q|}{2\pi}=1\) from below, we learn that ignoring it in RG flows is inconsistent, and hence we must study the more general line defect
\[W_{q}(\gamma)=\exp\biggl{(}iq\int_{\gamma}A_{\mu}dx^{\mu}-g\int\phi^{\dagger} \phi\,|dx|\biggr{)}. \tag{6}\]
The parameter \(g\) has a nontrivial beta function with the following properties:
* For \(\frac{e^{2}|q|}{2\pi}<1\) there are two fixed points. One corresponds to a stable DCFT (with no relevant operators) and the other to an unstable DCFT (with one relevant operator, \(\phi^{\dagger}\phi\)).
* For \(\frac{e^{2}|q|}{2\pi}=1\) the two fixed points merge and the operator \(\phi^{\dagger}\phi\) is marginal.
* For \(\frac{e^{2}|q|}{2\pi}>1\) the coupling \(g\) flows to \(-\infty\).
This behavior is reminiscent of how conformality is (presumably) lost in QCD (a.k.a. Miransky scaling [20]). Here, conformality is lost at \(\frac{e^{2}|q|}{2\pi}=1\) in the sense that no DCFTs with finite \(g\) exist for \(\frac{e^{2}|q|}{2\pi}>1\). However, the flow \(g\to-\infty\) must still be analyzed to determine the infrared behavior of Wilson lines with sufficiently large charge.
Again, analogously to QCD, one finds that an exponentially low energy scale is generated when \(\frac{e^{2}|q|}{2\pi}\) is slightly larger than 1 (dimensional transmutation). The dynamics is that of an exponentially large cloud of bosons surrounding the Wilson line. We present the properties of the cloud, which is essentially a new soliton, and argue that it screens the charge of the Wilson line entirely. We find that the defects with \(\frac{e^{2}|q|}{2\pi}>1\) are trivial DCFTs in the infrared for a generic bulk scalar potential.
While we find two fixed points for \(\frac{e^{2}|q|}{2\pi}<1\), we do not claim that our analysis of that region is complete. Indeed, for sufficiently small \(\frac{e^{2}|q|}{2\pi}\) the quartic \(|\phi|^{4}\) defect operator becomes relevant in the unstable defect fixed point, and the dynamics must be re-analyzed. We leave this for the future.
For fermionic QED\({}_{4}\) the story, which we analyze in section 3, is conceptually similar, except that the instability occurs at \(\frac{e^{2}|q|}{4\pi}=1\), which in nature corresponds to \(|q|\sim 137\). The
fact that nuclei with charge \(\sim 137\) lead to difficulties with the Dirac equation was observed already decades ago [21]. Another difference from the scalar theory is that for \(\frac{e^{2}|q|}{4\pi}>1\) we find an exponentially large charged condensate of fermions that only screens the line down to \(\frac{e^{2}|q|}{4\pi}\simeq 1\), and not to a trivial line.
In summary, unlike in pure Maxwell theory, in QED Wilson lines with sufficiently large \(|q|\) are screened, i.e. do not lead to new interesting DCFTs. The transition between the two regimes involves the annihilation of two fixed points and dimensional transmutation due to the running coupling \(g\) on the Wilson line. Furthermore, for small \(|q|\), there are multiple fixed points, not all of which we have analyzed.
We also consider two interesting variations on the above themes. The first variation is multi-flavor scalar QED. Our soliton that screens the Wilson line for \(\frac{e^{2}|q|}{2\pi}>1\) then transforms nontrivially under an internal symmetry and there is thus some sort of symmetry breaking - a zero mode of the soliton. We show that this zero mode must be integrated over, and the true screening cloud does not lead to symmetry breaking. The Wilson line leaves no measurable trace of its existence at distances much larger than the cloud. It is therefore completely transparent to all bulk observables. However, on the Wilson line itself, depending on its charge, there is some degeneracy of states (a 0+1d TQFT stacked on the trivial, screened line), which follows from symmetry considerations alone. This happens precisely because of a fact we already mentioned above: the end-point operators in the multi-flavor model are in a projective representation of the symmetry group, and this constrains the infrared limit of the line defect.
The second variation on the above themes is to consider QED\({}_{4}\) with a scalar of charge \(q_{\phi}>1\) but no scalars of charge 1. Now the theory has a \(\mathbb{Z}_{q_{\phi}}\) electric one-form symmetry, and hence Wilson lines of \(q\neq 0\) mod \(q_{\phi}\) cannot flow to trivial DCFTs, no matter how large \(|q|\) is. Similarly, in the \(\mathcal{N}=4\) supersymmetric Yang-Mills (SYM) theory with, say, gauge group \(SU(2)\), a Wilson line in a large half-integer spin representation cannot be completely screened due to the \(\mathbb{Z}_{2}\) one-form symmetry. The question of what precisely is the infrared limit in these two cases goes beyond the leading order we analyze here. We speculate about the possible infrared phases that are consistent with the one-form symmetry in both cases.
### Non-Abelian gauge theories and \(\mathcal{N}=4\) Sym
Much of what we have found for Abelian theories carries over to essentially all weakly coupled conformal gauge theories in four dimensions. We discuss non-Abelian theories in section 4. Let us consider for concreteness the Wilson lines in the \(\mathcal{N}=4\) SYM theory with gauge group \(SU(2)\).
There are multiple possible Wilson lines in this theory, including supersymmetric versions of the Wilson line which include also scalar fields from the vector multiplet, and which preserve some fraction of the supersymmetry. These Wilson lines were the subject of many investigations in the last decades, see e.g. [22]. All the supersymmetric Wilson lines break the \(SO(6)_{R}\) global symmetry of \(\mathcal{N}=4\) SYM to a subgroup. Here we are interested in the \(SO(6)_{R}\)-invariant Wilson line (3), which breaks all of the supersymmetry. As in the Abelian
case, one must not ignore scalar bilinears, which turn out to be important defect operators. We find again that for Wilson lines in the spin \(s\) representation of \(SU(2)\), when \(g_{YM}^{2}s\sim 1\) the Wilson line is screened (for large half-integral \(s\) the one-form symmetry prevents the line from being completely screened). Therefore, as the coupling constant is increased, fewer Wilson lines survive as nontrivial DCFTs with \(SO(6)_{R}\) symmetry. Since the theory admits electric-magnetic duality, this suggests that \(SO(6)_{R}\)-invariant 't Hooft lines have interesting dynamics already at weak coupling. In other words, there should be very few nontrivial \(SO(6)_{R}\)-invariant 't Hooft lines at weak coupling. We will see that this is indeed the case!
### 2+1 dimensional critical points
Wilson lines in conformal 2+1 dimensional theories, which we analyze in section 5, are interesting both theoretically but also because they correspond to charged impurities in 2+1 dimensional second order quantum phase transitions, and hence the predictions we make may be testable (in addition, there are recent numerical techniques which appear very promising [23] as well as advances on bootstrapping defects, e.g. [24; 25; 26; 27; 28; 29]). We analyze 2+1 dimensional scalar and fermionic QED\({}_{3}\), with and without a Chern-Simons term. In these theories again Wilson lines of small enough charge flow to nontrivial DCFTs, while the others are screened.
We consider both the tricritical and the ordinary scalar QED\({}_{3}\), which are related to second order superconducting transitions. In the tricritical point, all the Wilson lines that survive in the scaling limit we study are trivial (what that means precisely is that the number of nontrivial Wilson lines is much smaller than \(N_{f}\) for large \(N_{f}\)), while for the ordinary scalar QED\({}_{3}\) we expect the number of nontrivial Wilson lines to scale with \(N_{f}\) for large \(N_{f}\) (but we do not determine the value of the critical charge in this theory). Extrapolating to small \(N_{f}\) this has repercussions for charged impurities in the superconducting phase transition, and there could also be implications for 3d dualities. For the fermionic QED\({}_{3}\) we find that the number of nontrivial Wilson lines scales as \(\sim N_{f}\) and we determine in detail the precise bound on the charge of conformal line operators. We do not analyze explicitly the fate of Wilson lines with super-critical charge in any of these 2+1 dimensional examples, i.e. we do not compute in detail the screening cloud solitons, and we leave it for future work as well.
Finally, we study the 2+1 dimensional theory of graphene. This has four 2+1 dimensional fermions coupled non-relativistically to the electric field in 3+1 dimensions. Charged impurities of relatively low charge are screened and a cloud develops [30]. For better analytic control, we consider a generalization of graphene with \(2N_{f}\) fermions, and compute the critical charge in the limit of large \(N_{f}\) and compare with the experimental result [31]. Charged impurities with charge smaller than the critical one admit conformal line phases and interesting RG flows that have not been observed yet.
### Comments on 't Hooft lines
We end this paper in section 6 with a few comments on 't Hooft lines in Abelian and non-Abelian gauge theories. We emphasize the properties of 't Hooft lines as DCFTs. We compute the anomalous dimension of \(\phi^{\dagger}\phi\) as a defect operator in scalar QED\({}_{4}\). Unlike the situation with Wilson lines, it always remains irrelevant, and in fact, picks up a large positive anomalous dimension as the charge of the monopole or of the scalar field grows.
For fermionic QED\({}_{4}\), it is well known that the lowest angular momentum modes of the fermion can penetrate the centrifugal barrier and the fermions should then be treated carefully in the background of a monopole. We reinterpret these statements in terms of the spectrum of the defect. We show that there exists a marginal operator at tree level which is a fermion bilinear. Therefore, at tree level there is a continuous conformal manifold of possible 't Hooft lines in fermionic QED\({}_{4}\), corresponding to different boundary conditions for the fermions at the defect. This manifold is lifted at one loop and only one fixed point remains (see [32] and references therein).
We note that internal symmetries that participate in a nontrivial two-group structure with the magnetic one-form symmetry are necessarily broken by the 't Hooft loops. This leads to tilt operators with \(\hat{\Delta}=1\) exactly. (In gauge theories with \(\sum_{i}q_{i}\neq 0\) but where \(\sum_{i}q_{i}^{3}=0\), no spherically symmetric stationary 't Hooft loops exist. This can be interpreted as due to a two-group involving the Lorentz symmetry.)
In the \(\mathcal{N}=4\) SYM theory, we consider 't Hooft loops which are \(SO(6)\)-invariant. These are non-BPS 't Hooft loops which we can study at weak coupling. We argue that with gauge group \(SU(N)\), they are all screened - there is an instability towards condensing vector bosons which presumably form a screening cloud, canceling the bare magnetic field at the core altogether. With non-simple gauge groups, such as \(PSU(N)=SU(N)/\mathbb{Z}_{N}\), we argue that there exist nontrivial 't Hooft line DCFTs, corresponding to anti-symmetric representations. This is consistent with the \(\mathbb{Z}_{N}\) magnetic 1-form symmetry. This picture is nicely consistent with \(S\)-duality which exchanges \(PSU(N)\) and \(SU(N)\) gauge groups [33]. We expect that all Wilson lines are screened as we increase the coupling constant in the \(PSU(N)\) gauge theory. Wilson lines in the \(SU(N)\) gauge theory cannot completely disappear though at strong coupling due to the electric \(\mathbb{Z}_{N}\) one-form symmetry. The minimal conjecture, that only \(N\) Wilson lines survive at strong coupling, is compatible with the fact that we have exactly \(N\) nontrivial 't Hooft lines at weak coupling in the \(PSU(N)\) gauge theory.
## 2 Scalar QED\({}_{4}\)
### Two DCFT fixed points
We first consider scalar QED\({}_{4}\) in (mostly minus) Minkowski signature with a charge \(q\) Wilson line extending in the time direction:
\[\begin{split}\langle W_{q}\,\mathcal{O}_{1}\ldots\rangle& =\int D\phi\,DA_{\mu}\,\exp\left[i\int d^{4}x\,\left(\mathcal{L}[A, \phi]-q\,\delta^{3}(\vec{x})A_{0}(x)\right)\right]\,\mathcal{O}_{1}\ldots\,,\\ \mathcal{L}[A,\phi]&=-\frac{1}{4e^{2}}F_{\mu\nu}^{2} +|D_{\mu}\phi|^{2}-\frac{\lambda}{2}|\phi|^{4}\,.\end{split} \tag{1}\]
The setup (1) defines a so-called defect QFT (DQFT). In the following we will study the properties of such DQFTs as a function of the charge of the Wilson line \(q\) at weak coupling \(e^{2}\sim\lambda\ll 1\). In most of the section we will assume that the scalar mass is tuned to zero, in order to get interesting low-energy physics.
By rescaling \(\phi=\Phi/e\), we can see that taking the scaling limit
\[\begin{split}& e\to 0\,,\quad\lambda\to 0\,,\quad q\to\infty\,,\\ &\frac{\lambda}{e^{2}}=\text{fixed}\,,\quad q\,e^{2}=\text{fixed }\,,\end{split} \tag{2}\]
leads to a problem that can be treated in the saddle point or semiclassical approximation, i.e. by solving classical differential equations. The saddle point equations are
\[\begin{split}\partial_{\mu}F^{\mu\nu}+J^{\nu}&=e^{ 2}q\,\delta_{t}^{\nu}\,\delta^{3}(\vec{x})\,,\\ D_{\mu}D^{\mu}\Phi+\frac{\lambda}{e^{2}}|\Phi|^{2}\Phi& =0\,,\end{split} \tag{3}\]
where the current is \(J_{\mu}=i\left(\Phi^{\dagger}\partial_{\mu}\Phi-\partial_{\mu}\Phi^{\dagger}\,\Phi-2iA_{\mu}|\Phi|^{2}\right)\). In this section we look for interesting classical solutions of this system, which would be related to various infrared phases of the line operator.
The first, obvious classical solution is
\[A_{t}=\frac{e^{2}q}{4\pi r}\,,\qquad\Phi=0\,. \tag{4}\]
This is the intuitive solution corresponding to the Coulomb field of a point charge and vanishing scalar field. This solution automatically obeys the \(SL(2,\mathbb{R})\) symmetry and hence leads to a DCFT. However we will find that, depending on the parameters, the resulting DCFT may be sick and hence a different classical solution would have to be identified.
To investigate the DCFT associated to the saddle point (4) we consider fluctuations around the saddle point. We focus on scalar fluctuations, and we find that close to the Wilson
loop they behave as
\[\Phi(x)=\sum_{\ell,m}\int\frac{d\omega}{2\pi}\,e^{-i\omega t}\,Y_{\ell m}(\hat{x})\,\Phi_{\omega\ell m}(r)\,,\qquad\Phi_{\omega\ell m}(r)\underset{r\to 0}{\sim}r^{-\frac{1}{2}\pm\nu_{\ell}}\,, \tag{5}\]
where
\[\nu_{\ell}=\sqrt{\left(\ell+\frac{1}{2}\right)^{2}-g^{2}}\,,\qquad g\equiv\frac{e^{2}q}{4\pi}\,. \tag{6}\]
In what follows we focus on the \(\ell=0\) mode, for which \(\nu\equiv\nu_{0}=\sqrt{1/4-g^{2}}\).
It will be technically advantageous to exploit the fact that massless scalar QED\({}_{4}\) is classically conformally invariant, and perform the Weyl transformation2
Footnote 2: Whether the theory is a DCFT at the classical level depends on the boundary conditions we choose. However, even if the boundary conditions break (boundary) conformal invariance, we can perform the Weyl transformation, since the bulk remains conformal.
\[\begin{split} ds^{2}&=r^{2}\left[\frac{-dt^{2}+dr^{ 2}}{r^{2}}+ds_{S^{2}}^{2}\right]\\ &\equiv r^{2}d\tilde{s}_{\text{AdS}_{2}\,\times\,S^{2}}^{2}\end{split} \tag{7}\]
which maps the flat space problem to a problem in AdS\({}_{2}\times S^{2}\), with the defect now at the asymptotic boundary of AdS\({}_{2}\). The scalar fluctuations in AdS are related to those in (5) through \(\Phi_{A}=\frac{1}{r}\tilde{\Phi}_{A}\) and the gauge field background from (4) is unchanged. Through this mapping, we can readily borrow results from the AdS/CFT literature about boundary conditions on scalar fields; we give a quick overview below, with the final result obtained in (22) and figure 1. Specializing to the \(\ell=0\) mode, the near-boundary behavior is:
\[\tilde{\Phi}_{\omega}(r)=\alpha_{\omega}\,r^{1/2-\nu}\left(1+\frac{g\,\omega\,r}{\nu-1/2}+\ldots\right)+\beta_{\omega}\,r^{1/2+\nu}\left(1-\frac{g\,\omega\,r}{\nu+1/2}+\ldots\right)\,. \tag{8}\]
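The subleading terms in (8) follow from a short Frobenius computation: with our conventions, the \(\ell=0\) radial equation contains the Coulomb cross term \(2g\omega/r\) coming from \((\omega+g/r)^{2}\), and substituting \(\tilde{\Phi}=r^{s}(1+c_{1}r+\ldots)\) fixes \(2s\,c_{1}+2g\omega=0\), i.e. \(c_{1}=-g\omega/s\) for \(s=1/2\pm\nu\), which reproduces the two series above.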
We will drop the tilde from \(\Phi\) from here onwards.
It will be useful for our purposes below to introduce a small radial cutoff at \(r=r_{0}\) and never remove it throughout the computation. First, we return to the question of boundary conditions. These are determined from the variation of the action and keeping track of boundary terms. Varying the action \(S_{\text{bulk}}=\int_{\text{AdS}_{2}\,\times\,S^{2}}\sqrt{-g}\,\mathcal{L}[A,\phi]\) from (1) and imposing the bulk equations of motions gives the boundary term (for \(\nu>0\))
\[\delta S_{\text{bulk}}=r_{0}^{-2\nu}\,\frac{1-2\nu}{2}\int d\omega\left(\alpha _{\omega}^{\dagger}\delta\alpha_{\omega}+\text{c.c.}\right)+\text{(subleading )}\,. \tag{9}\]
The subleading terms will be important at the next step, where we will write them out explicitly. The leading term vanishes if
\[\alpha_{\omega}=\alpha_{\omega}^{\dagger}=0\,, \tag{10}\]
whereas \(\beta_{\omega}\) is a fluctuating degree of freedom. These boundary conditions are analogous to the Dirichlet boundary conditions, since the more singular mode in (8) is set to zero. Since the boundary terms have to vanish identically on the equations of motion, at finite \(r_{0}\) we need to slightly correct the boundary conditions, or add additional boundary terms, to cancel the subleading terms. We will only write out the most important such terms. The same comment applies below.
We can read out the scaling dimension of the defect operators for the conformal boundary condition (10). We see that the \(\beta_{\omega}\) correspond to operators of dimension \(1/2+\nu\). Remember
that we are studying a gauge theory and hence only gauge-invariant operators should be considered, and thus the scaling dimension of the bilinear \(\beta^{\dagger}\beta\) is
\[\hat{\Delta}(\beta^{\dagger}\beta)=1+\sqrt{1-4g^{2}}\;. \tag{11}\]
For \(g\to 0\) (i.e. the trivial defect \(q=0\)) the scaling dimension becomes \(\hat{\Delta}=2\), coinciding with the dimension of the bulk operator \(\Phi^{\dagger}\Phi\). Therefore the boundary condition (10) defines the usual Wilson line operator, in the sense that if we set \(q=0\) this boundary condition means that there is no defect at all.3
Footnote 3: From the perspective of standard perturbation theory (where \(q\) is usually taken to be \(O(1)\)), the result for \(\nu\) in (5) re-sums the contribution of infinitely many Feynman diagrams to the anomalous dimensions of defect operators of the form \(\partial^{\#}\Phi^{\dagger}\partial^{\#}\Phi\). This is analogous to what happens in other semiclassical limits, see e.g. [34; 35; 36; 14] in a similar context. We checked explicitly the agreement of the semiclassical result (5) with a one-loop diagrammatic calculation of the anomalous dimension of \(\Phi^{\dagger}\Phi\) on the defect.
There is a twist in the story: the boundary condition (10) is not unique. We can add boundary terms and they can change the boundary conditions and the boundary operator spectrum [37; 38; 39; 40; 41]. Let us add the following boundary term:
\[S_{\rm bdy}^{(1)}=-\frac{1-2\nu}{2}\int_{r=r_{0}}dt\,\sqrt{-\hat{g}}\,|\Phi|^ {2}\,,\qquad d\hat{s}^{2}=\frac{-dt^{2}}{r_{0}^{2}}\,, \tag{12}\]
which is carefully tailored to cancel the leading term in the variation (9). Combined with \(S_{\rm bulk}\) we now have the variation:
\[\begin{split}\delta\left(S_{\rm bulk}+S_{\rm bdy}^{(1)}\right)= \int d\omega\,\Big{[}2\nu\left(\beta_{\omega}^{\dagger}\delta\alpha_{\omega}+ \text{c.c.}\right)+2\nu\,r_{0}^{2\nu}\left(\beta_{\omega}^{\dagger}\delta \beta_{\omega}+\text{c.c.}\right)\\ +\frac{2q\,\omega\,r_{0}^{1-2\nu}}{2\nu-1}\left(\alpha_{\omega}^{ \dagger}\delta\alpha_{\omega}+\text{c.c.}\right)+\text{(subleading)}\Big{]} \;.\end{split} \tag{13}\]
Since for \(g^{2}<1/4\) we have \(0<\nu<1/2\), the first term is dominant and permits the choice of boundary condition4
Footnote 4: Note that the boundary condition in (10) remains viable. With that choice the boundary term we added evaluates to \(0\) (to the order in \(r_{0}\) we are working).
\[\beta_{\omega}=\beta_{\omega}^{\dagger}=0\,. \tag{14}\]
\(\alpha_{\omega}\) becomes a boundary degree of freedom with dimension \(1/2-\nu\). Again, only bilinears are gauge-invariant. In the AdS/CFT literature this is known as the 'alternative quantization' [37] of the scalar \(\Phi\). The boundary condition (14) describes a new DCFT, with a different operator spectrum from the usual Wilson line defined by (10). In particular, the lowest dimensional gauge-invariant operator is the bilinear \(\alpha^{\dagger}\alpha\), which has scaling dimension \(\hat{\Delta}=1-2\nu<1\) and it is therefore relevant. This will be important below, as adding this operator to the action leads to an RG flow.
A remark is in order. Our analysis of the linearized fluctuations suggests that alternative quantization defines a unitary DCFT for arbitrary \(0<\nu<1/2\), i.e. for arbitrary
\(|q|<2\pi/e^{2}\). This would mean that even low-charge Wilson lines have two different possible fixed points. As we will explain in the next section, this conclusion is not entirely correct due to nonlinear interactions between the fluctuations. Eventually, we will only claim that alternative quantization defines a healthy DCFT in the window \(0<\nu<1/4\), which means that there are two fixed points for the Wilson line starting from charge \(|q|=\frac{\sqrt{3}\pi}{e^{2}}\) up to the unitarity bound \(|q|=\frac{2\pi}{e^{2}}\).
In our problem we have \(1/2>\nu>0\), which in the infrared fixed point (10) means that the lowest lying bilinear operators cover the range from \(\hat{\Delta}=2\) with no impurity to \(\hat{\Delta}=1\) at the bound \(\nu=0\), while in the alternative quantization fixed point (14) we cover the range from \(\hat{\Delta}=0\) when there is no impurity to \(\hat{\Delta}=1\) at the bound \(\nu=0\). These are consistent with the unitarity bound \(\Delta\geq\max\left(\frac{d-2}{2},0\right)\), which for \(d=1\) is \(\hat{\Delta}\geq 0\).5
Footnote 5: The alternative quantization window commonly quoted in the literature is \(0<\nu<1\). However, the range \(1/2<\nu<1\) (which is not realized in our problem) would clearly give rise to a non-unitary alternative quantization DCFT, since the scaling dimension would be negative. The scalar theory in AdS\({}_{2}\) in this mass range therefore develops a sickness with alternative boundary conditions earlier than in higher dimensions. See [42] for related comments.
There is a way to interpolate between alternative and standard quantization: they are connected by an RG flow (referred to as the double-trace flow in AdS/CFT) that is triggered by adding the relevant operator \(\left|\alpha\right|^{2}\) to the alternative quantization DCFT action. This is implemented by the additional boundary term:
\[S_{\text{bdy}}^{(2)}=-f_{0}\int_{r=r_{0}}dt\,\sqrt{-\hat{g}}\,r_{0}^{2\nu}| \Phi|^{2}\,. \tag{15}\]
This term is chosen so that it reduces to \(\left|\alpha\right|^{2}\) in the limit \(r_{0}\to 0\). Upon imposing \(\delta\left(S_{\text{bulk}}+S_{\text{bdy}}^{(1)}+S_{\text{bdy}}^{(2)}\right) =0\) and defining the dimensionless coupling constant \(f\) through \(f_{0}=fr_{0}^{-2\nu}\), we obtain the boundary condition
\[\begin{split}\frac{\beta_{\omega}}{\alpha_{\omega}}& =\frac{f_{0}}{2\nu-f_{0}r_{0}^{2\nu}}\\ &=\frac{f}{2\nu-f}\,r_{0}^{-2\nu}\,,\end{split} \tag{16}\]
where we took \(r_{0}\omega\ll 1\) and only kept the leading term.6 For \(\nu>0\) in the limit \(r_{0}\to 0\) this is the well known result for the double-trace deformation [38; 39; 40; 41]. To extract the beta function of \(f\) for arbitrary \(\nu\) we demand that the boundary conditions (16) (and hence the physical theory) are left invariant by a simultaneous rescaling of the cutoff and the coupling. This Callan-Symanzik style argument [20] implies that \(r_{0}\frac{\partial(\beta_{\omega}/\alpha_{\omega})}{\partial r_{0}}-\beta_{f }\frac{\partial(\beta_{\omega}/\alpha_{\omega})}{\partial f}=0\), which leads to the beta function
Footnote 6: While we may contemplate whether \(f_{0}\) needs to be a lot smaller than the cutoff scale \(r_{0}^{-2\nu}\), and hence \(f\) needs to be infinitesimal, this result is trustworthy for \(f=O(1)\).
\[\beta_{f}=-2\nu f+f^{2}\,. \tag{17}\]
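Explicitly, using the leading form \(\beta_{\omega}/\alpha_{\omega}=\frac{f}{2\nu-f}\,r_{0}^{-2\nu}\) from (16), one finds \(r_{0}\partial_{r_{0}}(\beta_{\omega}/\alpha_{\omega})=-2\nu\,\beta_{\omega}/\alpha_{\omega}\) and \(\partial_{f}(\beta_{\omega}/\alpha_{\omega})=\frac{2\nu}{(2\nu-f)^{2}}\,r_{0}^{-2\nu}\), so the Callan-Symanzik condition fixes \(\beta_{f}=-f(2\nu-f)=-2\nu f+f^{2}\).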
For real \(\nu\), we find two fixed points: \(f=0\) is the UV (alternative quantization) and \(f=2\nu\) the IR (standard quantization) DCFT limit of the resulting RG flow. The RG flow corresponds
to interpolating between the boundary conditions in (14) and (10). For \(f<0\) we have \(\beta_{f}>0\) leading to the runaway behavior \(f\to-\infty\) in the IR. In other words, in the alternative quantization fixed point, which exists for positive \(\nu\) at \(f=0\), with positive sign for \(f\) the deformation (15) leads to the standard Wilson line fixed point, while with a negative sign for \(f\), a long flow towards infinitely negative \(f\) ensues and the dynamics has to be understood. We will provide a physical interpretation of this runaway in the next section.
Let us consider in some detail the special case when the coupling \(f\) is marginal, \(\nu=0\), and only exhibits logarithmic running. Then the two falloffs in (5) degenerate and we have to implement the change of basis
\[\alpha_{\omega}=\frac{1}{2}\left(-\frac{b_{\omega}}{\nu}+a_{\omega}\right)r_{0}^{\nu}\qquad\text{and}\qquad\beta_{\omega}=\frac{1}{2}\left(\frac{b_{\omega}}{\nu}+a_{\omega}\right)r_{0}^{-\nu}\,, \tag{18}\]
for some arbitrary cutoff radius \(r_{0}\). Then we can take the limit \(\nu\to 0\):
\[\Phi\sim b_{\omega}\sqrt{r}\log(r/r_{0})+a_{\omega}\sqrt{r}\,,\qquad\nu=0\,. \tag{19}\]
We can directly read off the evolution of the coupling constant \(f\):
\[\frac{1}{f(r_{0})}+\log(r/r_{0})=\frac{1}{f_{*}}+\log(r/r_{*})\quad\implies\quad f(r_{0})=\frac{f_{*}}{1+f_{*}\log(r_{0}/r_{*})}\,, \tag{20}\]
where \(r_{*}\) is some reference scale, or equivalently
\[\beta_{f}=f^{2}\,,\qquad\nu=0\,. \tag{21}\]
This agrees with [41] for \(\nu=0\). Note that while for \(\nu>0\) we could have taken the \(r_{0}\to 0\) limit from early in the calculation, it is essential to keep \(r_{0}\) finite to make sense of the marginal case with \(\nu=0\) that corresponds to the critical Wilson loop. The cutoff is also necessary to study the supercritical case, where \(\nu\) is imaginary and no real DCFT exists. We will study the supercritical case below.
Since our theory is naturally equipped with a cutoff \(r_{0}\), the beta function (17) should make sense also in the region \(g^{2}>1/4\), where the coupling \(f\) however cannot be thought of as a perturbation of a (unitary) DCFT. There is a trick to rewrite the beta function and the coupling in a way that would make sense with real couplings for both \(g^{2}>1/4\) and \(g^{2}<1/4\). To see this, note that in terms of the dimensionless coupling \(f=f_{0}r_{0}^{2\nu}\), the boundary action with the double-trace deformation can be written as:
\[S_{\text{bdy}}^{(1)}+S_{\text{bdy}}^{(2)}=-\frac{1+2\hat{f}}{2}\int_{r=r_{0}}dt\sqrt{-\hat{g}}\,|\Phi|^{2}\,,\qquad\hat{f}\equiv f-\nu\,. \tag{22}\]
We can make sense of this in the region where \(\nu^{2}<0\) by choosing a complex \(f\), so that the coupling \(\hat{f}\) in (22) is real. In terms of \(\hat{f}\) the beta function (17) reads:
\[\beta_{\hat{f}}=-\nu^{2}+\hat{f}^{2}\,. \tag{23}\]
This result holds both for \(\nu^{2}\geq 0\) and \(\nu^{2}<0\). Crucially, the coupling \(\hat{f}\) is real in both regions, ensuring that the theory stays unitary.
For \(\nu^{2}<0\) the beta function (23) does not admit fixed points at finite coupling. Instead, it is associated with a dimensional transmutation phenomenon usually referred to as "walking" behavior [43; 20]. Suppose that the RG flow starts from a small initial value of the coupling \(\hat{f}(\mu_{UV})\). The coupling decreases monotonically along the RG flow, but the rate is slowest near \(\hat{f}=0\), where the beta function is small. Eventually \(-\hat{f}(\mu)\) blows up at a scale \(\mu_{IR}\) given by
\[\mu_{IR}=\mu_{UV}e^{-\frac{\pi}{|\nu|}}\,. \tag{24}\]
We see that the dynamically generated scale is exponentially separated from the UV one for small \(\nu^{2}<0\). There is therefore dimensional transmutation on the line defect. We will discuss the physical implications of this RG flow in section 2.3.
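The exponential hierarchy (24) is easy to reproduce numerically. The sketch below (our own illustration; the values of \(|\nu|\), the endpoints \(\pm 50\) and the step size are arbitrary choices) integrates \(\beta_{\hat{f}}=|\nu|^{2}+\hat{f}^{2}\) toward the IR and confirms that the RG time spent traversing the walking region approaches \(\pi/|\nu|\).

```python
import numpy as np

# Integrate the supercritical beta function beta_fhat = |nu|^2 + fhat^2
# (i.e. nu^2 < 0 in (23)) toward the IR, measuring the RG time for fhat to
# run from large positive to large negative values; it approaches pi/|nu|,
# reproducing the scale separation of (24).
for abs_nu in (0.05, 0.1, 0.2):
    fhat, log_mu, dlog = 50.0, 0.0, 1e-3
    while fhat > -50.0:
        fhat -= (abs_nu**2 + fhat**2) * dlog   # d fhat / d log(mu) = beta
        log_mu -= dlog                         # step toward the IR
    print(f'|nu| = {abs_nu}: RG time = {-log_mu:.2f} vs pi/|nu| = {np.pi/abs_nu:.2f}')
```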
In summary, we learn that there are two DCFT fixed points for subcritical Wilson lines in scalar QED in the double scaling limit. They correspond to different boundary conditions for the field \(\phi\) and are connected by an RG flow. The flow is triggered by the gauge-invariant relevant defect operator \(\left|\phi\right|^{2}\) that has dimension \(1-2\nu\) in the UV DCFT (alternative quantization) and that becomes irrelevant with dimension \(1+2\nu\) in the IR DCFT. For \(\nu=0\), corresponding to the critical Wilson line with \(q=2\pi/e^{2}=1/(2\alpha_{\rm QED})\), the two fixed points merge. For \(q>2\pi/e^{2}\) they annihilate and there is no DCFT. Instead there is a runaway towards large negative \(\hat{f}\).
The fixed point annihilation corresponds to the supercritical regime of Wilson lines, whose physics we explore in section 2.3. This problem is closely related to another one we encountered above: If we deform the subcritical UV DCFT (alternative quantization) with \(f<0\), we encounter a runaway behavior. Finally, in the following we also analyze the stability of the alternative quantization DCFT in the subcritical regime at the nonlinear level.
Figure 1: An illustration of the \(\beta\)-function associated with the parameter \(\hat{f}\) in equation (23).
### Stability in the subcritical regime
The abstract DQFT viewpoint was very useful to interpret the \(f>0\) regime of the phase diagram, where we found two DCFTs. In particular, standard quantization (with \(f=2\nu\)) is stable under small deformations. From the beta function in (17) we also found a runaway behavior for \(f<0\). It is not uncommon for such behavior to indicate an instability. We will show below that this is indeed the case. We will find that for \(\nu>1/4\) this instability leads to the formation of a classical soliton that we construct numerically. Perhaps more surprisingly, we will find hints that for \(\nu>1/4\) the alternative quantization fixed point admits solitons with arbitrarily negative energy for any value of the deformation \(f\). We will explain why \(\nu=1/4\) is special.
To perform the analysis, we view our setup as a problem in differential equations. By changing \(f\) we are changing the boundary conditions for the equations of motion (EOM) (3). Since these are nonlinear, it is possible that there is an interesting phase diagram as we change \(f\).
The profile \(\Phi=0\), \(A_{t}=g/r\) is always a solution of the EOM. The RG analysis predicts that for \(f>0\) this solution is stable. For \(f<0\) we will find that it becomes unstable, and a new soliton solution takes over. The instability can be diagnosed in two different ways. First, in appendix A.1 we compute the \(\left\langle\alpha_{\omega}\alpha_{\omega}^{\dagger}\right\rangle\) retarded two-point function and show that for \(f<0\) it has a tachyon pole in the upper half plane, the telltale sign of a dynamical instability. The endpoint of the instability is the soliton that we construct below. Second, we show that the soliton has lower energy than the \(\Phi=0\), \(A_{t}=g/r\) solution, which is a "thermodynamical" demonstration of instability.
A simple argument establishes that the soliton cannot end up partially screening the Wilson line: the screening has to be complete.7 Let us assume that partial screening was possible; in the IR we have \(A_{t}=g^{\prime}/r\). If \(\Phi=\)const, the Maxwell equation (3) is not satisfied due to the \(A_{t}{|\Phi|}^{2}\) term in the gauge current. So we have to assume that \(\Phi\) is small. We know the possible small \(\Phi\) behavior in a Coulomb background from (5), \(\Phi\sim r^{1/2\pm\nu}\) with \(0<\nu<1/2\). Hence \(\Phi\) is growing (instead of decaying), and it starts backreacting on \(A_{t}\), ruining the assumed Coulomb behavior. We have thus reached a contradiction, and the only possible way out is to have complete screening (or \(\Phi=0\) throughout the bulk). Next we construct the explicit new soliton corresponding to screened Wilson loops.
Footnote 7: That is to leading order in the double scaling limit. Later we discuss situations in which total screening is in tension with symmetry considerations at subleading order in \(e^{2}\).
To perform the computation, we first forget about the boundary conditions at \(r=r_{0}\) and construct solutions to the EOMs of \(\Phi\) and \(A_{t}\) which are regular as \(r\to\infty\). Since the equations are second order in derivatives and regularity provides two conditions, the resulting solution depends on two constants. One is simply a length scale \(\xi\), while the other is a dimensionless parameter that we denote by \(c\). These parametrize the asymptotic form of the solution as \(r\to\infty\). Explicitly, the asymptotics are different depending on the value of \(\bar{\lambda}\equiv\lambda/e^{2}\), and are given by (we have obtained a couple more orders of the asymptotic expansions, which we
suppress here to avoid clutter):
\[\begin{split}\bar{\lambda}=0:&\qquad\begin{cases}\Phi=\frac{c}{\sqrt{2}}+\ldots\\ r\,eA_{t}=(r/\xi)^{\frac{1}{2}-\sqrt{\frac{1}{4}+c^{2}}}+\ldots\end{cases}\\ 0<\bar{\lambda}<2:&\qquad\begin{cases}\Phi=\frac{1}{\sqrt{2\bar{\lambda}\log(r/\xi)}}+\ldots\\ r\,eA_{t}=c\left[\log(r/\xi)\right]^{-1/\bar{\lambda}}+\ldots\end{cases}\\ \bar{\lambda}>2:&\qquad\begin{cases}\Phi=\frac{1}{2\sqrt{\log(r/\xi)}}\left[1+\cdots+c\left[\log(r/\xi)\right]^{1-\frac{\bar{\lambda}}{2}}+\ldots\right]\\ r\,eA_{t}=\frac{\sqrt{\bar{\lambda}-2}}{2\sqrt{\log(r/\xi)}}\left[1+\cdots+\frac{2c}{\bar{\lambda}-2}\left[\log(r/\xi)\right]^{1-\frac{\bar{\lambda}}{2}}+\ldots\right]\end{cases}\end{split} \tag{25}\]
Note that \(c\) denotes different things in the different cases and for \(\bar{\lambda}>2\) it is hiding at a subleading order (as a noninteger power term). Also note that we set the AdS radius equal to one, which makes up for the missing dimensions in the above equations.
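It is perhaps useful to note that the combination \(\frac{1}{2}+\sqrt{\frac{1}{4}+c^{2}}\) appearing in the \(\lambda=0\) exponent is precisely the AdS\({}_{2}\) operator dimension \(\delta\) that we will encounter in section 2.4,

\[\frac{1}{2}+\sqrt{\frac{1}{4}+c^{2}}=\delta\Big{|}_{c=e^{2}v/(4\pi)}\,,\]

with \(\delta\) given in (50); this anticipates the matching with the effective defect field theory discussed around (52), where the constant scalar profile Higgses the gauge field and the falloff of the gauge tail is controlled by the dimension of the corresponding massive AdS\({}_{2}\) mode.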
Given a solution with the above asymptotics as \(r\to\infty\), we may integrate the EOMs towards smaller \(r\). We may then obtain the near-defect boundary conditions, namely the charge of the Wilson line \(g\) and the value of the double-trace coefficient \(f\), that correspond to a given choice of the parameters \(\xi\) and \(c\).
In practice, we can only solve the EOMs numerically. To this aim, we set \(\xi=1\) and use the asymptotics as initial data for the numerical integration of the EOM starting at some \(r=r_{c}\) and integrating towards smaller \(r\) (up to some small \(r_{0}\)). Let us denote the resulting solution by \(\{\varphi^{(c)}(r),\,\mathcal{A}_{t}^{(c)}(r)\}\). We can reinstate \(\xi\) by a simple rescaling, thereby obtaining a two-parameter family of solutions
\[\left\{\varphi^{(c)}(r/\xi),\,\frac{\mathcal{A}_{t}^{(c)}(r/\xi)}{\xi}\right\}\,. \tag{26}\]
If we denote the near-defect asymptotic data corresponding to the solution \(\{\varphi^{(c)}(r),\,\mathcal{A}_{t}^{(c)}(r)\}\) by \(\{\alpha^{(c)},\,\beta^{(c)},\,g^{(c)}\}\), as in (8):8
Footnote 8: The subleading behavior of the gauge field provides an additional boundary datum. We omit it below, as it does not play a role in our discussion.
\[\begin{split}\varphi^{(c)}(r)&=\alpha^{(c)}\,r^{1/ 2-\nu}+\beta^{(c)}\,r^{1/2+\nu}\,,\\ \mathcal{A}_{t}^{(c)}&=\frac{g^{(c)}}{r}\,,\end{split} \tag{27}\]
then the two-parameter family of solutions in (26) has asymptotic data
\[\left\{\alpha=\frac{\alpha^{(c)}}{\xi^{1/2-\nu}},\,\beta_{\rm sol}(\alpha)= \frac{\beta^{(c)}}{\xi^{1/2+\nu}},\,g^{(c)}\right\}\,, \tag{28}\]
where by writing \(\beta_{\rm sol}(\alpha)\) we emphasize that the family of solitons characterized by _fixed \(c\) and varying \(\xi\)_ gives a curve in the \((\alpha,\beta)\) plane.
From (28), we learn that we can use \(c\) as a proxy for \(q\) (or \(g\)). Then for fixed \(q\) (or equivalently \(c\)) we can use \(\xi\) to tune the absolute value of the ratio \(\alpha/\beta_{\rm sol}(\alpha)\). The sign of the ratio \(\beta^{(c)}/\alpha^{(c)}\) determines the sign of the coupling \(f\) corresponding to the so-constructed soliton. In more detail, combining (16) with (28), for infinitesimal \(f\) we have
\[\frac{\beta^{(c)}}{\alpha^{(c)}}=\frac{f}{2\nu}\,(\xi/r_{0})^{2\nu}\,. \tag{29}\]
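In practice, the whole construction above (inward integration, rescaling, and extraction of the near-defect data) fits in a short script. The following is a minimal sketch (ours, not the code used to produce the figures): it assumes static, spherically symmetric EOMs of the schematic form \(u''+\frac{2}{r}u'+w^{2}u=\bar{\lambda}\,u^{3}\) and \(w''+\frac{2}{r}w'=2u^{2}w\) for the 4d profiles \(u=\Phi\) and \(w=A_{t}\) in units \(e=1\); the normalization of the gauge current, the reading of (25) in terms of the rescaled profile \(r\Phi\), and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam_bar = 0.0   # quartic coupling lambda/e^2; here the lambda = 0 branch of (25)
c = 0.3         # dimensionless asymptotic parameter of (25); xi is set to 1

def eom(r, y):
    # y = (u, u', w, w') with u = Phi and w = A_t (schematic normalization)
    u, du, w, dw = y
    ddu = -2.0 / r * du - w**2 * u + lam_bar * u**3
    ddw = -2.0 / r * dw + 2.0 * u**2 * w
    return [du, ddu, dw, ddw]

# lambda = 0 asymptotics at large r: r*Phi -> c/sqrt(2) (an assumption on the
# normalization), and a decaying gauge mode A_t ~ r^(-delta).
r_c, r_0 = 50.0, 1e-3
delta = 0.5 + np.sqrt(0.25 + c**2)
u0, du0 = c / (np.sqrt(2.0) * r_c), -c / (np.sqrt(2.0) * r_c**2)
w0, dw0 = r_c**(-delta), -delta * r_c**(-delta - 1.0)

# Integrate inward from r_c towards the near-defect region r ~ r_0.
sol = solve_ivp(eom, (r_c, r_0), [u0, du0, w0, dw0],
                method="Radau", rtol=1e-10, atol=1e-14, dense_output=True)

# Near-defect data: g from the Coulomb tail A_t ~ g/r, and (alpha, beta)
# from fitting r*Phi = alpha r^(1/2-nu) + beta r^(1/2+nu), as in (27).
r_fit = np.geomspace(r_0, 10.0 * r_0, 40)
u_f, _, w_f, dw_f = sol.sol(r_fit)
g = -(r_fit[0] ** 2) * dw_f[0]          # from A_t' = -g/r^2 near the defect
nu = np.sqrt(0.25 - g**2)               # subcritical regime g < 1/2 assumed
M = np.stack([r_fit ** (0.5 - nu), r_fit ** (0.5 + nu)], axis=1)
alpha, beta = np.linalg.lstsq(M, r_fit * u_f, rcond=None)[0]
print(f"g = {g:.4f}, nu = {nu:.4f}, beta/alpha = {beta / alpha:.4e}")
```

Scanning over \(c\) and repeating the fit then maps out the sign of \(\beta^{(c)}/\alpha^{(c)}\) as a function of \(g\), which is the quantity discussed next.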
Since without loss of generality we choose \(\Phi\) to be positive near the boundary, giving \(\alpha>0\), it is the sign of \(\beta^{(c)}\) that correlates with that of \(f\). Naively, one might expect all the solitons that can be constructed in this way to correspond to a negative value of the double-trace coefficient \(f\) (since this is the region where an instability exists for positive \(\nu\), while for positive \(f\) we expect the dominant saddle point to be (4)). Rather surprisingly, we find an intriguing pattern of signs as a function of \(g=e^{2}q/(4\pi)\), with blue negative and red positive, shown in figure 2.
We will explain this pattern below analytically, showing that the red regions are associated with the existence of dangerous configurations whose energy seems unbounded from below. These have a simple interpretation in terms of the renormalization group, as we will see.
From (28) we observe that in the combination
\[s(g)\equiv-\frac{\beta_{\rm sol}(\alpha)}{\alpha^{\frac{1/2+\nu}{1/2-\nu}}}=- \frac{\beta^{(c(g))}}{\left[\alpha^{(c(g))}\right]^{\frac{1/2+\nu}{1/2-\nu}}} \tag{30}\]
\(\xi\) drops out (the factors of \(\xi^{-(1/2+\nu)}\) coming from the numerator and from the denominator cancel), hence \(s(g)\) is a useful characterization of the nonlinear response of our system to turning on \(\Phi\). The sign of \(s(g)\) is then anti-correlated with the sign of \(f\) that is required to get a solitonic solution. A useful construction borrowed from the AdS/CFT literature is as follows [44, 45]. To avoid clutter let us use the notation \(A=\sqrt{\alpha^{\dagger}\alpha}\), \(B=\sqrt{\beta^{\dagger}\beta}\) and specify the general non-linear boundary condition
\[B=\mathcal{W}^{\prime}(A)\,. \tag{31}\]
For example the interpolating flow between alternative and standard boundary conditions corresponds to \(\mathcal{W}(A)=fA^{2}/2\). We also define
\[\begin{split}\mathcal{W}_{0}(A)&\equiv\left(\frac{ 1}{2}-\nu\right)s(g)\,A^{\frac{1}{1/2-\nu}}\,,\\ \mathcal{V}(A)&\equiv 4\nu\left[\mathcal{W}(A)+ \mathcal{W}_{0}(A)\right]\,,\end{split} \tag{32}\]
where \({\cal V}(A)\) is the effective potential. That is, \({\cal V}(A)\) is genuinely the (leading order) quantum effective potential, i.e. (minus) the 1PI effective action evaluated for constant \(\langle A\rangle\); see Appendix A.3 for a derivation similar to [44, 45]. One can verify that the solitonic solution satisfying the boundary condition (31) is a critical point of \({\cal V}(A)\); this is a consistency check of the formalism. (Recall that \({\cal W}^{\prime}_{0}(A)=s(g)\,A^{\frac{1/2+\nu}{1/2-\nu}}=-B_{\rm sol}(A)\), where we used (30).) Since the value of the effective potential is zero for the naive saddle \(A=0\) and negative at the soliton critical point, we conclude that the soliton is the energetically favored configuration; this completes the "thermodynamical" demonstration of the instability of the \(\Phi=0\) solution.

Figure 2: Sign of \(f\) corresponding to the numerical soliton as a function of \(g\).
Next we ask if we can provide an analytic understanding of the sign structure of \(s(g)\) (previewed in figure 2). We will first explain what happens to the bulk scalar profile at the special points where the sign of \(s(g)\) flips, and then we interpret the values of \(g\) where the sign flips take place from the point of view of the defect renormalization group.
The near boundary analysis of the equations (at \(\omega=0\), but going beyond the terms displayed in (5)) gives for \(\Phi\):
\[\begin{split}\Phi=&\alpha\,r^{1/2-\nu}\left[1+ \alpha^{2}\,\frac{1+2(1+\bar{\lambda})\nu}{4\nu(1-2\nu)(1-4\nu)}\,r^{1-2\nu}+ \ldots\right]+\beta\,r^{1/2+\nu}\left[1+\ldots\right]\\ &+\left(\text{cross terms between $\alpha$ and $\beta$}\right).\end{split} \tag{33}\]
Note that the exponent in the \(\alpha^{3}\) term coincides with that of the \(\beta\) term for \(\nu=\frac{1}{4}\). Exactly at this point, the coefficient of the \(\alpha^{3}\) term diverges. Using the relation between \(\beta\) and \(\alpha\) from (30), we see that the only way for us to get a regular scalar profile \(\Phi\) (which we expect, since nothing drastic happens in the bulk), is to have:
\[s(g)=-\frac{(3+\bar{\lambda})/4}{\nu(g)-1/4}+\ldots,\qquad(\text{for $\nu\to 1/4$}). \tag{34}\]
Since \(\nu=1/4\) corresponds to \(g=\sqrt{3}/4\approx 0.43\), we have successfully explained the first sign change (counting from \(g=1/2\)) of \(s(g)\) that we see in figure 2; see also figure 3. It turns out that all the sign changes are explained by an \(\alpha^{2k+1}\) term colliding with the \(\beta\) term, giving \(\nu_{k}=\frac{k}{2(k+1)}\) for the \(k\)-th sign change point.
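For orientation, using the subcritical relation \(\nu(g)=\sqrt{1/4-g^{2}}\), the first few sign-change points sit at

\[\nu_{1}=\frac{1}{4}\;\Rightarrow\;g_{1}=\frac{\sqrt{3}}{4}\approx 0.433\,,\qquad\nu_{2}=\frac{1}{3}\;\Rightarrow\;g_{2}=\frac{\sqrt{5}}{6}\approx 0.373\,,\qquad\nu_{3}=\frac{3}{8}\;\Rightarrow\;g_{3}=\frac{\sqrt{7}}{8}\approx 0.331\,,\]

and since \(\nu_{k}\to 1/2\) as \(k\to\infty\), the sign changes accumulate towards \(g=0\).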
From the defect point of view we have already mentioned that something special happens at \(\nu=1/4\), now we can explain why. At these points the operators \(\left(\alpha^{\dagger}\alpha\right)^{k+1}\) become marginal. On the other side of this transition point, we have a new relevant operator, which can be added to the action (included in \({\cal W}\)) without spoiling the UV alternative quantization DCFT.9 This can be understood straightforwardly also from (14), where we have found that in alternative quantization the scaling dimension of the charged boson is \((1-2\nu)/2\), and hence, at \(\nu=1/4\) the quartic operator becomes relevant in the alternative quantization fixed point.
Footnote 9: We also need new boundary terms, supplementing (12), for each new relevant operator. A worked out example can be found in Appendix D of [45].
Irrespective of what coefficients we choose for the relevant terms, the asymptotics of the effective potential is always determined by the conformal term \(s(g)A^{\frac{1}{1/2-\nu}}\); hence, when \(s(g)<0\) the effective potential is unbounded from below.10 We speculate that in our case this indicates an instability of the system, as it seems to allow for the construction of a configuration with arbitrarily negative energy.11 However, we have not constructed such a configuration explicitly and we have not found a defect QFT explanation for the potential loss of stability of alternative quantization. One scenario is that, since the quartic operator becomes relevant, the alternative quantization fixed point is lost, due to the joint beta function of the bilinear and quartic operators not having mutual zeroes. What we are sure about is that alternative quantization is a healthy DCFT in the regime \(\sqrt{3}/4<g<1/2\). We leave the interesting problem of understanding the regime \(g<\sqrt{3}/4\) for future work.
Footnote 10: In itself this does not signal a pathology, as evidenced by the negative effective potential for the Hubbard-Stratonovich field in the large \(N\) critical \(O(N)\) model [46], which is a perfectly healthy theory.
Footnote 11: A circumstantial evidence is that the analogous transition in holography from \(s>0\) to \(s<0\) is accompanied by the loss of the positive energy theorem [44].
We end this subsection with three examples of soliton solutions in the regime \(\sqrt{3}/4<g<1/2\) for the three cases of \(\bar{\lambda}=\lambda/e^{2}\) considered above, see figure 4. These are the lowest energy states when \(f<0\) and the runaway behavior of the RG equation (17) leads to the physical interpretation of complete screening of the Wilson line. For these plots, we determine the value of \(r_{0}\) from the dimensionful coupling \(f_{0}\) as \(r_{0}=(f/f_{0})^{1/(2\nu)}\). By (29), this equals12
Footnote 12: That is, we set the dimensionless coupling \(f=-2\nu/100\), so that \(|f|\ll 2\nu\) and \(f\) is negative.
\[r_{0}=\left[\frac{2\nu}{f}\,\frac{\beta^{(c)}}{\alpha^{(c)}}\right]^{-1/(2\nu)}\xi\,. \tag{35}\]
We expect \(r_{0}\) to be the scale where nonlinearities operate, and the core of the screening cloud should be localized on this scale. Indeed, the three examples shown in figure 4 confirm this expectation. The core region is followed by an extended tail region, as explored in (25) from the point of view of differential equations. A complementary IR DQFT perspective on these tails is given in section 2.4, while in this section we provided a UV perspective on screening.

Figure 3: Plot of \(s(g)\) for \(\lambda=0\) obtained from numerics. In the inset the numerical data are plotted together with the analytic formula (34) (red dashed line) describing the behavior of \(s(g)\) near the singularity at \(g=\sqrt{3}/4\); we get perfect agreement.
### Screening in the supercritical regime
The bulk physics of the supercritical regime resembles that of the subcritical regime, with the boundary conditions triggering an instability towards forming a screening scalar cloud. There are two distinctions: the screening cloud forms for any boundary condition (i.e. for either sign of \(f\), as is evident from the beta function (23), which drives \(f\) to negative infinity regardless of the initial conditions for imaginary \(\nu\)), and the cloud slightly above criticality is generically exponentially large, of order \(\exp\left(\pi/|\nu|\right)\). These differences originate from the near-defect dynamics, as we explain below.
Let us recall from section 2.1 that in the supercritical regime there is no genuine DCFT and we need to keep the cutoff \(r_{0}\) finite. It is natural to take the boundary condition at the cutoff surface to be
\[0=\left(\hat{f}+\frac{1}{2}\right)\Phi+\partial_{n}\Phi\big{|}_{r_{0}}\,, \tag{36}\]
where we take \(n\) to point towards the origin, we converted the boundary condition (16) into one given in terms of \(\Phi\), and we look for a real scalar profile. We use the obvious boundary condition \(F_{0r}|_{r_{0}}=g/r_{0}^{2}\) for the gauge field. The profile \(\Phi=0\), \(A_{t}=g/r\) is always a solution, but for any \(\hat{f}\) we will always find (infinitely many) scalar solitons in addition. The reason for the existence of an infinite family of solutions is that the solution of the linearized equations has a discrete scale symmetry which can be used to generate new solutions from existing ones. The same phenomenon was discussed in [20; 42; 47].
Figure 4: Plots of the scalar profile (blue) and the electric field \(E=F_{tr}\) (orange) as functions of the distance from the probe charge, all normalized to be dimensionless. The analysis was carried out in the subcritical regime for \(\frac{e^{2}q}{2\pi}=0.9\), corresponding to \(g=0.45\). The left plot is for \(\lambda=0\), the middle one is for \(\bar{\lambda}=\lambda/e^{2}=\frac{1}{2}\), and the right one is for \(\bar{\lambda}=8\). These curves were obtained numerically by solving the saddle point equations of motion (3), following the three-step procedure explained at the beginning of this section. Note that in the left plot the scalar (multiplied by \(r\)) goes to a constant in the IR, in the middle one it decays more slowly than the electric field (multiplied by \(r^{2}\)), while in the right plot their decay rates are the same (\(\sim 1/\sqrt{\log(r)}\)).
In more detail, the solutions of the linearized equations are \(r^{1/2\pm i|\nu|}\), so that we can write:
\[\Phi=C\sqrt{r}\,\cos\left(|\nu|\log\left(\frac{r}{r_{0}}\right)- \gamma\right)\,, \tag{37}\]
where \(C\) is fixed by bulk regularity of the full nonlinear problem and \(\gamma\) is fixed by the boundary condition (36) to be:
\[\gamma=\arctan\left(\hat{f}/|\nu|\right)\,. \tag{38}\]
Under discrete scale transformations the profile (37) transforms by scaling and hence also satisfies the boundary condition (36) and gives rise to a new soliton
\[r\to\Lambda^{n}\,r\,,\qquad\Phi(r)\to(-1)^{n}\Lambda^{n/2}\,\Phi( r)\,,\qquad\Lambda\equiv\exp\left(-\frac{\pi}{|\nu|}\right)\,. \tag{39}\]
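This can be verified directly on the linearized profile (37): since \(|\nu|\log\Lambda=-\pi\),

\[\Phi(\Lambda r)=C\sqrt{\Lambda r}\,\cos\!\left(|\nu|\log\frac{r}{r_{0}}-\pi-\gamma\right)=-\Lambda^{1/2}\,\Phi(r)\,,\]

so the linearized profile is mapped to itself under (39), and the transformed configuration automatically satisfies the same boundary condition (36); distinct solitons then differ only through the amplitudes \(C_{n}\) selected by bulk regularity, as we now discuss.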
Since the envelope of \(\Phi\) in (37) grows, the solution eventually exits the linear regime and stops oscillating. The discrete scale invariance is broken by nonlinear effects. Let \(C=C_{0}\) give rise to a regular solution of the equations with zero nodes of the \(\Phi\) profile. Then by the discrete scale invariance (39), the amplitude \(C_{n}\approx C_{0}\,\Lambda^{n/2}\) with \(n\in\mathbb{Z}_{+}\) will also give rise to a regular scalar profile with \(n\) nodes.13
Footnote 13: We write approximately equal, since some nonlinear effects correct the profile (37). For increasing \(n\) these corrections are decreasing in importance. We note that we may regard the \(\Phi=0\) solution as corresponding to \(C_{\infty}\).
From the infinitely many potential solitons characterized by \(C_{n}\), we have to choose the one that is physically realized: this can be done by comparing the energies of field configurations or by a dynamical stability analysis. In Appendix A.2 we determine the spectrum of fluctuations around the \(\Phi=0\) background and we find infinitely many tachyon modes with sizes \(R_{k}\simeq\Lambda^{-k}\,r_{0}\) with \(k\geq 1\). Since we can treat the \(n\)th soliton (with parameter \(C_{n}\)) as consisting of a linearized oscillating region of size \(R_{\text{lin},n}\simeq\Lambda^{-n}\,r_{0}\) followed by a nonlinear region, we can fit tachyons with \(k\leq n\) (a total of \(n\) of them) into the linearized region, and we find that all the \(C_{n>0}\) solitons are unstable. Hence we conclude that only the \(C_{0}\) soliton is stable, since it lacks a large linearized region where tachyons could reside.
As in the subcritical case discussed in section 2.2, there are three possible IR asymptotics of the scalar soliton depending on the value of \(\bar{\lambda}=\lambda/e^{2}\), as listed in (25). Starting from these and setting \(\xi=1\) we obtain a scalar soliton. We then reinstate \(\xi\) as in (26). We choose \(\xi\) such that we satisfy the boundary condition (36) at \(r=r_{0}\). An illustrative example is given in figure 5, where we chose \(\xi=\xi_{0}\) such that Dirichlet boundary conditions are obeyed. Since the soliton oscillates in the small \(\Phi\) region as in (37), it is always possible to satisfy any boundary condition from the class (36). This is unlike the subcritical case, where the sign of \(f\) decided whether a soliton solution exists. In fact, with the choice \(\xi_{n}=\Lambda^{-n}\,\xi_{0}\) we again obtain a soliton that obeys the same boundary conditions. The corresponding asymptotic amplitude is \(C_{n}\). This is clearly demonstrated in figure 5.
The most striking feature of supercritical clouds is that for small \(|\nu|\) the core of the soliton is huge, of order \(R_{\rm cloud}\simeq r_{0}/\Lambda=r_{0}\exp{(\pi/|\nu|)}\), up to an \(O(1)\) prefactor determined by nonlinear physics. In contrast, in the subcritical case the soliton has a natural size, \(R_{\rm cloud}\simeq r_{0}\), with \(r_{0}\) fixed by the dimensionful coupling constant \(f_{0}\sim r_{0}^{-2\nu}\), as demonstrated in figure 4. Since the tail of the cloud is identical to what we have already shown for the subcritical case in figure 4, in figure 6 we only show a \(\bar{\lambda}=1/2\) cloud.
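As a numerical illustration, for the supercritical cloud of figure 6 one finds

\[g=0.51:\qquad|\nu|=\sqrt{g^{2}-\tfrac{1}{4}}\approx 0.10\,,\qquad e^{\pi/|\nu|}\approx e^{31.3}\approx 4\cdot 10^{13}\,,\]

in agreement with the estimate quoted in the caption there.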
We conclude this section by providing an intuitive RG explanation of the \(C_{n}\) solitons that we constructed numerically.
Figure 5: Plot of the screening cloud for the supercritical case with \(g\sim 2\), \(\lambda=0\), and Dirichlet boundary condition for the scalar. We cut off part of this solution with \(r<r_{0}\) thereby obtaining the physical profiles. If we set \(r_{0}=r^{(0)}\) (rightmost gray line) we get the stable scalar soliton with Dirichlet boundary condition. If we set \(r_{0}=r^{(n)}\) (the \((n+1)\)th gray line), we get the scalar soliton with \(n\) nodes (and \(n\) tachyons). A subtlety is that in the oscillating region \(g\) changes slightly, so we have to make slight adjustments in parameter space to keep \(g\) constant as we increase the number of nodes.
Figure 6: Plot of the screening cloud for the supercritical case with \(g=0.51\). The striking feature of this plot is the large cloud size, for which the theoretical estimate \(R_{\rm cloud}/r_{0}\approx 4\cdot 10^{13}\) is consistent with the plotted numerical result. We have chosen \(\bar{\lambda}=1/2\) and the tail region is identical to the (middle) \(\bar{\lambda}=1/2\) plot in figure 4. We chose Dirichlet boundary condition \(\Phi(r_{0})=0\) for the scalar field (corresponding to \(\hat{f}\to\infty\) in (36)). Different boundary conditions for the scalar field lead to a qualitatively similar plot.
The exponential size of the cloud, \(R_{\rm cloud}\simeq r_{0}\exp{(\pi/|\nu|)}\), associated with the ground state solution \(C_{0}\) is clearly a consequence of the "walking" behavior discussed around (24). To understand the solutions with \(n\geq 1\) nodes, it is convenient to re-express the boundary condition (36) in terms of the angle \(\gamma\) parametrizing the linear solution (37):
\[e^{i\gamma}=\frac{|\nu|+i\hat{f}}{\sqrt{\hat{f}^{2}+|\nu|^{2}}}\equiv z\,. \tag{40}\]
In terms of the phase \(z\), the solution to the beta-function (23) for \(\nu^{2}<0\) reads
\[z(\mu)=z(\mu_{0})\left(\frac{\mu}{\mu_{0}}\right)^{i|\nu|}\,, \tag{41}\]
where \(\mu\) is the running scale and \(\mu_{0}\) some reference initial scale. (41) describes a cyclic RG flow with period \(\Lambda^{-2}\). In practice, the beta-function (23) describes only the linear regime, and the nonlinearities drive the RG flow away from the cyclic regime.
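Indeed, \(z(\mu)\) in (41) returns to its initial value precisely when

\[|\nu|\log\frac{\mu}{\mu_{0}}\in 2\pi\mathbb{Z}\qquad\Longleftrightarrow\qquad\frac{\mu}{\mu_{0}}=e^{2\pi k/|\nu|}=\Lambda^{-2k}\,,\quad k\in\mathbb{Z}\,,\]

which is the origin of the period \(\Lambda^{-2}\) quoted above.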
Similarly to the Efimov effect [48], the discrete scale invariance (39) is a consequence of the approximate cyclic RG flow (41). The \(C_{n}\) solitons are then simply interpreted as RG flows in which the fields linger in the linear regime for \(\sim n/2\) cycles before entering the nonlinear regime and screening the Wilson line. We remark however that the ground state solution always exits the linear regime before performing a full cycle.
### Effective defect field theory for screened Wilson lines
Remarkably, the numerical analysis in the previous sections provides an exact solution for the defect RG flow triggered by unstable Wilson lines, both in the case of negative double-trace deformations and supercritical charges. In this section we complement that analysis by interpreting the long distance tail of the screening cloud in terms of an _effective defect field theory_ description of the final stages of the flow.
Let us consider first the theory with no quartic coupling, \(\lambda=0\). In this case the long distance limit of the screening cloud analyzed in the previous section admits a non-trivial one-point function for the scalar field with conformal scaling:
\[\langle|\Phi|^{2}(r)\rangle=\frac{e^{4}v^{2}}{2(4\pi r)^{2}}\,, \tag{42}\]
where \(v\sim 1/e^{2}\) is a dimensionless number which depends upon the initial charge and the boundary condition.
In the absence of gauge fields, conformal defects sourcing the scalar operator can be constructed by straightforwardly integrating the fundamental field along the line contour, see e.g. [49; 50; 51; 52]. An equally explicit construction is not available in a gauge theory. Since all gauge-invariant operators have engineering dimension larger than or equal to 2, the effective defect field theory corresponding to (42) cannot be obtained by deforming the trivial line defect with a local operator. Rather, it should be understood in terms of boundary conditions for the scalar field at \(r\to 0\). To write the corresponding defect explicitly, we notice that (42) is equivalent to a constant profile for the AdS\({}_{2}\) rescaled field; in other words, (42) describes a _Higgs phase_ on AdS\({}_{2}\). It is therefore natural to decompose the scalar field into a radial and a Goldstone component
\[\Phi(x)=\frac{1}{\sqrt{2}}h(x)e^{i\pi(x)}\,,\qquad\Phi^{\dagger}(x)=\frac{1}{ \sqrt{2}}h(x)e^{-i\pi(x)}\,, \tag{43}\]
so that the action reads
\[S=\frac{1}{e^{2}}\int d^{4}x\left[-\frac{1}{4}F_{\mu\nu}^{2}+\frac{1}{2}( \partial h)^{2}+\frac{1}{2}h^{2}(\partial_{\mu}\pi-A_{\mu})^{2}\right]\,. \tag{44}\]
To obtain the profile (42) we then simply introduce a source in the Higgs equations of motion
\[-\partial^{2}h+A_{\mu}^{2}h=-e^{2}v\,\delta^{3}(x_{\perp})\,,\qquad\partial_{ \mu}F^{\mu\nu}+h^{2}(\partial^{\nu}\pi-A^{\nu})=\partial_{\mu}\left[h^{2}( \partial^{\mu}\pi-A^{\mu})\right]=0\,, \tag{45}\]
with the solution (up to gauge transformations)
\[A_{\mu}=\pi=0\,,\qquad h=\frac{e^{2}v}{4\pi r}\equiv h_{s}(r)\,. \tag{46}\]
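Indeed, for a static profile with \(A_{\mu}=0\) the first equation in (45) reduces to a Poisson equation, and (46) is just its Coulomb Green's function:

\[-\partial^{2}h_{s}=\nabla^{2}h_{s}=\frac{e^{2}v}{4\pi}\,\nabla^{2}\frac{1}{r}=-e^{2}v\,\delta^{3}(x_{\perp})\,,\]

using \(\nabla^{2}(1/4\pi r)=-\delta^{3}(x_{\perp})\) and the fact that \(-\partial^{2}=\nabla^{2}\) on static fields in our signature.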
The source in (45) can be formally represented with a term localized at \(r=0\)
\[S_{D}=v\int_{r=0}dt\,h=v\int_{r=0}dt\sqrt{2|\Phi|^{2}}\,. \tag{47}\]
We stress that, despite its formal representation (47), this defect cannot be understood as a perturbation of the trivial defect by a local operator; it should rather be thought of as setting a boundary condition for the scalar field, somewhat similarly to a 't Hooft line. As in that case, the corresponding defect is perfectly local. To appreciate this point further, one could imagine obtaining such a line operator by starting from an interface separating the theory (44) from a deformed model with a bulk potential \(V(|\Phi|^{2})\) which Higgses the gauge group. The defect (47) is then obtained upon deforming the interface into a cylinder of radius \(r_{0}\) along the time direction, and taking the limit \(r_{0}\to 0\) while simultaneously scaling the coefficients of the potential \(V(|\Phi|^{2})\) with inverse powers of \(r_{0}\) according to their dimension.
In practice, to concretely study the model given by (47) and (44) we simply need to expand the fields around the saddle point (46). We consider a gauge fixing inspired by the usual Feynman-'t Hooft choice
\[S_{g.f.}=-\frac{1}{2e^{2}}\int d^{4}x\left(\partial_{\mu}A^{\mu}+h_{s}^{2}\pi \right)^{2}\,. \tag{48}\]
Upon rescaling fluctuations with a factor of \(e\), the quadratic action reads
\[S+S_{g.f.}\simeq\int d^{4}x\left\{-\frac{1}{2}(\partial_{\mu}A_{\nu})^{2}+ \frac{1}{2}h_{s}^{2}A_{\mu}^{2}+\frac{1}{2}(\partial\delta h)^{2}+h_{s}^{2} \left[\frac{1}{2}(\partial\pi)^{2}-\frac{1}{2}h_{s}^{2}\pi^{2}\right]\right\}\,, \tag{49}\]
where \(\delta h=h-h_{s}\). Clearly \(\delta h\) behaves as a free field in the absence of a defect. By studying the propagators of \(A_{\mu}\) and \(\pi\), we find that the lowest dimensional operator in the bulk-to-defect OPE of the \(U(1)\) current \(j_{\mu}\simeq h_{s}^{2}(\partial_{\mu}\pi-A_{\mu})\) has dimension
\[\delta=\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{e^{4}v^{2}}{(4\pi)^{2}}}\,. \tag{50}\]
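This is the standard AdS\({}_{2}\) mass-dimension relation (cf. footnote 14): the Higgs profile gives the relevant mode a constant AdS\({}_{2}\) mass,

\[m^{2}=r^{2}h_{s}^{2}=\frac{e^{4}v^{2}}{(4\pi)^{2}}\,,\qquad\delta(\delta-1)=m^{2}\;\Rightarrow\;\delta=\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{e^{4}v^{2}}{(4\pi)^{2}}}\,,\]

in AdS radius units.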
In particular, there is a (defect) scalar operator with dimension \(\delta\) corresponding to the \(r\to 0\) limit of \(j_{0}\). The corresponding deformation of the defect action (47) can be written as
\[\delta S_{D}=-\tilde{q}\int_{r=0}dt(A_{0}-\dot{\pi})\,. \tag{51}\]
Note that because of the nontrivial profile of the Higgs field \(h\sim 1/r\) we can write gauge-invariant defect operators using both the gauge field and the Goldstone mode. The corresponding coupling \(\tilde{q}\) in (51) is thus not quantized, and it is in fact irrelevant since \(\delta>1\). Analyzing perturbatively the deformation (51), we find the following one-point function for the gauge field14
Footnote 14: This is derived from the propagator of \(A_{0}\), whose zero-mode, with the chosen gauge-fixing, behaves analogously to an AdS\({}_{2}\) scalar field with dimension \(\delta\).
\[\langle F_{0i}\rangle\propto x^{i}\frac{e^{2}\tilde{q}}{4\pi r^{2+\delta}}\,. \tag{52}\]
(52) agrees with the functional form for the screening tail of the gauge field previously derived from the equations of motion in (25) (setting \(c=\frac{e^{2}v}{4\pi}\) in (25)). Further subleading corrections to the screening cloud are reproduced by other irrelevant deformations of the defect (47).
Let us now discuss the theory with a quartic coupling
\[S=\frac{1}{e^{2}}\int d^{4}x\left[-\frac{1}{4}F_{\mu\nu}^{2}+\frac{1}{2}( \partial h)^{2}-\frac{\lambda}{8e^{2}}h^{4}+\frac{1}{2}h^{2}(\partial_{\mu}\pi -A_{\mu})^{2}\right]\,. \tag{53}\]
Inspired by the previous analysis, we consider the following defect deformation
\[S_{D}=\int_{r=0}dt\left[v\,h-\tilde{q}(A_{0}-\dot{\pi})\right]\,. \tag{54}\]
We focus on the double-scaling limit
\[e^{2}\sim\lambda\to 0\,,\quad v\sim\tilde{q}\to\infty\quad\text{with}\quad e^{2} v\sim e^{2}\tilde{q}=\text{fixed}\,. \tag{55}\]
In this limit the Goldstone mode can be neglected. Including the gauge-fixing (48) we thus consider
\[S+S_{g.f.}+S_{D}=\frac{1}{e^{2}}\int d^{4}x\left[-\frac{1}{2}( \partial_{\mu}A_{\nu})^{2}+\frac{1}{2}h^{2}A_{\mu}^{2}+\frac{1}{2}(\partial h )^{2}-\frac{\lambda}{8e^{2}}h^{4}\right]+\int_{r=0}dt\left(v\,h-\tilde{q}\,A_ {0}\right)\,. \tag{56}\]
In what follows, we self-consistently focus on the regime \(e^{2}v\sim e^{2}\tilde{q}\ll 1\). In this regime the one-point functions for the scalar and gauge field admit the following expansion
\[\langle h(r)\rangle=\frac{e^{2}v}{4\pi r}\left[F_{0}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)+\frac{e^{4}v^{2}}{(4\pi)^{2}}F_{1}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)+O\left(\frac{e^{8}v^{4}}{(4\pi)^{4}}\right)\right]\,, \tag{57}\] \[\langle A_{0}(r)\rangle=\frac{e^{2}v}{4\pi r}\left[G_{0}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)+\frac{e^{4}v^{2}}{(4\pi)^{2}}G_{1}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)+O\left(\frac{e^{8}v^{4}}{(4\pi)^{4}}\right)\right]\,. \tag{58}\]
The leading order terms \(F_{0}\) and \(G_{0}\) are determined from the linearized equations of motion
\[\partial^{2}h=e^{2}v\delta^{3}(x_{\perp})\,,\qquad\partial^{2}A_{\mu}=e^{2} \tilde{q}\delta^{3}(x_{\perp})\,, \tag{59}\]
from which we obtain
\[F_{0}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)=1\,,\qquad G_{ 0}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)=\frac{\tilde{q}}{v }\,. \tag{60}\]
Diagrammatically, the leading order result is associated with a single insertion of the defect couplings and no insertion of the bulk vertices, as in figure 7(a).

The subleading contributions arise from the diagrams in figure 7(b). The resulting integrals are divergent; as usual in QFT, this signals a nontrivial RG flow for the defect couplings \(v\) and \(\tilde{q}\). To extract the corresponding beta-functions, we evaluate the divergent parts of \(F_{1}\) and \(G_{1}\) in dimensional regularization:
\[\begin{split}& F_{1}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r \right)=\left(\frac{\tilde{q}^{2}}{v^{2}}-\frac{\lambda}{2e^{2}}\right) \times\frac{1}{2\varepsilon}+\text{finite}\,,\\ & G_{1}\left(\frac{\tilde{q}}{v},\frac{\lambda}{e^{2}},r\right)= -\frac{\tilde{q}}{v}\times\frac{1}{2\varepsilon}+\text{finite}\,,\end{split} \tag{61}\]
where \(\varepsilon=4-d\).

Figure 7: Diagrams contributing to the scalar and gauge field one-point functions. Dashed lines denote scalar fields, while wiggly lines stand for gauge fields. The solid line represents the defect and dots stand for bulk couplings.

For the physical one-point functions (57) and (58) to be finite, we need to rewrite the defect couplings in terms of bare ones. Working in the minimal subtraction scheme, we find
\[\begin{split}v\to v_{0}&=vM^{\varepsilon/2}\left[1+\frac{e^{4}v^{2}}{(4\pi)^{2}}\left(\frac{\lambda}{2e^{2}}-\frac{\tilde{q}^{2}}{v^{2}}\right)\frac{1}{2\varepsilon}+\ldots\right]\,,\\ \tilde{q}\to\tilde{q}_{0}&=\tilde{q}M^{\varepsilon/2}\left[1+\frac{e^{4}v^{2}}{(4\pi)^{2}}\frac{1}{2\varepsilon}+\ldots\right]\,,\end{split} \tag{62}\]
where \(M\) is the sliding scale. Demanding that the bare couplings \(v_{0}\) and \(\tilde{q}_{0}\) be independent of \(M\), with textbook manipulations [53] we obtain the beta-functions for the physical couplings \(v\) and \(\tilde{q}\):
\[\begin{split}\beta_{v}&=\frac{\partial v}{\partial\log(M)}=v\left[\left(\frac{\lambda e^{2}v^{2}}{2}-e^{4}\tilde{q}^{2}\right)+O\left(e^{8}v^{4}\right)\right]\,,\\ \beta_{\tilde{q}}&=\frac{\partial\tilde{q}}{\partial\log(M)}=\tilde{q}\left[e^{4}v^{2}+O\left(e^{8}v^{4}\right)\right]\,.\end{split} \tag{63}\]
The equations (63) imply that both \(v\) and \(\tilde{q}\) run logarithmically to zero in the IR (\(M\to 0\)). We therefore conclude that the defect (54), describing a fully screened Wilson line, flows to a trivial defect in the IR.15 Hence the description (54) of screened Wilson lines as a scalar line is useful as an _intermediate energy description_. In the following we show how to reproduce the tail of the screening cloud previously derived from the classical equations of motion.
Footnote 15: For \(\tilde{q}=0\), the model effectively reduces to the pinning field defect in \(d=4\), which was studied in [49; 50] and also flows to a trivial defect.
We consider the one-point functions for the scalar and the gauge field in the long distance limit
\[\langle h(r)\rangle\stackrel{{r\to\infty}}{{=}}\frac{e^{2}v(1/r)}{4\pi r}\,,\qquad\langle A_{0}(r)\rangle\stackrel{{r\to\infty}}{{=}}\frac{e^{2}\tilde{q}(1/r)}{4\pi r}\,, \tag{64}\]
where the couplings are expressed at the scale \(1/r\). For sufficiently large \(r\), the coupling can be written using the asymptotic solution to (63) for \(t=-\log(M/M_{0})\gg 1/(e^{2}v)\), where \(M_{0}\) is the scale at which the initial conditions for the couplings are specified; physically, \(M_{0}\) represents the cut-off of the effective description (54). The explicit result depends on the ratio \(\lambda/e^{2}\equiv\bar{\lambda}\). For \(\bar{\lambda}<2\), we find
\[\begin{split}&e^{2}v(M_{0}e^{-t})\stackrel{{ t\to\infty}}{{=}}\frac{t^{-1/2}}{\sqrt{\bar{\lambda}}}+b^{2}\frac{\sqrt{\bar{\lambda}}}{2(\bar{\lambda}-1)}t^{1/2-2/\bar{\lambda}}+\ldots\,,\\ &e^{2}\tilde{q}(M_{0}e^{-t})\stackrel{{ t\to\infty}}{{=}}b\,t^{-1/\bar{\lambda}}+\ldots\,,\end{split} \tag{65}\]
while for \(\bar{\lambda}>2\) the asymptotic solution reads
\[\begin{split}&e^{2}v(M_{0}e^{-t})\stackrel{{ t\to\infty}}{{=}}\frac{t^{-1/2}}{\sqrt{2}}+b\frac{\bar{\lambda}-2}{2\sqrt{2}}t^{1/2-\bar{\lambda}/2}+\ldots\,,\\ &e^{2}\tilde{q}(M_{0}e^{-t})\stackrel{{ t\to\infty}}{{=}}\frac{\sqrt{\bar{\lambda}-2}}{2}t^{-1/2}+b\frac{\sqrt{\bar{\lambda}-2}}{2}t^{1/2-\bar{\lambda}/2}+\ldots\,.\end{split} \tag{66}\]
The parameter \(b\) in (65) and (66) depends upon the initial condition for the coupling constants.16 Unsurprisingly, using (65) and (66) in the one-point functions (64) we recover the form of the tail of the screening cloud (25) previously derived in section 2.2.
Footnote 16: An additional free parameter shows up in the asymptotic solutions (65) and (66) at subleading orders; we did not report additional corrections to (65) and (66) since these depend upon the higher order terms neglected in the beta functions (63).
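As a consistency check, introduce the shorthand \(x\equiv e^{2}v\) and \(y\equiv e^{2}\tilde{q}\) (ours), so that (63) becomes \(dx/dt=-x\left(\bar{\lambda}x^{2}/2-y^{2}\right)\) and \(dy/dt=-x^{2}y\). For \(\bar{\lambda}<2\) the leading terms of (65) indeed solve these equations:

\[x=\frac{t^{-1/2}}{\sqrt{\bar{\lambda}}}\;\Rightarrow\;\frac{dx}{dt}=-\frac{t^{-3/2}}{2\sqrt{\bar{\lambda}}}=-\frac{\bar{\lambda}x^{3}}{2}\,,\qquad y=b\,t^{-1/\bar{\lambda}}\;\Rightarrow\;\frac{dy}{dt}=-\frac{y}{\bar{\lambda}t}=-x^{2}y\,,\]

while the neglected term \(xy^{2}\sim t^{-1/2-2/\bar{\lambda}}\) is subleading with respect to \(dx/dt\sim t^{-3/2}\) precisely for \(\bar{\lambda}<2\).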
### Constraints from 0-form symmetry and multi-flavor scalar QED\({}_{4}\)
In this section we briefly discuss the generalization of our results to multi-flavor QED\({}_{4}\). We consider the action
\[S=\frac{1}{e^{2}}\int d^{4}x\left[|D_{\mu}\Phi_{a}|^{2}-\frac{\lambda}{2e^{2}}(|\Phi_{a}|^{2})^{2}-\frac{1}{4}F_{\mu\nu}^{2}\right]-q\int dtA_{0}\,, \tag{67}\]
where \(a=1,2,\ldots,N\). The theory is invariant under the action of the internal symmetry group \(PSU(N)=SU(N)/\mathbb{Z}_{N}\), which rotates the scalars among themselves.17 Consider inserting a Wilson line of charge \(q\). This represents the worldline of a massive external particle of charge \(q\). If we represent the external particle by a heavy massive field \(\Psi\) with no \(PSU(N)\) quantum numbers, then the total global symmetry of the system is \((U(1)\times SU(N))/\mathbb{Z}_{N}\), where the \(U(1)\) factor is particle number, normalized such that \(\Psi\) carries charge 1, and \(\mathbb{Z}_{N}\) is generated by a rotation of \(\Psi\) by the angle \(2\pi q/N\) accompanied by a transformation in \(SU(N)\) given by the matrix \(\mathrm{diag}(\exp(2\pi i/N),\ldots,\exp(2\pi i/N))\). The identification by \(\mathbb{Z}_{N}\) means that in a sector with one \(\Psi\) particle the \(SU(N)\) representation must have \(q\) mod \(N\) boxes in the Young diagram. This means that the state in the presence of a Wilson line of charge \(q\) must transform under a representation with \(q\) mod \(N\) boxes, i.e. the Wilson line can only end on operators transforming in a projective representation of \(PSU(N)\). (To make this precise, we can insert the Wilson loop as a localized charge on the sphere.) This does not mean that the infrared cannot be completely screened for \(q\neq 0\) mod \(N\). It is possible that the infrared theory has a decoupled representation on the line and all the bulk Green functions coincide with those without a defect. In this situation the infrared \(g\) function is the dimension of the representation. The line operator is simply the trivial line defect stacked with a quantum mechanical system with vacuum degeneracy. Similar comments apply whenever we are dealing with symmetric defects in a system whose global symmetry \(G\) can be nontrivially centrally extended (i.e. whenever \(H^{2}(G,U(1))\) is nontrivial). In some situations this can lead to interesting constraints related to the \(g\) theorem, since the infrared \(g\) function is given by the dimension of the representation if the line is otherwise screened.
Footnote 17: The symmetry group is \(PSU(N)\) and not \(SU(N)\) because all gauge-invariant operators transform in representations whose Young diagram consists of \(p=0\) mod \(N\) boxes.
Let us now analyze in detail the Wilson line in the theory (2.67).
For \(0<|q|\leq\frac{2\pi}{e^{2}}\) each of the \(\ell=0\) modes of the scalars \(\Phi_{a}\) admits either standard or alternate boundary conditions on the defect. This leads overall to \(2^{N}\) fixed points, many of which partially break the internal symmetry group. To analyze the defect RG flows in this setup, consider the fixed points where all fields are in alternate quantization. As in the discussion around (15), this is achieved by supplementing the action (67) with the following defect term
\[S^{(1)}_{bdry}=-\frac{1-2\nu}{2}\int_{r=r_{0}}dt\,\frac{\Phi_{a}^{\dagger}\Phi_{a}}{r_{0}}\,. \tag{68}\]

The defect RG flows connecting the various fixed points are then triggered by the double-trace deformations \(f_{ab}\,\Phi_{a}^{\dagger}\Phi_{b}\), in analogy with the single-field case; the fixed points of the resulting matrix beta function take the form \(f_{ab}=0\) (alternate quantization for all fields), \(f_{ab}=2\nu\,\delta_{ab}\) (standard quantization for all fields), and \(f_{ab}=2\nu\,P_{ab}\) with \(P\) a projector of rank \(0<k<N\).
In the first two cases the line preserves \(PSU(N)\), while in the last case it is explicitly broken and therefore there are protected tilt operators.
We end this section with some comments on the case with \(e^{2}|q|>2\pi\) where the Wilson lines are supercritical and expected to be screened. At a classical level, the analysis proceeds along the lines of section 2.3; in particular the classical scalar profile _spontaneously_ breaks the internal symmetry. However, quantum-mechanically we have to integrate over the zero modes of the screening saddle-point.18 Therefore only flavor singlets acquire an expectation value in the screening cloud, e.g.
Footnote 18: In more detail, it was recently argued, under general assumptions, that a line defect that spontaneously breaks a continuous internal symmetry can only flow to a decoupled one-dimensional sector on the line, tensored with a DCFT which does not break the symmetry [54]. Therefore at large distances only singlet operators are allowed to acquire a VEV.
\[\langle\Phi_{a}^{*}\Phi_{b}(r)\rangle\propto\delta_{ab}\,. \tag{74}\]
Considering the equivalent theory on AdS\({}_{2}\times S^{2}\), the long distance limit of the screened line is well approximated by a defect setting Dirichlet boundary conditions for the (AdS\({}_{2}\) rescaled) radial mode, as in section 2.4, with Neumann boundary conditions imposed on the Goldstone modes.
This shows that sufficiently far from the line, i.e. much farther than the screening cloud, the bulk expectation values and Green functions are those of the theory without the defect, i.e. there is screening in this sense. However, as we argued above, the symmetries of the system force the line defect to carry a representation with \(q\) mod \(N\) boxes under \(SU(N)\). In the language of defect QFT, this implies that supercritical Wilson lines with charge \(q\neq 0\) mod \(N\) do not furnish simple line defects, even though they are completely screened in the bulk. In particular, supercritical Wilson lines with \(q\neq 0\mod N\) admit a nontrivial \(g\)-function in the deep infrared. We will see another example in section 5.2.2.
### Constraints from 1-form symmetry and QED\({}_{4}\) with charge \(q_{\phi}\) particles
In the previous section we discussed the dynamics of line defects when there are interesting constraints from 0-form symmetry, i.e. when the infrared symmetry can be extended by the external (heavy) particles. Here we discuss the constraints imposed by 1-form symmetry. To motivate the discussion consider a charge \(q\) Wilson line in a theory of a charge \(q_{\phi}>1\) scalar field:
\[S=\frac{1}{e^{2}}\int d^{4}x\left[|D_{\mu}\Phi|^{2}-\frac{\lambda}{2e^{2}}(| \Phi|^{2})^{2}-\frac{1}{4}F_{\mu\nu}^{2}\right]-q\int dt\,A_{0}\,, \tag{75}\]
with \(D=\partial-iq_{\phi}A\).
One might expect that in such a theory Wilson lines with charge \(q\mod q_{\phi}\neq 0\) cannot be fully screened by the scalar particles. Let us make this precise. Wilson lines with charge \(q\mod q_{\phi}\neq 0\) are charged under the electric \(\mathbb{Z}_{q_{\phi}}\) one-form symmetry of the theory. We are therefore led to the following question: what can be inferred about line defects charged under a one-form symmetry? This question is very general and arises in several different contexts; we will encounter it again in section 6 in the analysis of 't Hooft lines. In the following we address this question in general terms; at the end of this discussion we spell out the implications of our findings for charge \(q_{\phi}\) scalar QED.
It is useful to introduce some terminology that we will use below:
* A line defect is a nontrivial DCFT if and only if the displacement operator is nonzero: \(D_{\perp}\neq 0\).
* A line defect is said to be topological if the displacement operator vanishes \(D_{\perp}=0\), i.e. it is trivial as a DCFT, but the line defect can braid nontrivially with co-dimension 2 surfaces.
* A line defect \(L\) is said to be completely trivial if neither of the two definitions above applies, i.e. if it is completely transparent (that is, trivial as a DCFT and also transparent to co-dimension 2 surfaces).
In this language, the lines stacked with a 0+1 dimensional TQFT that we encountered in the previous subsection are completely trivial (but not simple).
We will now argue that with some additional conditions the existence of a one-form symmetry implies that the charged Wilson line must necessarily define a _nontrivial_ DCFT.
A line \(L\) is charged under a one-form symmetry if and only if it braids nontrivially with a topological co-dimension 2 operator. Figure 8 represents a line defect that is charged under a one-form symmetry in 2+1 dimensions. In the figure \(L\) is a line defect, \(A\) is a one-form symmetry charge and \(\omega\neq 1\) is a root of unity. Note that this immediately implies that the line \(L\) cannot be completely trivial. The interesting question, that we address below, is under which conditions the one-form symmetry forces the displacement operator to be nontrivial.
Note that if there is an intertwining operator between the lines \(L_{1}\) and \(L_{2}\), such as in figure 9, then the two lines carry the same charge under the one-form symmetry. In particular, if a line can end (meaning that either of \(L_{1}\) or \(L_{2}\) is trivial), then the line is not charged under the one-form symmetry. Importantly, this does not mean that the line furnishes a trivial DCFT in general. The example of line defects in 2+1 dimensional TQFTs (which are trivial as DCFTs and topological, but not transparent) that cannot end demonstrates that a line that cannot end is not necessarily a nontrivial DCFT.

Figure 8: A line defect charged under a one-form symmetry in 2+1 dimensions.
Let us assume that the 1-form symmetry charge \(A\) can be cut open. This means that the one-form symmetry charge can be terminated on codimension 3 (twist) operators, as illustrated in figure 10 in 2+1 dimensions. The end points of \(A\) are not topological in general.
In this case, it is evident that \(L\) cannot be topological. This is because if we move \(L\) through \(A\) we get a phase \(\omega\), but if we move \(L\) to the same final location without crossing \(A\) then we do not get a phase. Therefore we have shown that if the one-form symmetry charge can be cut open, the Wilson lines charged under it must have a nontrivial displacement operator \(D_{\perp}\neq 0\).
In summary, if the one-form symmetry charge \(A\) can be terminated on codimension 3 operators, then charged lines cannot furnish a trivial DCFT. The central question is therefore when can the one-form symmetry charges be cut open. Here we will make two comments about it.
Figure 9: An intertwining operator \({\cal O}_{12}\) between the lines \(L_{1}\) and \(L_{2}\).

Figure 10: A one-form symmetry generator cut open in 2+1 dimensions.
In \(d=3\) both the one-form symmetry surface and the Wilson line are 1 dimensional defects. If the one-form symmetry has an anomaly then the one-form symmetry lines certainly cannot be cut open because they are charged under themselves. One can find many conformal gauge theories with vanishing one-form symmetry anomaly. For instance, this is the case in ABJM theory.
In 4d gauge theories, the electric 1-form symmetry surfaces can in many cases terminate on improperly quantized 't Hooft lines. Hence Wilson lines charged under the 1-form symmetry cannot be topological, and must furnish a nontrivial DCFT in such theories. This certainly applies in pure Yang-Mills theory with gauge group \(SU(N)\), QED with charge \(q_{\phi}\) particles, and in \({\cal N}=4\) SYM theory with gauge group \(SU(N)\).
Let us now return to (75) where lines with charge \(q\mod q_{\phi}\neq 0\) are charged under the one-form symmetry and should flow to nontrivial DCFTs at large distances. Note that this argument does not specify which properties this DCFT should have; in particular it does not imply that the electric field should be non-zero at large distances. All the argument says is that there should be some remaining response to displacing the line defect.
It is instructive to discuss this prediction within the formalism of section 2.4, where we described the effective defect field theory capturing the long distance limit of a screened Wilson line.19 In that setup we can model a Wilson line of charge \(q\mod q_{\phi}=\delta q\neq 0\) as a small perturbation of a neutral (under the one-form symmetry) line of charge \(q-\delta q\) (for \(\delta q\sim q_{\phi}\sim O(1)\ll q\)). To this aim we simply add to the EFT defect action (54) the following perturbation
Footnote 19: Note that in this model all lines with charge \(q>2\pi/(q_{\phi}e^{2})\) are unstable.
\[\delta S_{D}=-\delta q\int_{r=0}dt\,A_{0}=-\delta q\int_{r=0}dt(A_{0}-\dot{\pi} /q_{\phi})-\frac{\delta q}{q_{\phi}}\int_{r=0}dt\,\dot{\pi}\,. \tag{76}\]
The rest of the analysis proceeds as in the case \(q_{\phi}=1\). In particular in perturbation theory we can neglect the total derivative in (76) and proceed similarly to what we did below (51) and (54), where we showed that the operator \(A_{0}-\dot{\pi}/q_{\phi}\) is irrelevant (both for \(\lambda=0\) and \(\lambda>0\)). Note however that the Goldstone field is \(2\pi\)-periodic and thus for \(\delta q\neq 0\) the last term in (76), while it has no effect in perturbation theory, implies a nontrivial braiding between the defect and the one-form symmetry surface operator, as expected.
We may contrast these findings with the argument above, namely that Wilson lines of charge \(\delta q\neq 0\) cannot furnish trivial DCFTs. For zero quartic coupling, \(\lambda=0\), the scalar admits a nontrivial conformal one-point function (46), and the line thus defines a nontrivial DCFT. It is perhaps still surprising that we do not measure any Coulomb field at large distances. Physically, this is because in a massless theory the \(\Phi\)-particles may have an arbitrarily delocalized wave-function; it is therefore possible to store fractional units of charge at \(r\to\infty\).20
Footnote 20: This is similar to the fate of Wilson lines in the Schwinger model (QED\({}_{2}\)) with fermions of charge \(q_{\phi}>1\). As in our setup, there the electric flux of Wilson lines with charge \(q\neq 0\mod q_{\psi}\) is not fully screened by massive fermions, while it is in the massless limit [55]. A similar behavior appears also in QCD\({}_{2}\) with massless adjoint fermions, where fundamental Wilson lines are screened. In all cases, the IR limit of charged line defects
To substantiate this interpretation, in appendix A.4 we study the screening cloud for a scalar of small mass squared \(m^{2}>0\), such that \(m^{-1}\gg R_{cloud}\), where \(R_{cloud}\) is the scale over which the massless soliton we have found is localized. In that case, due to the mass term, the scalar profile decays exponentially at distances of order of the Compton wavelength \(r\sim m^{-1}\). We find that Wilson lines charged under the electric one-form symmetry retain an \(O(1)\) amount of charge. Namely, we show that the flux of the electric field \(r^{2}F_{tr}\) decreases until it reaches a minimum at distances \(r\sim 1/m\). After that the flux increases again and eventually settles into a constant \(O(1)\) value. Therefore the Wilson line is nontrivial and, even at distances such that the scalar profile has decayed completely, \(r\gg 1/m\), there is an \(O(1)\) remnant Coulomb field.
For a small negative mass squared \(m^{2}<0\), the bulk theory instead flows to a Higgs phase, described by a \(\mathbb{Z}_{q_{\phi}}\) gauge theory. In \(\mathbb{Z}_{q_{\phi}}\) gauge theory the one-form symmetry surface operator cannot be cut open because of the emergent two-form symmetry, and thus there is no obstruction for a charged line to be topological in the IR; this is obviously the fate of Wilson lines with charge \(\delta q\neq 0\).21
Footnote 21: This may be seen e.g. from the comments below (76) and the fact that the scalar field one-point function (46) is modified so that it decays exponentially to a constant value for \(r\gg|m|^{-1}\).
The situation is more puzzling for the massless theory with a nonzero quartic coupling \(\lambda>0\). Indeed, the beta functions (63) imply that the defect approaches a trivial DCFT logarithmically in the infrared, irrespective of the charge of the scalar field. This is in tension with the conclusion that Wilson lines charged under the electric one-form symmetry should furnish nontrivial DCFTs. The resolution of this apparent paradox might require analyzing the fate of Wilson lines beyond the double-scaling limit (55) in which we worked so far. We leave the investigation of this fascinating issue for future work.
We conclude this section by noticing that in scalar QED there is a \(U(1)\) magnetic one-form symmetry, whose topological charge can be terminated on improperly quantized Wilson lines. This implies that all 't Hooft lines, which are charged under the magnetic one-form symmetry, furnish nontrivial DCFTs. Physically, this is because there are no monopoles to screen them. We will analyze 't Hooft lines in greater detail in section 6.
## Fermionic QED\({}_{4}\)
In this section we consider fermionic QED in \(d=4\) dimensions in the presence of a Wilson line of charge \(q>0\), extending in the time direction. The action is given by:
\[\begin{split}S&=S_{\psi,A}-q\int dt\,A_{0}\,,\\ S_{\psi,A}&=\frac{1}{e^{2}}\int d^{4}x\left[-\frac{1}{4}F_{\mu\nu}^{2}+i\bar{\Psi}_{D}\not{D}\Psi_{D}\right],\end{split} \tag{3.1}\]
where \(\Psi_{D}\) is a massless Dirac spinor in four dimensions that carries a charge 1 under the \(U(1)\) gauge group, \(F_{\mu\nu}\) is the electromagnetic field tensor, \(\not{D}\equiv\Gamma^{\mu}D_{\mu}\) where \(D_{\mu}=\partial_{\mu}-iA_{\mu}\) denotes the gauge covariant derivative, and \(\Gamma^{\mu}\) are the Dirac Gamma matrices in \(d=4\), satisfying \(\{\Gamma^{\mu},\Gamma^{\nu}\}=2\eta^{\mu\nu}\).
As in the previous section, we tune the fermion mass to zero and work in the semiclassical regime specified by the following double-scaling limit:
\[\begin{split} e&\to 0,\quad q\to\infty,\\ e^{2}q&=\text{fixed}.\end{split} \tag{3.2}\]
In this limit the mass scale dynamically generated by QED becomes infinite, and we can thus ignore any RG flow in the bulk. We then expand the fields around the classical saddle point \(A_{0}=\frac{e^{2}q}{4\pi r}=\frac{g}{r}\), \(\Psi_{D}=0\). In the rest of this section we analyze the fluctuations of the Dirac field in the Coulomb background profile.
### Dirac fermion on AdS\({}_{2}\times S^{d-2}\)
When studying scalar QED in section 2.1, it was convenient to map the theory to AdS\({}_{2}\times S^{2}\) and perform a Kaluza-Klein (KK) decomposition over the sphere. We shall adopt a similar strategy for the model (3.1). In this section we thus describe the general KK decomposition of a Dirac field. For future purposes we consider arbitrary spacetime dimension \(d\), specializing to \(d=4\) later.
We consider a \(d\)-dimensional Dirac fermion, which consists of \(2^{\lfloor d/2\rfloor}\) complex components, coupled to an external gauge field. The action is:
\[S=\int d^{d}x\,i\bar{\Psi}_{D}\left(\not{\partial}-i\not{A}\right)\Psi_{D}\,. \tag{3.3}\]
Just like in (2.7), we can map the theory from \(\mathbb{R}^{d}\) to AdS\({}_{2}\times S^{d-2}\) via a Weyl rescaling
\[ds^{2}=r^{2}\left[\frac{dt^{2}-dr^{2}}{r^{2}}-d\Omega_{d-2}^{2}\right]=r^{2}d \tilde{s}_{\text{AdS}_{2}\times S^{d-2}}^{2}\,. \tag{3.4}\]
In a factorized geometry of the form \(\mathcal{M}_{2}\times S^{d-2}\), such as that in (3.4), there exists a convenient decomposition of the Dirac field and the associated Clifford algebra [56]. The fermionic field is written in terms of the following expansion:
\[\Psi_{D}=\frac{1}{r^{\frac{d-1}{2}}}\sum_{\ell,s}\sum_{\delta=+,-}\psi^{( \delta)}_{\ell s}(t,r)\otimes\chi^{(\delta)}_{\ell s}(\hat{n})\,, \tag{3.5}\]
where \(\psi^{(\delta)}_{\ell s}(t,r)\) are AdS\({}_{2}\) Dirac spinors with two complex components, while the \(\chi^{(\delta)}_{\ell s}(\hat{n})\) are the spinor harmonics on \(S^{d-2}\) with \(2^{\lfloor d/2\rfloor-1}\) components. The summation over \(\ell\) runs over the non-negative integers, \(\ell=0,1,2,...\); for every \(\ell\) the \(\chi_{\ell s}\) form a representation of spin \(j=|\ell|+1/2\) of the Spin\((d-1)\) group, with the index \(s\) running over the components.22 The spinor harmonics \(\chi^{(\delta)}_{\ell\,s}(\hat{n})\) satisfy the following equation [56; 57]:
Footnote 22: For general \(\ell\) and \(d\) the multiplicity of \(s\) is given by \(\frac{2^{\lfloor\frac{d-2}{2}\rfloor}\,(d-3+\ell)!}{\ell!\,(d-3)!}\) [57].
\[\not{\nabla}_{S^{d-2}}\chi^{(\pm)}_{\ell s}(\hat{n})=\pm i\left(\ell+\frac{d-2 }{2}\right)\chi^{(\pm)}_{\ell s}(\hat{n})\,, \tag{3.6}\]
where \(\not{\nabla}_{S^{d-2}}\) is the Dirac operator on \(S^{d-2}\), as well as the orthogonality relation:
\[\int d\Omega_{d-2}\,\chi^{\dagger\,(\delta)}_{\ell s}(\hat{n})\chi^{(\delta^{ \prime})}_{\ell^{\prime}s^{\prime}}(\hat{n})=\delta_{\ell\ell^{\prime}}\delta _{ss^{\prime}}\delta^{\delta\delta^{\prime}}\,. \tag{3.7}\]
We can similarly decompose the \(d\)-dimensional gamma matrices \(\Gamma\). We denote by \(\gamma^{0}\) and \(\gamma^{1}\) the two-dimensional gamma matrices in Lorentzian signature, satisfying \(\{\gamma^{a},\gamma^{b}\}=2\eta^{ab}=2\,{\rm diag}(1,-1)\) (\(a,b=0,1\)). We additionally introduce a \(2^{\lfloor(d-2)/2\rfloor}\times 2^{\lfloor(d-2)/2\rfloor}\) dimensional representation of the Euclidean Clifford algebra \(\hat{\gamma}^{i}_{E}\), \(i=1,2,\ldots,d-2\), which satisfies \(\{\hat{\gamma}^{i}_{E},\hat{\gamma}^{j}_{E}\}=2\delta^{ij}\). Then we have the following decomposition:
\[\Gamma^{0} =\gamma^{0}\otimes\hat{\mathds{1}},\] \[\Gamma^{1} =\gamma^{1}\otimes\hat{\mathds{1}},\] \[\Gamma^{2} =i\gamma^{3}\otimes\hat{\gamma}^{1}_{E},\] \[\Gamma^{3} =i\gamma^{3}\otimes\hat{\gamma}^{2}_{E}, \tag{3.8}\] \[\vdots\] \[\Gamma^{d-1} =i\gamma^{3}\otimes\hat{\gamma}^{d-2}_{E},\]
where \(\gamma^{3}\) is the \(2\times 2\) AdS\({}_{2}\) chirality matrix defined by
\[\gamma^{3}=\gamma^{0}\gamma^{1}, \tag{3.9}\]
and \(\hat{\mathds{1}}\) is the identity matrix of dimension \(2^{\lfloor(d-2)/2\rfloor}\times 2^{\lfloor(d-2)/2\rfloor}\).23
Footnote 23: Note that in \(d=3\) under the decomposition on AdS\({}_{2}\times S^{1}\), \(\hat{\gamma}^{1}_{E}=\hat{\mathds{1}}=1\) are just numbers.
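One can quickly verify that the decomposition (3.8) furnishes the \(d\)-dimensional Clifford algebra. Using \((\gamma^{3})^{2}=1\) and \(\{\gamma^{a},\gamma^{3}\}=0\) (both following from (3.9)), one finds for \(a,b\in\{0,1\}\) and \(i,j\in\{2,\ldots,d-1\}\):

\[\{\Gamma^{a},\Gamma^{b}\}=\{\gamma^{a},\gamma^{b}\}\otimes\hat{\mathds{1}}=2\eta^{ab}\,,\qquad\{\Gamma^{a},\Gamma^{i}\}=i\,\{\gamma^{a},\gamma^{3}\}\otimes\hat{\gamma}^{i-1}_{E}=0\,,\qquad\{\Gamma^{i},\Gamma^{j}\}=-(\gamma^{3})^{2}\otimes\{\hat{\gamma}^{i-1}_{E},\hat{\gamma}^{j-1}_{E}\}=-2\delta^{ij}\,,\]

i.e. \(\{\Gamma^{\mu},\Gamma^{\nu}\}=2\eta^{\mu\nu}\) as required.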
Using (3.5) and (3.8) we can write the action (3.3) as a sum over the AdS\({}_{2}\) spinors:
\[S=\sum_{\ell,s}\,\sum_{\delta=+,-}\int_{\text{AdS}_{2}}d^{2}x\sqrt{g}\,\bar{ \psi}^{(\delta)}_{\ell s}\left[i\left(\not{\nabla}_{\text{AdS}_{2}}-i\not{A} \right)-\delta i\gamma^{3}m_{\ell}\right]\psi^{(\delta)}_{\ell s}\,, \tag{3.10}\]
where \(\bar{\psi}^{(\delta)}_{\ell s}=(\psi^{(\delta)}_{\ell s})^{\dagger}\gamma^{0}\) and the masses \(m_{\ell}\) are given by
\[m_{\ell}=\ell+\frac{d-2}{2}\,. \tag{3.11}\]
In (3.10), \(\not{\nabla}_{\text{AdS}_{2}}=\gamma^{a}e^{\mu}_{a}\nabla_{\mu}\) is the AdS\({}_{2}\) Dirac operator, with \(e^{\mu}_{a}\) the vielbeins (which we take to be in the diagonal convention), and \(\nabla_{\mu}\) the covariant derivative; similarly \(\not{A}=\gamma^{a}e^{\mu}_{a}A_{\mu}\). We may bring the action (3.10) to a more symmetric form by performing the following axial transformation:24
Footnote 24: Note that this transformation is not anomalous since we rotate the fields \(\psi^{(+)}_{\ell s}\) and \(\psi^{(-)}_{\ell s}\) by opposite angles.
\[\begin{split}\psi^{(\pm)}_{\ell s}&\to e^{\mp i \frac{\pi}{4}\gamma^{3}}\psi^{(\pm)}_{\ell s},\\ \bar{\psi}^{(\pm)}_{\ell s}&\to(\psi^{(\pm)}_{\ell s })^{\dagger}e^{\pm i\frac{\pi}{4}\gamma^{3}}\gamma^{0}.\end{split} \tag{3.12}\]
In this basis, the action (3.10) takes a simple form:
\[S=\sum_{\ell,s}\,\sum_{\delta=+,-}\int_{\text{AdS}_{2}}d^{2}x\sqrt{g}\,\bar{ \psi}^{(\delta)}_{\ell s}\left[i\left(\not{\nabla}_{\text{AdS}_{2}}-i\not{A} \right)-m_{\ell}\right]\psi^{(\delta)}_{\ell s}. \tag{3.13}\]
The action (3.10), in addition to the internal \(Spin(d-1)\) symmetry and the \(U(1)\) gauge transformations, is clearly invariant under \(SO(2)\) rotations acting on \(\{\psi^{(+)}_{\ell s},\psi^{(-)}_{\ell s}\}\), i.e.
\[\psi^{(\pm)}_{\ell s}\to\cos(\theta)\psi^{(\pm)}_{\ell s}\mp\sin(\theta)\psi^{ (\mp)}_{\ell s}\,. \tag{3.14}\]
In even dimensions, this global symmetry is the AdS\({}_{2}\) avatar of the axial symmetry of the massless Dirac action. To see this in \(d=4\), we recall the standard definition of \(\Gamma^{5}\):
\[\Gamma^{5}=i\Gamma^{0}\Gamma^{1}\Gamma^{2}\Gamma^{3}. \tag{3.15}\]
The axial transformation of the Dirac spinor in flat space reads:
\[\Psi_{D}\to e^{i\Gamma^{5}\theta}\Psi_{D},\qquad\bar{\Psi}_{D}\to\bar{\Psi}_{ D}e^{i\Gamma^{5}\theta}, \tag{3.16}\]
which is a symmetry of the theory (3.1) (at a classical level). Using the decompositions (3.5), (3.8), and recalling the field redefinitions (3.12), (3.16) is easily seen to be equivalent to (3.14).
The action (3.1) for the Dirac field in flat \(d=4\) space is also invariant under the following discrete parity transformation
\[\begin{split}\Psi_{D}(t,r,\theta,\phi)&\to\Gamma^ {5}\Gamma^{2}\Psi_{D}(t,r,\pi-\theta,\phi+\pi),\\ \bar{\Psi}_{D}(t,r,\theta,\phi)&\to\bar{\Psi}_{D}(t, r,\pi-\theta,\phi+\pi)\Gamma^{5}\Gamma^{2},\end{split} \tag{3.17}\]
where \(\theta\) and \(\phi\) are the polar and azimuthal angles in spherical coordinates. The above translates (up to an overall real factor) into the following transformation for the reduced fields on \(AdS_{2}\):
\[\psi^{(\pm)}_{\ell s}\to\pm\psi^{(\pm)}_{\ell s},\qquad\bar{\psi}^{(\pm)}_{ \ell s}\to\pm\bar{\psi}^{(\pm)}_{\ell s}, \tag{3.18}\]
which clearly leaves the action (3.10) invariant. Note that the transformations (3.14) and (3.18) form an \(O(2)\) group. The transformation rule (3.14) of the fermions under axial symmetry as well as the discrete symmetry (3.18) will be useful to classify defect operators made of fermion bilinears in section 3.2.2.
In conclusion, the action for a \(d\)-dimensional massless Dirac field in the presence of an Abelian gauge field can be decomposed into a sum over KK modes with angular momentum \(j=|\ell|+1/2=1/2,3/2,\ldots\), each corresponding to a Dirac field in AdS\({}_{2}\). The result is compactly given in (3.13). For what follows it is important to note that the \(\ell=0\) modes in the decomposition (3.5) have the lowest mass, see (3.11). Their degeneracy is \(2\times 2^{\lfloor\frac{d-2}{2}\rfloor}\), where the first factor of 2 arises from the index \(\delta=+,-\), while the second factor is related to the spin degeneracy associated with \(s\).
### Conformal Wilson lines: old and new fixed points
#### 3.2.1 Dirac fermion in AdS\({}_{2}\) and boundary RG flows
Motivated by the decomposition that led to (3.13), in this section we study a single Dirac fermion in AdS\({}_{2}\) in the presence of a Coulomb field \(A_{0}=g/r\). The action is:25
Footnote 25: Here we used \(\bar{\psi}\overset{\leftrightarrow}{\not{\nabla}}\psi=\frac{1}{2}\bar{\psi}\gamma^{a}\nabla_{a}\psi-\frac{1}{2}\left(\nabla_{a}\bar{\psi}\right)\gamma^{a}\psi\), which ensures that the action is exactly Hermitian (and not just up to boundary terms).
\[S=\int_{\text{AdS}_{2}}d^{2}x\sqrt{g}\,\bar{\psi}\left[i\left( \overset{\leftrightarrow}{\not{\nabla}}_{\text{AdS}_{2}}-i\not{A}\right)-m \right]\psi\,. \tag{3.19}\]
By restoring the proper indices \(\psi\to\psi^{(\delta)}_{\ell s}\) and setting the mass \(m\to m_{\ell}\) as in (3.11), we recover (3.13). In the following we consider arbitrary \(m>0\) and \(g>0\). We choose the following representation for the gamma matrices
\[\gamma^{0}=\sigma_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad\gamma^{1}=i\sigma_{3}=\begin{pmatrix}i&0\\ 0&-i\end{pmatrix}. \tag{3.20}\]
Following the analysis in section 2.1, we can extract the scaling dimension of defect fermionic operators by studying the equations of motion of the Dirac field for \(r\to 0\). Neglecting the time dependence, the equations of motion associated with the action (3.19) for the Dirac fermion on AdS\({}_{2}\) coupled to a Coulomb field \(A_{0}=g/r\) are given by:
\[\left[i\left(r\gamma^{1}\partial_{r}-\frac{1}{2}\gamma^{1}-ig \gamma^{0}\right)-m\right]\psi=0\,. \tag{3.21}\]
We decompose the field explicitly in its components as
\[\psi\equiv\begin{pmatrix}\chi\\ \xi\end{pmatrix}, \tag{3.22}\]
where \(\chi\) and \(\xi\) are single-component complex Grassmannian fields, in terms of which (3.21) reads:
\[\begin{split}\left(r\partial_{r}-\frac{1}{2}+m\right)\chi-g\xi& =0\,,\\ \left(r\partial_{r}-\frac{1}{2}-m\right)\xi+g\chi&=0\,. \end{split} \tag{3.23}\]
To leading order near the line defect, the dependence of the modes in the radial coordinate \(r\) is of the form \(\sim r^{\Delta}\), for both \(\chi\) and \(\xi\). Substituting such a dependence into the equations above yields a quadratic equation for the scaling dimension \(\Delta\) of the (non-gauge-invariant) boundary operators associated with \(\psi\). This results in the following:
* For \(m^{2}>g^{2}\): there are two real solutions to the quadratic equation for the scaling dimensions, given by \(\Delta_{\pm}=\frac{1}{2}\pm\sqrt{m^{2}-g^{2}}\). They correspond to the two possible conformal boundary conditions for the fermionic modes, as will be detailed below.
* For \(m^{2}<g^{2}\) there are no real solutions for the scaling dimensions \(\Delta\).
Thus when \(m^{2}=g^{2}\), the parameter \(g\) (which is related to the charge \(q>0\) of the Wilson line) is at a critical value \(g_{c}\). For \(g<g_{c}\) there are two unitary conformal boundary conditions, while for \(g>g_{c}\) there are no real solutions for the scaling dimension of the fermionic mode. We will see that this implies an instability of the vacuum for \(g>g_{c}\). Both boundary conditions are normalizable in the window \(0<\sqrt{m^{2}-g^{2}}<1/2\), where the upper limit arises from the unitarity bound \(\Delta>0\). This behavior is analogous to the one which was observed for scalar QED in the previous section. In \(d=4\), the mass of the lowest \(\ell=0\) mode is \(m=1\) and criticality is achieved for \(g=g_{c}=1\). Using \(g=\frac{e^{2}q}{4\pi}\) we obtain the critical value \(q_{c}=4\pi/e^{2}\), in agreement with classic results in the literature [21]. Note that this value of \(q_{c}\) differs by a factor \(1/2\) from the one obtained for a scalar. In \(d=3\), \(m=\frac{1}{2}\), and criticality implies \(g_{c}=\frac{1}{2}\), in agreement with the previously known results (see e.g. [58] and references therein).
Using the real world value for the electromagnetic coupling, the previous analysis gives a critical charge for point-like nuclei \(q_{c}\approx 137\). In practice, to estimate the real value of the critical charge one needs to account for both the size of the nucleus \(r_{0}\) and the mass of the electron \(m_{e}\), and the critical charge is much larger, \(q_{c}\approx 173\) [59; 21]. The huge discrepancy between the real world instability and the massless result might be surprising given the smallness of the dimensionless product \(r_{0}m_{e}\approx 10^{-3}\). As we will explain in section 3.4, this discrepancy is naturally explained as a consequence of dimensional transmutation, and is similar to the explanation of the proton mass in QCD.
In the rest of this section we analyze the subcritical regime \(m^{2}>g^{2}\), postponing a discussion of the supercritical instability to section 3.4. In particular we will show that, analogously to what we found in scalar QED, the two conformal boundary conditions are related by RG flow (when both are allowed).
It is convenient to define the following dimensionless parameter:
\[\nu\equiv\sqrt{m^{2}-g^{2}}. \tag{101}\]
In the subcritical regime, the parameter \(\nu\) is real and positive, and when it is also within the range \(\nu<\frac{1}{2}\), both boundary conditions discussed above result in normalizable modes for the
fermion.26 The leading order physical solution near the boundary at \(r\to 0\) explicitly reads:
Footnote 26: Of course, for \(g=0\) this regime corresponds to the usual double quantization window in AdS, see e.g. [60].
\[\begin{split}\chi&=\alpha r^{\frac{1}{2}-\nu}+\frac{g }{m+\nu}\beta\,r^{\frac{1}{2}+\nu},\\ \xi&=\beta r^{\frac{1}{2}+\nu}+\frac{g}{m+\nu}\alpha \,r^{\frac{1}{2}-\nu},\end{split} \tag{3.25}\]
where \(\alpha\) and \(\beta\) are two independent Grassmann modes that depend only on the line coordinate \(t\) and where we have omitted subleading terms whose coefficients are fixed by \(\alpha,\beta\); see (2.8) for such terms in the scalar case. The mode expansion (3.25) formally describes also the critical case \(m^{2}=g^{2}\) by setting
\[\alpha=\left(\frac{a}{4}-\frac{bg}{2\nu}\right)r_{0}^{\nu},\qquad\beta=\left( \frac{a}{4}+\frac{bg}{2\nu}\right)r_{0}^{-\nu}, \tag{3.26}\]
where \(a\) and \(b\) are complex Grassmann fields, and \(r_{0}\) is an arbitrary cutoff radius. In the critical limit \(\nu\to 0\) the fermionic components read:
\[\begin{split}\chi&\to\frac{\sqrt{r}}{2}\left(a-b \right)+g\sqrt{r}\,b\log\left(\frac{r}{r_{0}}\right)\,,\\ \xi&\to\frac{\sqrt{r}}{2}\left(a+b\right)+g\sqrt{r} \,b\log\left(\frac{r}{r_{0}}\right)\,.\end{split} \tag{3.27}\]
We now continue the discussion in the spirit of the analysis presented in subsection 2.1. In particular we want to construct the appropriate boundary terms corresponding to the two possible conformal boundary conditions in the window \(0<\nu<1/2\), for which both modes in (3.25) are normalizable.
Note first that, unlike the scalar case, the on-shell action vanishes for arbitrary boundary conditions, since it is linear in derivatives. The variation of the action (3.19) for configurations which satisfy the bulk equations of motion is written purely in terms of the boundary modes as follows
\[\begin{split}\delta S&=-\frac{i}{2}\int_{r=r_{0}} dt\,\sqrt{gg^{rr}}\left(\bar{\psi}\gamma^{1}\delta\psi-\delta\bar{\psi}\gamma^{1} \psi\right)\\ &=\frac{1}{2}\int_{r=r_{0}}dt\,\sqrt{gg^{rr}}\left(\xi^{\dagger} \delta\chi-\chi^{\dagger}\delta\xi+\delta\chi^{\dagger}\xi-\delta\xi^{\dagger }\chi\right)\\ &=\frac{\nu}{m+\nu}\int_{r=r_{0}}dt\,\left(\bar{\beta}\delta \alpha-\bar{\alpha}\delta\beta+\delta\bar{\alpha}\beta-\delta\bar{\beta}\alpha \right),\end{split} \tag{3.28}\]
where we use \(\bar{\alpha}\), \(\bar{\beta}\) to denote \(\alpha^{\dagger}\), \(\beta^{\dagger}\), respectively, and \(r_{0}\) is a small cutoff radius. As in subsection 2.1, we do not impose Dirichlet boundary conditions, but leave boundary modes free to fluctuate. Thus we are faced again with the question of adding boundary terms such that the variation of the action vanishes for either \(\alpha=0\) or \(\beta=0\), while leaving the other mode free to fluctuate.
At this stage, it is technically convenient to notice that we can use an arbitrary linear combination of the following four bilinears: \(\bar{\psi}\psi\), \(\bar{\psi}\gamma^{0}\psi\), \(i\bar{\psi}\gamma^{1}\psi\) and \(i\bar{\psi}\gamma^{3}\psi\). For infinitesimal \(r_{0}\)
such that we can neglect subleading terms in (3.25), this is equivalent to considering the most general linear combination of bilinears in the boundary modes: \(\bar{\beta}\beta r_{0}^{2\nu}\), \(\bar{\beta}\alpha\), \(\bar{\alpha}\beta\) and \(\bar{\alpha}\alpha r_{0}^{-2\nu}\). In the following it will be simpler to write operators directly in terms of the boundary modes.
One possible choice of a boundary term \(S_{bdy}^{(1)}\) would be:
\[S_{bdy}^{(1)}=\frac{\nu}{m+\nu}\int_{r=r_{0}}dt\left(\bar{\beta}\alpha+\bar{ \alpha}\beta+2\bar{\beta}\beta r_{0}^{2\nu}\right). \tag{3.29}\]
This term is chosen so that it admits a smooth limit for \(\nu\to 0\), for which it reduces to \(S_{bdy}^{(1)}=\frac{g}{4m}\left(\bar{a}b+\bar{b}a\right)\) using (3.27). The total variation \(\delta S+\delta S_{bdy}^{(1)}\) then reads:
\[\delta S+\delta S_{bdy}^{(1)}=\frac{2\nu}{m+\nu}\int_{r=r_{0}}dt\left[\bar{ \beta}\delta\alpha+\delta\bar{\alpha}\beta+\left(\delta\bar{\beta}\beta+\bar{ \beta}\delta\beta\right)r_{0}^{2\nu}\right], \tag{3.30}\]
which vanishes for \(\beta=\bar{\beta}=0\), and corresponds to alternate quantization, where the most singular falloff in (3.25) is allowed to fluctuate; in the limit \(\nu\to 0\) (3.30) reduces to \(\delta S+\delta S_{bdy}^{(1)}=\frac{g}{4m}\left(\delta\bar{a}b+\bar{b}\delta a\right)\), and thus sets the logarithmic mode to zero, \(\bar{b}=b=0\) in (3.27).
The other fixed point is obtained considering the following boundary term:
\[S_{bdy}^{(2)}=-\frac{\nu}{m+\nu}\int_{r=r_{0}}dt\left(\bar{\beta}\alpha+\bar{ \alpha}\beta+2\bar{\alpha}\alpha r_{0}^{-2\nu}\right)\,. \tag{3.31}\]
It can be checked that the total variation \(\delta S+\delta S_{bdy}^{(2)}\) vanishes for \(\bar{\alpha}=\alpha=0\) and thus corresponds to standard quantization, in which the less singular term in (3.25) is allowed to fluctuate. We also note that the boundary term (3.31) coincides with (3.29) in the limit \(\nu\to 0\).
The two fixed points are related by RG flow. As in the scalar case, this is triggered by a double-trace relevant perturbation \(\bar{\alpha}\alpha\) of the alternate quantization boundary fixed point. In practice, it is convenient to keep the cutoff radius \(r_{0}\) finite and consider the following deformation of the theory specified by (3.29)
\[S_{bdy}^{DTD}=-2f_{0}\int_{r=r_{0}}dt\,r_{0}^{2\nu}\left(\bar{\beta}\beta r_{ 0}^{2\nu}+\bar{\beta}\alpha+\bar{\alpha}\beta+\bar{\alpha}\alpha r_{0}^{-2\nu }\right) \tag{3.32}\]
where \(f_{0}\) is a dimensionful (bare) coupling. The deformation (3.32) is in general fully equivalent to a standard double-trace \(\sim\bar{\alpha}\alpha\) when considered as a perturbation of the UV DCFT corresponding to \(\beta=\bar{\beta}=0\). In the limit \(r_{0}\to 0\) it reduces explicitly to \(-2f_{0}\bar{\alpha}\alpha\). However the combination in (3.32) is chosen so that in the \(\nu\to 0\) limit it becomes \(-f_{0}\bar{a}a/2\), which is the appropriate double-trace deformation for the logarithmic case (see e.g. [41]).
We now require that the total variation of the action and boundary terms vanishes: \(\delta S+\delta S_{bdy}^{(1)}+\delta S_{bdy}^{DTD}=0\). The boundary condition fixes the ratio between the modes:
\[\beta=c\,\alpha\,,\qquad c=\frac{f_{0}(m+\nu)}{\nu-(m+\nu)\,f_{0}r_{0}^{2\nu} }\,. \tag{3.33}\]
Note that the limit \(r_{0}\to 0\) simply yields \(\beta=(f_{0}(m+\nu)/\nu)\alpha\), while in the limit \(\nu\to 0^{+}\) with finite \(r_{0}\), plugging \(\beta=c\,\alpha\) into the modes expansion (3.25) yields \(b=f_{0}\,a\) in terms of the modes in (3.27).
From (3.33) we can compute the beta function associated with the perturbation (3.32). To this aim we denote by \(f\) the dimensionless coupling, \(f=f_{0}r_{0}^{2\nu}\). From the Callan-Symanzik equation, one finds the following beta function
\[\beta_{f}=-2\nu f+2\,(m+\nu)\,f^{2}. \tag{3.34}\]
The beta function (3.34) is the main result of this section. It has the same physical significance as in the scalar case discussed in section 2.1. It admits two fixed points: an unstable one at \(f=0\), corresponding to alternate boundary conditions \(\beta=0\), and a stable one at \(f=\nu/(m+\nu)\), corresponding to standard boundary conditions \(\alpha=0\). For \(f>0\) and \(\nu>0\) (3.34) thus describes the RG flow from alternate to standard boundary conditions. At \(\nu=0\) the two fixed points merge into a unique one at \(f=0\).
For \(f<0\) the coupling has a runaway behavior toward \(f=-\infty\). Analogously to the scalar case, this is associated with an instability of the vacuum. We will analyze this instability in section 3.3, where we will argue that as a consequence of the Pauli exclusion principle it leads to the screening of a single unit of charge.
In the limit \(g\to 0\) the beta function (3.34) describes the well-known double-trace RG flow in AdS\({}_{2}\) from alternate to standard quantization.27 In particular the beta function (3.34) vanishes for \(m=g=0\), since then \(\nu=0\). This can be understood considering the solution (3.27) in the limit \(\nu\to 0\). Indeed from (3.27) we see that the logarithmic falloffs are proportional to \(g\). Taking the limit \(g\to 0\) in (3.27) we thus find two independent complex modes proportional to \(\sqrt{r}\). This implies that for \(\nu=g=0\) there is a marginal operator which rotates between the possible conformal boundary conditions. Correspondingly, the beta function (3.34) vanishes for \(m=\nu=0\). In practice this regime is not relevant for our discussion of Wilson lines, since \(m_{0}=\frac{d-2}{2}>0\) for \(d>2\).
Footnote 27: It can be checked that (3.34) indeed holds for arbitrary spacetime dimensions \(d\) and agrees with the previous result in [61] up to coupling redefinitions.
As a final comment, we note that, irrespective of the value of \(\nu\), the alternate fixed point does not admit relevant perturbations other than the one we considered, \(\bar{\alpha}\alpha\). Indeed higher trace deformations of the form \((\bar{\alpha}\alpha)^{n}\) vanish for \(n>1\) since the modes are Grassmannian, and all other defect operators involve derivatives and are thus irrelevant. This is to be contrasted with the scalar setup, where higher trace deformations could become relevant at the alternate fixed point, as discussed in section 2.2.
#### 3.2.2 Defect fixed points in four dimensions
We now apply the analysis of the previous section to the case of a Dirac fermion in \(d=4\). We thus consider the action (3.13) and focus on the \(\ell=0\) modes of the decomposition (3.5), as these have the lowest mass in AdS\({}_{2}\). As explained in section 3.1, there are four \(\ell=0\) modes. We denote them simply by \(\psi_{s}^{(\delta)}\), where \(\delta=+,-\) and \(s=+\frac{1}{2},-\frac{1}{2}\), such that \(\psi_{s}^{(+)}\) and \(\psi_{s}^{(-)}\) both transform as doublets under the rotation group \(SU(2)\). The vector \(\{\psi_{s}^{(+)},\psi_{s}^{(-)}\}\) rotates under the action of the \(O(2)\) group associated with the axial transformations and parity.
To extend the analysis of subsection 3.2.1 to \(d=4\) we thus promote \(\psi\to\psi_{s}^{(\delta)}\), \(\chi\to\chi_{s}^{(\delta)}\) and \(\xi\to\xi_{s}^{(\delta)}\), such that as in (3.22):
\[\psi_{s}^{(\delta)}\equiv\left(\begin{array}{c}\chi_{s}^{(\delta)}\\ \xi_{s}^{(\delta)}\end{array}\right). \tag{3.35}\]
As a result, the modes \(\alpha\) and \(\beta\) in (3.25) are promoted to \(\alpha_{s}^{(\delta)}\), \(\beta_{s}^{(\delta)}\) respectively. We introduce the notation:
\[\beta^{(\delta)}=\begin{pmatrix}\beta_{+\frac{1}{2}}^{(\delta)}\\ \beta_{-\frac{1}{2}}^{(\delta)}\end{pmatrix}\,,\quad\alpha^{(\delta)}= \begin{pmatrix}\alpha_{+\frac{1}{2}}^{(\delta)}\\ \alpha_{-\frac{1}{2}}^{(\delta)}\end{pmatrix}\,,\qquad\bar{\beta}^{(\delta)}= \left(\beta_{\frac{1}{2}}^{\dagger\,(\delta)},\beta_{-\frac{1}{2}}^{\dagger\, (\delta)}\right),\quad\bar{\alpha}^{(\delta)}=\left(\alpha_{\frac{1}{2}}^{ \dagger\,(\delta)},\alpha_{-\frac{1}{2}}^{\dagger\,(\delta)}\right)\,, \tag{3.36}\]
to conveniently denote the \(SU(2)\) doublets.
In \(d=4\), \(m=m_{0}=1\) and \(g=\frac{e^{2}q}{4\pi}\), where \(q>0\) is the charge of the Wilson line. According to the discussion below equation (3.23), the fields \(\psi_{s}^{(\delta)}\) admit both standard and alternate boundary conditions for \(0<\sqrt{1-e^{4}q^{2}/(4\pi)^{2}}<1/2\). Differently than in the case of scalar QED\({}_{4}\) described in subsection 2.1, this condition provides both a lower and an upper bound on \(q\):28
Footnote 28: Using the physical value of the QED coupling, in natural units this condition reads \(119<q<137\).
\[\frac{\sqrt{3}}{2}<\frac{e^{2}q}{4\pi}<1\,. \tag{3.37}\]
In this window, for each of the modes \(\psi_{s}^{(\delta)}\) there are two conformal boundary conditions, leading to \(2^{4}=16\) defect fixed points overall. We will focus on this window in what follows, and study the corresponding RG flows.
Consider the fixed point where \(\beta_{s}^{(\delta)}=0\) for all modes. This is specified by a defect term of the form (3.29) (promoting \(\beta\to\beta_{s}^{(\delta)}\) etc., as discussed above). We are interested in deformations of this fixed point by fermion bilinear operators on the Wilson line. It is natural to classify the possible bilinears according to their \(SU(2)\times SO(2)\) charges, associated with the symmetries discussed in subsection 3.1. For convenience, we use the notation \(\sigma^{K}\equiv\left(\mathds{1},\sigma^{i}\right)\), where \(K=0,\cdots,3\), the matrix \(\mathds{1}\) is the \(2\times 2\) dimensional identity matrix and \(\sigma^{i}\), \(i=1,2,3\) are the Pauli matrices. Then, in the notation (3.36), the most general gauge-invariant bilinear defect operator without derivatives can be written as a linear combination of the following
terms:
\[\Phi^{(\delta\gamma)K}\equiv r_{0}^{2\nu}\left(\bar{\beta}^{(\delta)}\sigma^{K} \beta^{(\gamma)}r_{0}^{2\nu}+\bar{\beta}^{(\delta)}\sigma^{K}\alpha^{(\gamma)}+ \bar{\alpha}^{(\delta)}\sigma^{K}\beta^{(\gamma)}+r_{0}^{-2\nu}\bar{\alpha}^{( \delta)}\sigma^{K}\alpha^{(\gamma)}\right). \tag{3.38}\]
There are 16 independent real bilinears invariant under the gauge symmetry: eight preserve the global \(SO(2)\) (two of these preserve \(SU(2)\) while the other six break it), and eight break the global \(SO(2)\) (two of these are invariant under \(SU(2)\) while the remaining six break it). In addition, eight of the bilinears are invariant under parity \(P\) while the other eight break it. There is a single bilinear invariant under all symmetries. The classification is summarized in table 1.
We may now calculate the beta-functions associated with the fermion bilinears introduced in table 1. The derivation is analogous to the one discussed in subsection 2.5 for multi-flavor scalar QED\({}_{4}\). We perturb the DCFT by adding the most general relevant perturbation on the line. This can be written as:
\[S_{DTD}=-2\int dt\,r_{0}^{2\nu}\left(\bar{\beta}F_{0}\beta r_{0}^{2\nu}+\bar{ \beta}F_{0}\alpha+\bar{\alpha}F_{0}\beta+r_{0}^{-2\nu}\bar{\alpha}F_{0}\alpha \right), \tag{3.39}\]
where \(F_{0}\) is a \(4\times 4\) symmetric matrix, which collectively denotes all the bare coupling constants associated with the double-trace deformations (3.38). The fields \(\alpha\) and \(\beta\) above each carry four components, in accordance with all the possible combinations of \((\delta)\) and \(s\). Explicitly, the coupling constants in table 1 are related to the matrix \(F_{0}\) via
\[\lambda_{A}=\frac{1}{4}\operatorname{Tr}\left(\Sigma^{A}F_{0}\right), \tag{3.40}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Bilinears & \(SU(2)\) & \(SO(2)\) & \(P\) & \# \\ \hline
\(f_{0}^{0}\left(\Phi^{++0}+\Phi^{--0}\right)\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 1 \\
\(ik_{0}^{0}\left(\Phi^{+-0}-\Phi^{-+0}\right)\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & 1 \\
\(f_{0}^{i}\left(\Phi^{++i}+\Phi^{--i}\right)\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & 3 \\
\(ik_{0}^{i}\left(\Phi^{+-i}-\Phi^{-+i}\right)\) & \(\times\) & \(\checkmark\) & \(\times\) & 3 \\
\(h_{0}^{0}\left(\Phi^{+-0}+\Phi^{-+0}\right)\) & \(\checkmark\) & \(\times\) & \(\times\) & 1 \\
\(q_{0}^{0}\left(\Phi^{++0}-\Phi^{--0}\right)\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & 1 \\
\(h_{0}^{i}\left(\Phi^{+-i}+\Phi^{-+i}\right)\) & \(\times\) & \(\times\) & \(\times\) & 3 \\
\(q_{0}^{i}\left(\Phi^{++i}-\Phi^{--i}\right)\) & \(\times\) & \(\times\) & \(\checkmark\) & 3 \\ \hline
\end{tabular}
\end{table}
Table 1: A classification of the gauge-invariant fermion bilinear line operators. The last column represents the number of independent bilinears of the specified type. \(f_{0}^{K}\), \(k_{0}^{K}\), \(h_{0}^{K}\) and \(q_{0}^{K}\) denote the (real) bare coupling constants.
with \(\lambda_{A}=f_{0}^{0},f_{0}^{i},k_{0}^{0},k_{0}^{i},h_{0}^{0},h_{0}^{i},q_{0}^{0},q_{0}^{i}\), and the matrices \(\Sigma^{A}\) are \(4\times 4\) matrices given by:
\[\begin{split}\Sigma^{f^{0}}&=\begin{pmatrix}\sigma^{0}&0\\ 0&\sigma^{0}\end{pmatrix},\quad\Sigma^{f^{i}}=\begin{pmatrix}\sigma^{i}&0\\ 0&\sigma^{i}\end{pmatrix},\quad\Sigma^{k^{0}}=\begin{pmatrix}0&i\sigma^{0}\\ -i\sigma^{0}&0\end{pmatrix},\quad\Sigma^{k^{i}}=\begin{pmatrix}0&i\sigma^{i}\\ -i\sigma^{i}&0\end{pmatrix},\\ \Sigma^{h^{0}}&=\begin{pmatrix}0&\sigma^{0}\\ \sigma^{0}&0\end{pmatrix},\quad\Sigma^{h^{i}}=\begin{pmatrix}0&\sigma^{i}\\ \sigma^{i}&0\end{pmatrix},\quad\Sigma^{q^{0}}=\begin{pmatrix}\sigma^{0}&0\\ 0&-\sigma^{0}\end{pmatrix},\quad\Sigma^{q^{i}}=\begin{pmatrix}\sigma^{i}&0\\ 0&-\sigma^{i}\end{pmatrix}.\end{split} \tag{3.41}\]
Similarly to the analysis around (3.33), by requiring that the total variation of the action and boundary terms vanish we find the following ratios between the modes:
\[\beta=C\alpha,\qquad C=(m+\nu)F_{0}\cdot\left[\nu\mathds{1}-(m+\nu)F_{0}r_{0} ^{2\nu}\right]^{-1}, \tag{3.42}\]
where \(\mathds{1}\) is the \(4\times 4\) identity matrix. Defining the dimensionless coupling as \(F=F_{0}r_{0}^{2\nu}\), from the Callan-Symanzik equation we find the beta-function:
\[\beta_{F}=-2\nu F+2(m+\nu)F\cdot F\,. \tag{3.43}\]
It follows that the beta-function for each of the couplings in table 1 is given by:
\[\beta_{\lambda_{A}}=\frac{1}{4}\operatorname{Tr}\left(\Sigma^{A}\beta_{F}\right), \tag{3.44}\]
where \(\beta_{F}\) is the \(4\times 4\) matrix whose terms are given by (3.43).
As an illustration, we write explicitly the system of beta functions for the \(SU(2)\) preserving couplings (i.e. setting \(f^{i}=k^{i}=h^{i}=q^{i}=0\)):
\[\begin{split}\beta(f^{0})&=-2\nu f^{0}+2(m+\nu) \left[(f^{0})^{2}+(k^{0})^{2}+(h^{0})^{2}+(q^{0})^{2}\right]\,,\\ \beta(k^{0})&=-2\nu k^{0}+4(m+\nu)k^{0}f^{0}\,,\\ \beta(h^{0})&=-2\nu h^{0}+4(m+\nu)h^{0}f^{0}\,,\\ \beta(q^{0})&=-2\nu q^{0}+4(m+\nu)q^{0}f^{0}\,.\end{split} \tag{3.45}\]
The fixed points are classified similarly to the analysis in subsection 2.5, according to:
* \(\left(f^{0},k^{0},h^{0},q^{0}\right)=(0,0,0,0)\): an unstable fixed point which corresponds to alternate boundary conditions to all modes. The anomalous dimensions read: \(\left(\gamma_{f^{0}},\gamma_{k^{0}},\gamma_{h^{0}},\gamma_{q^{0}}\right)=(-2 \nu,-2\nu,-2\nu,-2\nu)\).
* \(\left(f^{0},k^{0},h^{0},q^{0}\right)=\left(\frac{\nu}{m+\nu},0,0,0\right)\): a stable fixed point that corresponds to standard boundary conditions for all modes. The anomalous dimensions read: \(\left(\gamma_{f^{0}},\gamma_{k^{0}},\gamma_{h^{0}},\gamma_{q^{0}}\right)=(2\nu, 2\nu,2\nu,2\nu)\).
* \(f^{0}=\frac{\nu}{2(m+\nu)}\) while \((k^{0})^{2}+(h^{0})^{2}+(q^{0})^{2}=\frac{\nu^{2}}{4(m+\nu)^{2}}\): a family of unstable fixed points corresponding to mixed boundary conditions. For example, there are two \(SO(2)\) preserving fixed points with \(q^{0}=h^{0}=0\) and \(\left(f^{0},k^{0}\right)=\left(\frac{\nu}{2(m+\nu)},\mp\frac{\nu}{2(m+\nu)}\right)\), with anomalous dimensions \(\left(\gamma_{f^{0}},\gamma_{k^{0}},\gamma_{h^{0}},\gamma_{q^{0}}\right)=(\mp 2 \nu,\pm 2\nu,0,0)\).
It is worth mentioning that similarly to the analysis of the fixed points structure in the \(SU(2)\) preserving case discussed above, one can consider perturbing the UV DCFT with \(SU(2)\) breaking deformations as described in table 1, and straightforwardly find fixed points that correspond to DCFTs (invariant under \(SL(2,\mathbb{R})\)) that break spatial rotation symmetry.
Finally, let us remark that, differently than in the case of a single AdS\({}_{2}\) Dirac fermion analyzed in subsection 3.2.1, as we lower the charge below criticality there are additional operators that become relevant or marginal on the line at the alternate quantization fixed point. In fermionic QED\({}_{4}\), 4-fermion operators become relevant when \(\frac{e^{2}q}{4\pi}<\frac{\sqrt{15}}{4}\). Because of the fermionic statistics, there is only a finite number of marginal or relevant operators. The operator containing the largest number of fields is an 8-fermion term, which becomes marginal when \(\frac{e^{2}q}{4\pi}<\frac{\sqrt{55}}{8}\).29
Footnote 29: Using the physical value for the fine structure constant, 4-fermion defect operators become relevant for \(q<132\), and 8-fermion ones for \(q<127\).
### Partial charge screening from double-trace perturbation
Consider the model (3.19) tuned to alternate boundary conditions. In this section we study the effect of a double-trace deformation of the form (3.32) with negative coefficient, i.e. we study the model
\[S=S_{alternate}-f\int_{r=0}dt\,\bar{\alpha}\alpha\,,\qquad f<0\,, \tag{3.46}\]
where \(S_{alternate}\) schematically denotes the action for the UV defect fixed point. In the following we always assume \(\nu>0\).
As remarked in section 3.2.1, the beta function (3.34) shows that a negative double-trace coupling flows to infinitely large values. In the scalar case, we found that this kind of RG flow is associated with the existence of an instability of the vacuum. Somewhat similarly, we will argue that a negative double-trace deformation leads to a change in the structure of the vacuum also in the fermionic case. However, the fate and the signature of this instability are different than in the scalar case as a consequence of fermionic statistics. In particular a negative double-trace deformation for a single \(AdS_{2}\) Dirac fermion leads to the screening of a single unit of charge.
In this section we will also introduce some tools that will be relevant in section 3.4, where we will discuss the supercritical Coulomb potential. Our discussion in this section will largely be inspired by classic results about QED in strong electromagnetic fields [59]. Several technical details are given in appendix B.
When we analyzed scalar QED\({}_{4}\) with a negative double-trace deformation in section 2.2, we found that the retarded Green's function of the defect mode displayed a tachyon pole in the upper half plane. This is not the case for the fermion defect propagator. In appendix B.1 we compute the propagator \(G_{\alpha}(\omega)\) for the mode \(\alpha(\omega)\) in the theory (3.46) and verify explicitly
that no tachyon pole exists.30 Instead, the Green's function takes qualitatively the same form for both signs of \(f\).
Footnote 30: The absence of a tachyon pole is in fact expected as a consequence of the Fermi statistics as opposed to Bose statistics [40].
To understand the physical implication of the double-trace deformation with \(f<0\), it is simplest to momentarily consider a massive theory. Consider the \(4d\) model (3.3) with a mass term \(\delta S=-\int d^{4}xM\bar{\Psi}_{D}\Psi_{D}\). Upon performing the KK decomposition on \(\text{AdS}_{2}\times S^{2}\) explained in section 3.1, we find that this amounts to modifying the action (3.13) by a term
\[\delta S_{M}=\sum_{\ell,s}\sum_{\delta=\pm}i\delta\int_{\text{AdS}_{2}}d^{2}x \,\sqrt{g}\,rM\bar{\psi}_{\ell s}^{(\delta)}\gamma^{3}\psi_{\ell s}^{(\delta)}\,, \tag{3.47}\]
where the overall factor of \(\delta=\pm\) arises from the redefinition in (3.12). Note that the mass term breaks explicitly both the axial symmetry (3.14) that rotates the \((\pm)\) fields, as it should, and part of the \(\text{AdS}_{2}\) isometries. We are thus led to consider the model (3.46) deformed by the term
\[\delta S_{M}^{(\pm)}=\pm i\int_{\text{AdS}_{2}}d^{2}x\sqrt{g}\,rM\bar{\psi} \gamma^{3}\psi\,, \tag{3.48}\]
where \(M>0\) and we will consider both a positive and a negative prefactor for generality. Note that the deformation (3.48) vanishes for \(r\to 0\) and thus does not modify the near-defect behavior of the field (3.25). Therefore the boundary conditions read
\[\beta/\alpha=\frac{m+\nu}{\nu}f=\text{sgn}(f)\mu^{2\nu}\,, \tag{3.49}\]
where we defined for convenience \(\mu\equiv\left(\frac{m+\nu}{\nu}|f|\right)^{1/(2\nu)}\) as the mass scale introduced by the deformation, and momentarily considered both signs for \(f\). The massless limit is recovered for \(M/\mu\to 0\).
As is well known, in the presence of a mass gap the spectrum of the Dirac equation in an external potential organizes itself into an infinite number of discrete (bound) states with frequency \(-M<\omega<M\) (with an accumulation point for \(\omega\to M\)), a positive energy continuum for \(\omega\geq M\), and a negative energy continuum with \(\omega\leq-M\). In appendix B.2, we study the discrete part of the spectrum and find the quantization condition on the frequencies \(\omega_{n}\) of the discrete bound states at \(f=0\) and \(f\to+\infty\), the latter case coinciding with the well known relativistic Hydrogen atom spectrum [59]. As we increase \(\mu/M\) while keeping \(f\) positive, all the bound states at \(f=0\) increase their energy and smoothly approach the standard quantization energies \(\omega_{n}\) (corresponding to \(f\to+\infty\)). For negative \(f\) instead, as we increase \(\mu\) while keeping \(M\) fixed we find that the lowest energy bound state decreases its energy \(\omega_{0}\). Eventually, \(\mu\) reaches a critical value given by
\[\mu_{c}^{(\pm)}\equiv gM\left[\frac{\pi\,2^{2\nu}(m\pm\nu)}{g\,\sin\left(2\pi \nu\right)\Gamma(2\nu)\Gamma(1+2\nu)}\right]^{\frac{1}{2\nu}}\,, \tag{3.50}\]
where the \((\pm)\) distinguishes the two signs in (3.48); in the following we will drop this superscript for notational simplicity. At \(\mu=\mu_{c}\) we find \(\omega_{0}=-M\), and the bound state becomes completely delocalized; for larger values of \(\mu\) the state joins the negative energy continuum and the bound state ceases to exist. We will prove at the end of this section that for \(\mu\gtrsim\mu_{c}\) this state still manifests itself as a resonant pole in the second sheet of the retarded Green's function.31 This phenomenon is referred to as the _dive_ of a bound state into the negative energy continuum [59]. All other bound states smoothly approach the standard quantization (corresponding to \(f\to+\infty\)) energies as \(f\to-\infty\).
Footnote 31: To avoid the mention of resonances one can introduce an IR cutoff \(r_{far}\gg 1/M,1/\mu\), so that the full spectrum is discrete, and a resonance simply corresponds to a single state mixing with many nearby (quasi-continuum) states (see e.g. the discussion in section 2.2 of [62]). The picture of a _diving bound state_ then simply follows by continuity of the spectrum and the fact that discrete states cannot disappear as we change continuously the potential. Similar arguments are at the heart of two related well known classical results: Levinson’s theorem in quantum mechanics [63; 64] and the Friedel sum rule in condensed matter physics [65; 66].
The dive of a bound state implies that the vacuum of the theory acquires one unit of (negative) charge. To see this, we need to properly define the vacuum. While many choices are ultimately equivalent in the \(M\to 0\) limit, a natural one is to define the vacuum as the state which minimizes the following modified Hamiltonian [59]:
\[\hat{H}_{M}=\hat{H}-M\hat{Q}\,, \tag{3.51}\]
where \(\hat{H}\) is the Dirac-Coulomb Hamiltonian and \(\hat{Q}\) the gauge charge (normalized so that the field has negative unit charge). In old-fashioned language, this means that we consider as filled holes all states with energy less than \(-M\), while states with energy larger than \(-M\) are particle excitations on top of the vacuum. The definition (3.51) is natural if we imagine turning on the potential adiabatically starting from the usual vacuum. The term \(M\hat{Q}\) is a chemical potential, which accounts for the fact that, by charge conservation, the transition to the new ground state can only happen by creation of an electron-positron pair, with a positron that escapes far away from the Wilson line.
According to the Hamiltonian (3.51), the dive of the bound state into the low energy continuum at \(\omega<-M\) for \(\mu>\mu_{c}\) is thus interpreted as a change in the nature of the state from a particle energy level to a hole in the Dirac sea. Since all holes must be filled in the ground state, this leads to the screening of one unit of charge. The same remains true in the massless limit. We can understand this phenomenon physically by interpreting the double-trace deformation in (3.46) as an attractive potential localized on the defect. For sufficiently large \(\mu\), the potential traps an electron energy level close to the defect, similarly to the creation of a bound state by a Dirac delta function potential in quantum mechanics.
We finally discuss how to compute the charge cloud created by this process around the Wilson line. Note that in the double-scaling limit (3.2) we can safely neglect the change in the electromagnetic field induced by this process, differently than in the case of scalar QED\({}_{4}\) analyzed in section 2.2, where we needed to account for the backreaction of the large-charge scalar cloud that formed.
Consider the mode decomposition of the Dirac field in terms of the two continuum modes and the discrete states,
\[\psi(t,r)=\int_{M}^{\infty}\frac{d\omega}{2\pi}\,e^{-i\omega t}\psi_{\omega}(r)\,b_{\omega}+\sum_{-M<\omega_{n}<M}e^{-i\omega_{n}t}\psi_{n}(r)\,b_{n}+\int_{-\infty}^{-M}\frac{d\omega}{2\pi}\,e^{-i\omega t}\psi_{\omega}(r)\,d_{\omega}^{\dagger}\,, \tag{3.52}\]

where \(b_{\omega}\) and \(b_{n}\) annihilate the continuum and bound-state particle excitations respectively, while \(d_{\omega}^{\dagger}\) creates the hole excitations with \(\omega\leq-M\). From this decomposition one obtains the expressions (3.56) and (3.57) for the vacuum charge density \(\langle j_{0}(r)\rangle\) in terms of the single-particle wave-functions and, equivalently, of the retarded Green's function \(S_{R}(\omega;r,r)\), which are used below.
We could now use (3.56) and (3.57) to compute the charge density profile. It is however obvious on dimensional grounds that for \(\mu\gg M\) the charge is localized at distances \(r\sim 1/\mu\) from the defect, and in the massless limit the IR defect simply corresponds to a Wilson line with charge \(q-1\) and standard boundary conditions. We instead conclude this section by showing how (3.57) implies a discontinuity for the screened charge as \(\mu\) changes from below to above the critical value \(\mu_{c}\) for fixed \(M\). Our analysis will also elucidate the aforementioned relation between _diving_ bound states and resonances.
Consider the difference between the charge density for \(\mu=\mu_{c}+\delta\mu\) and \(\mu=\mu_{c}-\delta\mu\) for a positive \(\delta\mu/M\ll 1\). From (3.56) and (3.57) we see that the only significant contribution to the charge density in this limit arises from the lowest bound state \(\psi_{0}(r)\) and the negative energy continuum:
\[\left\langle j_{0}(r)\right\rangle_{\mu=\mu_{c}+\delta\mu}-\left \langle j_{0}(r)\right\rangle_{\mu=\mu_{c}-\delta\mu}=-\frac{1}{2}\psi_{0}^{ \dagger}(r)\psi_{0}(r)\\ +\int_{-\infty}^{-M}\frac{d\omega}{2\pi}\left\{\text{Im}(\text{ Tr}[\gamma^{0}S_{R}(\omega;r,r)]_{\mu=\mu_{c}+\delta\mu})-\text{Im}(\text{ Tr}[\gamma^{0}S_{R}(\omega;r,r)]_{\mu=\mu_{c}-\delta\mu})\right\}+O(\delta\mu/M)\,. \tag{3.60}\]
Standard arguments about mixing of isolated states with a continuum let us express the difference in the second line in terms of the wave-function \(\psi_{0}(r)\) of the _diving_ state just below criticality [59; 67]:
\[\text{Im}(\text{Tr}[\gamma^{0}S_{R}(\omega;r,r)]_{\mu=\mu_{c}+ \delta\mu})-\text{Im}(\text{Tr}[\gamma^{0}S_{R}(\omega;r,r)]_{\mu=\mu_{c}- \delta\mu})\\ =-\frac{\psi_{0}^{\dagger}(r)\psi_{0}(r)\Gamma_{res}/2}{(\omega-E _{res})^{2}+\Gamma_{res}^{2}/4}+O\left(\frac{\delta\mu}{M}\right)\,, \tag{3.61}\]
where \(E_{res}+M=O(\delta\mu)\) (with \(E_{res}<-M\)) while \(\Gamma_{res}=O(\delta\mu^{2}/M)\). In other words, the _diving_ bound state became a resonance in the negative energy continuum.34 In practice (3.61) only applies _locally_ for \(r\lesssim 1/M\), since the analytic continuation of the wave-function \(\psi_{0}\) changes its behavior at infinity when \(\omega_{0}\) becomes complex. Using (3.61) in (3.60) we conclude
Footnote 34: It may be argued that the width \(\Gamma_{res}\) is associated with the inverse decay time of the _wrong vacuum_, where the hole is not filled [59].
\[\left\langle j_{0}(r)\right\rangle_{\mu=\mu_{c}+\delta\mu}-\left\langle j_{0} (r)\right\rangle_{\mu=\mu_{c}-\delta\mu}=-\psi_{0}^{\dagger}(r)\psi_{0}(r)+O( \delta\mu/M)\,. \tag{3.62}\]
(3.62) implies a discontinuity of the Green's function at \(\mu=\mu_{c}\). Integrating (3.62) we recover the expected discontinuity for the screening charge:
\[Q_{screen}|_{\mu=\mu_{c}+\delta\mu}-Q_{screen}|_{\mu=\mu_{c}-\delta\mu}=-1\,. \tag{3.63}\]
As explained in section 3.1, in \(d=4\) there are 4 independent \(\ell=0\) modes. Thus, negative double-trace deformations of the UV fixed point may lead to up to 4 units of charge screening. Intriguingly, this remains true also for massive Dirac fields. It would be interesting to analyze further possible implications of this analysis for real world nuclei (note that the window (3.37) implies \(q>119\) for our world, corresponding to theoretically predicted super-heavy elements).
### Supercritical Wilson lines
In this section we address the fate of Wilson lines with supercritical charge, \(q>q_{c}\). Differently from the scalar setup, we will argue that the charge of the Wilson line is screened only down to \(q=\lfloor q_{c}\rfloor\), as a consequence of the Pauli exclusion principle. We will be particularly interested in the nearly supercritical regime, where we will show that dimensional transmutation leads to an exponentially large matter cloud screening the Wilson line.
While our main focus is on \(4d\) massless QED, whenever possible we keep the notation general. Indeed our analysis applies almost verbatim to setups where the matter fields live in \(d=3\); we discuss some of these in section 5. In particular, our analysis is largely inspired by previous works on charged impurities in two-dimensional graphene sheets [30; 68; 69].
We consider the model (3.3) in the presence of a Wilson line of charge \(4\pi/e^{2}<q<8\pi/e^{2}\) so that \(m_{0}<g<m_{1}\) (in the notation of (3.11)). The trivial saddle-point \(\Psi_{D}=0\), \(A_{0}=g/r\) corresponds to the supercritical regime for the \(\ell=0\) modes of the decomposition (3.5), according to the discussion in section 3.2.1. In this case, the solution of the equations of motion (3.21) for \(r\to 0\) is written as35
Footnote 35: In this section we will omit the subscript \(s\) and the superscript (\(\delta\)) from the fields, since they are inessential for our analysis (besides introducing a degeneracy).
\[\begin{split}\chi&=\frac{g}{m+i\tilde{\nu}}\beta r^{\frac{1}{2}+i\tilde{\nu}}+\alpha r^{\frac{1}{2}-i\tilde{\nu}}\,,\\ \xi&=\beta r^{\frac{1}{2}+i\tilde{\nu}}+\frac{g}{m+i\tilde{\nu}}\alpha r^{\frac{1}{2}-i\tilde{\nu}}\,,\end{split} \tag{3.64}\]
where we let \(m=m_{0}\) and we defined
\[\tilde{\nu}=\sqrt{g^{2}-m^{2}}\,. \tag{3.65}\]
The nearly supercritical regime we will be interested in corresponds to \(\tilde{\nu}\ll 1\). Equation (3.64) shows that there are no unitary conformal boundary conditions for the Dirac field as \(r\to 0\).
We are thus forced to choose non-conformal boundary conditions on the defect. While our results are ultimately independent of this choice, for the sake of definiteness, we follow [21] and imagine that the charge of the Wilson line is localized inside a small cutoff surface at \(r=r_{0}\) (modelling the nucleus as a uniformly charged ball). Thus the Wilson line in (3.3) becomes
\[-q\int dt\,A_{0}\to-\frac{q}{4\pi}\int_{r=r_{0}}dt\,d\Omega_{2}\,A_{0}\,. \tag{3.66}\]
This implies that for \(\Psi_{D}=0\) the saddle-point profile for the gauge field reads
\[A_{0}=\begin{cases}\frac{g}{r_{0}}&\text{for }r<r_{0}\\ \frac{g}{r}&\text{for }r\geq r_{0}\,,\end{cases} \tag{3.67}\]
so that there is no electric field for \(r<r_{0}\). The \(\ell=0\) AdS\({}_{2}\) Dirac fields now satisfy standard boundary conditions for \(r/r_{0}\to 0\),
\[\psi\sim\left(\begin{array}{c}0\\ r^{\frac{1}{2}+m}\end{array}\right)\qquad\text{for }r\to 0\,, \tag{3.68}\]
as well as being continuous at \(r=r_{0}\).
We now show that in the presence of a supercritical field (3.67), the trivial saddle-point \(\Psi_{D}=0\) admits infinitely many diving states in the massless limit. In the spirit of section 3.3, we momentarily consider a field with mass \(M>0\). In appendix B.3, we find that for \(\tilde{\nu}\ll 1\) the condition for having a bound state with energy \(\omega=-M\) is36
Footnote 36: A solution with \(M\sim 1/r_{0}\), schematically corresponding to \(n=0\) in (3.69), may exist for different boundary conditions. Such an \(n=0\) diving state is somewhat analogous to the one created by a negative double-trace deformation discussed in section 3.3 and does not play any essential role for us.
\[\tilde{\nu}\log(2Mgr_{0})=\tilde{\nu}\eta-\pi n\,,\qquad n=1,2,\ldots\,, \tag{3.69}\]
where \(\eta\) is an \(O(1)\) number (which depends on the sign in (3.48) and which we determine in appendix B.3). (3.69) admits infinitely many solutions given by
\[M_{n}=M_{0}\,\Lambda^{n}\,,\qquad M_{0}=\frac{e^{\eta}}{2gr_{0}}\,,\qquad \Lambda=e^{-\pi/\tilde{\nu}}\,, \tag{3.70}\]
where \(\Lambda\) is the same small number we encountered in the discussion of scalar tachyons in subsection 2.3. Note also that, just like in the scalar case, there are multiple solutions, and the masses (3.70) are log-periodic: \(\log(M_{n}/M_{n+1})=\pi/\tilde{\nu}\). We comment more on this below.
Imagine now lowering \(Mr_{0}\) for a single AdS\({}_{2}\) Dirac fermion. When \(M>M_{1}\), all bound states have energy \(\omega>-M\). For \(M_{1}>M\geq M_{2}\) there is one diving state, then as we lower to \(M_{2}>M\geq M_{3}\) we have two diving states, and so on. In general, for \(M_{n}>M\geq M_{n+1}\), \(n\) states have dived into the negative energy continuum. Somewhat pathologically, in the massless limit \(Mr_{0}\to 0\) infinitely many states have joined the negative energy continuum. Physically this implies that the trivial saddle-point with gauge field (3.67) is not a good approximation to the true ground state. This is reminiscent of the discussion around (3.61): in appendix B.4 we show that the diving states are reflected in the existence of infinitely many resonances in the negative energy continuum,37 whose (complex) frequencies satisfy a logarithmic periodicity property analogous to (3.70); this fact was formerly pointed out in [30].
Footnote 37: Somewhat improperly, we call resonances complex poles of the retarded Green’s function analytically continued to the second sheet. These are in one-to-one correspondence with solutions of the Dirac-Coulomb equation satisfying outgoing boundary conditions: \(\psi_{n}\sim e^{-i\omega_{n}t}e^{i\omega_{n}r}\) with \(\operatorname{Re}\omega_{n}<0\) and \(\operatorname{Im}\omega_{n}<0\). Hence the \(\psi_{n}\)’s decay in time and grow exponentially for \(r\to\infty\). We remark however that the corresponding frequencies have comparable real and imaginary part \(\operatorname{Re}\omega_{n}\sim\operatorname{Im}\omega_{n}\), signifying that these cannot be understood as usual resonances, which arise due to a weak mixing between a discrete and a continuum spectrum as in the discussion which led to (3.61).
The result (3.69) in \(d=4\) was originally derived by Pomeranchuk and Smorodinsky [21], who used it to argue that the critical charge \(q_{c}\) for real world nuclei is larger than 137, the value that is obtained in the massless theory. In fact, for realistic values of \(r_{0}\) and \(M\), we get \(q_{c}\approx 173\text{--}175\) [59]. This is because \(M_{1}\ll 1/r_{0}\) for small \(\tilde{\nu}\), as (3.70) shows. In light of the connection with the _walking_ behavior associated with the fixed-point merger, as we discussed in section 2.1 for scalar QED, the discrepancy between the massless result and the real world is understood as a consequence of dimensional transmutation. Indeed, the mass of the first diving state \(M_{1}\) coincides parametrically with the scale \(\mu_{IR}\) at which the double-trace coupling blows up, cf. (2.24), given a UV scale \(\mu_{UV}\sim 1/r_{0}\). This is analogous to dimensional transmutation in QCD, where the proton mass parametrically coincides with the strong coupling scale of the one-loop beta function of the gauge coupling.
Another important remark is the following. The log-periodic structure of the solution (3.70) reflects an approximately cyclic RG, as for the scalar tachyons discussed in section 2.3. Such an RG structure implies that the mass scale \(M_{n}\) at which the \(n\)th state dives is exponentially larger than \(M_{n+1}\) for \(\tilde{\nu}\ll 1\).38 Note that this is also true for the first diving state, since \(M_{1}\) is exponentially smaller than the cutoff scale \(1/r_{0}\) as we commented above. A finite small mass \(M\) provides an IR cutoff to this periodic flow after \(\sim\frac{\tilde{\nu}}{\pi}\log(M_{1}/M)\) cycles.
Footnote 38: This is completely analogous to the phenomenon of Efimov bound states [20, 48].
In the massless limit, the periodic flow does not persist at arbitrary long distances once we account for the screening cloud created by the matter field and the corresponding backreaction of the gauge field. In particular, after \(q-q_{c}\) units of charge have been screened, the Coulomb field becomes subcritical. According to the analysis in the previous sections, no further instability can occur beyond this point (up to the one discussed in section 3.3, which may only change the final charge by an \(O(1)\) amount) and the RG flow terminates at the standard quantization fixed point. Nonetheless, the approximate cyclic flow plays an important role at intermediate scales; we will momentarily use this observation to our advantage to estimate the size of the screening cloud for \(\tilde{\nu}\ll 1\). Note that this is different from scalar QED, for which, as we discussed in section 2.3, all the screening solitons corresponding to more than one RG cycle are unstable.
To this aim, let us consider the formula (3.57) expressing the charge density in terms of the single-particle (AdS\({}_{2}\)) wave-functions:
\[\langle j_{0}(r)\rangle_{\mathbb{R}^{d}}=\frac{\kappa_{0}/2}{\Omega_{d-1}r^{d-1}}\int_{0}^{\infty}\frac{d\omega}{2\pi}\left[\psi_{\omega}^{\dagger}(r)\psi_{\omega}(r)-\psi_{-\omega}^{\dagger}(r)\psi_{-\omega}(r)\right]\,, \tag{3.71}\]
where the prefactor arises due to rescaling to flat space and \(\kappa_{0}=2^{\lfloor\frac{d}{2}\rfloor}\) (\(=4\) in \(d=4\)) is the degeneracy of the \(\ell=0\) modes; note the result is spherically symmetric since we are summing over all the spinor harmonics of the degenerate modes. In appendix B.4 we show that the wave-functions satisfy the following property
\[\psi_{\omega}(\Lambda^{n}\,r)\simeq\Lambda^{n/2}\,\psi_{\Lambda^{n}\omega}\left(r\right),\qquad n\in\mathds{Z}\,, \tag{3.72}\]
which holds as long as \(\omega r_{0}\ll 1\) and \(\Lambda^{n}\omega r_{0}\ll 1\). The property (3.72) implies that in the absence of backreaction the charge density at distances \(r\gg r_{0}\) satisfies a log-periodicity property similar to (3.70):
\[r^{d-1}\langle j_{0}(r)\rangle_{\mathbb{R}^{d}}\simeq(\Lambda^{-n}\,r)^{d-1}\langle j_{0}(\Lambda^{-n}\,r)\rangle_{\mathbb{R}^{d}}\qquad\text{for }n\in\mathbb{Z}\,. \tag{3.73}\]
This property (3.73) was formerly noticed in [69]. It reflects the aforementioned cyclic RG flow. In particular, there are \(n\kappa_{0}\) units of screening charge between some \(r\gg r_{0}\) and \(\Lambda^{-n}\,r\), for every \(n\).39
Footnote 39: To see this, consider introducing a mass \(M\) such that \(M_{n+1}\ll M\ll M_{n}\), for which thus \(n\kappa_{0}\) states dived into the negative energy continuum. Such a deformation provides an IR cutoff to the radius of the screening cloud at distances \(R\sim 1/M\) with \(1/M_{n}\ll R\ll 1/M_{n+1}\). Consistency demands that there are exactly \(n\kappa_{0}\) units of screening charge for \(r\lesssim R\). By iteration of this argument for different \(n\), we conclude that there must be \(\kappa_{0}\) units of screening charge localized at distances \(R\sim\Lambda^{-n}\,r_{0}\) for every \(n\).
Let us now define \(r_{1}\) to be the radius of the region inside which \(\kappa_{0}\) units of screening charge are contained; while its precise value depends on the boundary condition, we generically expect \(r_{1}\sim r_{0}/\Lambda\). According to the aforementioned periodicity property, it is not until exponentially larger distances \(r_{2}\simeq r_{1}/\Lambda\) that an additional \(\kappa_{0}\) units of charge get screened. In between we may thus safely assume that the Coulomb potential is well approximated by
\[A_{0}\simeq\frac{e^{2}(q-\kappa_{0})}{4\pi r}\qquad\text{for }r_{1}\ll r\ll r_{2}\,. \tag{3.74}\]
This implies that in computing the radius \(r_{2}\simeq r_{1}e^{\pi/\tilde{\nu}}\) we should use the value of \(\tilde{\nu}\) corresponding to the backreacted gauge field (3.74). This is a small correction to \(r_{2}\) itself in the double-scaling limit (3.2). We can now repeat this process self-consistently for \(r_{3}\), \(r_{4}\), etc., where \(r_{n}\) denotes the size of the region where \(n\kappa_{0}\) units of charge have been screened. In general, defining
\[\tilde{\nu}(n)=\sqrt{\frac{e^{4}(q-n\kappa_{0})^{2}}{(4\pi)^{2}}-m^{2}}\,, \tag{3.75}\]
this leads to the following equation
\[\log(r_{n}/r_{n-1})=\frac{\pi}{\tilde{\nu}(n)}\,. \tag{3.76}\]
For a sufficiently supercritical charge (but still such that \(\tilde{\nu}\ll 1\)) we may treat \(n\) as a continuous variable and approximate (3.76) with a differential equation
\[\frac{dn}{d\log(r)}\simeq\frac{\tilde{\nu}(n)}{\pi}=\frac{1}{\pi}\sqrt{\frac{e^{4}(q-n\kappa_{0})^{2}}{(4\pi)^{2}}-m^{2}}\,, \tag{3.77}\]
where \(-\kappa_{0}n(r)\) is the amount of screened charge at distance \(r\). We obtain the ratio \(r_{n}/r_{0}\) by integrating this equation:
\[\begin{split}\log(r_{n})-\log(r_{0})&\simeq\int_{0}^{n}dx\frac{\pi}{\sqrt{\frac{e^{4}(q-\kappa_{0}x)^{2}}{(4\pi)^{2}}-m^{2}}}\\ &=\frac{4\pi^{2}}{e^{2}\kappa_{0}}\left[\cosh^{-1}\left(\frac{q}{q_{c}}\right)-\cosh^{-1}\left(\frac{(q-\kappa_{0}n)}{q_{c}}\right)\right]\,,\end{split} \tag{3.78}\]
where we wrote the last line using the value for the critical charge \(q_{c}=4\pi m/e^{2}\). In particular we obtain an estimate for the total radius of the screening cloud by setting \(q-\kappa_{0}n=q_{c}\):
\[R_{cloud}\simeq r_{0}\exp\left[\frac{4\pi^{2}}{e^{2}\kappa_{0}}\cosh^{-1}\left(\frac{q}{q_{c}}\right)\right]\simeq r_{0}\exp\left[\frac{8\pi^{2}}{e^{2}\kappa_{0}}\sqrt{\frac{q-q_{c}}{2q_{c}}}\right]\,, \tag{3.79}\]
where we expanded for \(q/q_{c}-1\ll 1\). (3.78)-(3.79) were formerly derived in [68] via different, less direct means. (3.79) predicts an exponentially large cloud in the limit where \(e\) is infinitesimal and \(\tilde{\nu}=m\sqrt{q^{2}/q_{c}^{2}-1}\) is fixed (and small). Note that the exponent in (3.79) is larger by a factor of 2 than the naive estimate that does not account for backreaction \(R_{cloud}^{(naive)}\simeq\exp\left[\frac{q-q_{c}}{\kappa_{0}}\frac{\pi}{\tilde{\nu}(0)}\right]\simeq\exp\left[\frac{4\pi^{2}}{e^{2}\kappa_{0}}\sqrt{\frac{q-q_{c}}{2q_{c}}}\right]\).
The extrapolation of the first expression in (3.79) to \(q\gg q_{c}\) predicts a power law increase for the radius of the cloud \(R_{cloud}\sim r_{0}(2q/q_{c})^{4\pi^{2}/(e^{2}\kappa_{0})}\). In the future it would be interesting to compare this behavior with a more accurate analysis of the screened line, beyond the regime \(\tilde{\nu}\ll 1\). The numerical methods previously developed to study Fermi surfaces in AdS/CFT [70; 71; 72] might prove useful in this context.
We finally comment on the generalization to fermions with charge \(q_{\psi}>1\). In this case Wilson lines are screened down to the largest possible value \(q_{IR}\leq q_{c}\) compatible with the condition that the charge difference \(q-q_{IR}\) be quantized in units of \(q_{\psi}\). In particular, the IR limit of a supercritical Wilson line is always a non-trivial (as well as a non-topological) defect, in agreement with the general constraints discussed in section 2.6.
## 4 Non-Abelian gauge theory
### Non-Abelian saddle point
In this section we discuss the generalization of our analysis to weakly coupled non-Abelian conformal gauge theories in \(4d\),40 focusing on the illustrative case of an \(SU(2)\) gauge group. Schematically, the action of the models of interest is given by:
Footnote 40: Earlier discussions of instabilities for Wilson lines in non-Abelian gauge theories can be found in [73; 74; 75; 76].
\[\mathcal{L}_{bulk}=-\frac{1}{4g_{YM}^{2}}F^{a}_{\mu\nu}F^{\mu\nu}_{a}+\frac{ \theta}{32\pi^{2}}F^{a}_{\mu\nu}\tilde{F}^{\mu\nu}_{a}+\text{matter}\,, \tag{4.1}\]
where \(a=1,2,3\) and \(F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+\varepsilon^ {abc}A^{b}_{\mu}A^{c}_{\nu}\). Relevant examples of such theories include \(\mathcal{N}=4\) SYM and the \(\mathcal{N}=2\) SCFT with \(N_{f}=4\) hypermultiplets in the fundamental.
Our earlier analysis of the DCFT fixed points associated with a Wilson line in QED crucially relied on expanding the gauge field around a "Coulomb"-like fixed point. To do the same in the non-Abelian gauge theory we introduce a convenient representation of the line operator. Consider a Wilson line in the \((2s+1)\)-dimensional representation of \(SU(2)\)
\[W_{s}=\text{Tr}\left[Pe^{i\int dx^{\mu}A^{a}_{\mu}T^{a}}\right]\,, \tag{4.2}\]
where \(T^{a}\) form a spin-\(s\) representation of the \(SU(2)\) algebra. An equivalent representation of the defect (4.2) can be given in terms of a bosonic \(SU(2)\) doublet \(z=\{z_{1},z_{2}\}\) on the line, subject to the constraint \(\bar{z}z=2s\). In this formulation, the total action (bulk and defect) of the defect quantum field theory (DQFT) reads
\[S=S_{bulk}+\int d\tau\left[i\bar{z}\dot{z}+\bar{z}\frac{\sigma^{a}}{2}z\,A^{a}_ {\mu}\dot{x}^{\mu}\right]\,,\qquad\bar{z}z=2s\,, \tag{4.3}\]
where \(\sigma^{a}\) are Pauli matrices and \(x^{\mu}(\tau)\) is an affine parametrization of the line contour. The action (4.3) is invariant under \(SU(2)\) gauge transformations, and thanks to the constraint it is also invariant under the \(U(1)\) gauge transformations \(z\to e^{i\alpha(\tau)}z\). We refer the reader to [14; 77] for details on the equivalence between (4.2) and (4.3). In the representation (4.3), the color matrices are given by the following bilinear operator:41
Footnote 41: More precisely, (4.4) involves a point splitting procedure \(T^{a}=\lim_{\eta\to 0^{+}}\bar{z}(\tau+\eta)\frac{\sigma^{a}}{2}z(\tau)\), see [14]. This subtlety will not play a role in our analysis.
\[T^{a}(\tau)=\bar{z}(\tau)\frac{\sigma^{a}}{2}z(\tau). \tag{4.4}\]
Physically, the variable \(z\) is a quantum-mechanical representation of the color degrees of freedom of the heavy probe modeled by the Wilson line.
The representation (4.3) makes it straightforward to generalize the analysis of the previous sections to the non-Abelian case. Rescaling \(z\to\sqrt{s}z\) we recast (4.3) as
\[S=\frac{1}{g_{YM}^{2}}\hat{S}_{bulk}+s\int dt\left[i\bar{z}\dot{z}+\bar{z} \frac{\sigma^{a}}{2}z\,A^{a}_{0}\right]\,, \tag{4.5}\]
with \(\bar{z}z=2\), where we pulled out explicitly the coupling in front of the bulk action \(S_{bulk}=\hat{S}_{bulk}/g_{YM}^{2}\) and we assumed a straight line at \(r=0\). It is then clear that we can work in the double-scaling limit
\[g_{YM}^{2}\to 0\,,\quad s\to\infty\quad\text{with}\quad g_{YM}^{2}s=\text{ fixed}\,. \tag{4.6}\]
The saddle-point profile (assuming trivial values for the matter fields) takes the form
\[z=z_{0}=\text{const.}\,,\qquad A^{a}_{0}=\frac{g_{YM}^{2}s}{4\pi r}\,\bar{z}_ {0}\frac{\sigma^{a}}{2}z_{0}\,. \tag{4.7}\]
There is an \(S^{2}\) manifold of saddle points: this is accounted for by the integration over the zero modes which rotate the solution as \(z_{0}\to Uz_{0}\), where \(U\) is an arbitrary element of \(SU(2)\), modulo the \(U(1)\) gauge transformations. The integration over the zero modes has a trivial effect on (gauge-invariant) correlation functions. If we take only the first component of \(z_{0}\) to be non-zero we obtain \(A^{3}_{0}=g_{YM}^{2}s/(4\pi r)\), as in the Abelian case for charge \(q=s\).
On the saddle-point (4.7) we may then effectively decompose the matter fields according to their charge under the _unbroken_\(U(1)\) generated by the direction \(T^{a}\propto\bar{z}_{0}\frac{\sigma^{a}}{2}z_{0}\). For instance, a field in the fundamental decomposes into components of charge \(-1\) and charge \(1\), a field in the adjoint has a neutral component and charge \(\pm 2\) components, etc.42 The rest of the analysis thus proceeds as in the Abelian case. In particular we find that
* A scalar of isospin \(S\) (i.e. in the \(2S+1\)-dimensional representation of \(SU(2)\)) becomes tachyonic when \(\left|\frac{g_{YM}^{2}s}{2\pi}S\right|>1\);
* A fermion of isospin \(S\) leads to an instability for \(\left|\frac{g_{YM}^{2}s}{4\pi}S\right|>1\);
* The charged components of the vector bosons also become tachyonic when \(\left|\frac{g_{YM}^{2}s}{6\pi}S\right|>1\) (see appendix C for the derivation); these thresholds are collected in the sketch after this list.
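The following sketch (ours; the function name and interface are illustrative) simply packages the three thresholds above:

```python
import numpy as np

def critical_coupling(S, field):
    """Leading-order value of |g_YM^2 * s| above which a matter multiplet of
    isospin S destabilizes the SU(2) Wilson line saddle point, per the list above.
    field: 'scalar', 'fermion', or 'vector' (thresholds 2*pi/S, 4*pi/S, 6*pi/S)."""
    denom = {"scalar": 2 * np.pi, "fermion": 4 * np.pi, "vector": 6 * np.pi}
    return denom[field] / S

# Example: an isospin-1 scalar destabilizes the line once g_YM^2 * s > 2*pi,
# before fermions (4*pi) and vector bosons (6*pi) of the same isospin do.
print(critical_coupling(1.0, "scalar"))   # ~6.28
```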
Additionally, depending on the flavor group and the charge \(s\) of the line defect, several fixed points may exist; these are connected by RG flows analogous to the ones discussed in the Abelian case (including the one corresponding to screening).
The generalization to arbitrary gauge groups \(G\) is straightforward (still working in the semi-classical large-representation limit as above). In some cases, like symmetric and antisymmetric representations of \(SU(N)\), there is a simple worldline action for the Wilson line as in (4.3), while in other cases writing down a worldline action is more complicated (see, for instance, [78, 79]). If the Wilson line is in a representation \(R\), and \(\vec{\rho}_{R}\) is the highest weight vector of this representation, then without loss of generality we choose the Wilson line to generate an electric field in the Cartan subalgebra going as \(\vec{A}_{0}=g_{YM}^{2}\vec{\rho}_{R}/4\pi r\) (and we will have zero modes that rotate this inside the group, as in the discussion above). Here we normalized the roots to square to one, to agree with the \(SU(2)\) case discussed above. If we have in the bulk a scalar field in a representation \(r\) with weights \(\vec{\mu}_{r}\), then it obtains an effective mass on AdS\({}_{2}\) proportional to \(g_{YM}^{2}|\vec{\rho}_{R}\cdot\vec{\mu}_{r}|\), and we have an instability whenever for some component of this field this combination becomes larger than \(2\pi\). The analysis for fermions and gauge bosons is similar, with instabilities at \(4\pi\) and \(6\pi\), respectively, as above.
For example, if we consider the \(SU(N)\)\({\cal N}=4\) SYM theory, where all fields are in the adjoint representation, the first instability arises from the adjoint scalars, and specifically for the scalar with \(\vec{\mu}=\vec{\alpha}_{1}+\cdots+\vec{\alpha}_{N-1}\).43 If we write the highest weight vector of the Wilson line as \(\vec{\rho}_{R}=\sum_{k=1}^{N-1}\lambda_{k}\vec{\mu}_{k}\), where \(\lambda_{k}\) are non-negative integers and \(\vec{\mu}_{k}\) are the fundamental weights of \(SU(N)\) (satisfying \(\vec{\mu}_{k}\cdot\vec{\alpha}_{j}=\delta_{jk}\)), then the instability arises when \(g_{YM}^{2}\sum_{k=1}^{N-1}\lambda_{k}=4\pi\). For any fixed non-zero value of \(g_{YM}^{2}\), only a finite number of Wilson line representations lead to stable DCFTs.
Footnote 43: This is the generic case. For special weight vectors of the Wilson line, some other scalars will also become unstable at the same time, but not before.
The extrapolation44 of our results to the 't Hooft large \(N\) limit with fixed \(g_{YM}^{2}N\) suggests that we must consider representations with weights of order \(N\) in order to obtain instabilities. We will discuss the holographic interpretation of this below.
Footnote 44: Strictly speaking, the semiclassical saddle-point described in this section is only guaranteed to apply to the large representation limit at small coupling \(g_{YM}^{2}\) with fixed \(N\)[34]. We nonetheless expect the qualitative features of our results to survive in the ’t Hooft large \(N\) limit.
### Example: \(\mathcal{N}=4\) SYM
Let us discuss in more detail the concrete example of the \(\mathcal{N}=4\) SYM theory with gauge group \(SU(2)\). As is well known, the theory consists of 6 scalars \(\Phi_{i}\) in the \(\mathbf{6}\) of the \(SO(6)\simeq SU(4)\) \(R\)-symmetry group, 4 Dirac fermions in the \(\mathbf{4}\) of \(SU(4)\), and the non-Abelian gauge field (which is not charged under the \(R\)-symmetry). All matter fields are in the adjoint of the gauge group \(SU(2)\).
The \(\mathcal{N}=4\) SYM theory has famous half-supersymmetric Wilson lines which involve a coupling to the scalar fields that breaks the \(SU(4)\) R-symmetry; these always flow to stable DCFTs, and we will discuss them more below. Here we consider Wilson lines that preserve the \(SU(4)\) R-symmetry. This does not allow any coupling to single scalar fields on top of (4.2), but scalar and fermion bilinears are allowed.
Let us consider the (non-supersymmetric) Wilson line (4.2). According to the previous discussion, all matter fields are stable around the saddle-point (4.7) as long as \(g_{YM}^{2}|s|\leq 2\pi\), above which value the scalars develop an instability. It is instructive to analyze explicitly defect operators in this setup. For concreteness, we will focus on scalar bilinears and consider the case where standard boundary conditions are imposed on all fields. We denote the scalars as \(\Phi_{i}^{a}\), where \(a\) is the \(SU(2)\) index and \(i\) is an \(SO(6)\) index. In the formalism of (4.3), we can construct \(SU(2)\) invariants by contracting the \(SU(2)\) indices with the line color matrix \(T^{a}\), and the most general gauge-invariant defect operators made from two scalars take the form:
\[\mathcal{O}^{(1)}_{ij} =\frac{1}{s^{2}}\Phi_{i}^{a}\Phi_{j}^{b}T^{a}T^{b}=\mathcal{O}^{( 1)}_{ji}\,, \tag{4.8}\] \[\mathcal{O}^{(2)}_{ij} =\Phi_{i}^{a}\Phi_{j}^{a}-\frac{1}{s^{2}}\Phi_{i}^{a}\Phi_{j}^{b} T^{a}T^{b}=\mathcal{O}^{(2)}_{ji}\,,\] (4.9) \[\mathcal{O}^{(3)}_{ij} =\frac{1}{s}\varepsilon_{abc}\Phi_{i}^{a}\Phi_{j}^{b}T^{c}=- \mathcal{O}^{(3)}_{ji}\,. \tag{4.10}\]
Without loss of generality we can consider a saddle point such that \(T^{a}=s\delta_{3}^{a}\), since the zero-modes' integration does not affect gauge-invariant correlators. Decomposing with respect to the \(U(1)\) left unbroken by the saddle point (4.7), \(\Phi_{i}^{a}\) can be written in terms of a neutral component \(\phi_{i}^{3}\equiv\Phi_{i}^{3}\) and a charge 1 complex field \(\phi_{i}^{\pm}\equiv\frac{1}{\sqrt{2}}\left(\Phi_{i}^{1}\pm i\Phi_{i}^{2}\right)\). In terms of this decomposition the quadratic expansion of the operators in (4.8)-(4.10) is:
\[\mathcal{O}^{(1)}_{ij} =\phi_{i}^{3}\phi_{j}^{3}+\ldots\,, \tag{4.11}\] \[\mathcal{O}^{(2)}_{ij} =\left(\phi_{i}^{+}\phi_{j}^{-}+\phi_{j}^{+}\phi_{i}^{-}\right)+ \ldots\,,\] (4.12) \[\mathcal{O}^{(3)}_{ij} =i\left(\phi_{i}^{+}\phi_{j}^{-}-\phi_{j}^{+}\phi_{i}^{-}\right)+ \ldots\,. \tag{4.13}\]
From the analysis of the previous section we thus conclude that to leading order in the double-scaling limit (4.6) the dimension of the defect operators is
\[\Delta\left(\mathcal{O}^{(1)}_{ij}\right)=2\,,\qquad\Delta\left(\mathcal{O}^{( 2)}_{ij}\right)=\Delta\left(\mathcal{O}^{(3)}_{ij}\right)=1+\sqrt{1-\frac{g_{ YM}^{4}s^{2}}{4\pi^{2}}}\,. \tag{4.14}\]
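As a consistency check (our gloss, using the single-field Abelian result): in the background \(A_{0}^{3}=g_{YM}^{2}s/(4\pi r)\) each charged mode \(\phi_{i}^{\pm}\) behaves as a charged scalar on AdS\({}_{2}\) with

\[\Delta(\phi_{i}^{\pm})=\frac{1}{2}+\sqrt{\frac{1}{4}-\frac{g_{YM}^{4}s^{2}}{16\pi^{2}}}\quad\Longrightarrow\quad\Delta\left(\phi_{i}^{+}\phi_{j}^{-}\right)=2\Delta(\phi^{\pm})=1+\sqrt{1-\frac{g_{YM}^{4}s^{2}}{4\pi^{2}}}\,,\]

since defect bilinear dimensions simply add at leading order in the double-scaling limit; this reproduces (4.14), while the neutral bilinear \(\phi_{i}^{3}\phi_{j}^{3}\) retains its classical dimension 2.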
We can also consider more general fixed points where some operators are instead relevant and the \(SO(6)\) group is broken to a subgroup, analogously to the discussion in section 2.5.
Thus, as in the previous subsection, \(SO(6)\) preserving Wilson lines become unstable to scalar condensation for \(g_{YM}^{2}|s|>2\pi\). The screening mechanism is completely analogous to the one discussed for scalar QED with no quartic coupling; note in particular that the screening cloud naturally aligns on a flat direction, such that the potential trivializes \(V\sim\text{Tr}\{[\Phi_{i},\Phi_{j}]^{2}\}=0\). Therefore the IR DCFT admits a nontrivial one-point function for the scalar field, with a coefficient that depends on the initial charge and the boundary condition. Such a coefficient therefore represents a marginal parameter in the double-scaling limit. We expect that quantum corrections will lift this marginal direction. At a quantum level, the \(R\)-symmetry group is preserved by the scalar cloud according to the discussion in subsection 2.5.
The generalization of our discussion to \(SU(N)\) gauge group is straightforward, with (4.2) deformed by \(\Phi_{i}^{a}T^{a}\Phi_{j}^{b}T^{b}\) and all the other possible bilinears; the only difference is that for \(N>2\) there are more than three independent bilinear operators.
We close this section with some comments on a well studied generalization of the standard Wilson line (4.2), which also includes a coupling to the adjoint scalar:
\[W_{s}^{\text{BPS}}=\text{Tr}_{2s+1}\left[P\exp\left(\int_{\mathcal{C}}dt(i\dot {x}^{\mu}A_{\mu}^{a}+\zeta|\dot{x}|\Phi_{1}^{a})T^{a}\right)\right]\,. \tag{4.15}\]
The coupling to the scalar (conventionally chosen in the "1" direction) breaks the \(R\) symmetry group to \(SO(5)\). For \(\zeta=1\) the line (4.15) additionally preserves half of the supersymmetry charges, and many exact results are available about this case [78, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95].45 The coupling \(\zeta\) has a nontrivial beta function to one-loop order in perturbation theory, and there is a nontrivial RG flow from the standard fixed point at \(\zeta=0\) to the superconformal one at \(\zeta=1\)[99] (see also [34, 79, 100, 101]).
Footnote 45: For a general approach to supersymmetric line defects in diverse dimensions see [96, 97, 98].
Let us consider the operator (4.15) in the double-scaling limit (4.6). The main difference with respect to the previous case is that the saddle-point (4.7) now also includes a nontrivial scalar profile proportional to \(\zeta\):
\[\Phi_{i}^{a}=\frac{\zeta g_{YM}^{2}s}{4\pi r}\bar{z}_{0}\frac{\sigma^{a}}{2}z_ {0}\,. \tag{4.16}\]
It is now straightforward to repeat the analysis in section 2.1 and compute the scaling dimensions of the defect operators (4.8). Since to leading order in the double-scaling limit the running of the scalar coupling in (4.15) is negligible, we can write the result for arbitrary values of \(\zeta\). For \(\frac{g_{YM}^{4}s^{2}}{4\pi^{2}}\left(1-\zeta^{2}\right)<1\) the corresponding DCFT is unitary and the possible values of the scaling dimensions are given by
\[\Delta\left(\mathcal{O}_{ij}^{(1)}\right)=2\,,\qquad\Delta\left(\mathcal{O}_{ ij}^{(2)}\right)=\Delta\left(\mathcal{O}_{ij}^{(3)}\right)=1\pm\sqrt{1-\frac{g_{ YM}^{4}s^{2}}{4\pi^{2}}\left(1-\zeta^{2}\right)}\,, \tag{4.17}\]
where the sign in front of the square root for each operator depends on the boundary conditions as before. Importantly, for \(\zeta=1\) no value of \(s\) leads to an instability: this is a consequence of the supersymmetry preserved by the line, which, as is well known, ensures that the ground state has zero energy. Additionally, for \(\zeta=1\) there is always a unique Wilson line.
The final remark concerns the RG flow from \(\zeta=0\) to \(\zeta=1\) initially studied in [99]. Our analysis shows that the starting point of such an RG flow is ill-defined for \(s>2\pi/g_{YM}^{2}\). The running of \(\zeta\) can nonetheless be analyzed for smaller values of \(s\) in the double-scaling limit (4.6). This in general requires the analysis of one-loop corrections around the saddle-point profile; see [34] for recent progress in this direction.
### Holographic description
The \(SU(N)\)\(\mathcal{N}=4\) SYM theory is famously dual to type IIB string theory on \(AdS_{5}\times S^{5}\), with a fixed weakly coupled string theory background appearing in the 't Hooft limit of large \(N\) with fixed \(g_{YM}^{2}N\). Supersymmetric Wilson lines have been extensively discussed in this context, but for the non-supersymmetric Wilson lines (4.2) the discussion has mostly been limited to the fundamental representation. For that representation, as discussed in [102], the Wilson line (4.2) maps to a string ending on the appropriate contour on the boundary of \(AdS_{5}\), with Neumann boundary conditions for the \(S^{5}\) position of the string.46 This string is stable under \(SO(6)\)-preserving deformations, consistent with our discussion above.
Footnote 46: As opposed to the supersymmetric Wilson loop that obeys Dirichlet boundary conditions.
Our discussion suggests that instabilities should occur for non-supersymmetric Wilson lines with weights of order \(N\). (The stability of Wilson lines in small representations follows simply from the large \(N\) factorization of correlation functions.) Supersymmetric WLs with weights of that order may be described by D-branes [78; 103; 104]; for instance, the supersymmetric WL in the \(k\)'th anti-symmetric representation is described by a D5-brane wrapping an \(S^{4}\subset S^{5}\) (and carrying some electric field that gives it the appropriate fundamental string charge). It seems natural to conjecture that non-supersymmetric Wilson lines in representations with weights of order \(N\) would be described by non-BPS D-branes wrapping the \(S^{5}\); for instance, the straight anti-symmetric representation Wilson line may be described by a non-BPS D6-brane on AdS\({}_{2}\times S^{5}\) (with an appropriate electric field on AdS\({}_{2}\)). At large 't Hooft coupling, where the \(S^{5}\) is weakly curved, any such non-BPS D-brane has a tachyonic instability, and it is tempting to identify this with the instability discussed above; note that for large 't Hooft coupling any WL with weights of order \(N\) is expected to be unstable. Condensation of the tachyon in a non-uniform fashion that breaks \(SO(6)\) to \(SO(5)\) can describe the flow to the supersymmetric WLs, while the end-point of an \(SO(6)\)-preserving tachyon condensation is less clear. It would be interesting to study further the holographic description of non-supersymmetric WLs in various representations and their instabilities.
## 5 2+1 dimensional CFTs
In this section we analyze Wilson lines in \(2+1\) dimensional CFTs, focusing on Abelian gauge theories. The main difference with respect to four-dimensional theories is that the standard kinetic term \(F_{\mu\nu}^{2}\) for the gauge field is not conformal. Nonetheless, Abelian gauge fields lead to interesting conformal fixed points in Chern-Simons theories or when they interact with matter fields in certain strongly coupled models, some of which may be analyzed perturbatively in a large \(N_{f}\) expansion. Additionally, it is possible to couple Abelian gauge fields in four dimensions to matter fields confined on a three-dimensional interface, a setup that for instance describes the long wavelength limit of graphene. We discuss several examples below.
### Chern-Simons theories with and without matter
Here we review some of the properties of Chern-Simons theories with gauge group \(U(1)\) coupled to a fermion or a scalar, and then study the Wilson line operators of these theories.
The Chern-Simons term at level \(k\) is given by
\[kS_{CS}=\frac{k}{4\pi}\int d^{3}x\epsilon^{\mu\nu\rho}A_{\mu}\partial_{\nu}A_{ \rho}. \tag{5.1}\]
Without additional matter fields it describes a topological theory. Wilson lines with electric charge \(q\) harbor magnetic flux \(-2\pi q/k\). The magnetic flux is localized to the worldline. This familiar fact can be reproduced by solving the equations of motion in the presence of the line operator \(e^{iq\int dtA_{0}}\), leading to
\[\frac{k}{2\pi}F_{xy}=q\delta^{2}(x_{\perp})\,, \tag{5.2}\]
where \(x_{\perp}\) stands for the coordinates on the plane \((x,y)\). In other words, we have a holonomy in polar coordinates:
\[A_{\theta}=\frac{q}{k}. \tag{5.3}\]
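Spelling out this step (our addition): integrating (5.2) over a disk \(D_{r}\) of radius \(r\) around the line and using Stokes' theorem,

\[\oint_{\partial D_{r}}A_{\theta}\,d\theta=\int_{D_{r}}F_{xy}\,d^{2}x=\frac{2\pi q}{k}\quad\Longrightarrow\quad A_{\theta}=\frac{q}{k}\,.\]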
Since there are no charged matter fields in this theory, Wilson lines with arbitrary \(q\) do not lead to instabilities. Note that for \(q=nk\), with \(n\in\mathds{Z}\) the Wilson line harbors an integer multiple of the \(2\pi\) flux unit, which is why such a line is transparent in the language of topological field theory. Such a Wilson line is equivalent to a shift of the gauge field by an integral holonomy \(A_{\theta}\to A_{\theta}+n\). Even in the presence of dynamical matter fields \(\Phi_{a}\) (of any spin) an integral holonomy \(A_{\theta}=n\) does not have any physical consequence, as it can be eliminated via a field redefinition of the form \(\Phi_{a}\to e^{-in\theta}\Phi_{a}\). (Note that this redefinition preserves the boundary conditions around the defect only for \(n\in\mathds{Z}\).) More generally this implies that Wilson lines with different charges in Chern-Simons matter theories are identified modulo \(k\): \(q\sim q+k\). This observation will be important in section 5.2.3.
Adding matter fields to (5.1) famously leads to a very rich set of nontrivial conformal field theories in 2+1 dimensions. These theories can be analyzed perturbatively for \(k\gg 1\). Let us consider adding \(N_{f}\) scalar fields \(\Phi^{i}\) of charge 1 under the gauge symmetry and \(N_{f}\) Dirac fermions \(\psi^{i}\) of charge 1. There are four independent \(SU(N_{f})\)-invariant marginal terms in the bulk:
\[\bar{\psi}_{i}\psi^{i}\Phi^{\dagger}_{k}\Phi^{k}\,,\,\bar{\psi}_{i}\psi^{j}\Phi^{ \dagger}_{j}\Phi^{i}\,,\,(\bar{\psi}_{i}\bar{\psi}_{j}\Phi^{i}\Phi^{j}+\psi^{i} \psi^{j}\Phi^{\dagger}_{i}\Phi^{\dagger}_{j})\,,\,(\Phi^{\dagger}_{i}\Phi^{i}) ^{3}. \tag{5.4}\]
The system of beta functions was written in [105]. At large \(k\) and fixed \(N_{f}\), there are perturbative fixed points where all the couplings with fermions scale like \(1/k\) while the sextic coupling scales like \(1/k^{2}\). Curiously, without fermions, no perturbative fixed points exist. In the following we will not need to know the precise value of the couplings at which the fixed points appear. The action reads
\[\begin{split} S=kS_{CS}+\int d^{3}x\left[|D\Phi_{i}|^{2}+i\bar{ \psi}_{i}\not{D}\psi^{i}+\frac{\alpha}{k}\bar{\psi}_{i}\psi^{i}\Phi^{\dagger}_ {k}\Phi^{k}+\frac{\beta}{k}\bar{\psi}_{i}\psi^{j}\Phi^{\dagger}_{j}\Phi^{i} \right.\\ \left.+\frac{\gamma}{4k}(\bar{\psi}_{i}\bar{\psi}_{j}\Phi^{i} \Phi^{j}+\psi^{i}\psi^{j}\Phi^{\dagger}_{i}\Phi^{\dagger}_{j})-\frac{h}{6k^{2} }(\Phi^{\dagger}_{i}\Phi^{i})^{3}\right]\,,\end{split} \tag{5.5}\]
where we normalized the couplings so that the coefficients \(\alpha\), \(\beta\), \(\gamma\) and \(h\) are all \(O(1)\) at the fixed points of interest.
Here we would like to make some observations about Wilson lines in these theories. Let us consider a Wilson line of a charge \(q\) particle. This amounts to again deforming the action (5.5) by \(-q\int A_{0}\delta^{2}(x_{\perp})\). It is convenient to normalize the fields \(\Phi\to\sqrt{k}\Phi\) and \(\psi\to\sqrt{k}\psi\). Then all four vertices (5.4) become of order \(O(k)\) and hence the action admits a double scaling limit for large \(k\) and fixed \(q/k\):
\[\begin{split} S=k\Bigg{\{}S_{CS}+\int d^{3}x\left[|D\Phi|^{2}+i \bar{\psi}\not{D}\psi+\alpha\bar{\psi}_{i}\psi^{i}\Phi^{\dagger}_{k}\Phi^{k}+ \beta\bar{\psi}_{i}\psi^{j}\Phi^{\dagger}_{j}\Phi^{i}\right.\\ \left.+\frac{1}{4}\gamma(\bar{\psi}_{i}\bar{\psi}_{j}\Phi^{i} \Phi^{j}+\psi^{i}\psi^{j}\Phi^{\dagger}_{i}\Phi^{\dagger}_{j})-\frac{h}{6}( \Phi^{\dagger}_{i}\Phi^{i})^{3}\right]-\frac{q}{k}\int d^{3}xA_{0}\delta^{2}(x _{\perp})\Bigg{\}}\,.\end{split} \tag{5.6}\]
The parameters \(\alpha,\beta,\gamma,h\) are all \(O(1)\) in this normalization.
The classical solution we will be expanding about has \(\Phi=0\), fermions in their ground state, and the gauge field given by (5.3). The fluctuations around this background can be analyzed by dropping nonlinear terms in the action, as in section 2.1. Consider first the scalar field. We decompose the field in components with different angular momenta as
\[\Phi=\sum_{\ell=-\infty}^{\infty}\frac{e^{i\ell\theta}}{\sqrt{r}}R_{\ell}(t,r)\,, \tag{5.7}\]
where we can interpret the \(\{R_{\ell}(t,r)\}\) as the KK modes of the scalar for the theory on AdS\({}_{2}\times S^{1}\). Going to frequency space \(R_{\ell}(t,r)=e^{-i\omega t}R_{\ell}(r)\), the linearized equations of motion read
\[-\partial_{r}^{2}R_{\ell}+\frac{-\frac{1}{4}+\left(\ell-\frac{q}{k}\right)^{2} }{r^{2}}R_{\ell}=\omega^{2}R_{\ell}\,. \tag{5.8}\]
The coefficient of the \(1/r^{2}\) term in (5.8) corresponds to the AdS\({}_{2}\) mass of the \(\ell\)'th KK mode. Since this coefficient never falls below the BF bound, \(-1/4\), for any \(\ell\) and \(q/k\), we find there is no perturbative instability for the Wilson line. Note however that for \(q=0\) the mass of the \(\ell=0\) KK mode sits exactly at the BF bound, and thus may be destabilized by arbitrarily small perturbations. This observation will be important for the theories that we analyze in the next subsections.
There is still more to say, since the mode \(R_{\ell}\) admits two possible conformal boundary conditions if \(|\ell-\frac{q}{k}|<1\). Let us focus first on standard boundary conditions. Proceeding as in the previous sections, from (5.8) we find the following scaling dimensions for (non-gauge-invariant) defect operators:
\[\Delta(D_{z}^{\ell}\Phi)=\frac{1}{2}+\left|\ell-\frac{q}{k}\right|\,,\quad\Delta(D_{\bar{z}}^{\ell} \Phi)=\frac{1}{2}+\left|-\ell-\frac{q}{k}\right|\,, \tag{5.9}\]
where we denoted the transverse complex coordinate by \(z=x-iy\) (and \(\bar{z}=x+iy\)). Note that for small \(q/k\) (5.9) indeed corresponds to small corrections to the classical dimension \(\frac{1}{2}+|\ell|\).
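Spelling out the step connecting (5.8) to (5.9) (our gloss): the coefficient of \(1/r^{2}\) in (5.8) is the AdS\({}_{2}\) mass squared of the mode, so

\[\Delta(\Delta-1)=m^{2}_{AdS_{2}}=-\frac{1}{4}+\left(\ell-\frac{q}{k}\right)^{2}\quad\Longrightarrow\quad\Delta_{\pm}=\frac{1}{2}\pm\left|\ell-\frac{q}{k}\right|\,,\]

with standard boundary conditions selecting \(\Delta_{+}\).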
Consider now deforming the action by quadratic terms corresponding to operators with \(0<|\ell-\frac{q}{k}|<1/2\). Since \(\ell\) is integral, there is at most one \(\ell\) satisfying \(|\ell-\frac{q}{k}|<1/2\); we denote it by \(\ell_{0}\) and assume \(\ell_{0}>0\) for simplicity. We can then deform the line operator by \(\delta S\propto\int dtD_{z}^{\ell_{0}}\Phi D_{\bar{z}}^{\ell_{0}}\Phi^{\dagger}\). This deformation is irrelevant, but formally it leads to a UV fixed point, at which the scaling dimension is flipped for \(\ell=\ell_{0}\), i.e. \(\Delta(D_{z}^{\ell_{0}}\Phi)=\frac{1}{2}-|\ell_{0}-\frac{q}{k}|\), while the scaling dimension of the operators with \(\ell\neq\ell_{0}\) remain the same as in the infrared fixed point. The dimension of the bilinear \(D_{z}^{\ell_{0}}\Phi D_{\bar{z}}^{\ell_{0}}\Phi^{\dagger}\) in the UV fixed point is \(1-2|\ell_{0}-\frac{q}{k}|<1\), which is positive only for \(|\ell_{0}-\frac{q}{k}|<1/2\). Note however that as we change \(|q/k|\) quartic operators may become relevant, and the fate of the ultraviolet fixed point has to be re-considered similarly to section 2.2.
As in the analysis of 4d QED, we can ask what happens when we deform the ultraviolet defect fixed point by a negative coupling double-trace deformation. Let us focus on the case where \(|\frac{q}{k}|<1/2\), so that the operator with flipped scaling dimension at the UV fixed point is \(\Phi^{\dagger}\Phi\). The perturbation we consider is given by (in Minkowski signature)
\[S_{DCFT_{UV}}\to S_{DCFT_{UV}}+f^{1-\Delta_{\Phi^{\dagger}\Phi}}\int_{r=0}\!\! \!\!dt\,\Phi^{\dagger}\Phi\,, \tag{5.10}\]
where \(f>0\) and \(\Delta_{\Phi^{\dagger}\Phi}=1-2|\frac{q}{k}|\). As in four dimensions, this deformation leads to a classical instability of the vacuum, and the new ground state is provided by a nontrivial solitonic profile. This profile can be interpreted as an RG flow to a different, screened, defect. Unlike in scalar QED\({}_{4}\), here we do not solve for the RG flow numerically, and we content ourselves with providing the endpoint of this flow, focusing on the theory with a single scalar and nonzero sextic coupling \(\frac{h}{6}\left(\Phi^{\dagger}\Phi\right)^{3}\). In this case, a straightforward asymptotic analysis of the equations of motion shows that at large distances \(r\) from the defect both the electric and magnetic fields decay faster than \(1/r^{2}\), hence the gauge field is fully
screened. Instead, the scalar decays according to a conformal scaling law with a coefficient which does not depend on \(q\):
\[\langle\Phi^{\dagger}\Phi(r)\rangle=\frac{h^{-1/2}/2}{r}\quad\text{for }r\gg f^{-1}\,. \tag{5.11}\]
(5.11) represents a nontrivial one-point function for the scalar field, which Higgses the gauge field close to the defect. Note that the coefficient is fixed in terms of the bulk coupling.47
Footnote 47: Interestingly, we can represent the boundary conditions leading to (5.11) in terms of the following defect:
\[\mathcal{D}=\exp\left[i\pi\int_{r=0}\!\!dt\,\Phi^{\dagger}\Phi\right]\,. \tag{5.12}\]
To prove that the defect (5.12) indeed leads to (5.11) it is enough to take the variation of the scalar action including the defect term (5.12). Upon regularizing the defect by introducing an infinitesimal thickness \(r_{0}\), to be taken to zero at the end of the calculation, it is easily seen that the variations of both the boundary term and the bulk action vanish on a solution of the form (5.11). The subleading falloffs of the scalar and the gauge field depend on the ratio \(h/(q/k)\) and we will not discuss them here.
An analogous discussion applies to the fluctuations of the Dirac field. We find that no value of \(q/k\) leads to an instability. In terms of the decomposition discussed in section 3.1, we see that the holonomy (5.3) simply shifts the AdS\({}_{2}\) mass of the \((\pm,\ell)\) mode as \(m_{\ell}^{(\pm)}\to m_{\ell}^{(\pm)}\pm q/k\), where \(m_{\ell}^{(\pm)}=\frac{1}{2}+\ell=|j|\) in terms of the angular momentum \(j\in\frac{1}{2}+\mathbb{Z}\). We conclude that the scaling dimensions of defect operators corresponding to KK modes with different spin are given by
\[\Delta_{j}=\frac{1}{2}+\left|j+\frac{q}{k}\right|\,, \tag{5.13}\]
which for small \(q/k\) is a small perturbation of the free theory result. As in the scalar case, when \(\left|j+\frac{q}{k}\right|<1/2\) for some \(j=j_{0}\) there exists a UV fixed point on the Wilson line, which flows to the infrared one via a double-trace deformation.
### Large \(N_{f}\) critical points
#### 5.2.1 QED\({}_{3}\) with \(2N_{f}\) fermions
Abelian gauge fields coupled to matter fields are expected to lead to several interesting conformal fixed points. Perhaps the most interesting and well studied example is given by QED\({}_{3}\) with \(2N_{f}\) charge 1 Dirac fields (complex fermions with two components):
\[\mathcal{L}=i\sum_{a=1}^{2N_{f}}\bar{\Psi}_{a}\left(\not{\partial}-i\not{A} \right)\Psi_{a}\,, \tag{5.14}\]
where we omitted the kinetic term for the gauge field, since it is irrelevant in the sense of the RG. The theory (5.14) enjoys an \(SU(2N_{f})\) internal symmetry and is parity invariant; Chern-Simons terms are therefore disallowed.
The model (5.14) is believed to flow to an interacting CFT, at least for sufficiently large \(N_{f}\). The theory (5.14) can be studied perturbatively in the \(\varepsilon\)-expansion [106] and in the large \(N_{f}\) limit [107]. We will focus on the latter limit in what follows.
To leading order in \(N_{f}\) the fermions behave as free fields. The gauge field has a more interesting large \(N_{f}\) limit instead. To see this, it is convenient to integrate out the fermions in (5.14) and write a nonlocal action for the gauge field. In Euclidean signature this reads:
\[\begin{split} S_{eff}[A]&\equiv-2N_{f}\text{Tr} \left[\log\left(\not{\partial}-i\not{A}\right)\right]\\ &=\text{const.}+\frac{N_{f}}{16}\int\frac{d^{3}k}{(2\pi)^{3}}A_{ \mu}(k)|k|\left(\delta^{\mu\nu}-\frac{k^{\mu}k^{\nu}}{k^{2}}\right)A_{\nu}(-k) +\ldots\,,\end{split} \tag{5.15}\]
where in the second line we expanded around \(A_{\mu}=0\) and computed the loop integral with a gauge-invariant regulator. Therefore, to leading order in \(1/N_{f}\), the gauge field two-point function is given by:
\[\langle A_{\mu}(k)A_{\nu}(-k)\rangle=N_{f}\frac{8}{|k|}\left(\delta_{\mu\nu}- \frac{k_{\mu}k_{\nu}}{k^{2}}\right)+\text{gauge dependent terms}\,. \tag{5.16}\]
Equivalently, the result (5.16) can be seen as the resummation of infinitely many bubble diagrams with fermion loops.
We now consider the theory (5.14) in the presence of a Wilson line. Upon integrating out the fermions, the Euclidean action reads:
\[S_{q}[A]=-2N_{f}\text{Tr}\left[\log\left(\not{\partial}-i\not{A}\right) \right]+iq\int d\tau A_{0}\,, \tag{5.17}\]
where the factor of \(i\) in front of \(q\) is due to the Euclidean signature of the metric and we assume \(q\sim N_{f}\). The gauge field sourced by the Wilson line is determined by the following nonlocal equation:
\[\frac{\delta\text{Tr}\left[\log\left(\not{\partial}-i\not{A}\right)\right]}{ \delta A_{\mu}(x)}=i\delta_{0}^{\mu}\frac{q}{2N_{f}}\delta^{2}(x_{\perp})\,. \tag{5.18}\]
Because of conformal invariance, the field which solves (5.18) is Coulomb-like
\[F_{\tau i}=iE\frac{x^{i}}{r^{3}}\,, \tag{5.19}\]
where \(E\) is a nontrivial function of \(q/N_{f}\). For \(q/N_{f}\ll 1\), we can linearize the fluctuation determinant using (5.15) and solve for \(E\):
\[E=\frac{4q}{\pi N_{f}}+O\left(\frac{q^{2}}{N_{f}^{2}}\right)\,. \tag{5.20}\]
According to the general analysis in section 3.2.1, in the presence of the electric field (5.19), the scaling dimensions of the single-trace defect operators with spin \(j=\pm\frac{1}{2}\) are given by
\[\Delta=\frac{1}{2}+\sqrt{\frac{1}{4}-E^{2}}\,. \tag{5.21}\]
When the electric field becomes as large as \(|E|=1/2\) the \(j=\pm 1/2\) modes of the Dirac field develop an instability. In the following we would like to determine the critical value of \(q/N_{f}\) for which \(|E|=1/2\). Clearly, the linearized approximation (5.20) is not enough and we need to solve the saddle-point equation (5.18) in the nonlinear regime.
To this aim, we have to compute the fluctuation determinant in (5.17) for arbitrary values of \(E\). This can be conveniently done by exploiting Weyl invariance to map the theory to AdS\({}_{2}\times S^{1}\). We provide details on the calculation in appendix D. The result for \(E=E(q/N_{f})\) is shown in blue in figure 11, where we also compare with the linearization (5.20) (in red). We find that the critical value for the instability is found at:
\[\left|E\left(\frac{q_{c}}{N_{f}}\right)\right|=\frac{1}{2}\quad \Longrightarrow\quad\frac{|q_{c}|}{N_{f}}\simeq 0.56\,. \tag{5.22}\]
We also comment that the functional determinant develops an imaginary part for \(E>1/2\), in agreement with the existence of an instability.
Note that the result for the critical charge in (5.22) is larger than the one obtained by naively extrapolating the linear approximation (5.20). In particular, there are more than \(N_{f}\) independent stable lines (counting both positive and negative values of \(q\)). An intuitive justification for this fact is as follows. Imagine adding a mass term to the model (5.14) in a maximally parity breaking form. As is well known, integrating out all the fermions in this setup results in a \(U(1)_{N_{f}}\) Chern-Simons theory in the IR. The latter is a topological theory which admits \(N_{f}\) independent Wilson lines. Therefore it is natural to expect that the number of independent stable lines in the UV theory should also be at least \(N_{f}\).
Figure 11: Plot of the value of \(E\) (in blue) in (5.19) as a function of \(q/N_{f}\) (for \(q>0\)), as determined from (5.18). The red line corresponds to the linearized result (5.20); as expected, the two curves agree for small \(q/N_{f}\).
Interestingly, the linear extrapolation (5.20) would give fewer than \(N_{f}\) stable lines (as is easily read off from the red line in figure 11), which would be in tension with the above RG argument.
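To make the counting explicit (our arithmetic, using only the numbers quoted above):

```python
import numpy as np

# Linearized estimate (5.20): E = 4q/(pi*N_f) reaches the critical value 1/2 at
q_over_Nf_linear = np.pi / 8     # ~0.393

# Full determinant result (5.22):
q_over_Nf_full = 0.56

# Stable lines per N_f, counting both signs of q:
print(2 * q_over_Nf_linear)      # ~0.785*N_f  < N_f: in tension with the RG argument
print(2 * q_over_Nf_full)        # ~1.12*N_f  >= N_f: consistent with it
```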
The most physically interesting value of \(N_{f}\) for the model at hand is \(N_{f}=2\)[108; 109]. In this case the result (5.22) suggests the existence of two nontrivial Wilson lines. It would be interesting to analyze the effect of subleading corrections in \(1/N_{f}\) for this prediction.
Finally, we comment that for \(0<|E|<1/2\) the \(j=\pm 1/2\) modes admit alternate boundary conditions on the line, which lead to several ultraviolet fixed points on the defect. Since the fluctuation determinant in (5.18) implicitly depends on the boundary conditions of the Dirac field at the defect, the relation between the electric field and \(q/N_{f}\) at these fixed points is different from the one at the infrared fixed point shown in figure 11. It would be interesting to determine the new curve, which is constrained to have the same endpoint at \(E=1/2\) as the one in figure 11.
#### 5.2.2 Comments on scalar QED\({}_{3}\)
We now consider theories with \(N_{f}\gg 1\) charged scalars \(\Phi_{a}\) coupled to an Abelian gauge field:
\[\mathcal{L}=\sum_{a=1}^{N_{f}}|D_{\mu}\Phi_{a}|^{2}-V(|\Phi_{a}|^{2})\,, \tag{5.23}\]
where \(D_{\mu}=\partial_{\mu}-iA_{\mu}\) and we omitted again the kinetic term for the gauge field. The theory (5.23) admits several multicritical fixed points in the large \(N_{f}\) limit depending on the potential \(V(|\Phi_{a}|^{2})\), which may break the \(SU(N_{f})\) symmetry, see e.g. [110]. Parity forbids a Chern-Simons term as in (5.14).
As before, to leading order in \(N_{f}\) the dynamics of the gauge field follows from integrating out the scalar fields in (5.23), leading to a propagator for the gauge field proportional to (5.16). Below we briefly comment about the fate of Wilson lines in this class of theories.
Let us consider first the tricritical theory, which is defined by \(V(|\Phi_{a}|^{2})=0\) at large \(N_{f}\)[110]. It is easy to see that all Wilson lines are unstable in the double-scaling limit \(q\to\infty\), \(N_{f}\to\infty\) with \(q/N_{f}=\text{fixed}\). By conformal invariance, inserting a Wilson line results in a Coulomb field of the form (5.19). For \(q=0\) the singlet scalar bilinear \(\sum_{a}|\Phi_{a}|^{2}\) has dimension \(\Delta=1\) and provides a marginal deformation of the trivial line defect. Therefore the AdS\({}_{2}\) mass of the \(\ell=0\) mode of the scalar sits exactly at the BF bound for \(q=0\), and for any value of \(q\neq 0\) the electric field leads to an instability.48
Footnote 48: Equivalently, this means that the one-loop determinant of the scalar fields is complex for arbitrary (real) values of \(E\)[111].
The instability does not imply necessarily that Wilson lines are trivial in the infrared. Indeed, as in subsection 2.5, for \(q\neq 0\) mod \(N_{f}\) the endpoint operators transform in a nontrivial projective representation of \(PSU(N_{f})\). Therefore, even if the electric and scalar fields were fully screened, there must be a 0+1 dimensional system on the line furnishing a representation of \(SU(N_{f})\) with \(q\) mod \(N_{f}\) boxes. Since the operator in the adjoint of \(SU(N_{f})\)\(\Phi_{a}\Phi_{b}^{\dagger}-\frac{1}{N_{f}}\delta_{a}^{b}|\Phi|^{2}\) has scaling dimension 1 in the large \(N_{f}\) limit, it can couple to the 0+1 dimensional system
via a marginal coupling and one is required to consider \(1/N_{f}\) corrections to understand the true infrared limit, and whether there is a fixed point akin to the spin impurity fixed points discussed in [14; 34; 112].
The situation is different in the presence of a nontrivial potential. Consider in particular the critical theory, which is obtained by including an \(SU(N_{f})\)-invariant deformation \(V=\lambda(\sum_{a}|\Phi_{a}|^{2})^{2}\). Via a standard Hubbard-Stratonovich transformation, the singlet scalar bilinear \(\sum_{a}|\Phi_{a}|^{2}\) is seen to have dimension \(\Delta=2\) for \(N_{f}\to\infty\) at the IR fixed point. For small \(|q|/N_{f}\) the dimension of the singlet bilinear defect operator can be obtained perturbatively, i.e. \(\Delta=2+O(q^{2}/N_{f}^{2})\), and no instability is expected until a critical value \(|q|=q_{c}\sim N_{f}\). We thus expect \(\sim N_{f}\) stable Wilson lines at the critical fixed point.
As in the tricritical theory, for \(q\neq 0\) mod \(N_{f}\) we have a projective representation of \(PSU(N_{f})\) living on the line, and again (around the trivial saddle-point \(\Phi=A_{\mu}=0\)) there is a marginal coupling due to the adjoint bilinear, which might be important to take into account at the next order in \(1/N_{f}\).
In the future it would be interesting to compute the value of \(q_{c}/N_{f}\) similarly to the previous section in the critical theory with \(V=\lambda(\sum_{a}|\Phi_{a}|^{2})^{2}\).49 This analysis might provide hints about the fate of Wilson lines in the \(N_{f}=1\) theory, i.e. the Abelian-Higgs model. Because of particle-vortex duality, Wilson lines in the critical Abelian-Higgs model should correspond to defects in the \(O(2)\) model. Some implications of the duality for Wilson lines were discussed in [113].
Footnote 49: To this aim, one would also need to compute the one-point function of the Hubbard-Stratonovich field \(\sigma\sim\sum_{a}|\Phi_{a}|^{2}\) for \(q\neq 0\), since this contributes to the AdS\({}_{2}\) mass of the fundamental fields similarly to [50].
#### 5.2.3 \(U(1)_{k}\) with \(2N_{f}\) fermions
As a final example, we consider the theory that we obtain upon adding a level \(k\) Chern-Simons term to the action (5.14). In the presence of a charge \(q\) Wilson line the action reads
\[S_{q}=\int d^{3}x\left[i\sum_{a=1}^{2N_{f}}\bar{\Psi}_{a}\left(\not{\partial}- i\not{A}\right)\Psi_{a}+\frac{k}{4\pi}\varepsilon^{\mu\nu\rho}A_{\mu}\partial_{ \nu}A_{\rho}\right]-q\int dtA_{0}\,. \tag{5.24}\]
The theory (5.24) admits a natural triple scaling limit for \(N_{f}\sim k\gg 1\) with \(q/k\) (and hence \(q/N_{f}\)) fixed. We would like to determine for which values of \(q\) there is an instability of the matter fields in this limit. To this aim we analyze below the response of the gauge field to the Wilson line.
Despite the similarity with the action (5.14), the Chern-Simons term has a remarkable consequence as explained in section 5.1: Wilson lines with different charges are identified modulo \(k\). We therefore need to analyze only Wilson lines with charges \(-\frac{|k|}{2}<q\leq\frac{|k|}{2}\).
Proceeding as in the previous section, the equation of motion for the gauge field in Euclidean signature can be written as
\[2\frac{\delta\text{Tr}\left[\log\left(\not{\partial}-i\not{A}\right)\right]}{ \delta A_{\mu}(x)}-i\frac{k}{4\pi N_{f}}\varepsilon^{\mu\nu\rho}F_{\nu\rho}=i \delta_{0}^{\mu}\frac{q}{N_{f}}\delta^{2}(x_{\perp})\,. \tag{5.25}\]
The most general solution consistent with conformal invariance is given by a Coulomb field with a holonomy in the angular direction (which is allowed since the Chern-Simons term breaks parity):
\[F_{\tau r}=i\frac{E}{r^{2}}\qquad\text{and}\qquad A_{\theta}=b=\text{const.}\,, \tag{5.26}\]
where both \(E\) and \(b\) are functions of \(k/N_{f}\) and \(q/N_{f}\). Note that the fluctuation determinant \(\text{Tr}\left[\log\left(\not{\partial}-i\not{A}\right)\right]\) is a periodic function of \(b\), according to the earlier discussion.
In the presence of the electromagnetic field (5.26), the scaling dimension of the defect operator corresponding to the spin \(j\) KK mode of the fermion is given by
\[\Delta_{j}=\frac{1}{2}+\sqrt{\left(j+b\right)^{2}-E^{2}}\,,\qquad j\in\frac{1} {2}+\mathds{Z}\,. \tag{5.27}\]
The theory develops an instability towards charge screening when \(\left(j+b\right)^{2}-E^{2}<0\) for some value of \(j\).
When \(|q|\ll|k|,N_{f}\) we can linearize the fluctuation determinant and find explicitly the values of \(E\) and \(b\) which solve (5.25):
\[E\simeq\frac{4\pi N_{f}q}{16k^{2}+\pi^{2}N_{f}^{2}}\,,\qquad b\simeq\frac{16 kq}{16k^{2}+\pi^{2}N_{f}^{2}}\,. \tag{5.28}\]
It is less trivial to solve (5.25) for general values of \(q\). In practice, rather than solving (5.25) for fixed values of \(k_{R}\equiv k/N_{f}\) and \(q_{R}\equiv q/N_{f}\), it is easier to do the opposite. Namely, given a certain value of \(E\) and \(b\) for which \(\left(j+b\right)^{2}-E^{2}>0\) for all \(j\in\mathbb{Z}+\frac{1}{2}\), we determine the values of \(k\) and \(q\) that solve (5.25). Since, as we explained before, an integral holonomy is unphysical, it is enough to determine the region \(R\) in the \((k,q)\) plane where \(b\in(-1/2,1/2)\) and \(|E|<1/2-|b|\), so that no mode is tachyonic.
In appendix D we compute numerically the functional determinant and determine the region \(R\). The result is perhaps surprising. We find that the region \(R\) spanned by the possible values of \((k_{R},q_{R})\) strictly includes the one specified by the inequality \(-|k|/2\leq q\leq|k|/2\), which sets the number of independent Wilson lines. This implies that there exists at least one real stable saddle-point solution to (5.25) for all Wilson lines (remember that \(q\sim q+k\)). Additionally, the region \(R\) includes points where \(|q|>|k|/2\). Since the region \(R\) is obtained by restricting the value of the holonomy to \(|b|<1/2\), via shifts of the form \(b\to b\pm n\) and \(q\to q\mp kn\), with \(n\in\mathds{N}\), the points in \(R\) for which \(|q|>|k|/2\) correspond to additional saddle-points in the physical region \(|q|\leq|k|/2\). In other words, for certain values of \(q\) and \(k\) there are multiple saddle-point solutions for the gauge field.
Our results are summarized in figure 12, where we separate the physical region \(|q|\leq|k|/2\) into smaller subregions according to the number of saddle-points found. Note that the number of solutions corresponding to a given charge \(q\) increases as we lower \(|k|/N_{f}\). We did not analyze the question of stability of these saddle-points; we expect that the only stable saddle-points at a nonperturbative level are those with the minimal absolute value for the holonomy \(b\) (when restricting to \(-|k|/2\leq q\leq|k|/2\)). It would be interesting to confirm or disprove this expectation.
In conclusion, in the theory (5.24) Wilson lines with different charges are identified modulo \(k\), because of the Chern-Simons term. We find that all Wilson lines are stable.
Wilson lines in various other Chern-Simons-matter theories are of great interest as well (e.g. due to their connection with boson-fermion duality, holography, etc.). We do not study them here. For some recent results see [114; 115; 116; 117].
### Graphene
It is well known that in a layer of graphene, due to its peculiar lattice structure, the quasiparticles at the Fermi energy are described in terms of an effective Lorentz-invariant theory consisting of four three-dimensional Dirac fermions moving at an effective speed \(v_{f}\approx 1/300\)[118], in the usual relativistic units where the speed of light is set to one. Since \(v_{f}\ll 1\), these quasiparticles experience an enhanced coupling to the \(3+1\)-dimensional Coulomb field [118]:
\[e_{eff}^{2}=\frac{e^{2}}{v_{f}}\gg e^{2}\,. \tag{5.29}\]
Naively using the modified coupling in the formula for the Coulomb potential sourced by a Wilson line, \(A_{0}=\frac{e_{eff}^{2}q}{4\pi r}\), the general analysis in section 3 implies that there should be an instability towards charge screening for \(|q|\geq|q_{c}|=2\pi/e_{eff}^{2}\simeq 0.2\). This observation motivated several works in the condensed matter literature; see [118] for a review.
Figure 12: In this plot we separate the \((k/N_{f},q/k)\) plane into different regions according to the number of saddle-points; the legend on the right associates each color with the number of solutions. The plot is restricted to the physical region \(|q|\leq|k|/2\), and to \(1/6<k/N_{f}<2.5\); a mirror-image plot can be drawn for negative \(k\).
Of particular relevance for us is the analysis of the screening cloud and the related resonances in [68; 69; 30], which largely inspired our analysis in section 3.4. Remarkably, this instability and the corresponding screening cloud were experimentally observed in [31] by introducing an external ion close to the material layer.
In practice, the formula (5.29) neglects important polarization effects that arise because of the strong interaction. To model these effects in a controlled setup, it was proposed in [119] to study a model of \(2N_{f}\gg 1\) Dirac fields living on an interface coupled to the Coulomb field \(A_{0}\):
\[S=-\frac{1}{e^{2}}\int dzd^{3}x\frac{1}{4}(\partial_{i}A_{0})^{2}+i\sum_{a=1} ^{2N_{f}}\int_{z=0}d^{3}x\bar{\Psi}_{a}\left[\frac{1}{v_{f}}\gamma^{0}\left( \partial_{0}-iA_{0}\right)+\gamma^{i}\partial_{i}\right]\Psi_{a}\,, \tag{5.30}\]
where \(\Psi_{a}\) are Dirac fields as in section 5.2.1. The coupling to \(A_{0}\) is fixed by gauge-invariance and breaks the emergent Lorentz-invariance on the interface. Note that the model (5.30) neglects the spatial components \(A_{i}\) of the gauge field, since their interaction with the Dirac quasiparticles is not enhanced at small \(v_{f}\). For \(N_{f}=2\), (5.30) describes the low energy limit of graphene, but following [119] we allow for an arbitrary number of fermions. See also [120; 121] for discussions of related models.
In the model (5.30) the bulk coupling \(e^{2}\) is given by the QED value and cannot be renormalized by interactions with the fields on the interface. Due to the lack of Lorentz invariance, the value of \(v_{f}\) may instead be renormalized by interactions. It was shown in [119] that the velocity \(v_{f}\), and thus the effective strength of the coupling (5.29), undergo a nontrivial RG flow at order \(1/N_{f}\). The RG admits an IR relativistic fixed-point at \(v_{f}\to\infty\) and a UV quantum critical point (corresponding to \(v_{f}=0\)) with Lifshitz scaling. Due to the smallness of the measured value of \(v_{f}\), this suggests that the physical theory might display approximate Lifshitz scaling.
In the following we will study the model (5.30) in the presence of a Wilson line of charge \(q\) at \(x=y=z=0\). We will work in the triple-scaling limit defined by
\[N_{f}\sim\frac{1}{e_{eff}^{2}}\sim q\to\infty\quad\text{with}\quad e_{eff}^{2 }N_{f}\sim e_{eff}^{2}q=\text{fixed}\,. \tag{5.31}\]
In this limit the running of the velocity \(v_{f}\) can be neglected. We will use below the technology that we developed in the analysis of QED\({}_{3}\) in the large \(N_{f}\) limit to compute the critical charge in this approximation.50
Footnote 50: An earlier analysis of the Coulomb impurity problem in this model appeared in [122]; that work however focused on charges \(q\ll N_{f}\sim 1/e_{eff}^{2}\), in which case the fluctuation determinant in (5.32) can be linearized. This is not possible for nearly critical electric fields, as figure 11 shows.
Wick rotating to Euclidean signature, we integrate out the fermions as in section 5.2.1 to obtain the effective action for the gauge field
\[S[A]=\frac{1}{e_{eff}^{2}v_{f}^{2}}\int dzd^{3}x\frac{1}{2}(\partial_{i}A_{0} )^{2}-2N_{f}\text{Tr}\left[\log\left(\tilde{\not{\partial}}-i\not{A}/v_{f} \right)\right]+iq\int d\tau A_{0}\,, \tag{5.32}\]
where we defined \(\tilde{\partial}_{\mu}=\{\partial_{0}/v_{f},\partial_{i}\}\). The gauge field sourced by the line takes the form
\[A_{0}=v_{f}\bar{A}_{0}\,,\qquad\bar{A}_{0}=\frac{iE}{\sqrt{x_{\perp}^{2}+z^{2}} }\,, \tag{5.33}\]
which becomes critical for \(|E|=1/2\). The value of \(E\) is determined by the saddle-point equation
\[\partial_{i}^{2}\bar{A}_{0}+2e_{eff}^{2}N_{f}\frac{\delta\text{Tr}\left[\log \left(\tilde{\not{\partial}}-i\not{\bar{A}}\right)\right]}{\delta\bar{A}_{0}( x)}=i\,e_{eff}^{2}q\,\delta^{2}(x_{\perp})\delta(z)\,. \tag{5.34}\]
Since the rescaling \(t\to t/v_{f}\) leaves the fermion one-loop determinant invariant, we can use the result (5.22) to find the critical value for the charge
\[|q_{c}|=\frac{1}{2}\frac{4\pi}{e_{eff}^{2}}+0.56\,N_{f}\,. \tag{5.35}\]
The (unjustified) extrapolation of the result (5.35) to the physical theory, for which \(N_{f}=2\) and \(e_{eff}^{2}/(4\pi)=\alpha_{EM}/v_{f}\simeq 2.2\), gives \(|q_{c}|\simeq 1.35\), which is not too far from the experimentally observed \(|q_{c}|\approx 2\)-\(3\) [31].
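For completeness, the quoted estimate is a one-line evaluation of (5.35) with the measured graphene parameters (our arithmetic):

```python
alpha_eff = 2.2                    # e_eff^2/(4*pi) = alpha_EM/v_f for graphene
Nf = 2                             # 2*N_f = 4 three-dimensional Dirac fermions
qc = 0.5 / alpha_eff + 0.56 * Nf   # eq. (5.35): (1/2)*(4*pi/e_eff^2) + 0.56*N_f
print(round(qc, 2))                # 1.35, to be compared with the observed 2-3
```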
We finally mention that it is possible to consider other instances of charged matter fields on a \(3d\) interface or boundary coupled to a four-dimensional Abelian gauge field. This setup often gives rise to a continuous family of BCFTs, parametrized by the gauge coupling \(e\) and the \(\theta\) angle of the theory [123]. We leave the analysis of Wilson lines in these theories for future work.
## 6 't Hooft line operators
### 't Hooft lines in Abelian gauge theories
We discuss the case of 4 space-time dimensions, and take the gauge group to be \(U(1)\), with matter fields that are some massless fermions and scalars with \(U(1)\) charges that we will specify later. Such a theory admits a magnetic \(U(1)\) one-form symmetry, since the current \((\star F)_{\mu\nu}\) is conserved due to the Bianchi identity \(dF=0\). Furthermore, the one-form symmetry operator \(e^{i\alpha\int_{\Sigma_{2}}F}\) can be cut open in a straightforward fashion, by just allowing \(\Sigma_{2}\) to have a boundary \(\partial\Sigma_{2}\). The non-genuine line operator on \(\partial\Sigma_{2}\) can be viewed as a Wilson line with fractional charge.
Therefore, our considerations in subsection 2.6 imply that 't Hooft lines in such theories cannot be trivial, or topological. This is simply because the magnetic field cannot be screened. Let us therefore make some remarks about the defect conformal theories arising from 't Hooft lines.
Inserting a 't Hooft line representing the worldline of a monopole of magnetic charge \(n\) leads to a boundary condition for the gauge field on a small \(S^{2}\) surrounding the 't Hooft line,
which can be written (up to gauge transformations) as:51
Footnote 51: To see that \(n\) is an integer we consider the region near the south pole, where we have \(A\sim nd\phi\), which can be interpreted as being due to a transparent solenoid if \(n\) is an integer. We adopted spherical coordinates \(ds^{2}=dt^{2}-\left[dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}(\theta)d\phi^{2}\right)\right]\).
\[A=\frac{n}{2}(1-\cos(\theta))d\phi. \tag{6.1}\]
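As a quick check (our addition), the flux of (6.1) through any sphere surrounding the line is

\[\int_{S^{2}}F=\oint_{\theta\to\pi}A=2\pi n\,,\]

i.e. \(n\) units of Dirac flux, consistent with the quantization discussed in footnote 51.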
In fact, setting all the matter fields to vanish and adopting (6.1) everywhere in space leads to a solution of the equations of motion, and as before, we can now investigate the fluctuations around this background.
We start from a scalar field \(\Phi\) of charge \(q\) and we consider it in the background (6.1). We separate the variables as usual, \(\Phi=e^{i\omega t}Y(\theta,\phi)\frac{1}{r}R(r)\), and find that the function \(Y(\theta,\phi)\) has to be a monopole spherical harmonic [124; 125; 126]:
\[\left[\Delta_{S^{2}}-\frac{qn/2}{\cos^{2}(\theta/2)}\left(i\partial_{\phi}+ \frac{qn}{2}\right)\right]Y_{qn/2,\ell m}=-\ell(\ell+1)\,Y_{qn/2,\ell m}\,. \tag{6.2}\]
The \(Y_{qn/2,\ell m}\) transform in the spin \(\ell\) representation of \(so(3)\). Most importantly,
\[\ell\in|qn/2|+\mathbb{Z}_{\geq 0}\,, \tag{6.3}\]
which famously leads to ground state degeneracy in the presence of a monopole as it removes the \(s\)-wave modes for nonzero \(qn\). We can now turn to the radial equation, assuming the angular part of the wave function is in a state with spin \(\ell=|qn/2|\). We obtain the radial wave equation:
\[-\partial_{r}^{2}R+\frac{|qn|}{2r^{2}}R=\omega^{2}R. \tag{6.4}\]
The potential is effectively always repulsive and there is no instability for any \(qn\). The dimension of the defect operator \(\Phi^{\dagger}\Phi\) is inferred from the behavior of \(R(r)\) near the origin, as before
\[\Delta=1+2\sqrt{\frac{|qn|}{2}+\frac{1}{4}}. \tag{6.5}\]
We see that already for the minimal 't Hooft line, with \(qn=1\), the dimension of the defect operator is \(1+\sqrt{3}\), which is larger than the bulk scaling dimension of \(\Phi^{\dagger}\Phi\), which is 2. The charged bosons therefore never furnish relevant or marginal operators on the 't Hooft line, unlike the case of the Wilson lines. (6.5) is a good approximation as long as \(e^{2},\lambda\ll 1\), where \(\lambda\) is the scalar quartic coupling. Note that unlike for Wilson lines, here no double scaling limit is necessary.
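Spelling out how (6.5) follows from (6.4) (our gloss): near the origin \(R\sim r^{\delta}\) with

\[\delta(\delta-1)=\frac{|qn|}{2}\quad\Longrightarrow\quad\delta=\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{|qn|}{2}}\,,\]

so each scalar factor carries defect dimension \(\delta\) and the bilinear has \(\Delta=2\delta\); for \(qn=1\) this gives \(\Delta=1+\sqrt{3}\approx 2.73\).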
The analysis of fermions around a 't Hooft line is analogous. The Dirac equation for a 4 component fermion in spherical coordinates can be written as
\[\left[\gamma^{0}\frac{\partial}{\partial t}+\gamma^{r}\frac{\partial}{ \partial r}+\frac{1}{r}\gamma^{\theta}\frac{\partial}{\partial\theta}+\frac{ 1}{r\sin(\theta)}\gamma^{\phi}(\frac{\partial}{\partial\phi}-iqA_{\phi}) \right]\Psi=0\,, \tag{6.6}\]
acting on a four-component spinor with the following matrices:
\[\gamma^{0}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{pmatrix}\,\quad\gamma^{r}=\begin{pmatrix}0&0&\cos(\theta)&\sin( \theta)e^{-i\phi}\\ 0&0&\sin(\theta)e^{i\phi}&-\cos(\theta)\\ -\cos(\theta)&-\sin(\theta)e^{-i\phi}&0&0\\ -\sin(\theta)e^{i\phi}&\cos(\theta)&0&0\end{pmatrix}\,\]
\[\gamma^{\theta}=\begin{pmatrix}0&0&-\sin(\theta)&\cos(\theta)e^{-i\phi}\\ 0&0&\cos(\theta)e^{i\phi}&\sin(\theta)\\ \sin(\theta)&-\cos(\theta)e^{-i\phi}&0&0\\ -\cos(\theta)e^{i\phi}&-\sin(\theta)&0&0\end{pmatrix},\quad\gamma^{\phi}= \begin{pmatrix}0&0&0&-ie^{-i\phi}\\ 0&0&ie^{i\phi}&0\\ 0&ie^{-i\phi}&0&0\\ -ie^{i\phi}&0&0&0\end{pmatrix}. \tag{100}\]
An important novelty compared to the boson is that now there are modes with total angular momentum \(|qn|/2-1/2\) if \(|qn|>0\). This is the minimal achievable angular momentum. These modes are often called "spinor spherical harmonics of the third type" and the explicit formula in terms of the ordinary monopole harmonics with azimuthal angular momentum \(m\) is (denoting \(\mu=|qn|/2\)):
\[\Omega^{(3)}_{\mu,\mu-1/2,m}=\begin{pmatrix}-\sqrt{\frac{\mu-m+1/2}{2\mu+1}}Y_ {\mu,\mu,m-1/2}\\ \sqrt{\frac{\mu+m+1/2}{2\mu+1}}Y_{\mu,\mu,m+1/2}\end{pmatrix}. \tag{101}\]
See [126] for an exposition of this subject. An ansatz for the solution of the Dirac equation is \(\Psi=e^{-iEt}\frac{1}{r}\begin{pmatrix}F(r)\Omega^{(3)}_{\mu,\mu-1/2,m}\\ iG(r)\Omega^{(3)}_{\mu,\mu-1/2,m}\end{pmatrix}\). The Dirac equation then reduces to \(\frac{dG}{dr}=EF\,,\ \frac{dF}{dr}=-EG\). There are two independent solutions, \(F=e^{iEr},G=-ie^{iEr}\) and \(F=e^{-iEr},G=ie^{-iEr}\). The doublet \(\begin{pmatrix}F\\ G\end{pmatrix}\) is acted upon by the Hamiltonian \(H=\begin{pmatrix}0&\frac{d}{dr}\\ -\frac{d}{dr}&0\end{pmatrix}\). Since \(r=0\) plays the role of a boundary, one needs to impose a boundary condition there for the existence of a well-defined variational problem (equivalently, the existence of a Hermitian Hamiltonian). Therefore we require that
\[\langle\begin{pmatrix}F\\ G\end{pmatrix},\begin{pmatrix}0&\frac{d}{dr}\\ -\frac{d}{dr}&0\end{pmatrix}\begin{pmatrix}F\\ G\end{pmatrix}\rangle=\langle\begin{pmatrix}0&\frac{d}{dr}\\ -\frac{d}{dr}&0\end{pmatrix}\begin{pmatrix}F\\ G\end{pmatrix},\begin{pmatrix}F\\ G\end{pmatrix}\rangle \tag{102}\]
The Hamiltonian is Hermitian if \(F^{*}(0)G(0)\) is purely real. The most general admissible solution is thus
\[F=Ae^{iEr}+Be^{-iEr}\,\quad G=-iAe^{iEr}+iBe^{-iEr}\,\quad B=e^{i\theta+i\pi/2}A\,, \tag{103}\]
where \(\theta\) is the \(\theta\)-angle (not to be confused with the azimuthal coordinate). (The \(\pi/2\) shift is a convention.) For instance, assuming \(\theta=0\) we get \(F\sim\cos(Er-\pi/4)\) and \(G\sim\sin(Er-\pi/4)\). The falloff of the wave function near the origin allows us to read the dimensions of defect
operators as usual. We can therefore interpret the coefficient \(A\) (or \(B\)) as an operator of dimension \(1/2\).52
Footnote 52: For \(qn=0\), i.e. the trivial defect, the wave functions behave as \(\sin(Er)/r\) for small \(r\) and the corresponding defect operator has dimension \(3/2\), which is nothing but the original bulk fermion on the trivial defect.
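As a small numerical sanity check (our addition), the admissible solutions (103) indeed make \(F^{*}(0)G(0)\) real for any \(\theta\):

```python
import cmath
import math

def doublet(A, theta, E, r):
    """F, G from (103), with B = e^{i(theta + pi/2)} A."""
    B = cmath.exp(1j * (theta + math.pi / 2)) * A
    F = A * cmath.exp(1j * E * r) + B * cmath.exp(-1j * E * r)
    G = -1j * A * cmath.exp(1j * E * r) + 1j * B * cmath.exp(-1j * E * r)
    return F, G

# Hermiticity of H requires F*(0) G(0) to be purely real for every theta:
for theta in (0.0, 0.7, 2.1):
    F0, G0 = doublet(1.3 + 0.4j, theta, E=2.0, r=0.0)
    print(theta, abs((F0.conjugate() * G0).imag) < 1e-12)  # True
```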
The conclusion is that for any non-zero \(n\) and \(q\) we have an operator of dimension \(1/2\) on the defect and hence gauge-invariant marginal bilinears of dimension \(1\). In particular there is a marginal bilinear of spin \(0\), which has a clear interpretation - it allows one to change the \(\theta\) angle appearing in the boundary condition for the fermion in (103). The appearance of a \(\theta\) angle for monopoles in an Abelian gauge theory was noted already in [127]. We will soon see that this angle has a very simple interpretation following from the symmetries of the theory.
Before discussing in more detail the \(\theta\) angle and the corresponding marginal operator, we would like to generalize our discussion of the boundary conditions corresponding to a 't Hooft line to arbitrary \(U(1)\) gauge theories.
In the background of a monopole with \(n\) units of magnetic flux, a charge \(q\) left-handed 4d Weyl fermion reduces (in its lowest angular momentum mode on \(S^{2}\)) to \(nq\) left moving 2d fermions if \(nq>0\) and \(-nq\) right moving fermions for \(nq<0\). Either way, they transform in the spin (\(|nq|/2-1/2\)) representation of \(SU(2)\), which indeed has \(|nq|\) components. (For \(nq=0\) the special representations of spin (\(|nq|/2-1/2\)) do not exist.) These massless fermions live on a half-infinite line \(r\geq 0\). We must therefore choose boundary conditions at \(r=0\).
It is useful to quickly review some facts about boundary conditions in 2d. For a recent discussion of the connections between anomalies and boundary conditions see [128, 129]. Unless \(c_{L}=c_{R}\) no boundary condition which is time-translation-invariant exists. Similarly, the \(Tr[U(1)]^{2}\) anomaly precludes the existence of a \(U(1)\)-preserving boundary condition and the \(Tr[U(1)_{A}U(1)_{B}]\) anomaly precludes the existence of a boundary condition preserving both \(U(1)_{A}\) and \(U(1)_{B}\).
Since the 't Hooft line has to be gauge-invariant, we must insist that a \(U(1)\)-preserving boundary condition exists. The \(|nq|\) left moving fermions contribute to the \(Tr[U(1)]^{2}\) anomaly \(nq\times q^{2}=nq^{3}\) for \(q>0\), and the \(|nq|\) right moving fermions give \(-|nq^{3}|\) for \(q<0\). Summing them all up we have a condition equivalent to \(\sum q^{3}=0\) over all Weyl fermions. Therefore, as long as the original 4d gauge theory is consistent (free of gauge anomalies) we have no obstruction to picking a gauge-invariant boundary condition at the 't Hooft line.
For time-translation-invariant boundary conditions to exist, a necessary condition is that the numbers of left- and right-moving fermions coincide, so that a 2d gravitational anomaly is absent. This gives
\[\sum_{q_{i}>0}|q_{i}|-\sum_{q_{i}<0}|q_{i}|=0\,. \tag{112}\]
This is realized if the four dimensional theory has no anomaly of the form
\[\partial j_{gauge}\sim(\sum_{\text{weyl fermions}}q_{i})R\wedge R. \tag{113}\]
Traditionally, the 4d anomaly (113), which was first described in [130], is interpreted as an obstruction to gauge invariance in curved space. Here we see that upon introducing a 't Hooft line, it leads to an imbalance of left- and right-moving fermions and consequently obstructs the existence of time-translationally and rotationally invariant 't Hooft lines. In modern parlance the anomaly (113) should be described by a two-group symmetry involving the magnetic one-form symmetry and Lorentz symmetry [131]. We conclude that it is not possible to construct 't Hooft lines that preserve rotational invariance and time translational invariance in such theories with a two-group symmetry. (The calculation of the \(SU(2)\) rotation symmetry anomalies is in section 5 of [32]. The conclusion is that if the theory is free of a gauge anomaly and the two-group symmetry involving the magnetic one-form symmetry and Lorentz symmetry is trivial, then there is no obstruction to choosing a boundary condition at the monopole which is rotationally invariant.)
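To make the two constraints concrete, here is a minimal sketch (our illustration) that tests a putative spectrum of Weyl fermion charges against the conditions \(\sum q_{i}^{3}=0\) and \(\sum q_{i}=0\):

```python
def thooft_line_obstructions(charges):
    """Check the 2d anomalies of the lowest angular momentum modes on a
    't Hooft line, given the list of 4d Weyl fermion charges q_i."""
    gauge_anomaly = sum(q**3 for q in charges)   # must vanish for a gauge-
    gravitational_anomaly = sum(charges)         # invariant, resp. rotation/
    return gauge_anomaly, gravitational_anomaly  # time-invariant, line

print(thooft_line_obstructions([1, 1, -1, -1]))  # (0, 0): no obstruction
print(thooft_line_obstructions([2, -1, -1]))     # (6, 0): gauge anomalous in 4d
print(thooft_line_obstructions([1, 1, -1]))      # (1, 1): two-group obstruction
```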
Next let us consider a global \(U(1)_{Q}\) symmetry with charges \(Q_{i}\) such that \(\sum Q_{i}q_{i}^{2}=0\), i.e. it suffers from no ABJ anomaly (and thus it is a true continuous symmetry), but such that \(\sum Q_{i}^{2}q_{i}\neq 0\), namely, the global symmetry and the magnetic one-form symmetry furnish a two-group [131]. It is easy to see that the 2d modes in the lowest angular momentum sector have an anomaly \(Tr[U(1)_{Q}]^{2}\neq 0\) and hence no 't Hooft lines can preserve \(U(1)_{Q}\). Any line operator that violates a global symmetry leads to an exactly marginal "tilt" operator and hence there would be exactly marginal tilt operators corresponding to such \(U(1)_{Q}\) global symmetries.
Let us now comment on the axial symmetry with charges \(Q_{i}^{A}\), for which there is an ABJ anomaly in 4d, \(\sum Q_{i}^{A}q_{i}^{2}\neq 0\). This anomaly removes the continuous symmetry in 4d. Depending on the monopole charge, a discrete subgroup can be preserved by the monopole boundary conditions. Furthermore, the \(\theta\) angle in (103) couples to an operator that is marginal at tree level, but already at the next order in \(e^{2}\) it becomes marginally irrelevant (see [32] and references therein). Indeed, since there is no continuous axial symmetry in 4d, there is no reason to expect a tilt operator.
Let us summarize the main highlights about 't Hooft lines in Abelian gauge theories:
* The axial symmetry leads to a marginal operator at tree level, but this operator and the corresponding \(\theta\) angle are irrelevant in the full theory.
* Unless \(\sum_{i}q_{i}=0\), no 't Hooft lines which are time independent and rotationally symmetric exist.
* Bosons generally receive positive anomalous dimensions and do not lead to low-lying operators on the defect.
* Exactly marginal defect operators arise from global \(U(1)\) symmetries which participate in a nontrivial two-group with the magnetic one-form symmetry.
### 't Hooft lines in non-Abelian gauge theories and S-duality
In weakly coupled non-Abelian gauge theories, to specify a 't Hooft line, we can simply fix the magnetic fluxes for the gauge fields in the Cartan sub-algebra [132, 133]. Then, expanding around this classical solution with all the other fields vanishing, the main novelty that we encounter compared to the Abelian theory is that we also have spin 1 massless charged fields (the off-diagonal W-bosons).
Let us therefore generalize the discussion in the previous subsection to the problem of charge \(q\), spin \(s\) particles, with magnetic \(g\)-factor \(g_{m}\). We need to determine the centrifugal barrier for these particles. The total angular momentum is made out of the orbital angular momentum \(\ell\) and the internal spin \(s\). The range of \(\ell\) is as for the bosonic wave functions in the monopole background: \(\ell=|nq|/2,|nq|/2+1,\ldots\).
The general result for the centrifugal barrier matrix is given by [126]
\[V=\frac{\ell(\ell+1)-n^{2}q^{2}/4-\tfrac{1}{2}nqg_{m}\hat{r}^{a}S^{a}}{r^{2}}\, \tag{133}\]
where \(S^{a}\) are spin \(s\) representation matrices and \(\hat{r}\) is the unit radial vector. The eigenvalues of \(\hat{r}^{a}S^{a}\) can in principle take any of the values \(-s,-s+1,\ldots,s\). However, for the short representations with total angular momentum \(j=\ell-s\), which are possible for \(\ell\geq s\), only a subset of these values is realized, see for instance [134]: \(\hat{r}^{a}S^{a}\) has to be \(+s\) for positive \(nq\) and \(-s\) for negative \(nq\).
Without loss of generality, taking positive \(nq\) we find the centrifugal barrier for representations with spin \(|nq|/2-s\) is
\[V=\frac{1}{2}nq\frac{1-g_{m}s}{r^{2}}. \tag{134}\]
For scalars we have a repulsive centrifugal force, exactly consistent with (111). For fermions with the standard magnetic moment \(g_{m}=2\) the numerator vanishes and we have no centrifugal barrier, as we have seen in the previous subsection.
Now let us discuss charged vector bosons. If these vector bosons are approximately fundamental particles, as they are in weakly coupled gauge theories, then we have \(g_{m}=2\). The discussion for vector bosons has to be split between the case of \(nq\geq 2\) and \(nq=0,1\). In the latter cases the special representation with spin \(nq/2-1\) which gives rise to (134) does not exist and there are no relevant defect operators associated to the vector bosons. In the case that \(nq\geq 2\) the formula (134) is valid, and we clearly see that the potential is attractive with coefficient \(-\tfrac{1}{2}nq\tfrac{1}{r^{2}}\), which for \(nq\geq 2\) always leads to an instability of the vector bosons - the bilinear operators associated to the vector bosons do not have a real scaling dimension, similarly to the super-critical Wilson lines. This means that we have to condense vector bosons with \(nq\geq 2\) and the infrared limit of such 't Hooft lines remains to be determined.
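The resulting stability criterion can be packaged schematically as follows (an illustrative sketch we add here; the threshold \(c=-1/4\) for an attractive potential \(c/r^{2}\) is the standard condition below which scaling dimensions become complex). The example values anticipate the \(SU(2)\) and \(SO(3)\) cases discussed below:

```python
def lowest_barrier_unstable(n, q, s, g_m=2):
    """Instability of the j = |nq|/2 - s modes in the field of a charge-n
    monopole, using the centrifugal barrier (134)."""
    j = abs(n * q) / 2 - s
    if n * q == 0 or j < 0:                  # the short representation does not exist
        return False
    c = 0.5 * abs(n * q) * (1 - g_m * s)     # coefficient of the 1/r^2 potential
    return c < -0.25                         # supercritical attraction

print(lowest_barrier_unstable(1, 2, s=1))  # True:  SU(2) W boson, minimal line
print(lowest_barrier_unstable(1, 1, s=1))  # False: SO(3) W boson, minimal line
print(lowest_barrier_unstable(2, 1, s=1))  # True:  SO(3), n = 2
```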
Let us now consider some examples - for instance, the \(\mathcal{N}=4\) SYM theory with gauge group \(SU(2)\) and the \(\mathcal{N}=4\) SYM theory with gauge group \(SO(3)\). Those theories have supersymmetric 't Hooft lines which are stable, but we discuss here the non-supersymmetric \(SO(6)_{R}\)-invariant 't Hooft lines which couple only to the gauge field. For \(SU(2)\) the minimal monopole has \(n=1\) and the vector boson charge is \(q=2\) (in units of the minimal charge).
Therefore, even the minimal 't Hooft line is unstable to W-boson condensation at weak coupling, and deep in the infrared it presumably becomes trivial. (The cloud of W-bosons that forms remains to be computed.) In the latter case, with gauge group \(SO(3)\), the charge of the W-boson is 1 and hence in the background of the minimal 't Hooft line we have no such instability and the minimal 't Hooft line should furnish a healthy conformal defect. Higher 't Hooft lines are all unstable to W-boson condensation, though.53
Footnote 53: For the \(SU(N)\) SYM theory the statement would be that only 't Hooft lines in the fully antisymmetric representations are unscreened at weak coupling. When we refer to the \(SO(3)\) gauge theory we really have in mind the variant of the theory which admits purely magnetic lines [33].
These results are consistent with the one-form symmetry structure. The \(SO(3)\) theory has a magnetic one-form symmetry protecting the minimal 't Hooft line, while the \(SU(2)\) theory does not have such a magnetic one-form symmetry and hence its 't Hooft lines are unprotected.
Let us now make some comments about S-duality for the R-symmetry invariant lines. Let us start from the \(SU(2)\) theory. As we crank up the coupling constant on the conformal manifold, we have seen that fewer and fewer Wilson lines remain as nontrivial infrared DCFTs. Presumably at strong coupling only the unique Wilson line in the fundamental representation remains, as it is protected by the electric one-form symmetry. By S-duality this should map to 't Hooft lines in the weakly coupled \(SO(3)\) gauge theory. Indeed, we have just argued that at weak coupling no 't Hooft lines other than the minimal one exist there. For the \(SO(3)\) gauge theory, Wilson lines again gradually disappear as the coupling is cranked up, presumably leaving none at strong coupling, consistently with the absence of any conformal 't Hooft lines in the weakly coupled \(SU(2)\) gauge theory. Therefore, our results for the non-supersymmetric line operators in \(\mathcal{N}=4\) SYM theory are consistent with S-duality.
## Acknowledgements
We thank S. Bolognesi, I. Klebanov, J. Maldacena, M. Metlitski, S. Sachdev, N. Seiberg, A. Sever, S.-H. Shao, Y. Wang, and S. Yankielowicz for useful discussions. The work of OA was supported in part by an Israel Science Foundation (ISF) center for excellence grant (grant number 2289/18), by ISF grant no. 2159/22, by Simons Foundation grant 994296 (Simons Collaboration on Confinement and QCD Strings), by grant no. 2018068 from the United States-Israel Binational Science Foundation (BSF), by the Minerva foundation with funding from the Federal German Ministry for Education and Research, by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland", and by a research grant from Martin Eisenstein. OA is the Samuel Sebba Professorial Chair of Pure and Applied Physics. GC was supported by the Simons Foundation grants 488647, 397411 (Simons Collaboration on the Non-perturbative Bootstrap) and 994296 (Simons Collaboration on Confinement and QCD Strings). ZK, MM and ARM are supported in part by the Simons Foundation grant 488657 (Simons Collaboration on the Non-Perturbative Bootstrap) and the BSF grant no. 2018204. MM gratefully acknowledges the support and
hospitality from the Simons Center for Geometry and Physics during the final stages of this work. ARM is an awardee of the Women's Postdoctoral Career Development Award.
## Appendix A Details on scalar QED\({}_{4}\)
### Defect propagator and double-trace deformation for subcritical charge
In this section we consider a charged field in AdS\({}_{2}\), with Euclidean action:
\[S_{bulk}=\int_{\text{AdS}_{2}}d^{2}x\sqrt{g}\left[|D_{\mu}\Phi|^{2}+m^{2}|\Phi|^{2}\right]\,, \tag{A.1}\]
where \(D_{\mu}=\partial_{\mu}-iA_{\mu}\) and \(A_{\mu}=-i\delta^{0}_{\mu}g/r\) is a subcritical Coulomb potential, i.e. such that \(1+4m^{2}-4g^{2}>0\). We assume \(g>0\) with no loss of generality. For such a model the near boundary (\(r\to 0\)) expansion of the operator \(\Phi\) reads:
\[\Phi\sim\alpha r^{1/2-\nu}+\beta r^{1/2+\nu}\,, \tag{A.2}\]
where \(\nu=\sqrt{1/4+m^{2}-g^{2}}>0\). We will compute the propagator for the mode \(\alpha\) at the alternate quantization fixed point \(\beta=0\). We will then use this result to obtain the exact propagator in the presence of a double-trace defect deformation of the form \(\sim f\bar{\alpha}\alpha\). We will finally argue that such a propagator displays a tachyon pole when the coefficient of the deformation is negative. This signals an instability of the trivial vacuum \(\Phi=0\), whose end point we analyze in section 2.2.
We start by computing the propagator at the alternate quantization fixed-point. We use the generating function approach, which is typically employed for AdS/CFT calculations [135]. To this aim, differently from the main text, we consider the theory with the following Dirichlet boundary conditions in terms of the modes (A.2)
\[\beta(\tau)=\frac{1}{\nu}J(\tau)\,,\qquad\beta^{\dagger}(\tau)=\frac{1}{\nu}\bar{J}(\tau)\,, \tag{A.3}\]
where \(J\) is a fixed complex function that we will soon interpret as an external source. For the action to be stationary with boundary conditions of the form (A.3) we add the following boundary term:
\[S_{bdry}=\lim_{r_{0}\to 0}\int_{r=r_{0}}d\tau\left[\left(\frac{1}{2}-\nu\right)\frac{|\Phi|^{2}}{r}+2\,r^{\nu-\frac{1}{2}}\left(\bar{J}\Phi+\Phi^{\dagger}J\right)\right]\,. \tag{A.4}\]
We now follow the GKPW prescription and interpret the theory with Dirichlet boundary conditions (A.3) as the deformation of the alternate fixed point by a complex source \(J\) for \(\alpha\)[136, 137]. It follows by Wick's theorem that the propagator for the boundary field \(\alpha\) at the alternate quantization fixed point is
\[\langle\alpha(\omega)\alpha^{\dagger}(\omega^{\prime})\rangle_{f=0}=2\pi\delta( \omega-\omega^{\prime})G^{(0)}_{\alpha}(\omega)\,,\qquad G^{(0)}_{\alpha}( \omega)=-\left[\frac{\alpha(\omega)}{J(\omega)}+\frac{\alpha^{\dagger}(\omega) }{\bar{J}(\omega)}\right]\,,\] (A.6)
where \(\alpha\) in \(G^{(0)}_{\alpha}\) is obtained from the boundary limit of a regular solution of the bulk equations of motion with boundary condition (A.3) for \(r\to 0\). The Fourier transform is defined according to
\[\alpha(\tau)=\int\frac{d\omega}{2\pi}e^{-i\omega\tau}\alpha(\omega)\,,\qquad \alpha^{\dagger}(\tau)=\int\frac{d\omega}{2\pi}e^{i\omega\tau}\alpha^{\dagger }(\omega)\,,\] (A.7)
and similarly for \(J(\tau)\), \(\bar{J}(\tau)\).
All that is left to do is to solve the Euclidean Klein-Gordon equation in a Coulomb potential:
\[-r^{2}(\partial_{0}-iA_{0})^{2}\Phi-r^{2}\partial_{r}^{2}\Phi+m^{2}\Phi=0\,.\] (A.8)
Using \(A_{0}=-ig/r\) and setting \(\Phi(\tau,r)=e^{-i\omega\tau}\Phi(r)\), we find
\[-r^{2}\partial_{r}^{2}\Phi+\left[m^{2}-(g+ir\omega)^{2}\right]\Phi=0\,.\] (A.9)
The most general solution of (A.9) can be written as a linear combination of Whittaker's W functions:
\[\Phi(r)=c_{1}W_{ig,-\nu}(2r\omega)+c_{2}W_{-ig,\nu}(-2r\omega)\,.\] (A.10)
In the following we focus on \(\omega>0\). From \(W_{x,y}(z)\stackrel{{ z\to\infty}}{{\propto}}e^{-z/2}\), we infer that regularity at \(r\to\infty\) implies that we need to set \(c_{2}=0\) in (A.10). We then extract \(\alpha\) and \(\beta\) from the comparison of (A.2) with the expansion of the Whittaker's function
\[W_{x,y}(z)\stackrel{{ z\to 0}}{{\sim}}z^{\frac{1}{2}-y}\frac{ \Gamma(2y)}{\Gamma\left(\frac{1}{2}-x+y\right)}+z^{\frac{1}{2}+y}\frac{ \Gamma(-2y)}{\Gamma\left(\frac{1}{2}-x-y\right)}\,.\] (A.11)
Similarly solving the equation for \(\Phi^{\dagger}\), we conclude
\[\frac{\alpha(\omega)}{\beta(\omega)}=\frac{\alpha^{\dagger}(\omega)}{\beta^{ \dagger}(\omega)}=(2\omega)^{-2\nu}\frac{\Gamma(2\nu)\Gamma\left(\frac{1}{2}- \nu-ig\right)}{\Gamma(-2\nu)\Gamma\left(\frac{1}{2}+\nu-ig\right)}\,,\qquad \omega>0\,.\] (A.12)
Note that \(\alpha\), \(\beta\) are not complex conjugates of \(\alpha^{\dagger}\), \(\beta^{\dagger}\) on the solution. The propagator then follows from (A.6):
\[G^{(0)}_{\alpha}(\omega)=(2\omega)^{-2\nu}\frac{4\Gamma(2\nu)\Gamma\left(\frac {1}{2}-\nu-ig\right)}{\Gamma(1-2\nu)\Gamma\left(\frac{1}{2}+\nu-ig\right)}\,, \qquad\omega>0\,.\] (A.13)
An important remark follows. Consider the Euclidean propagator (A.13) analytically continued to complex values of \(\omega=|\omega|e^{i\lambda}\). We find that for \(g>0\) and \(0<\nu<1/2\), the imaginary part of the propagator vanishes for a value \(\lambda=\lambda_{*}\) between \(0\) and \(\pi/2\):
\[\text{Im}\left[G^{(0)}_{\alpha}(|\omega|e^{i\lambda_{*}})\right]=0\quad\text{ for }0\leq\lambda_{*}<\pi/2\,.\] (A.14)
Additionally, such a zero is unique for \(-\pi/2\leq\lambda\leq\pi/2\). The property (A.14) follows by noticing that the equation \(\mathrm{Im}\left[G_{\alpha}^{(0)}(|\omega|e^{i\lambda_{*}})\right]=0\) is equivalent to
\[e^{4\pi g}\sin\left(2\left(\pi-2\lambda_{*}\right)\nu\right)-2e^{2\pi g}\sin \left(4\lambda_{*}\nu\right)-\sin\left(2\left(\pi+2\lambda_{*}\right)\nu \right)=0\,.\] (A.15)
(A.15) is obtained by writing the ratio \(G_{\alpha}^{(0)}/\left[G_{\alpha}^{(0)}\right]^{*}\) using the identity \(\Gamma(1/2+x)\Gamma(1/2-x)=\pi/\cos(\pi x)\) to simplify the Gamma functions. The unique solution of (A.15) for \(-\pi/2\leq\lambda\leq\pi/2\) can be written in a simple form in the limits \(g\to 0\) and \(g\to\infty\):
\[\lambda_{*}=\begin{cases}0+\dfrac{\pi\sin(\pi\nu)}{2\nu\cos(\pi\nu)}g+O\left( g^{3}\right)&\text{for $g\to 0$}\\ \dfrac{\pi}{2}-\dfrac{\sin(2\pi\nu)}{2\nu}e^{-2\pi g}+O\left(e^{-4\pi g} \right)&\text{for $g\to\infty$}\,.\end{cases}\] (A.16)
It can be checked that \(\lambda_{*}\) monotonically grows from \(0\) to \(\pi/2\) as \(g\) increases (with \(0<\nu<1/2\)). Note that the propagator (A.13) is real on the Euclidean axis for \(g=0\): \(G_{\alpha}^{(0)}(\omega)=\omega^{-2\nu}\frac{2^{2\nu+1}\Gamma(\nu)}{\Gamma(1- \nu)}>0\). We also find that the real part of the propagator is positive when the imaginary part vanishes, \(\mathrm{Re}\left[G_{\alpha}^{(0)}(|\omega|e^{i\lambda_{*}})\right]>0\).
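As an illustrative aside (not part of the original argument), \(\lambda_{*}\) is easily located numerically from (A.15); a minimal sketch assuming SciPy is available:

```python
import math
from scipy.optimize import brentq

def lhs_A15(lam, g, nu):
    """Left-hand side of (A.15), whose zero is lambda_*."""
    return (math.exp(4 * math.pi * g) * math.sin(2 * (math.pi - 2 * lam) * nu)
            - 2 * math.exp(2 * math.pi * g) * math.sin(4 * lam * nu)
            - math.sin(2 * (math.pi + 2 * lam) * nu))

nu = 0.3
for g in (0.05, 0.5, 2.0):
    lam_star = brentq(lhs_A15, 1e-9, math.pi / 2 - 1e-9, args=(g, nu))
    small_g = g * math.pi * math.sin(math.pi * nu) / (2 * nu * math.cos(math.pi * nu))
    print(f"g={g}: lambda_* = {lam_star:.4f}  (small-g estimate {small_g:.4f})")
# lambda_* grows monotonically from 0 towards pi/2, reproducing (A.16).
```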
We finally consider a double-trace defect deformation of the form
\[\delta S=f\int d\tau\alpha^{\dagger}\alpha\,.\] (A.17)
This deformation is relevant for \(\nu<1/2\). The exact Euclidean propagator in this case can be obtained from (A.6) by resumming the perturbative series in \(f\), leading to the well known result (see e.g. [39, 138]):
\[\langle\alpha(\omega)\alpha^{\dagger}(\omega^{\prime})\rangle=2\pi\delta( \omega-\omega^{\prime})G_{\alpha}(\omega)\,,\qquad G_{\alpha}(\omega)=\frac{G_ {\alpha}^{(0)}(\omega)}{1+fG_{\alpha}^{(0)}(\omega)}\,.\] (A.18)
The retarded propagator \(G_{\alpha,R}\) is obtained by analytically continuing the Euclidean expression from \(\omega>0\) as
\[G_{\alpha,R}(\omega_{L})=\begin{cases}G_{\alpha}(|\omega_{L}|e^{-i\frac{\pi}{ 2}})&\text{for $\omega_{L}>0$}\\ G_{\alpha}(|\omega_{L}|e^{i\frac{\pi}{2}})&\text{for $\omega_{L}<0$}\,.\end{cases}\] (A.19)
The property (A.14) implies that, for \(f<0\), the retarded propagator analytically continued to the upper half plane has a tachyon pole for \(\mathrm{Im}(\omega_{L})>0\) and \(\mathrm{Re}(\omega_{L})<0\). Such a pole corresponds to a solution of the classical equations of motion with purely outgoing boundary conditions which grows in time. Therefore it signals an instability of the vacuum. No such pathology occurs for \(f>0\), in which case we can safely expand the result at small frequencies \(\omega/|f|^{\frac{1}{2\nu}}\ll 1\) and find a result corresponding to an operator of scaling dimension \(\Delta=\frac{1}{2}+\nu\) (up to a contact term).
### Tachyons for a supercritical Coulomb potential
In this section we study the Klein-Gordon equation for a charged field in AdS\({}_{2}\) in an external potential,
\[r^{2}(\partial_{0}-iA_{0})^{2}\Phi-r^{2}\partial_{r}^{2}\Phi+m^{2}\Phi=0\,,\] (A.20)
where we work in Lorentzian signature, such that \(A_{0}=g/r\) with \(g^{2}>1/4+m^{2}\), corresponding to a supercritical Coulomb potential. We will be particularly interested in the regime \(\tilde{\nu}=\sqrt{g^{2}-m^{2}-1/4}\ll 1\). We introduce a cutoff at a small radius \(r=r_{0}\), and impose the most general linear boundary condition on \(\Phi\) as in (2.36):
\[\left[r\partial_{r}\Phi-\left(\frac{1}{2}+\hat{f}\right)\Phi\right]_{r=r_{0}} =0\,.\] (A.21)
In the following we show that the problem specified by (A.20) and (A.21) admits infinitely many tachyonic solutions with negative real part of the frequency.
We consider the following solution to (A.20):
\[\Phi\propto e^{-i\omega t}W_{ig,i\tilde{\nu}}(-2ir\omega)\,.\] (A.22)
(A.22) behaves as \(\Phi\propto e^{-i\omega t}e^{i\omega r}\) for \(r\to\infty\), and thus corresponds to purely outgoing boundary conditions for \(\text{Re}(\omega)<0\). The expansion for \(r\ll\omega^{-1}\) of the solution (A.22) takes the general form
\[\Phi\sim\alpha r^{1/2-i\tilde{\nu}}+\beta r^{1/2+i\tilde{\nu}}\,,\] (A.23)
where the ratio between the modes reads
\[\frac{\alpha}{\beta}=(-2i\omega)^{-2i\tilde{\nu}}\frac{\Gamma\left(2i\tilde{ \nu}\right)\Gamma\left(\frac{1}{2}-ig-i\tilde{\nu}\right)}{\Gamma\left(-2i \tilde{\nu}\right)\Gamma\left(\frac{1}{2}-ig+i\tilde{\nu}\right)}\,.\] (A.24)
Focusing on \(\omega\ll 1/r_{0}\), we can express the boundary condition (A.21) in terms of the modes (A.23). Using (A.24) and working at leading order in \(\tilde{\nu}\ll 1\), we find the condition:
\[\left(-2\,c\,\omega\,r_{0}\,e^{i\tilde{\gamma}}\right)^{-2i\tilde{\nu}}=1\,,\] (A.25)
where we defined
\[\tilde{\gamma}=\frac{\pi}{e^{2\pi g}+1}\,,\] (A.26)
and \(c\) is an \(O(1)\) positive number given by
\[c=\exp\left[\frac{1}{2}\psi\left(\frac{1}{2}+ig\right)+\frac{1}{2}\psi\left( \frac{1}{2}-ig\right)-\frac{1}{\hat{f}}+2\gamma_{E}\right]\,,\] (A.27)
where \(\gamma_{E}\) is the Euler-Mascheroni constant. (A.25) has infinitely many solutions given by
\[\omega_{n}=-\frac{1}{2cr_{0}}e^{-i\tilde{\gamma}-n\pi/\tilde{\nu}}\,,\qquad n \in\mathbb{Z}^{+}\,,\] (A.28)
where we excluded \(n\leq 0\) since our approximations break down for \(\omega\gtrsim 1/r_{0}\). Noticing that \(0<\tilde{\gamma}<\pi/2\) for \(g>0\) (as we assumed throughout this section), we see that the frequencies (A.28) have \(\text{Re}(\omega_{n})<0\) and \(\text{Im}(\omega_{n})>0\). The corresponding solutions (A.22) thus grow in time and signal an instability of the \(\Phi=0\) saddle-point. For the physical case of the \(\ell=0\) mode of a \(4d\) scalar we have \(m=0\), thus (from the condition of small \(\tilde{\nu}\)) \(g\simeq 1/2\) and we find
\[\text{Im}(\omega_{n})\simeq-0.13\,\text{Re}(\omega_{n})\quad\text{for }g=1/2\,.\] (A.29)
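As a quick check of the phase structure (our addition), the ratio in (A.29) follows from (A.26) and (A.28) in a few lines:

```python
import cmath
import math

g = 0.5
gamma_tilde = math.pi / (math.exp(2 * math.pi * g) + 1)   # (A.26)
phase = -cmath.exp(-1j * gamma_tilde)   # common phase of the omega_n in (A.28)
print(phase.real < 0, phase.imag > 0)   # True True: growing, outgoing modes
print(phase.imag / phase.real)          # ~ -0.13, reproducing (A.29)
```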
### The effective potential from the soliton solution
In a semiclassical theory the connected generating functional \(W[J]\) and one point function \(\bar{A}[J]\) are:
\[\begin{split} W[J]&=\left(S+\int dt\,JA\right) \Big{|}_{A\;\text{saddle}}\,,\\ \bar{A}[J]&=\frac{\delta W[J]}{\delta J}\,,\end{split}\] (A.30)
where in the first line we used the saddle point approximation and set \(\mathcal{W}(A)=0\) for simplicity, cf. (2.32). (We will restore it later.) The Legendre transform of \(W[J]\) is the 1PI effective action:
\[\Gamma[\bar{A}]=W[J[\bar{A}]]-\int dt\,J[\bar{A}]\,\bar{A}\,,\qquad J[\bar{A}] =-\frac{\delta\Gamma[\bar{A}]}{\delta\bar{A}}\,.\] (A.31)
Now we specialize to constant \(\bar{A}\) (and hence constant source \(J\)) and integrate the above equation to get:54
Footnote 54: We can also obtain that
\[W(J)=T\int_{0}^{\bar{A}(J)}d\bar{A}^{\prime}\,\left(J-J(\bar{A}^{\prime}) \right)\,;\] (A.32)
we can verify that this equation solves the second line of (A.30) by taking the \(J\) derivative.
\[\Gamma(\bar{A})\equiv T\,\mathcal{V}(\bar{A})=-T\int_{0}^{\bar{A}}d\bar{A}^{ \prime}\,J(\bar{A}^{\prime})\,.\] (A.33)
Now let us determine \(J\) in terms of the quantities we know. The saddle point condition from the first line of (A.30) is
\[0=\frac{\delta S}{\delta A}+J\quad\implies\quad J=-4\nu B(A)\,,\] (A.34)
where we used (2.13) combined with the fact that the phase of the scalar \(\Phi\) is constant for the soliton solution. Plugging this result back into (A.33) our final formula is:
\[\begin{split}\mathcal{V}(\bar{A})&=-4\nu\int_{0}^{ \bar{A}}d\bar{A}^{\prime}\,B(\bar{A}^{\prime})\\ &=4\nu\int_{0}^{\bar{A}}d\bar{A}^{\prime}\,s(g)\left(\bar{A}^{ \prime}\right)^{\frac{1/2+\nu}{1/2-\nu}}\\ &=4\nu\left(\frac{1}{2}-\nu\right)s(g)\left(\bar{A}\right)^{ \frac{1}{1/2-\nu}}\,,\end{split}\] (A.35)
which agrees with (2.32) with \({\cal W}(A)=0\). We can restore the \({\cal W}(A)\) dependence by adding \(4\nu{\cal W}(A)\) to \(S\) in (A.34) and in the first line of (A.30), which then shifts \(J\) by \(4\nu{\cal W}^{\prime}(A)\); upon integrating over \(A\) as in (A.33), this simply adds \(4\nu{\cal W}(A)\) to the result in (A.35), as stated in the main text in (2.32).
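The final integration step in (A.35) involves only power-law algebra; as an illustrative check (our addition, using SymPy), one can verify the exponent and prefactor:

```python
import sympy as sp

nu = sp.symbols('nu', positive=True)
p = (sp.Rational(1, 2) + nu) / (sp.Rational(1, 2) - nu)  # exponent of A' in B(A')

# integrating (A')**p from 0 to A gives A**(p+1)/(p+1); check the two claims:
print(sp.simplify(p + 1 - 1 / (sp.Rational(1, 2) - nu)))     # 0
print(sp.simplify(1 / (p + 1) - (sp.Rational(1, 2) - nu)))   # 0
# hence V(A) = 4 nu (1/2 - nu) s(g) A**(1/(1/2 - nu)), as in (A.35)
```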
### Quantization of the screening cloud for a light massive scalar
In section 2.6 we proved that a line charged under a one-form symmetry cannot flow to a topological line in the IR, if the topological operator implementing the one-form symmetry can be cut-open. In this appendix we explore an application of this result to Wilson lines in QED\({}_{4}\) in the Coulomb phase with a charge \(q_{\phi}>1\) massive scalar. Because of the one-form symmetry, supercritical Wilson lines with charge \(q\neq 0\mod q_{\phi}\) cannot become topological in the IR. We show how this expectation is borne out by explicitly quantizing the screening soliton. We find that, for sufficiently small mass, the endpoint of the defect RG flow is a Wilson line of charge \(q_{IR}\neq 0\).
The action of the model we consider reads:
\[S=\frac{1}{e^{2}}\int d^{4}x\left[|\partial_{\mu}\Phi-iq_{\phi}A_{\mu}\Phi|^{2}-m^{2}|\Phi|^{2}-\frac{1}{4}F_{\mu\nu}^{2}\right]\,, \tag{A.36}\]
where we take \(q_{\phi}>1\) (integer), \(m^{2}>0\) and we neglected the quartic scalar vertex for simplicity; this will not affect our considerations.
We now consider a Wilson line of charge \(q\gg 1\). As in section 2.3, we regulate this insertion by cutting off space at a surface \(r=r_{0}\). As we will focus on distances \(r\gg r_{0}\), the detailed form of the boundary conditions at \(r=r_{0}\) will not be important for us.
In the massless limit, the trivial saddle-point \(A_{0}=\frac{e^{2}q}{4\pi r}\) is unstable when the Wilson line is supercritical, i.e. for \(q>2\pi/(q_{\phi}e^{2})\), or when deforming the alternate quantization fixed point by a double-trace operator with negative coefficient \(f\). In both cases we schematically denote by \(R_{cloud}\) the radius of the screening cloud. For supercritical lines with \(\tilde{\nu}\ll 1\) this scales as \(R_{cloud}\sim r_{0}e^{\pi/\tilde{\nu}}\), where \(\tilde{\nu}=\sqrt{e^{4}q_{\phi}^{2}q^{2}/(4\pi^{2})-1}\). For double-trace deformations it is parametrically set by the coupling, \(R_{cloud}\sim|f|^{1/(2\hat{\nu})}\), where \(\hat{\nu}=\sqrt{1-e^{4}q_{\phi}^{2}q^{2}/(4\pi^{2})}\).
A sufficiently large mass term \(m\gtrsim R_{cloud}^{-1}\) sets an IR cutoff for the screening cloud. The scalar therefore does not fully screen the Wilson line anymore, leaving a remnant Coulomb field at distances \(r\gtrsim m^{-1}\) irrespective of the value of \(q\mod q_{\phi}\). A quantitative analysis for \(q_{\phi}=1\) can be found in [59]. Quantum effects do not qualitatively change the IR limit of the Wilson line (though they might change the value of the charge of the endpoint by an \(O(1)\) amount for \(q_{\phi}>1\)).
In what follows we focus on the regime \(m\ll R_{cloud}^{-1}\). In this case, a naive extrapolation of the analysis of the massless setup in sections 2.2 and 2.3 suggests that the Coulomb field of an unstable Wilson line is fully screened for every value of \(q\) and \(q_{\phi}\). We will see below that this is not the case and that for distances larger than the Compton wavelength there is a nontrivial electric flux, in agreement with the one-form symmetry charge.
Consider first the theory in the presence of a charge \(q\) Wilson line, such that \(q/q_{\phi}\in\mathds{N}\). In this case we do not expect any subtlety and we can describe the IR limit of the line defect at distances \(r\gtrsim R_{cloud}\) via the effective description (2.47). As discussed in section 2.4, this amounts to expanding the scalar field around a non-trivial solution of the equations of motion in the presence of a source (2.45). In the massive case, up to gauge transformations, the profile takes the following form
\[\langle\Phi(r)\rangle=\langle\bar{\Phi}(r)\rangle=\frac{h_{s}(r)}{\sqrt{2}}\,, \qquad h_{s}(r)=\frac{e^{2}v}{4\pi}\frac{e^{-mr}}{r}\,.\] (A.37)
We now quantize the zero modes of the theory in the background (A.37). We work in the gauge \(A_{r}=0\). This completely specifies the gauge up to \(r\)-independent gauge transformations. We define fluctuations as follows:
\[\Phi=\frac{h_{s}(r)}{\sqrt{2}}+e\frac{\delta\phi+i\delta\psi}{\sqrt{2}}\,, \qquad A_{0}=0+e\,a_{0}\,,\] (A.38)
where \(\delta\phi\) and \(\delta\psi\) are real and we neglected the angular components of the gauge field as these will not play a role in what follows. The quadratic action then reads:
\[S\simeq\int d^{4}x\left[\frac{1}{2}\left(\partial_{\mu}\delta\phi\right)^{2}+ \frac{1}{2}\left(\partial_{\mu}\delta\psi-q_{\phi}h_{s}(r)a_{0}\delta^{0}_{\mu }\right)^{2}-\frac{m^{2}}{2}\left(\delta\phi^{2}+\delta\psi^{2}\right)+\frac{1 }{2}\left(\nabla a_{0}\right)^{2}\right]+\ldots\,.\] (A.39)
The linearized scalar \(U(1)\) charge density is given by:
\[j_{0}=-iq_{\phi}\frac{\partial\mathcal{L}}{\partial\dot{\Phi}^{\dagger}}\Phi+ c.c=\frac{q_{\phi}}{e}h_{s}(r)\left(\delta\dot{\psi}-q_{\phi}h_{s}(r)a_{0} \right)+\ldots\,,\] (A.40)
which is normalized so that \(Q=\int j_{0}\in q_{\phi}\mathbb{Z}\), as we will see. (A.40) measures the charge density at distance \(r\gtrsim R_{cloud}\), since we are working in the effective intermediate energy description of the line, where only the tail of the screening cloud is visible.
From (A.39) we derive the equations of motion for \(\delta\psi\) and \(a_{0}\):
\[\begin{split}-\partial^{2}\delta\psi+q_{\phi}\dot{a}_{0}h_{s}(r) -m^{2}\delta\psi&=0\,,\\ \nabla^{2}a_{0}+q_{\phi}h_{s}(r)(\delta\dot{\psi}-q_{\phi}h_{s}(r )a_{0})&=0\,.\end{split}\] (A.41)
In what follows we will need three nontrivial solutions of the equations (A.41). The first is given by:
\[\delta\psi\propto h_{s}(r)\,,\quad a_{0}=0\,.\] (A.42)
This solution clearly corresponds to an infinitesimal \(U(1)\) rotation of the scalar profile (A.37).
To find the others, we set \(\delta\psi=\dot{a}_{0}=0\) and consider the radial equation for \(a_{0}\):
\[\frac{1}{r^{2}}\partial_{r}\left(r^{2}\partial_{r}a_{0}\right)=q_{\phi}^{2}h_{ s}^{2}(r)a_{0}\,.\] (A.43)
This equation can be solved numerically. It admits two solutions, which can be distinguished by their behavior for \(r\ll m^{-1}\):
\[a_{0}^{(1)}(r)\stackrel{{mr\to 0}}{{\sim}}\frac{1}{r^{\delta}}\,, \qquad a_{0}^{(2)}(r)\stackrel{{mr\to 0}}{{\sim}}\frac{1}{r^{1- \delta}}\,,\] (A.44)
where
\[\delta=\frac{1}{2}+\frac{1}{2}\sqrt{1+q_{\phi}^{2}\frac{e^{4}v^{2}}{4\pi^{2}}}>1\,,\] (A.45)
so that \(a_{0}^{(1)}\) is singular and \(a_{0}^{(2)}\) is regular for \(r\to 0\). The solutions in (A.44) are exact in the massless limit. For \(rm\gg 1\), both solutions take the asymptotic form \(a^{(i)}(r)\sim c_{1}^{(i)}+c_{2}^{(i)}/r\), where \(c_{1}^{(i)}\) and \(c_{2}^{(i)}\) are constants.55 We will not need the explicit expressions for \(a_{0}^{(1)}(r)\) and \(a_{0}^{(2)}(r)\) in what follows.
Footnote 55: Note that \(c_{1}^{(i)}\) can be removed by a large gauge transformation involving \(\delta\psi\).
For our purposes it is more convenient to consider the two nontrivial solutions of (A.43) in terms of two linear combinations of the modes \(a_{0}^{(1)}(r)\) and \(a_{0}^{(2)}(r)\) in (A.44). The first linear combination is such that the gauge field has no electric flux on the cutoff surface \(R_{0}\gtrsim R_{cloud}\) for the effective defect field theory description. (\(R_{0}\) is not to be confused with the UV cutoff \(r_{0}\).) We call this the _normalizable_ solution. This is formally written as
\[a_{0}^{(nor)}(r)=\alpha_{1}R_{0}^{\delta-1}a_{0}^{(1)}(r)+\alpha_{2}R_{0}^{- \delta}a_{0}^{(2)}(r)\quad\text{such that}\quad 4\pi r^{2}\partial_{r}a_{0}^{(nor)}(r)|_ {r=R_{0}}=0\,,\] (A.46)
where \(\alpha_{1}/\alpha_{2}=(\delta-1)/\delta\) for \(R_{0}m\ll 1\). Without loss of generality, we normalize \(\alpha_{1}\) and \(\alpha_{2}\) in (A.46) so that
\[\lim_{r\to\infty}4\pi r^{2}\partial_{r}a_{0}^{(nor)}(r)=-1\,,\] (A.47)
which because of Gauss's law (A.43) implies
\[4\pi q_{\phi}^{2}\int_{R_{0}}^{\infty}dr\,r^{2}h_{s}^{2}(r)a_{0}^{(nor)}(r)=-1\,.\] (A.48)
Importantly for what follows, when we take the massless limit \(m\to 0\) at fixed \(r/R_{0}\), we have \(\alpha_{2}\propto\alpha_{1}\to 0\) for the integral in (A.48) to converge.
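As an aside, the relation \(\alpha_{1}/\alpha_{2}=(\delta-1)/\delta\) quoted below (A.46) follows directly from the power laws (A.44); a short symbolic check (our sketch, using SymPy):

```python
import sympy as sp

r, R0, delta, a1, a2 = sp.symbols('r R_0 delta alpha_1 alpha_2', positive=True)
# massless-limit modes (A.44): a0^(1) ~ r**(-delta), a0^(2) ~ r**(delta - 1)
a_nor = a1 * R0**(delta - 1) * r**(-delta) + a2 * R0**(-delta) * r**(delta - 1)
flux = 4 * sp.pi * r**2 * sp.diff(a_nor, r)
sol = sp.solve(sp.Eq(flux.subs(r, R0), 0), a1)[0]   # no flux at r = R_0
print(sp.simplify(sol / a2))   # (delta - 1)/delta
```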
For future purposes we also define another linear combination \(\tilde{a}_{0}(r)\) of the solutions (A.44) which has the same flux at \(r=R_{0}\) and at infinity:
\[4\pi r^{2}\partial_{r}\tilde{a}_{0}(r)|_{r=R_{0}}=\lim_{r\to\infty}4\pi r^{2} \partial_{r}\tilde{a}_{0}(r)=-1\,,\] (A.49)
that, because of Gauss's law (A.43), imply
\[4\pi q_{\phi}^{2}\int_{R_{0}}^{\infty}dr\,r^{2}h_{s}^{2}(r)\tilde{a}_{0}(r)=0\,.\] (A.50)
Explicitly this solution reads
\[\tilde{a}_{0}(r)=\beta_{1}R_{0}^{\delta-1}a_{0}^{(1)}(r)+\beta_{2}R_{0}^{- \delta}a_{0}^{(2)}(r)\,,\] (A.51)
where for \(R_{0}m\ll 1\) we have \(\beta_{1}=\alpha_{1}+1/(4\delta\pi)\) and \(\beta_{2}=\alpha_{2}\). Therefore in the massless limit \(\tilde{a}_{0}(r)\propto a^{(1)}(r)\), which is regular at infinity.
We plot the schematic form of the dimensionless electric flux \(r^{2}F_{tr}\) associated with the solutions above in figures 13a and 13b. In the normalizable solution \(a^{(nor)}(r)\), the flux
continuously increases until distances \(r\sim 1/m\), at which it becomes constant. For the solution \(\tilde{a}(r)\) instead the electric flux first decreases, reaching a minimum, and then it starts rising and asymptotically approaches a constant value. It is important to stress that the distance at which the electric flux reaches its minimum increases as we make the mass smaller, \(r_{min}\sim 1/m\), and it drifts to infinity in the massless limit.
When quantizing the theory, the two solutions (A.42) and (A.46) provide the zero-modes inside the decomposition of the fields \(\delta\psi\) and \(a_{0}\)
\[\begin{split}\delta\psi(t,x)&=h_{s}(r)\,\hat{x}+ \text{wave-modes}\,,\\ a_{0}(t,x)&=eq_{\phi}\,a_{0}^{(nor)}(r)\,\hat{p}+ \text{wave-modes}\,.\end{split}\] (A.52)
To bypass the full quantization of the system (which would require solving also for the wave modes) and directly find the commutation relation between \(\hat{x}\) and \(\hat{p}\), we impose the charge action on \(\Phi\):
\[[Q,\Phi]=q_{\phi}\Phi\quad\implies\quad[\delta\psi,Q]=iq_{\phi}h_{s}(r)\,,\] (A.53)
where we linearized around the solution (A.37). From (A.40) and (A.48) we read the charge operator
\[Q=-\frac{4\pi q_{\phi}^{2}}{e}\int_{R_{0}}^{\infty}dr\,r^{2}h_{s}^{2}(r)a_{0} (t,r)=q_{\phi}\hat{p}\,.\] (A.54)
We conclude that at the quantum level the operators \(\hat{x}\) and \(\hat{p}\) form a canonical pair
\[[\hat{x},\hat{p}]=i.\] (A.55)
Remarkably, the decomposition (A.52) and the commutation relation (A.55) are all we need to construct solitonic states with the properly quantized charge. Explicitly, we notice that the phase of \(\Phi\) is a compact field. When linearizing around the background (A.37), this
Figure 13: The electric flux \(r^{2}E=r^{2}F_{tr}\) associated with the solutions \(a_{0}^{(nor)}(r)\) (figure 13a) and \(\tilde{a}(r)\) (figure 13b) for different values of the mass \(m\) (in units of the cutoff \(R_{0}\)). These plots were obtained by setting \(\frac{q_{\phi}e^{2}v}{4\pi}=1\).
implies that \(\hat{x}\) is defined only modulo \(2\pi\) in (A.52). Therefore, calling \(|0\rangle\) the state such that \(\hat{p}|0\rangle=0\), we can construct the following quantum states
\[|n\rangle=e^{in\hat{x}}|0\rangle\qquad\text{for }n\in\mathbb{Z}\,. \tag{A.56}\]
(A.56) implies that
\[\hat{p}|n\rangle=n|n\rangle\,, \tag{A.57}\]
and therefore, using (A.54) we conclude that the states \(|n\rangle\) have quantized values for the gauge charge in units of \(q_{\phi}\)
\[Q|n\rangle=nq_{\phi}|n\rangle\,. \tag{A.58}\]
(A.58) implies that the screening cloud for the state \(|n\rangle\) has an extra \(q_{\phi}n\) units of charge with respect to the ground state, which fully screens the Wilson line. The expectation value of the gauge field and the charge density are similarly computed from (A.52) and (A.40). Note that to linear order there is no difference in the expectation value of the scalar field profile between a state \(|n\rangle\) and \(|0\rangle\); this is no longer true at higher orders.
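The mechanism is just quantum mechanics on a circle: a compact \(\hat{x}\) conjugate to \(\hat{p}\) forces integer eigenvalues of \(\hat{p}\), and \(e^{in\hat{x}}\) acts as a shift operator. A toy numerical illustration (our addition, independent of the field-theory details):

```python
import numpy as np

# A particle on a circle: momentum eigenstates are e^{i k x} with integer k.
N = 64
x = 2 * np.pi * np.arange(N) / N
psi0 = np.ones(N) / np.sqrt(N)          # "ground state": p psi0 = 0
psi3 = np.exp(3j * x) * psi0            # analogue of |3> = e^{3 i x}|0>

k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers
p_psi3 = np.fft.ifft(k * np.fft.fft(psi3))   # apply p = -i d/dx
print(np.allclose(p_psi3, 3 * psi3))    # True: e^{3ix} shifts p by 3 units
```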
We are finally ready to address the issue of screening of Wilson lines with charge \(\notin q_{\phi}\mathds{N}\). To this aim, consider a Wilson line with charge \(q+\delta q\), with \(q/q_{\phi}\in\mathds{N}\) and \(\delta q\) an \(O(1)\) correction, \(\delta q\ll q\). We model this setup by perturbing the scalar IR defect previously analyzed via the following term
\[\delta S_{D}=-\delta q\int dt\,A_{0}\,. \tag{A.59}\]
This term induces a classical profile for the gauge field in addition to the quantized part. Thus (A.52) is modified to:
\[\begin{split}\delta\psi(t,x)&=h_{s}(r)\,\hat{x}+\text{wave-modes}\,,\\ a_{0}(t,x)&=eq_{\phi}a_{0}^{(nor)}(r)\,\hat{p}+e\,\delta q\,\tilde{a}_{0}(r)+\text{wave-modes}\,.\end{split} \tag{A.60}\]
where \(\tilde{a}_{0}(r)\) is the solution (A.51) which has the same flux at \(r=R_{0}\) and \(r\to\infty\) (cf. (A.49) and figure 13b); thus, by Gauss's law, \(\tilde{a}_{0}(r)\) does not contribute to the total charge of the cloud. It does however contribute to the electric flux, which for \(r\gg m^{-1}\) on a state \(|n\rangle\) is given by
\[\lim_{r\to\infty}4\pi r^{2}\partial_{r}a_{0}(r)|n\rangle=-e\left(q_{\phi}n+\delta q\right)|n\rangle\,. \tag{A.61}\]
From (A.61) we conclude that when \(\delta q=0\mod q_{\phi}\), the flux is fully screened on the state \(|-\delta q/q_{\phi}\rangle\). This state is obviously the energetically favored one. The expectation value of the gauge field on this state reads
\[\langle a_{0}(r)\rangle=\delta q\,e\left[\tilde{a}_{0}(r)-a_{0}^{(nor)}(r)\right]=\frac{e\,\delta q}{4\pi\delta}R_{0}^{\delta-1}a_{0}^{(1)}(r)\,, \tag{A.62}\]
where \(a_{0}^{(1)}(r)\) is the mode which is singular for \(rm\ll 1\) in (A.44). In figure 14 we show the behavior of the resulting electric field (normalized by \(r\) to be dimensionless); as expected
the field vanishes at large distances. For \(rm\ll 1\) the flux decays as a power law according to (A.62).
For \(\delta q\neq 0\mod q_{\phi}\), instead, (A.61) implies that the electric flux cannot be fully screened by the scalar field, in agreement with our expectation. In this case we expect the ground state to be given by the state in which the maximal possible amount of charge has been screened at infinity,56 at least for sufficiently small \(m\). The expectation value of the gauge field on this state is a linear combination of the two modes \(a_{0}^{(1)}(r)\) and \(a_{0}^{(2)}(r)\) in (A.44):
Footnote 56: For instance, we expect a Wilson line of supercritical odd charge \(q>0\) interacting with a charge 2 scalar field to flow to a (positive) charge 1 Wilson line at distances \(r\gg 1/m\).
\[\begin{split}\langle a_{0}(r)\rangle&=e\left(\delta q-nq_{\phi}\right)\tilde{a}_{0}(r)+nq_{\phi}\left[\tilde{a}_{0}(r)-a_{0}^{(nor)}(r)\right]\\ &=e\left[\frac{\delta q}{4\pi\delta}+\alpha_{1}(\delta q-nq_{\phi})\right]R_{0}^{\delta-1}a_{0}^{(1)}(r)+e\,\alpha_{2}(\delta q-nq_{\phi})R_{0}^{-\delta}a_{0}^{(2)}(r)\,.\end{split} \tag{A.63}\]
As evident from figure 13b, the term \(e\left(\delta q-nq_{\phi}\right)\tilde{a}_{0}(r)\) leads to a nontrivial electric field which can be measured at large distances \(r\gg 1/m\).
We close this section with some comments on the massless limit, taken as \(m\to 0\) for fixed \(r/R_{0}\). For a line such that \(q=0\mod q_{\phi}\), this limit is smooth. Indeed in this case the expectation value of the gauge field is given by (A.62), which for \(m=0\) reduces exactly to the expression in (A.44), in agreement with the discussion in sections 2.3 and 2.4. For \(q\neq 0\mod q_{\phi}\) the gauge field (A.63) also admits a contribution proportional to \(a_{0}^{(2)}(r)\). However, according to the discussion below (A.48), the coefficients \(\alpha_{1}\) and \(\alpha_{2}\) become infinitesimal in the massless limit. Thus from (A.62) and (A.63) we conclude that the electric flux, and more generally the screening cloud, do not depend on the value of \(q\mod q_{\phi}\) for \(r\ll m^{-1}\). Indeed, as already commented, figure 13b shows that the electric flux decreases monotonically for \(r\ll 1/m\). Physically this behavior is due to the fact that the scalar wave-function is delocalized over
distances of the order of the Compton wavelength. It is thus necessary to make measurements at \(r\gtrsim m^{-1}\) to see the effects of the quantization of the \(U(1)\) charge of the matter field. In the massless limit, the wave-function can be spread over arbitrary distances and it is possible to store fractional units of charge at \(r\to\infty\).57 This is not in contradiction with the general theorem proven in section 2.6, since the defect remains nontrivial at all scales also in the massless limit due to the conformal one-point function for the scalar field (A.37).
Footnote 57: This is the physical meaning of (A.63) in the massless limit, since \(\alpha_{2}\) is infinitesimal while the solution \(a^{(2)}(r)\sim r^{\delta-1}\) grows with the distance.
The same general discussion remains true upon including a quartic coupling, which only modifies the expressions for the solutions in (A.44). In particular, analogous conclusions are found for massive scalars, and Wilson lines charged under the one-form symmetry lead to a remnant Coulomb field at large distances. In the massless limit, one finds again that the screening cloud does not depend on the value of \(q\mod q_{\phi}\). As remarked in section 2.6, this raises a small puzzle, since the analysis in section 2.4 shows that all Wilson lines flow logarithmically to a trivial one in the double-scaling limit. We plan to analyze this issue further in future work.
## Appendix B Details on fermionic QED\({}_{4}\)
### Defect propagator and double-trace deformation for subcritical charge
In this section we compute the propagator of the defect operator \(\alpha\) for the theory (3.19) at the alternate quantization fixed point. We then use this result to compute the exact propagator in the presence of a double-trace deformation as in (3.46). We argue that such a propagator does not have a tachyon pole.
We start by computing the propagator at \(f=0\). We consider the action in Euclidean signature
\[S=\int_{\text{AdS}_{2}}d^{2}x\sqrt{g}\,\bar{\psi}\left(\not{\nabla}_{\text{AdS}_{2}}-g\gamma^{0}+m\right)\psi\,,\] (B.1)
where the Euclidean gamma matrices are given by \(\gamma^{0}=\sigma_{1}\) and \(\gamma^{1}=\sigma_{3}\).
To extract the propagator we pursue the generating function approach as in appendix A.1. We consider the theory with Dirichlet boundary conditions in terms of the modes (3.26),
\[\beta(\tau)=\frac{m+\nu}{\nu}J(\tau)\,,\qquad\bar{\beta}(\tau)=\frac{m+\nu}{ \nu}\bar{J}(\tau)\,,\] (B.2)
where \(J\) is an external Grassmanian function, which is interpreted as an external source. The action is stationary with Dirichlet boundary conditions of the form (B.2) if we add the following boundary term:58
Footnote 58: This is just the Euclidean version of the term (3.31) in the limit \(r_{0}\to 0\); note however that its interpretation is now different, as we are imposing Dirichlet conditions, rather than minimizing the action for arbitrary values of the fluctuations.
\[S_{bdry}=\frac{\nu}{m+\nu}\int d\tau\left[\bar{\alpha}(\tau)\beta(\tau)+\bar{ \beta}(\tau)\alpha(\tau)\right]\,=\int d\tau\left[\bar{\alpha}(\tau)J(\tau)+ \bar{J}(\tau)\alpha(\tau)\right]\,,\] (B.3)
where in the last step we used the Dirichlet conditions (B.2).
Importantly, the fact that the bulk action is linear in derivatives implies that the on-shell action coincides with the boundary term (B.3). We may thus follow the GKPW prescription as in appendix A.1 and interpret the theory with Dirichlet boundary conditions (B.2) as the deformation of the alternate fixed point by a complex source \(J\) for \(\alpha\)[136; 137]. Then the propagator for the boundary field \(\alpha\) at the alternate quantization fixed point is59
Footnote 59: \(G_{\alpha}^{(0)}(\omega)\) is the sum of \(\alpha(\omega)\) with \(J(\omega)\) stripped off, and similarly for \(\bar{\alpha}(\omega)\), just like in the scalar case (A.6). Here we have to write the expression a bit differently from the scalar case, since we are dealing with Grassmannian quantities.
\[\langle\alpha(\omega)\bar{\alpha}(\omega^{\prime})\rangle_{f=0}=2\pi\delta( \omega-\omega^{\prime})G_{\alpha}^{(0)}(\omega)\,,\qquad G_{\alpha}^{(0)}( \omega)=-\left[\frac{\partial\alpha(\omega)}{\partial J(\omega)}+\frac{ \partial\bar{\alpha}(\omega)}{\partial\bar{J}(\omega)}\right]\,,\] (B.4)
where similarly to (A.6), \(\alpha\) is obtained from the boundary limit (cf. (3.25)) of a regular solution of the bulk equations of motion with boundary condition (B.2) for \(r\to 0\). The Fourier transform is defined according to
\[\alpha(\tau)=\int\frac{d\omega}{2\pi}e^{-i\omega\tau}\alpha(\omega)\,,\qquad \bar{\alpha}(\tau)=\int\frac{d\omega}{2\pi}e^{i\omega\tau}\bar{\alpha}(\omega)\,,\] (B.5)
and similarly for \(J(\tau)\), \(\bar{J}(\tau)\).
To find the value of \(\alpha(\omega)\) to be used in (B.4) we thus need to solve the Euclidean Dirac-Coulomb equation for \(\psi(\tau,r)=e^{-i\omega\tau}\psi(r)\):
\[\left[(-i\omega r-g)\,\gamma^{0}+\left(r\partial_{r}-\frac{1}{2}\right)\gamma ^{1}+m\right]\psi=0\,.\] (B.6)
We set
\[\psi=\left(\begin{array}{c}\frac{1}{2}(\psi_{1}+\psi_{2})\\ \frac{i}{2}(\psi_{1}-\psi_{2})\end{array}\right)\,,\] (B.7)
so that (B.6) reduces to
\[r\partial_{r}\psi_{1}+\left(r\omega-ig-\frac{1}{2}\right)\psi_{1 }+m\psi_{2}=0\,,\] (B.8) \[r\partial_{r}\psi_{2}+m\psi_{1}+\left(ig-r\omega-\frac{1}{2} \right)\psi_{2}=0\,.\] (B.9)
Using (B.9) to solve for \(\psi_{1}\) in terms of \(\psi_{2}\) and \(\partial_{r}\psi_{2}\), we find
\[\psi_{1}=\frac{1}{2m}\left[(1+2r\omega-2ig)\,\psi_{2}-2r\partial_{r}\psi_{2} \right]\,,\] (B.10)
and (B.8) reduces to
\[-r^{2}\partial_{r}^{2}\psi_{2}+\left[m^{2}-(g+ir\omega)^{2}+r\omega-\frac{1}{ 4}\right]\psi_{2}=0\,.\] (B.11)
The most general solution to this equation is a linear combination of Whittaker \(W\) functions
\[\psi_{2}=c_{1}W_{ig-\frac{1}{2},\nu}(2r\omega)+c_{2}W_{\frac{1}{2}-ig,\nu}(-2r \omega)\,.\] (B.12)
In the following we focus on \(\omega>0\). From \(W_{x,y}(z)\overset{z\to\infty}{\propto}e^{-z/2}\), we infer that regularity implies that we need to set \(c_{2}=0\) in (B.12). Comparing (B.7) with (3.25), we find that the \(r\to 0\) limit of \(\psi_{2}\) can be written as
\[\psi_{2}\overset{r\to 0}{\sim}\frac{2(g+i\nu)}{\nu+m-ig}\beta r^{\frac{1}{2}+ \nu}+\frac{2m}{\nu+m-ig}\alpha r^{\frac{1}{2}-\nu}\,.\] (B.13)
We thus extract \(\alpha\) and \(\beta\) by comparing with the expansion of the Whittaker's function (A.11). Performing the same steps also for \(\bar{\psi}\), we eventually find
\[\frac{\alpha(\omega)}{\beta(\omega)}=\frac{\bar{\alpha}(\omega)}{\bar{\beta}( \omega)}=i\omega^{-2\nu}\frac{\Gamma(2\nu)\Gamma\left(1-ig-\nu\right)}{m2^{2 \nu}\Gamma(-2\nu)\Gamma\left(-ig+\nu\right)}\,,\qquad\omega>0\,.\] (B.14)
Note that \(\bar{\alpha}/\bar{\beta}\neq\alpha^{\dagger}/\beta^{\dagger}\) on the solution. The propagator then follows from (B.4):60
Footnote 60: Note that the result (B.15) cannot be straightforwardly continued to \(\omega<0\), since the fermion propagator is discontinuous at \(\omega=0\).
\[G_{\alpha}^{(0)}(\omega)=-2i\omega^{-2\nu}\frac{\nu/m}{m+\nu}\frac{\Gamma(2 \nu)\Gamma\left(1-ig-\nu\right)}{2^{2\nu}\Gamma(-2\nu)\Gamma\left(-ig+\nu \right)}\,,\qquad\omega>0\,.\] (B.15)
For future purposes, we study the imaginary part of (B.15) for \(g>0\) and \(0<\nu<1/2\). First note that for \(g=0\), we find \(G_{\alpha}^{(0)}(\omega)=i\omega^{-2\nu}|c|\) where \(|c|>0\) is a constant. More generally, we find that for \(g>0\) and \(0<\nu<1/2\) the propagator (B.15) always has a positive imaginary part for \(\text{Re}(\omega)>0\):
\[\text{Im}(\left[G_{\alpha}^{(0)}(|\omega|e^{i\lambda})\right])>0\quad\text{ for}\quad-\frac{\pi}{2}\leq\lambda\leq\frac{\pi}{2}\,.\] (B.16)
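Property (B.16) can be scanned numerically directly from (B.15); a minimal sketch, assuming SciPy for the Gamma function of complex argument:

```python
import numpy as np
from scipy.special import gamma

def G0(w, g, m):
    """Defect propagator (B.15) at the alternate quantization fixed point."""
    nu = np.sqrt(m**2 - g**2)
    pref = -2j * w**(-2 * nu) * (nu / m) / (m + nu)
    return pref * gamma(2 * nu) * gamma(1 - 1j * g - nu) / (
        2**(2 * nu) * gamma(-2 * nu) * gamma(-1j * g + nu))

g, m = 0.9, 1.0          # 0 < nu = sqrt(m^2 - g^2) < 1/2
for lam in np.linspace(-np.pi / 2, np.pi / 2, 7):
    w = np.exp(1j * lam)
    print(round(lam, 3), G0(w, g, m).imag > 0)   # True across the half plane
```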
We now consider a double-trace defect deformation as in (3.46). The exact Euclidean propagator in this case can be obtained from (B.15) by resumming the perturbative series in \(f\) as in (A.18):
\[\langle\alpha(\omega)\bar{\alpha}(\omega^{\prime})\rangle=2\pi\delta(\omega- \omega^{\prime})G_{\alpha}(\omega)\,,\qquad G_{\alpha}(\omega)=\frac{G_{\alpha }^{(0)}(\omega)}{1+fG_{\alpha}^{(0)}(\omega)}\,.\] (B.17)
The retarded propagator \(G_{\alpha,R}\) is obtained by analytically continuing the Euclidean expression from \(\omega>0\) as in (A.19). Then the property (B.16) implies that the retarded propagator analytically continued on the upper half plane \(\text{Im}(\omega)>0\) has no singularities irrespective of the sign of \(f\). In particular, there is no tachyon pole, unlike in the scalar setup analyzed in appendix A.1. The expansion of (B.17) for \(\omega/|f|^{\frac{1}{2\nu}}\ll 1\) takes qualitatively the same form for both signs of \(f\), corresponding (up to a contact term) to an operator of scaling
dimension \(\Delta=\frac{1}{2}+\nu\).61 In conclusion, the defect fermionic propagator does not display any pathology.
Footnote 61: Note however that the propagator differs by a sign in both cases, hinting at a change in nature between creation and annihilation operators and thus at a screening mechanism; we analyze this mechanism in section 3.3.
We also comment that, by numerically studying \(G_{\alpha}^{(0)}(\omega)\) for \(m=1\) and \(m=1/2\), and \(g\) such that \(0<\nu<1/2\), we found that the imaginary part \(\mathrm{Im}\big{(}\Big{[}G_{\alpha}^{(0)}(|\omega|e^{i\lambda})\Big{]}\big{)}\) admits zeroes for \(\lambda=\frac{\pi}{2}+\delta_{m,g}\) where \(\delta_{m,g}>0\) is a numerically small number. For instance for \(m=1\) and \(g=0.9\) we find \(\delta_{m,g}\simeq 0.0016\), while for \(g\ll 1\) and arbitrary \(m\) we find \(\delta_{m,g}\simeq g^{2}\pi\). Such a zero of the imaginary part implies a pole in the second sheet for the double-trace deformed propagator (B.17) at \(f<0\), for \(\mathrm{Re}(\omega)<0\) and \(\mathrm{Im}(\omega)<0\). As commented in footnote 34, we expect the imaginary part of \(\omega\) at this pole to be associated with the lifetime of the unstable vacuum after a negative double-trace defect deformation is suddenly turned on. It would be interesting to investigate this connection further.
### Massive Dirac-Coulomb equation for subcritical charge
In this section we study the AdS\({}_{2}\) Dirac-Coulomb equation for the model (3.19) in the presence of the deformation (3.48):
\[\left[i\left(\not{\partial}-i\not{A}\right)-m\pm i\,r\gamma^{3}M\right]\psi^{(\pm)}(t,r)=0\,, \tag{B.18}\]
where \(m>0\), \(M>0\), \(A_{0}=g/r\) with \(g>0\) and we work in Lorentzian signature with the gamma matrices given by (3.20) and (3.9). Note that the mass term \(M\) does not modify the near-boundary behavior (3.25).
We are interested in finding bound states, i.e. solutions of (B.18) with energy \(\omega\) such that \(M^{2}>\omega^{2}\). To this aim we follow [59] and set
\[\psi^{(\pm)}(t,r)=\sqrt{r}e^{-i\omega t}\left(\begin{array}{c}e^{-\frac{\rho}{2}}\sqrt{M\mp\omega}\left(\Phi_{2}^{(\pm)}(\rho)\pm\Phi_{1}^{(\pm)}(\rho)\right)\\ e^{-\frac{\rho}{2}}\sqrt{M\pm\omega}\left(\Phi_{1}^{(\pm)}(\rho)\mp\Phi_{2}^{(\pm)}(\rho)\right)\end{array}\right)\,, \tag{B.19}\]
where we defined
\[\rho=2\lambda r\,,\qquad\lambda\equiv\sqrt{M^{2}-\omega^{2}}\,. \tag{B.20}\]
Using (B.19), (B.18) reduces to
\[\partial_{\rho}\Phi_{1}^{(\pm)}+\left(\frac{g\omega}{\rho\sqrt{M^{2}-\omega^{2}}}-1\right)\Phi_{1}^{(\pm)}+\left(\frac{gM}{\lambda}\pm m\right)\Phi_{2}^{(\pm)}=0\,, \tag{B.21}\] \[\partial_{\rho}\Phi_{2}^{(\pm)}-\left(\frac{gM}{\lambda}\mp m\right)\Phi_{1}^{(\pm)}-\frac{g\omega}{\rho\lambda}\Phi_{2}^{(\pm)}=0\,. \tag{B.22}\]
Solving (B.22) for \(\Phi_{1}^{(\pm)}\) as
\[\Phi_{1}^{(\pm)}=\frac{\lambda\rho\partial_{\rho}\Phi_{2}^{(\pm)}-g\Phi_{2}^{(\pm)}\omega}{gM\mp m\lambda}\,, \tag{B.23}\]
we recast (B.21) in the form
\[\rho\partial_{\rho}^{2}\Phi_{2}^{(\pm)}+(1-\rho)\partial_{\rho}\Phi_{2}^{(\pm)}+ \left(\frac{g\omega}{\lambda}-\frac{\nu}{\rho}\right)\Phi_{2}^{(\pm)}=0\,,\] (B.24)
where \(\nu=\sqrt{m^{2}-g^{2}}\). The solution of (B.24) that is regular for \(\rho\to\infty\) (with \(\rho>0\)) is written in terms of a confluent hypergeometric function
\[\Phi_{2}^{(\pm)}(\rho)\propto\rho^{\nu}U\left(\nu-\frac{g\omega}{\lambda},1+2 \nu;\rho\right)\,.\] (B.25)
We want to find the quantization condition on \(\omega\) for the most general linear boundary condition on the modes (3.25):
\[\beta/\alpha=\frac{m+\nu}{\nu}f=\text{sgn}(f)\mu^{2\nu}\,,\] (B.26)
where we defined \(\mu=\left(\frac{m+\nu}{\nu}|f|\right)^{1/(2\nu)}>0\) as the mass scale associated with the double-trace perturbation. We are particularly interested in the consequences of a negative \(f\) in (B.26), but we will study both signs for generality.
To proceed, we compare the solution (B.19) and (B.26) to rewrite the small \(\rho\) expansion of \(\Phi_{2}^{(\pm)}\) in terms of \(\alpha\) and \(\beta\)
\[\begin{split}\Phi_{2}^{(\pm)}\overset{\rho\to 0}{\sim}& \beta\left(\frac{\rho}{2\sqrt{M^{2}-\omega^{2}}}\right)^{\nu} \left[\frac{g}{2(m+\nu)\sqrt{M\pm\omega}}\mp\frac{1}{2\sqrt{M\mp\omega}}\right] \\ +&\alpha\left(\frac{\rho}{2\sqrt{M^{2}-\omega^{2}}} \right)^{-\nu}\left[\frac{1}{2\sqrt{M\pm\omega}}\mp\frac{g}{2\left(m+\nu \right)\sqrt{M\mp\omega}}\right]\,.\end{split}\] (B.27)
Using the expansion of the confluent hypergeometric function,
\[U(x,1+y;z)\overset{z\to 0}{\sim}z^{-y}\frac{\Gamma(y)}{\Gamma(x)}+\frac{ \Gamma(-y)}{\Gamma(x-y)}\,,\] (B.28)
we extract the ratio \(\alpha/\beta\) from the comparison of (B.25) and (B.27):
\[\frac{\beta^{(\pm)}}{\alpha^{(\pm)}}=\frac{4^{\nu}\lambda^{2\nu}(M\mp\omega)[ g(M\pm\omega)\mp\lambda(\nu+m)]\Gamma(-2\nu)\Gamma\left(1+\nu-\frac{g\omega}{ \lambda}\right)}{\Gamma(2\nu)\Gamma\left(-\nu-\frac{g\omega}{\lambda}\right) \left[(\nu m+\nu^{2})\,M^{2}\pm M\left(g^{2}\omega-\nu g\lambda\right)-m \omega(\nu\omega+g\lambda)-m^{2}\omega^{2}\right]}\,.\] (B.29)
Using (B.29), the boundary condition (B.26) provides a condition on \(\omega\) from which we infer the energies \(\omega_{n}\) of the bound states. Let us state the results for \(f=0\) and \(f\to+\infty\):
* \(f\to+\infty\): this sets \(\alpha=0\), corresponding to standard quantization. We find \[\omega_{n}=\frac{M}{\sqrt{1+\frac{g^{2}}{(n+\nu)^{2}}}}\,,\] (B.30) where \(n=1,2,\ldots\) for \((\pm)=+\) and \(n=0,1,2,\ldots\) for \((\pm)=-\). (B.30) agrees with the well-known result for the relativistic Hydrogen atom [59].
* \(f=0\): this sets \(\beta=0\), corresponding to alternate quantization. We find \[\omega_{n}=\frac{M(n-\nu)}{\sqrt{(n-\nu)^{2}+g^{2}}}\,,\] (B.31) where again \(n=1,2,\ldots\) for \((\pm)=+\) and \(n=0,1,2,\ldots\) for \((\pm)=-\). Note that \(\omega_{0}=-\frac{\nu}{m}M\) is negative. (B.31) is a new result to the best of our knowledge (both towers are evaluated numerically in the sketch below).
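Both towers are elementary to evaluate; a minimal sketch (plain Python; the parameter values are illustrative):

```python
# Bound-state energies for standard (f -> infinity) and alternate (f = 0)
# quantization, eqs. (B.30) and (B.31).  Illustrative parameters.
from math import sqrt

m, g, M = 1.0, 0.9, 1.0
nu = sqrt(m**2 - g**2)

def omega_standard(n):          # (B.30)
    return M / sqrt(1 + g**2 / (n + nu)**2)

def omega_alternate(n):         # (B.31)
    return M * (n - nu) / sqrt((n - nu)**2 + g**2)

# n starts at 1 for (pm) = + and at 0 for (pm) = -; we show the (pm) = - tower.
for n in range(4):
    print(n, omega_standard(n) if n > 0 else None, omega_alternate(n))
# omega_alternate(0) equals -nu*M/m (use nu^2 + g^2 = m^2): negative, as stated.
```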
Increasing the coupling \(f\) from \(0\) to \(\infty\) smoothly transforms the spectrum (B.31) into (B.30). For a sufficiently negative \(f<0\) we instead encounter an interesting phenomenon: as we increase \(\mu\) in (B.26) the lowest bound state eventually reaches \(\omega=-M\) and then _dives_ into the continuum part of the spectrum. To see this, we look for a solution of (B.26) with \(\omega\simeq-M\). We expand the ratio (B.29) as:
\[\frac{\beta^{(\pm)}}{\alpha^{(\pm)}}=(gM)^{2\nu}\left[-c_{1}^{(\pm)}+c_{2}^{(\pm)}\frac{\omega+M}{Mg^{2}}+O\left(\frac{(\omega+M)^{2}}{M^{2}g^{4}}\right)\right]\,, \tag{B.32}\]
where we defined the following coefficients
\[c_{1}^{(\pm)} =\frac{\pi\,2^{2\nu}(m\pm\nu)}{g\,\sin\left(2\pi\nu\right)\Gamma(2\nu)\Gamma(1+2\nu)}\,, \tag{B.33}\] \[c_{2}^{(\pm)} =c_{1}^{(\pm)}\frac{\nu}{3}\left(1-4\nu^{2}+6m^{2}\mp 3m\right)\,. \tag{B.34}\]
Importantly, they are both positive for \(0<\nu<1/2\):62\(c_{1}^{(\pm)}>0\) and \(c_{2}^{(\pm)}>0\). Note also that \(c_{1}^{(+)}>c_{1}^{(-)}\). We define the critical value for \(\mu\) as:
Footnote 62: To see this one needs to use \(\nu=\sqrt{m^{2}-g^{2}}\) and remember that we assumed \(m>0\) and \(g>0\) everywhere.
\[\mu_{c}^{(\pm)}\equiv gM\left[c_{1}^{(\pm)}\right]^{\frac{1}{2\nu}}\,. \tag{B.35}\]
For \(f<0\) and \(\mu=\mu_{c}^{(\pm)}-\delta\mu\) with \(0<\delta\mu\ll gM\) we can use the expansion (B.32) to solve for the energy of the lowest bound state:
\[\omega\simeq-M\left\{1-\frac{2\delta\mu}{M}\frac{\nu\,g}{c_{2}^{(\pm)}}\left[c_{1}^{(\pm)}\right]^{1-\frac{1}{2\nu}}\right\}\,. \tag{B.36}\]
(B.36) clearly shows that as we lower \(\delta\mu\) to \(0\) the energy eventually becomes \(-M\), at which point we have a completely delocalized bound state solution. This solution is sometimes referred to as a _diving_ state [59]. As we discuss in subsection 3.3, this phenomenon implies that one unit of charge is screened in the vacuum for \(\mu>\mu_{c}\). Note that \(\mu_{c}^{(+)}>\mu_{c}^{(-)}\), as it could have been intuitively expected since the lowest energy mode, \(n=0\), is absent from (B.31) for \((\pm)=+\); therefore it takes a stronger perturbation for \((\pm)=+\) than for \((\pm)=-\) to make the lowest bound state join the negative continuum part of the spectrum.
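The coefficients and the critical scale are fully explicit; a minimal sketch (plain Python; the sample values of \(m\), \(g\), \(M\) are ours) evaluating (B.33)–(B.36):

```python
# Critical double-trace scale and diving-state energy, eqs. (B.33)-(B.36).
from math import sqrt, sin, pi
from math import gamma as Gamma

m, g, M = 1.0, 0.9, 1.0
nu = sqrt(m**2 - g**2)          # 0 < nu < 1/2

def c1(sign):                    # (B.33), sign = +1 or -1
    return pi * 2**(2*nu) * (m + sign*nu) / (g * sin(2*pi*nu) * Gamma(2*nu) * Gamma(1 + 2*nu))

def c2(sign):                    # (B.34)
    return c1(sign) * nu/3 * (1 - 4*nu**2 + 6*m**2 - sign*3*m)

def mu_c(sign):                  # (B.35)
    return g * M * c1(sign)**(1/(2*nu))

def omega_lowest(dmu, sign):     # (B.36), valid for 0 < dmu << g*M
    return -M * (1 - 2*dmu/M * nu*g/c2(sign) * c1(sign)**(1 - 1/(2*nu)))

for s in (+1, -1):
    print(s, c1(s) > 0 and c2(s) > 0, mu_c(s), omega_lowest(1e-3, s))
# mu_c(+1) > mu_c(-1), and omega -> -M as dmu -> 0, as stated in the text.
```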
### Massive Dirac-Coulomb equation for supercritical charge
In this section we solve for the diving states of the massive Dirac-Coulomb equation (B.18) in the presence of the gauge field
\[A_{0}=\begin{cases}\dfrac{g}{r_{0}}&\text{for }r<r_{0}\\ \dfrac{g}{r}&\text{for }r\geq r_{0}\,,\end{cases} \tag{B.37}\]
where \(g>m>0\). We will be interested in the limit \(1/r_{0}\gg M\) and \(\tilde{\nu}=\sqrt{g^{2}-m^{2}}\ll 1\).
The equation for \(\psi^{(\pm)}(t,r)=e^{-i\omega t}\psi^{(\pm)}_{<}(r)\) for \(r<r_{0}\) reads
\[\left[r\left(\omega+\frac{g}{r_{0}}\right)\gamma^{0}+i\left(r\partial_{r}-\frac{1}{2}\right)\gamma^{1}-m\pm ir\gamma^{3}M\right]\psi^{(\pm)}_{<}(r)=0\,. \tag{B.38}\]
In the limit of interest \(-\omega\simeq M\ll g/r_{0}\), the above reduces to
\[\left[i\left(r\partial_{r}-\frac{1}{2}\right)\gamma^{1}-m+r\frac{g}{r_{0}}\gamma^{0}\right]\psi^{(\pm)}_{<}(r)\simeq 0\,, \tag{B.39}\]
whose solutions satisfying standard boundary conditions (3.25) for \(r\to 0\) are
\[\psi^{(\pm)}_{<}(r)\propto\begin{pmatrix}rJ_{m+\frac{1}{2}}\left(\dfrac{gr}{r_{0}}\right)\\ rJ_{m-\frac{1}{2}}\left(\dfrac{gr}{r_{0}}\right)\end{pmatrix}\,. \tag{B.40}\]
The important conclusion for us is that the ratio between the two components at \(r=r_{0}\) is independent of \(\omega\) and \(M\) (cf. (3.22) for the notation):
\[\frac{\chi^{(\pm)}_{<}(r)}{\xi^{(\pm)}_{<}(r)}\simeq\frac{J_{m+\frac{1}{2}}\left(g\right)}{J_{m-\frac{1}{2}}\left(g\right)}\equiv R_{g}\,. \tag{B.41}\]
A different potential for \(r<r_{0}\) might change the value of \(R_{g}\), which would however remain approximately independent of \(\omega\) and \(M\). For \(m=1\) we have \(R_{g}=1/g-\cot(g)\).
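The closed form for \(m=1\) is the standard half-integer Bessel identity; a quick check (assuming Python with SciPy available):

```python
# Check R_g = J_{m+1/2}(g) / J_{m-1/2}(g) = 1/g - cot(g) for m = 1, cf. (B.41).
import numpy as np
from scipy.special import jv

for g in (0.3, 0.9, 1.5):
    Rg = jv(1.5, g) / jv(0.5, g)
    print(g, Rg, 1/g - 1/np.tan(g))   # the two columns agree
```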
The solution \(\psi^{(\pm)}(t,r)=e^{-i\omega t}\psi^{(\pm)}_{>}(r)\) for \(r>r_{0}\) is obtained as in appendix B.2 and can be found by replacing \(\nu\to i\tilde{\nu}\) in (B.24) and (B.25). The boundary condition arises from the requirement of continuity at \(r=r_{0}\), which leads to
\[\frac{\chi^{(\pm)}_{>}(r_{0})}{\xi^{(\pm)}_{>}(r_{0})}=\frac{\chi^{(\pm)}_{<}(r_{0})}{\xi^{(\pm)}_{<}(r_{0})}\simeq R_{g}\,. \tag{B.42}\]
For \(\omega g\ll 1/r_{0}\),63 we can express the outer solutions \(\chi^{(\pm)}_{>}(r_{0})\) and \(\xi^{(\pm)}_{>}(r_{0})\) using the small \(r\) mode expansion (3.25) and write the boundary condition (B.42) as
Footnote 63: This requirement arises since the expansion (3.25) holds for \(\omega r\ll 1\).
\[\frac{\beta}{\alpha}=\frac{gR_{g}-m-i\tilde{\nu}}{g-mR_{g}-i\tilde{\nu}R_{g}}r_{0}^{-i2\tilde{\nu}}\,, \tag{B.43}\]
which in particular implies \(|\alpha|=|\beta|\). Proceeding as in the previous section, we find that the ratio \(\beta^{(\pm)}/\alpha^{(\pm)}\) for the solution \(\psi_{>}^{(\pm)}(r)\) at \(\omega=-M\) reads
\[\frac{\beta^{(\pm)}}{\alpha^{(\pm)}}=\frac{(m\pm i\tilde{\nu})\,\Gamma(-2i \tilde{\nu})}{g\Gamma(2i\tilde{\nu})}(2Mg)^{2i\tilde{\nu}}\,.\] (B.44)
We conclude that the condition to have a bound state with energy \(\omega=-M\) reads
\[(2Mgr_{0})^{2i\tilde{\nu}}=e^{2i\tilde{\nu}\eta^{(\pm)}}\,,\] (B.45)
where we defined the following real quantity for convenience
\[\begin{split}\eta^{(\pm)}&=\frac{1}{2i\tilde{\nu} }\log\left[\frac{g\left(m-gR_{g}+i\tilde{\nu}\right)\Gamma(2i\tilde{\nu})}{ \left(m\pm i\tilde{\nu}\right)\left(g-mR_{g}-i\tilde{\nu}R_{g}\right)\Gamma(-2 i\tilde{\nu})}\right]\\ &=\frac{R_{g}+1}{2m(R_{g}-1)}\mp\frac{1}{2m}-2\gamma_{E}+O\left( \tilde{\nu}^{2}\right)\,,\end{split}\] (B.46)
where \(\gamma_{E}\) is the Euler-Mascheroni constant. For us it is only relevant that \(\eta^{(\pm)}\) does not depend on \(\tilde{\nu}\) for \(\tilde{\nu}\ll 1/2\). The condition (B.45) can be conveniently written as
\[\tilde{\nu}\log(2Mgr_{0})=\tilde{\nu}\eta^{(\pm)}-\pi n\,,\qquad n=1,2,\ldots\,.\] (B.47)
where we excluded \(n\leq 0\) since the approximations leading to (B.43) break down for \(Mr_{0}\gtrsim O(1)\).64 The result (B.47) agrees with the classic analysis by Pomeranchuk and Smorodinsky [21].
Footnote 64: In practice, an additional solution for \(M\simeq 1/r_{0}\), intuitively corresponding to \(n=0\) in (B.47), may exist for different potentials at \(r<r_{0}\), somewhat similarly to the negative double-trace deformation discussed in section 3.3.
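Solving (B.47) for the mass gives an exponential tower of critical values; a minimal numerical sketch (plain Python; the values of \(g\), \(r_{0}\), \(\tilde{\nu}\) and the placeholder for \(\eta^{(\pm)}\) are illustrative, since \(\eta^{(\pm)}\) depends on \(R_{g}\)):

```python
# Masses at which successive bound states reach omega = -M, from (B.47):
#   nu_t * log(2*M*g*r0) = nu_t*eta - pi*n  =>  M_n = exp(eta - pi*n/nu_t)/(2*g*r0).
from math import exp, pi

g, r0, nu_t = 1.2, 1.0, 0.05
eta = 0.5            # placeholder: eta^(pm) is O(1) and nu_t-independent for nu_t << 1/2
for n in (1, 2, 3):
    M_n = exp(eta - pi * n / nu_t) / (2 * g * r0)
    print(n, M_n)    # successive M_n are suppressed by exp(-pi/nu_t)
```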
### Massless Dirac equation for supercritical charge
In this appendix we study the massless Dirac equation in the presence of a supercritical Coulomb potential (B.37):
\[\left[i\left(\not{\partial}-i\not{A}\right)-m\right]\psi(t,r)=0\,,\] (B.48)
where \(\psi(t,r)=e^{-i\omega t}\psi(r)\). The solution for \(r<r_{0}\) coincides with (B.40) (at small frequencies). For \(r>r_{0}\) the equation in Fourier space coincides with the Euclidean equation (B.6) up to the replacement \(\omega\to-i\omega\). Therefore, writing the spinor as in (B.7), we obtain the following equation for \(\psi_{2}\):
\[r^{2}\partial_{r}^{2}\psi_{2}+\left[\frac{1}{4}-m^{2}+ir\omega+(g+r\omega)^{2}\right]\psi_{2}=0\,,\] (B.49)
while \(\psi_{1}\) is given by
\[\psi_{1}=\frac{1}{2m}\left[\left(1-2ig-2ir\omega\right)\psi_{2}-2r\partial_{r }\psi_{2}\right]\,.\] (B.50)
We first show that (B.48) admits infinitely many resonances in the negative energy continuum. To this aim, we use the fact that the solution with purely outgoing boundary conditions for \(\text{Re}(\omega)<0\) reads
\[\psi(t,r)\propto e^{-i\omega t}\left(\tfrac{1}{2m}W_{\tfrac{1}{2}+ig,i\tilde{ \nu}}(-2ir\omega)+\tfrac{1}{2}W_{-\tfrac{1}{2}+ig,i\tilde{\nu}}(-2ir\omega) \right)\qquad\text{for }r>r_{0}\,,\] (B.51)
which behaves as \(\psi\sim e^{-i\omega t+i\omega r}\) for \(|r\omega|\gg 1\). From the expansion (A.11), we find that the ratio between the small \(\omega r\) modes (3.64) corresponding to the solution (B.51) is given by
\[\frac{\beta}{\alpha}=(-2i\omega)^{2i\tilde{\nu}}\frac{i\Gamma\left(-2i\tilde{ \nu}\right)\Gamma\left(1-ig+i\tilde{\nu}\right)}{m\Gamma\left(2i\tilde{\nu} \right)\Gamma\left(-ig-i\tilde{\nu}\right)}\,.\] (B.52)
Using this expression in the boundary condition (B.43) and working at leading order in \(\tilde{\nu}\ll 1\), we obtain the following condition on the frequency \(\omega\):
\[\left(-2c\,\omega r_{0}e^{-i\tilde{\gamma}}\right)^{2i\tilde{\nu}}=1\,,\] (B.53)
where we defined
\[\tilde{\gamma}=\frac{\pi}{e^{2\pi m}-1}\,,\] (B.54)
and \(c\) is an \(O(1)\) positive number given by
\[c=\exp\left[\frac{1+R_{g}}{2m(R_{g}-1)}+\frac{\psi(1+im)+\psi(1-im)}{2}+2 \gamma_{E}\right]\,.\] (B.55)
For \(m>\frac{\log(2)}{2\pi}\approx 0.11\),65 we have \(0<\tilde{\gamma}<\pi\) and (B.53) admits the following infinite family of solutions with \(\text{Re}(\omega)<0\):
Footnote 65: This restriction applies in all the physical cases, since \(m_{0}\geq 1/2\) for \(d\geq 3\).
\[\omega_{n}=-\frac{1}{2cr_{0}}e^{i\tilde{\gamma}-\pi n/\tilde{\nu}}\,,\qquad n =1,2,\ldots\,,\] (B.56)
where the restriction on \(n\) arises from the requirement \(|\omega_{n}r_{0}|\ll 1\), which is needed in order to be able to use the expansion (3.64) at \(r=r_{0}\). The solutions (B.56) are resonances with \(\text{Re}(\omega_{n})\sim\text{Im}(\omega_{n})=-\exp\left(-\pi n/\tilde{\nu} \right)/r_{0}\) and correspond to poles of the retarded Green's function analytically continued to the second sheet. Note that by increasing \(m\) the imaginary part becomes smaller. For \(m=1/2\) and \(m=1\), as appropriate for the \(\ell=0\) modes in \(d=3\) and \(d=4\), we find a numerically small imaginary part
\[\text{Im}(\omega_{n})\simeq\text{Re}(\omega_{n})\times\begin{cases}0.14&\text {for }m=\tfrac{1}{2}\\ 0.006&\text{for }m=1\,.\end{cases}\] (B.57)
The result (B.56) was previously obtained in [30].
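The numbers quoted in (B.57) are simply \(\tan\tilde{\gamma}\): writing \(\omega_{n}=-\frac{1}{2cr_{0}}e^{i\tilde{\gamma}-\pi n/\tilde{\nu}}\) gives \(\text{Im}(\omega_{n})/\text{Re}(\omega_{n})=\tan\tilde{\gamma}\) with \(\tilde{\gamma}\) from (B.54). A one-line check (plain Python):

```python
# Check (B.57): for omega_n = -(1/(2 c r0)) exp(i*gamma_t - pi*n/nu_t),
# Im(omega_n)/Re(omega_n) = tan(gamma_t), gamma_t = pi/(exp(2*pi*m) - 1), cf. (B.54).
from math import exp, pi, tan

for m in (0.5, 1.0):
    gamma_t = pi / (exp(2*pi*m) - 1)
    print(m, tan(gamma_t))   # ~0.14 for m = 1/2 and ~0.006 for m = 1
```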
We now find the scattering wave-functions. The most general solution of (B.49) is
\[\psi_{2} =\beta\,\frac{2i\,(g-\tilde{\nu})}{g+im-\tilde{\nu}}(-2i\omega)^{- \frac{1}{2}-i\tilde{\nu}}M_{-\frac{1}{2}+ig,i\tilde{\nu}}(-2ir\omega)\] (B.58) \[+\alpha\frac{2im}{g+im-\tilde{\nu}}(2i\omega)^{-\frac{1}{2}+i \tilde{\nu}}M_{\frac{1}{2}-ig,-i\tilde{\nu}}(2ir\omega)\,,\]
where \(M_{x,y}(z)\) is the Whittaker \(M\) function and \(\psi_{1}\) is found from (B.50):
\[\psi_{1} =\beta\frac{2m}{g+im-\tilde{\nu}}(-2i\omega)^{-\frac{1}{2}-i \tilde{\nu}}M_{\frac{1}{2}+ig,i\tilde{\nu}}(-2ir\omega)\] (B.59) \[+\alpha\frac{2g-2\tilde{\nu}}{g+im-\tilde{\nu}}(2i\omega)^{-\frac {1}{2}+i\tilde{\nu}}M_{-\frac{1}{2}-ig,-i\tilde{\nu}}(2ir\omega)\,.\]
The coefficients \(\alpha\) and \(\beta\) are chosen such that they precisely coincide with those in (3.64), as can be seen using
\[M_{x,y}(z)\overset{z\to 0}{\sim}z^{\frac{1}{2}+y}\,.\] (B.60)
The ratio \(\beta/\alpha\) is thus determined by the boundary condition (B.43). In the following we determine the absolute value \(|\alpha|=|\beta|\) as well by demanding the orthonormality condition
\[\int dr\sqrt{gg^{rr}}\psi_{\omega}^{\dagger}(r)\psi_{\omega^{\prime}}(r)=(2\pi )\delta(\omega-\omega^{\prime})\,,\] (B.61)
where \(\psi_{\omega}\) denotes the wave-function at frequency \(\omega\).
To compute the integral (B.61), we use that the Dirac equation (B.48) implies
\[(\omega-\omega^{\prime})\psi_{\omega}^{\dagger}(r)\psi_{\omega^{\prime}}(r)= \frac{i}{\sqrt{gg^{rr}}}\partial_{r}\left[\sqrt{gg^{rr}}\,\bar{\psi}_{\omega}( r)\gamma^{1}\psi_{\omega^{\prime}}(r)\right]\,.\] (B.62)
(B.62) and the continuity of the wave-functions let us express the integral (B.61) as a boundary term
\[\int dr\sqrt{gg^{rr}}\psi_{\omega}^{\dagger}(r)\psi_{\omega^{ \prime}}(r) =i\lim_{r\to\infty}\frac{\bar{\psi}_{\omega}(r)\gamma^{1}\psi_{ \omega^{\prime}}(r)}{r(\omega-\omega^{\prime})}\] (B.63) \[=\lim_{r\to\infty}\frac{\psi_{1}^{\dagger}(r)\psi_{1}^{\prime}(r) -\psi_{2}^{\dagger}(r)\psi_{2}^{\prime}(r)}{2ir(\omega^{\prime}-\omega)}\,,\]
where we used (B.7) and we have to use \(\omega^{\prime}\) in the primed spinors. To evaluate the limit, we use the following expansion of the Whittaker function
\[M_{x,y}(z)\overset{z\to\infty}{\sim}\frac{e^{z/2}z^{-x}\Gamma(2y+1)}{\Gamma \left(1/2-x+y\right)}\left[1+O\left(\frac{1}{z}\right)\right]+\frac{e^{-\frac{ z}{2}}(-1)^{x-y+\frac{3}{2}}z^{x}\Gamma(2y+1)}{\Gamma\left(1/2+x+y\right)}\left[1+O \left(\frac{1}{z}\right)\right]\,,\] (B.64)
from which we obtain
\[\psi_{1}(r)\overset{r\to\infty}{\sim}\alpha\,r^{1/2+ig}e^{ir\omega }(2i\omega)^{ig-i\tilde{\nu}}A(\omega)\left[1+O\left(\frac{1}{r}\right)\right]\,,\] (B.65) \[\psi_{2}(r)\overset{r\to\infty}{\sim}i\beta\,r^{1/2-ig}e^{-ir \omega}(-2i\omega)^{-ig+i\tilde{\nu}}B(\omega)\left[1+O\left(\frac{1}{r} \right)\right]\,,\] (B.66)
where
\[A(\omega) =\frac{\beta}{\alpha}\frac{2m\Gamma\left(1+2i\tilde{\nu}\right)}{ \Gamma\left(1+ig+i\tilde{\nu}\right)\left(g+im-\tilde{\nu}\right)}-i\frac{2(2i \omega)^{2i\tilde{\nu}}\Gamma\left(1-2i\tilde{\nu}\right)}{\Gamma\left(ig-i \tilde{\nu}\right)\left(g+im-\tilde{\nu}\right)}\,,\] (B.67) \[B(\omega) =\frac{\alpha}{\beta}\frac{2m\Gamma\left(1-2i\tilde{\nu}\right)}{ \Gamma\left(1-ig-i\tilde{\nu}\right)\left(g+im-\tilde{\nu}\right)}+i\frac{2(-2 i\omega)^{-2i\tilde{\nu}}\Gamma\left(1+2i\tilde{\nu}\right)}{\Gamma\left(i\tilde{\nu}- ig\right)\left(g+im-\tilde{\nu}\right)}\,.\]
Recall that the ratio \(\beta/\alpha\) is an \(\omega\)-independent phase given by (B.43). We notice that \(A(\omega)\) and \(B(\omega)\) are log-periodic functions of \(\omega\)
\[A(\omega)=A(\omega e^{\pm\pi/\tilde{\nu}})\,,\qquad B(\omega)=B(\omega e^{\pm \pi/\tilde{\nu}})\,,\] (B.68)
and that \(|A(\omega)|^{2}=|B(\omega)|^{2}\). We therefore find
\[\lim_{r\to\infty}\frac{\psi_{1}^{\dagger}(r)\psi_{1}^{\prime}(r) -\psi_{2}^{\dagger}(r)\psi_{2}^{\prime}(r)}{2ir(\omega^{\prime}-\omega)}\\ =\lim_{r\to\infty}e^{\text{sgn}(\omega)\pi(\tilde{\nu}-g)}\frac{|A (\omega)|^{2}|\alpha|^{2}e^{ir(\omega^{\prime}-\omega)}-|B(\omega)|^{2}|\beta| ^{2}e^{-ir(\omega^{\prime}-\omega)}}{2i(\omega^{\prime}-\omega)}\,,\] (B.69)
where we kept only the leading terms in the expansion for \(\omega\to\omega^{\prime}\), as the limit clearly vanishes (in the distributional sense) when \(\omega\neq\omega^{\prime}\). Finally, using \(|A(\omega)|^{2}=|B(\omega)|^{2}\) and \(|\alpha|^{2}=|\beta|^{2}\), we get
\[\lim_{r\to\infty}\frac{\psi_{1}^{\dagger}(r)\psi_{1}^{\prime}(r) -\psi_{2}^{\dagger}(r)\psi_{2}^{\prime}(r)}{2ir(\omega^{\prime}-\omega)} =e^{\text{sgn}(\omega)\pi(\tilde{\nu}-g)}|A(\omega)|^{2}|\beta|^{ 2}\lim_{r\to\infty}\frac{\sin\left[r(\omega^{\prime}-\omega)\right]}{(\omega^ {\prime}-\omega)}\] (B.70) \[=e^{\text{sgn}(\omega)\pi(\tilde{\nu}-g)}|A(\omega)|^{2}|\beta|^{ 2}\pi\delta(\omega-\omega^{\prime})\,,\]
and, from (B.63),
\[\int dr\sqrt{gg^{rr}}\psi_{\omega}^{\dagger}(r)\psi_{\omega^{\prime}}(r)=\pi e ^{\text{sgn}(\omega)\pi(\tilde{\nu}-g)}|A(\omega)|^{2}|\beta|^{2}\delta(\omega -\omega^{\prime})\,.\] (B.71)
The normalization condition (B.61) is thus satisfied by setting
\[|\beta|^{2}=\frac{2e^{-\text{sgn}(\omega)\pi(\tilde{\nu}-g)}}{|A(\omega)|^{2}}\,.\] (B.72)
Our result has an important consequence. Plugging (B.72) into the solutions (B.58) and (B.59) and recalling the property (B.68), we infer
\[\psi_{\omega}(re^{-\pi n/\tilde{\nu}})\ \simeq e^{-\pi n/(2\tilde{\nu})}\psi_{ \omega e^{-\pi n/\tilde{\nu}}}(r)\,,\qquad n\in\mathds{Z}\,.\] (B.73)
In practice, in our calculations we assumed \(\omega r_{0}\ll 1\), and thus (B.73) holds only as long as \(\omega r_{0}\), \(\omega e^{-\pi n/\tilde{\nu}}r_{0}\ll 1\). As we explain in section 3.4, (B.73) has important implications for the screening of supercritical lines when \(\tilde{\nu}\ll 1\).
## Vector boson instability
In this section, we study perturbative instabilities for the vector bosons in a \(SU(2)\) gauge theory, in the presence of a Wilson line in a \((2s+1)\)-dimensional representation. The result was originally derived in [73, 74].
According to the discussion in section 4, we study the fluctuations for the gauge field in the presence of a background Coulomb potential in the third direction:
\[A_{0}^{3}=\frac{g_{YM}^{2}s}{4\pi r}\,. \tag{C.1}\]
The equations of motion deriving from the action (4.1) are
\[\nabla^{\mu}F^{a}_{\mu\nu}+\varepsilon_{abc}A^{b}_{\mu}g^{\mu\sigma}F^{c}_{\sigma\nu}=0\,, \tag{C.2}\]
where as in sections 2 and 3 we work on AdS\({}_{2}\times S^{2}\), with metric
\[ds^{2}=\frac{dt^{2}-dr^{2}}{r^{2}}-d\Omega_{2}^{2}\,. \tag{C.3}\]
We are interested in the equation for the fluctuations \(A_{\mu}^{1}\) and \(A_{\mu}^{2}\). It is convenient to define a charged \(W\)-boson as
\[W_{\mu}=A_{\mu}^{1}+iA_{\mu}^{2}\,. \tag{C.4}\]
The linearization of the equation of motion (113) reads
\[D^{\mu}\left(D_{\mu}W_{\nu}-D_{\nu}W_{\mu}\right)-iW^{\mu}F_{\mu\nu}=0\,, \tag{C.5}\]
where we defined an Abelian covariant derivative as
\[D_{\mu}W_{\nu}=(\nabla_{\mu}+iA_{\mu}^{3})W_{\nu}\,,\qquad D_{\mu}D_{\nu}W_{\rho}=(\nabla_{\mu}+iA_{\mu}^{3})D_{\nu}W_{\rho}\,, \tag{C.6}\]
and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}^{3}-\partial_{\nu}A_{\mu}^{3}\) is the Abelian field strength associated with the Coulomb potential (C.1). (C.5) is invariant under the linearized gauge transformations66
Footnote 66: To check this, one needs to use \(\nabla_{\mu}F^{\mu\nu}=0\).
\[\delta W_{\mu}=D_{\mu}\lambda-\lambda_{3}W_{\mu}\,,\qquad\delta A_{\mu}^{3}=-\partial_{\mu}\lambda_{3}\,, \tag{C.7}\]
where \(\lambda=\lambda_{1}+i\lambda_{2}\) and \(\lambda_{3}\) are infinitesimal parameters. \(\lambda\) carries the same charge as \(W\), while \(\lambda_{3}\) is neutral.
To proceed, we decompose \(W_{\mu}=(W_{a},W_{i})\), where \(a,b,c,\ldots\) denote AdS\({}_{2}\) indices and \(i,j,k,\ldots\) the \(S^{2}\) indices. (C.5) then explicitly reads
\[D_{a}D^{a}W^{i}+\nabla_{k}\nabla^{k}W^{i}-{\cal R}^{i}_{\ j}W^{j}-\nabla^{i}\left(D_{a}W^{a}+\nabla_{j}W^{j}\right)=0\,, \tag{C.8}\] \[D_{b}(D^{b}W^{a}-D^{a}W^{b})+\nabla_{i}\nabla^{i}W^{a}-iW^{b}{F_{b}}^{\ a}-D^{a}\nabla_{i}W^{i}=0\,, \tag{C.9}\]
where \({\cal R}^{\mu}_{\,\nu}\) is the Ricci tensor. In the following we choose the gauge
\[\nabla_{i}W^{i}=0\,, \tag{C.10}\]
which leaves a residual gauge freedom \(\delta W_{a}=D_{a}\lambda\) with \(\nabla_{i}\lambda=0\). Using (C.10) in (C.8) we obtain \(\nabla_{i}(D_{a}W^{a})=0\). We thus can use the residual freedom to further impose
\[D_{a}W^{a}=0\,. \tag{C.11}\]
The conditions (C.10) and (C.11) ensure that (C.8) and (C.9) decouple. Note that (C.11) still leaves a residual gauge freedom of the form
\[\delta W_{a}=D_{a}\lambda\quad\text{for $\lambda$ such that $D_{a}D^{a}\lambda=0$}\,. \tag{C.12}\]
This will be important in what follows.
The analysis of the first equation (C.8) is straightforward. The condition (C.10) is compatible with setting
\[W_{i}=\sqrt{g_{S^{2}}}\,\varepsilon_{ij}\nabla^{j}W^{T}\,, \tag{C.13}\]
where \(W^{T}\) is a scalar. Then the \(W^{T}\) equation reduces to the Klein-Gordon equation in a Coulomb field
\[(D_{a}D^{a}+\nabla_{i}\nabla^{i})W^{T}=0\,. \tag{C.14}\]
The analysis of section 2.1 lets us conclude that the defect scaling dimensions are given by
\[\Delta_{\ell}=\frac{1}{2}\pm\frac{1}{2}\sqrt{1+4\ell(\ell+1)-\frac{g_{YM}^{4}s^{2}}{4\pi^{2}}}\qquad\text{for $\ell=1,2,\ldots$}\,, \tag{C.15}\]
where the \(\ell=0\) mode is excluded since it does not contribute to (C.13).
To analyze (C.9) we set \(W_{a}(t,r)=e^{-i\omega t}w_{a}(r)\) and solve the condition (C.11) in terms of the components \(w_{a}=(w_{0},w_{r})\)
\[w_{0}=i\frac{w_{r}}{\omega-A_{0}^{3}}\,. \tag{C.16}\]
Decomposing \(W_{r}\) into spherical harmonics
\[W_{r}=e^{-i\omega t}\sum_{\ell,m}Y_{\ell,m}(\hat{n})w_{\ell,m}(r)\,, \tag{C.17}\]
we obtain
\[r^{2}\partial_{r}^{2}w_{\ell,m}+\frac{2A_{0}^{3}}{A_{0}^{3}-\omega}r\partial_{r}w_{\ell,m}+\left[r^{2}\left(\omega-A_{0}^{3}\right)^{2}-\ell(\ell+1)\right]w_{\ell,m}=0\,. \tag{C.18}\]
Looking for solutions in the form \(w_{\ell,m}\sim r^{\Delta_{\ell}-1}\), we again find the same \(\Delta_{\ell}\) as in (C.15). In this polarization (in the AdS\({}_{2}\) directions) the \(\ell=0\) mode solution is excluded because it is equivalent to a shift of the form (C.12).
From the result (C.15) we conclude that the first instability is found for the \(\ell=1\) modes at
\[s=\frac{6\pi}{g_{YM}^{2}}\,. \tag{C.19}\]
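For completeness, the threshold follows by asking when the square root in (C.15) first turns imaginary for some mode:

\[1+4\ell(\ell+1)-\frac{g_{YM}^{4}s^{2}}{4\pi^{2}}<0\quad\Longleftrightarrow\quad s>\frac{2\pi\sqrt{1+4\ell(\ell+1)}}{g_{YM}^{2}}\,,\]

which is smallest over the allowed modes at \(\ell=1\), where \(\sqrt{1+8}=3\) and hence \(s=6\pi/g_{YM}^{2}\).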
## Details on Wilson lines in large \(N_{f}\) QED\({}_{3}\)
### The saddle-point equations
In this section we provide details on the calculation of the gauge field sourced by a Wilson line in large \(N_{f}\) QED\({}_{3}\), with and without a Chern-Simons term. For the sake of generality we consider right away the action with a Chern-Simons term (108). We thus want to extremize the following Euclidean effective action for the gauge field
\[S_{q}[A]=-2N_{f}\text{Tr}\left[\log\left(\not{\partial}-i\not{A}\right)\right]-i\frac{k}{4\pi}\int d^{3}x\varepsilon^{\mu\nu\rho}A_{\mu}\partial_{\nu}A_{\rho}+iq\int d\tau A_{0}\,. \tag{D.1}\]
To proceed we consider the ansatz
\[F_{\tau r}=i\frac{E}{r^{2}}\qquad\text{and}\qquad A_{\theta}=b=\text{const.}\,. \tag{D.2}\]
The ansatz (D.2) is dictated by conformal invariance; for \(k=0\) parity demands \(b=0\). It is further convenient to exploit Weyl invariance to map the theory to AdS\({}_{2}\times S^{1}\). The one-loop fermion determinant naturally decomposes into a sum over the contributions of the AdS\({}_{2}\) KK modes, labeled by the angular momentum \(j\in\frac{1}{2}+\mathds{Z}\), as in section 3.1:
\[\text{Tr}\left[\log\left(\not{\partial}-i\not{A}\right)\right]_{\text{AdS}_{2}\times S^{1}}=\text{Vol}(\text{AdS}_{2})\sum_{j\in\frac{1}{2}+\mathds{Z}}\Sigma_{j}(E,b)\,. \tag{D.3}\]
We defined \(\Sigma_{j}\) to be proportional to the one-loop determinant for the AdS\({}_{2}\) Dirac operator in a constant electric field
\[\text{Vol}(\text{AdS}_{2})\Sigma_{j}(E,b)=\text{Tr}\left[\log\left(\not{\partial}-i\not{A}+\tilde{m}_{j}\right)\right]_{\text{AdS}_{2}}\,, \tag{D.4}\]
where the KK masses receive a contribution from the holonomy
\[\tilde{m}_{j}=j+b\,. \tag{D.5}\]
We factored out explicitly the AdS\({}_{2}\) volume in (D.3) for future convenience. We will discuss how to explicitly evaluate \(\Sigma_{j}\) and the infinite sum in (D.3) in the next section.
To conveniently write the last two terms in (D.1) we parametrize AdS\({}_{2}\) using global coordinates
\[ds^{2}_{\text{AdS}_{2}}=d\sigma^{2}+\sinh^{2}\sigma d\phi^{2}\,, \tag{D.6}\]
and we choose the following gauge for the AdS\({}_{2}\) gauge field
\[A_{\sigma}=0\,,\quad A_{\phi}=i(\cosh\sigma-1)E\quad\implies\quad F_{\sigma\phi}=iE\sinh\sigma\,. \tag{D.7}\]
It can be checked that (D.7) is indeed equivalent to the electric field in (D.2). Then using (D.7) we obtain
\[-i\frac{k}{4\pi}\int d^{3}x\,\varepsilon^{\mu\nu\rho}A_{\mu}\partial_{\nu}A_{\rho}+iq\int dx^{\mu}A_{\mu}=\text{Vol}(\text{AdS}_{2})\left(k\,bE-q\,E\right)\,.\] (D.8)
We used that the AdS\({}_{2}\) volume in global coordinates is given by
\[\text{Vol}(\text{AdS}_{2})=2\pi\int_{0}^{\sigma_{c}}\sinh\sigma\,d\sigma=2\pi\left[\frac{e^{\sigma_{c}}}{2}-1+O\left(e^{-\sigma_{c}}\right)\right]\,,\] (D.9)
where we introduced a (large) radial cutoff \(\sigma_{c}\). As explained in [50], since \(\text{Vol}(\partial\text{AdS}_{2})=\pi e^{\sigma_{c}}+O\left(e^{-\sigma_{c}}\right)\), the terms proportional to \(e^{\sigma_{c}}\) in the action can be absorbed into a defect cosmological constant counterterm and thus we can replace \(\text{Vol}(\text{AdS}_{2})\) with its well known regulated expression \(\text{Vol}(\text{AdS}_{2})|_{\text{reg}}=-2\pi\)[139, 140].
Overall, from (D.3) and (D.8) we obtain
\[\frac{S_{q}[A]}{\text{Vol}(\text{AdS}_{2})}=-2N_{f}\sum_{j\in\frac{1}{2}+ \mathds{Z}}\Sigma_{j}(E,b)+k\,bE-q\,E\,,\] (D.10)
from which we obtain the saddle-point equations
\[2\sum_{j\in\frac{1}{2}+\mathds{Z}}\frac{\partial\Sigma_{j}(E,b)} {\partial E} =\frac{k}{N_{f}}b-\frac{q}{N_{f}}\,,\] (D.11) \[2\sum_{j\in\frac{1}{2}+\mathds{Z}}\frac{\partial\Sigma_{j}(E,b)} {\partial b} =\frac{k}{N_{f}}E\,.\] (D.12)
### The fluctuation determinant via zeta function regularization
In this appendix we explain how to evaluate the fluctuation determinant (D.4), as well as its derivatives in (D.11) and (D.12).
We start by commenting on two important properties of the sum in (D.11) as a function of the holonomy \(b\). From the definitions (D.4) and (D.5) it follows that
\[\Sigma_{j}(E,b\pm n)=\Sigma_{j\pm n}(E,b)\quad\text{for }n\in\mathds{Z}\,.\] (D.13)
This implies that the sum
\[\sum_{j\in\frac{1}{2}+\mathds{Z}}\Sigma_{j}(E,b)\,,\] (D.14)
is a periodic function of \(b\) with unit period, in agreement with the discussion on integral holonomies in section 5. Additionally, we will soon see that \(\Sigma(E,b)\) is an even function of both \(E\) and \(b\). This implies in particular
\[\sum_{j\in\frac{1}{2}+\mathds{Z}}\frac{\partial\Sigma_{j}(E,b)}{\partial b} \bigg{|}_{b=0}=\sum_{j\in\frac{1}{2}+\mathds{Z}}\frac{\partial\Sigma_{j}(E,b) }{\partial E}\bigg{|}_{E=0}=0\,.\] (D.15)
This ensures that (D.12) is solved by \(b=0\) for \(k=0\), as expected from parity invariance. In general, given a solution \((E,b)\) of (D.11) and (D.12) for certain values \((k,q)\), this implies that \((-E,b)\) is a solution for \((-k,-q)\), \((E,-b)\) is a solution for \((-k,q)\) and \((-E,-b)\) solves the equations for \((k,-q)\).
To proceed, we express the determinant of the Dirac operator in a constant electric field [111, 141] as an integral,
\[\Sigma_{j}(E,b)=\int_{-\infty}^{\infty}d\nu\,\mu_{E}(\nu)\log\left(\nu^{2}-E^{2}+\tilde{m}_{j}^{2}\right)\,, \tag{D.16}\]
where \(\mu_{E}(\nu)\) is the appropriate hyperbolic spectral density67
Footnote 67: In [141] the determinant is given for an Euclidean electric field, in which case the spectral density receives an additional discrete contribution; it can be checked that upon Wick rotating the electric field to be real in Lorentzian signature, as in (D.2), the only effect of the discrete contribution is to introduce the principal value prescription in (D.16).
\[\mu_{E}(\nu)=P\left(\frac{\nu\sinh(2\pi\nu)}{4\pi\left[\cosh(2\pi\nu)-\cosh(2\pi E)\right]}\right)\,; \tag{D.17}\]
the prefix P specifies that the pole at \(\nu=E\) in the spectral density should be integrated according to the principal value prescription. (D.16) makes it clear that the effective action develops an imaginary part for supercritical electric fields, i.e. when \(E^{2}>\tilde{m}_{j}^{2}\) for some \(j\). Below we will focus on subcritical fields, for which (D.16) is real.
Even upon taking derivatives of (D.16) with respect to \(E\) and \(b\), both the integration over \(\nu\) and the sums in (D.11) and (D.12) do not converge. We therefore need to regulate the calculation. We decided to use zeta function regularization. This amounts to rewriting the fluctuation determinant in (D.3) as
\[\frac{\mathrm{Tr}\left[\log\left(\not{\partial}-i\not{A}\right)\right]_{\mathrm{AdS}_{2}\times S^{1}}}{\mathrm{Vol}(\mathrm{AdS}_{2})}=-\lim_{s\to 0}\frac{d}{ds}\sum_{j\in\frac{1}{2}+\mathds{Z}}\Sigma_{j}^{(s)}(E,b)\,, \tag{D.18}\]
where
\[\Sigma_{j}^{(s)}(E,b)=\int_{-\infty}^{\infty}d\nu\,\frac{\mu_{E}(\nu)}{\left(\nu^{2}-E^{2}+\tilde{m}_{j}^{2}\right)^{s}}\,. \tag{D.19}\]
The idea then is to compute the sum on the right-hand side of (D.18) for sufficiently large \(s\), so that both the integration and the sum converge, and then analytically continue the result.
In practice (D.18) and its derivatives can only be evaluated numerically. We sketch below the strategy to evaluate the fluctuation determinant itself; the derivatives are computed analogously.
First, we deal with the integral over \(\nu\). To this aim, we notice that the spectral density in (D.17) admits the following asymptotic expansion for large \(\nu\)
\[\mu_{E}(\nu)\sim\frac{|\nu|}{4\pi}+O\left(e^{-2\pi|\nu|}\right)\quad\text{for }|\nu|\to\infty\,. \tag{D.20}\]
We therefore define a subtracted spectral density
\[\tilde{\mu}_{E}(\nu)=\mu_{E}(\nu)-\frac{|\nu|}{4\pi}\,, \tag{D.21}\]
which decays exponentially for \(\nu\to\infty\). We then separate (D.19) into two contributions
\[\Sigma_{j}^{(s)}(E,b)=\Sigma_{j}^{(s,1)}(E,b)+\Sigma_{j}^{(s,2)}(E,b)\,, \tag{D.22}\]
where
\[\Sigma_{j}^{(s,1)}(E,b) =2\int_{0}^{\infty}d\nu\frac{\nu}{4\pi\left(\nu^{2}-E^{2}+\tilde{m}_{j}^{2}\right)^{s}}=-\frac{\left(\tilde{m}_{j}^{2}-E^{2}\right)^{1-s}}{4\pi(1-s)}\,, \tag{D.23}\] \[\Sigma_{j}^{(s,2)}(E,b) =2\int_{0}^{\infty}d\nu\frac{\tilde{\mu}_{E}(\nu)}{\left(\nu^{2}-E^{2}+\tilde{m}_{j}^{2}\right)^{s}}\,. \tag{D.24}\]
We evaluated the integral in (D.23) by analytically continuing the result for \(s>1\). Even though we were not able to perform the integration in (D.24) in closed form, the integral converges for arbitrary \(s\) since \(\tilde{\mu}_{E}(\nu)=O(e^{-2\pi|\nu|})\) for \(\nu\to\infty\).
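The exponential falloff just used can be verified directly; a minimal sketch (plain Python/NumPy; the sample values are ours) evaluating the subtracted density (D.21) away from the pole at \(\nu=E\):

```python
# Subtracted spectral density (D.21): mu_E(nu) - |nu|/(4*pi) decays as
# exp(-2*pi*|nu|), cf. the asymptotics (D.20).  Evaluated for nu != +-E.
import numpy as np

def mu_E(nu, E):                 # spectral density (D.17), away from nu = +-E
    return nu*np.sinh(2*np.pi*nu) / (4*np.pi*(np.cosh(2*np.pi*nu) - np.cosh(2*np.pi*E)))

E = 0.3
for nu in (1.0, 2.0, 3.0):
    print(nu, mu_E(nu, E) - nu/(4*np.pi))   # drops by roughly exp(-2*pi) per unit step
```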
Next we isolate the divergent contributions to the sum from \(\Sigma_{j}^{(s,1)}(E,b)\) and \(\Sigma_{j}^{(s,2)}(E,b)\). Grouping terms with opposite spin we have
\[\begin{split}\left[\Sigma_{j}^{(s,1)}(E,b)+\Sigma_{-j}^{(s,1)}(E,b)\right]_{div}&=j^{-2s}\left[\frac{j^{2}}{2\pi(s-1)}+\frac{-(1-2s)b^{2}-E^{2}}{2\pi}\right]\,,\\ \left[\Sigma_{j}^{(s,2)}(E,b)+\Sigma_{-j}^{(s,2)}(E,b)\right]_{div}&=\frac{4}{j^{2s}}\int_{0}^{\infty}d\nu\,\tilde{\mu}_{E}(\nu)\,, \end{split} \tag{D.25}\]
where the last integral can be evaluated numerically for arbitrary values of \(E\). All we have to do then is to write the sum we are interested in as
\[\begin{split}\sum_{j\in\frac{1}{2}+\mathds{Z}}\Sigma_{j}^{(s)}(E,b)&=\sum_{j>0}\left\{\left[\Sigma_{j}^{(s,1)}(E,b)+\Sigma_{-j}^{(s,1)}(E,b)\right]_{div}+\left[\Sigma_{j}^{(s,2)}(E,b)+\Sigma_{-j}^{(s,2)}(E,b)\right]_{div}\right\}\\ &+\sum_{j>0}\left\{\Sigma_{j}^{(s,1)}(E,b)+\Sigma_{-j}^{(s,1)}(E,b)-\left[\Sigma_{j}^{(s,1)}(E,b)+\Sigma_{-j}^{(s,1)}(E,b)\right]_{div}\right\}\\ &+\sum_{j>0}\left\{\Sigma_{j}^{(s,2)}(E,b)+\Sigma_{-j}^{(s,2)}(E,b)-\left[\Sigma_{j}^{(s,2)}(E,b)+\Sigma_{-j}^{(s,2)}(E,b)\right]_{div}\right\}\,. \end{split} \tag{D.26}\]
The sum in the first line can be evaluated analytically in terms of generalized zeta functions. The sums in the last two lines instead converge for \(s\to 0\). Therefore their contributions to (D.18) can be straightforwardly evaluated numerically. We do not report further details.
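As an illustration of the first step, the divergent pieces (D.25) summed over \(j=1/2,3/2,\ldots\) assemble into Hurwitz zeta functions, which can then be continued in \(s\); a minimal sketch (assuming Python with `mpmath`; the value passed for the integral of \(\tilde{\mu}_{E}\) is a placeholder):

```python
# Zeta-regularized sum of the divergent pieces (D.25) over j = 1/2, 3/2, ...:
#   sum_{j>0} j^{-2s}  = zeta(2s,   1/2)   (Hurwitz zeta),
#   sum_{j>0} j^{2-2s} = zeta(2s-2, 1/2),
# continued analytically in s by mpmath.  I_E stands for the (numerically
# precomputed) principal-value integral of mu_tilde appearing in (D.25).
from mpmath import mp, zeta, pi, mpf

mp.dps = 25

def divergent_sum(s, E, b, I_E):
    h = lambda x: zeta(x, mpf(1)/2)
    return ( h(2*s - 2) / (2*pi*(s - 1))
           + h(2*s) * (-(1 - 2*s)*b**2 - E**2) / (2*pi)
           + 4*h(2*s)*I_E )

# The limit s -> 0 entering (D.18) is finite, e.g.:
print(divergent_sum(mpf(1)/1000, 0.3, 0.0, 0.01))
```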
Finally we comment that for \(k=b=0\) it is also simple to compute (numerically) the determinant of the Dirac-Coulomb operator (D.3) in dimensional regularization. As a crosscheck, we verified that the results of zeta function regularization and dimensional regularization agree in the overlapping regime. We also checked that the numerical result is periodic as a function of \(b\) and satisfies (D.15).
### Solving the saddle-point equations
We proved in the previous section that (D.12) is always satisfied for \(k=b=0\). In this case, by computing numerically the left-hand side of (D.11) for \(0<E<1/2\), we obtain the curve \(q(E)/N_{f}\); figure 11 is obtained plotting \(\{E,q(E)/N_{f}\}\). By studying the limit of \(q(E)\) for \(E\to 1/2\) from below we find the result (105).
For nonzero \(k\) it is harder to solve (D.11) and (D.12). We instead compute the sums on the left-hand side of (D.11) and (D.12) as a function of \(E\) and \(b\) and use the result to read off the corresponding values of \(k\) and \(q\). Note that different values of \(E\) and \(b\) may correspond to the same pair \((k,q)\), i.e. multiple saddle-points may exist for the same value of the charge and the Chern-Simons level. This is indeed what we find.
As explained in section 5.2.3, in order to decide what the stable Wilson lines are in the theory at hand, we carve out the region \(R\) of solutions \((k_{R},q_{R})\) to the equations (D.11) and (D.12) for
\[-1/2<b<1/2\quad\cap\quad|E|<1/2-|b|\,, \tag{D.27}\]
where the latter condition comes from the requirement of stability. By symmetry, it is enough to focus on \(E>0\). To determine the boundary of the region it turns out to be convenient to span the \((E,b)\) plane using the curves defined by
\[c_{\pm}(x)=\Big{\{}(E,b)\;\text{such that}\;\left(\pm 1/2+b\right)^{2}-E^{2}=x^{2}\;\cap\;\mp b>0\Big{\}}\;. \tag{D.28}\]
In some sense, the curve \(c_{\pm}(x)\) specifies all points in the \((E,b)\) plane which are equidistant from an instability of the \(j=\pm 1/2\) mode. The restriction on the sign of \(b\) ensures that the two curves do not intersect.
Let us call \(C_{\pm}(x)\) the set of solutions \((k,q)\) to (D.11) and (D.12) for the values of \((E,b)\) which lie on the curve \(c_{\pm}(x)\). The \(C_{\pm}(x)\) are thus curves on the \((k,q)\) plane. Examples of the curve \(C_{-}(x)\) for different values of \(x\) are shown in figure 15(a). The symmetry properties of the equations imply that \(C_{+}(x)\) is obtained by mirroring \(C_{-}(x)\) around the \(k\) axis. Interestingly for
Figure 15: In figure 15(a) we plot the curve \(C_{-}(x)\) for \(x=0.4\), \(0.3\), \(0.2\), \(0.1\), \(0.05\), \(0.01\), while in figure 15(b) we show both the curves \(C_{-}(x)\) and \(C_{+}(x)\) for \(x=0.4\), \(0.2\), \(0.01\).
\(x\) sufficiently small the curve \(C_{-}(x)\) intersects the \(k=0\) axis at two different points. For \(x\to 0\) the first intersection point approaches \((k/N_{f},q/N_{f})=(0,1/2)\),68 while the second intersection point approaches the critical value determined in (110), namely \((k/N_{f},q/N_{f})\simeq(0,0.56)\). We also notice that the curves \(C_{-}(x)\) and \(C_{+}(x)\) intersect each other at two other points, see figure 15(b).
Footnote 68: We determined the value of the intersection point analytically by studying the fluctuation determinant for \(b\to\pm 1/2\).
It can be seen by changing \(x\) from \(1/2\) to \(0\) that the curves \(C_{\pm}(x)\) cover the full region below the curve specified by the union of \(\lim_{x\to 0}C_{-}(x)\) and \(\lim_{x\to 0}C_{+}(x)\). As figure 16(a) shows, as \(x\to 0\), the curve \(C_{-}(x)\) approaches a limit that is composed of two parts: a generic-looking curve that starts at \((k/N_{f},q/N_{f})\simeq(0,0.56)\) and ends at the point \((k^{*}/N_{f},q^{*}/N_{f})\simeq(1/\pi,0.34)\), and a second part that is a straight line that we conjecture to be \(q/N_{f}=\frac{1}{2}|k|/N_{f}+\frac{1}{2}\), see plot 16(b).
By considering the union of the two curves discussed above we obtain the region \(R\) in figure 17. Note that the region \(R\) strictly includes the one specified by \(|q|\leq|k|/2\) and marked by red on the plot. Hence all physical Wilson lines correspond to at least one perturbatively stable saddle-point.
As explained in the main text, the points in \(R\) for which \(|q|>|k|/2\) correspond to additional saddle-points in the physical region \(|q|\leq|k|/2\). To determine the value of \(q\) to which a saddle-point \((q^{*},k)\) with \(|q^{*}|>|k|/2\) corresponds, we simply need to perform a shift \(q^{*}\to q^{*}-kn\equiv q\) (which is accompanied by the shift \(b^{*}\to b^{*}+n\)), with \(n\in\mathds{Z}\), such that \(|q|<|k|/2\). Note that for each point \((q^{*},k)\) in \(R\) there is a single value of \(n\in\mathds{Z}\) such that \(|q^{*}-kn|<|k|/2\) is in the physical region.
In practice, we consider the upper and lower boundary curves \(q_{\pm}(k)\) of the region \(R\). We then draw the shifted curves \(q_{\pm}(k)\mp k\), \(q_{\pm}(k)\mp 2k\), \(q_{\pm}(k)\mp 3k\); the intersection of these curves separates the physical region \(|q|<|k|/2\) of the \((q,k)\) plane into subregions according to the number of saddle-points. The result of this geometrical procedure is shown in the main text in figure 12.
Figure 16: In figure 16(a) we plot the curve \(C_{-}(x)\) as \(x\) approaches \(0\). In figure 16(b) we compare the curve \(C_{-}(x)\) to the line \(q/N_{f}=\frac{1}{2}|k|/N_{f}+\frac{1}{2}\) for \(x\to 0\) and \(k/N_{f}>0\).
2309.04340 | Identifying Single-Input Linear System Dynamics from Reachable Sets

This paper is concerned with identifying linear system dynamics without the knowledge of individual system trajectories, but from the knowledge of the system's reachable sets observed at different times. Motivated by a scenario where the reachable sets are known from partially transparent manufacturer specifications or observations of the collective behavior of adversarial agents, we aim to utilize such sets to determine the unknown system's dynamics. This paper has two contributions. Firstly, we show that the sequence of the system's reachable sets can be used to uniquely determine the system's dynamics for asymmetric input sets under some generic assumptions, regardless of the system's dimensions. We also prove the same property holds up to a sign change for two-dimensional systems where the input set is symmetric around zero. Secondly, we present an algorithm to determine these dynamics. We apply and verify the developed theory and algorithms on an unknown band-pass filter circuit solely provided the unknown system's reachable sets over a finite observation period.

Taha Shafa, Roy Dong, Melkior Ornik | 2023-09-08T14:11:46 | http://arxiv.org/abs/2309.04340v1

# Identifying Single-Input Linear System Dynamics from Reachable Sets
###### Abstract
This paper is concerned with identifying linear system dynamics without the knowledge of individual system trajectories, but from the knowledge of the system's reachable sets observed at different times. Motivated by a scenario where the reachable sets are known from partially transparent manufacturer specifications or observations of the collective behavior of adversarial agents, we aim to utilize such sets to determine the unknown system's dynamics. This paper has two contributions. Firstly, we show that the sequence of the system's reachable sets can be used to uniquely determine the system's dynamics for asymmetric input sets under some generic assumptions, regardless of the system's dimensions. We also prove the same property holds up to a sign change for two-dimensional systems where the input set is symmetric around zero. Secondly, we present an algorithm to determine these dynamics. We apply and verify the developed theory and algorithms on an unknown band-pass filter circuit solely provided the unknown system's reachable sets over a finite observation period.
## I Introduction
This paper aims to determine whether it is possible to use a control system's reachable sets obtained at different time instances to calculate the system's dynamics. In certain instances, we may be able to determine an approximation of a system's reachable sets over a finite observation period. The purpose of this paper is to show that such information can be utilized to arrive at a dynamic model for an unknown system. Practical applications may include system identification of high-density drone and missile swarms [1, 2] where the reachable set can be found by observing multiple agents collectively, but without the capability of distinguishing them. Other applications include predicting macro-level population behaviors, e.g., determining how crowd behavior changes under social or economic events like the introduction of a new population or changes in the stock market [3]. We may also be able to model internal body functions on the cellular level [4, 5], namely understanding how cells change their identity and behavior in living systems.
We must first show that model identification using reachable sets will uniquely determine an unknown system's true dynamics. After uniqueness is proven, we develop a method to identify a linear model of an unknown system's behavior using its reachable sets. Previous research in system identification presents the most closely related contributions to the method presented in this paper. However, previous work on system identification classically relies on frequency response techniques induced by randomized actuator inputs [6, 7]. More sophisticated system identification techniques involve neural networks [8]. Single-layer and multi-layer neural networks have also been applied with the use of parameter estimation algorithms using a single hidden layer [9] and \(H_{\infty}\) control-induced excitations for robust identification of system nonlinearities [10]. More recent work involves using recurrent neural networks [11, 12] with Long Short-Term Memory Units (LSTM) and fractional order neural networks (FONN) [13, 14] to identify and control dynamic systems. These methods, however, cannot be used unless one has access to a system's actuators or individual trajectories. The significant difference of our novel method is that it does not require control of any actuators to model an unknown system nor observations of individual trajectories.
On a high level, the problem in this paper involves identifying the behaviors or capabilities of an observed system under limited information. While there exist other methods for adversarial behavior recognition, those works are focused on determining adversarial agent goals by matching actions of an agent against a plan library [15, 16, 17]. More recent work [18, 19] proposes using evolving fuzzy systems and artificial intelligence to adaptively predict agent behavior. In contrast, our method is starkly different since it is not primarily concerned with predicting adversarial behavior, but determining all possible actions of an adversary within a time horizon. Thus, instead of using a library of finite predetermined adversarial actions, our method uses reachable sets to produce a dynamic model of an unknown system.
The outline of this paper is as follows: in Section II, we discuss the problem statement, namely posing the question of whether linear dynamics can be uniquely recovered given an unknown system's sequence of reachable sets and how to recover said dynamics. In Section III, we address the question of whether the system dynamics are uniquely determined by the system's reachable sets. We show that under generic assumptions, the system dynamics are indeed unique under asymmetric input sets. For unknown systems with input sets symmetric around zero, uniqueness modulo a sign has been proved in the two-dimensional case; we conjecture the same holds for higher dimensions. In Section IV, we propose a procedure using knowledge of the reachable sets to calculate the system dynamics. In Section V, we illustrate by example how to implement this procedure to identify the models of an unknown band-pass filter circuit and an additional dynamical system with a symmetric input set.
### _Notation_
We denote the set of all \(n\times m\) real and complex matrices by \(\mathbb{R}^{n\times m}\) and \(\mathbb{C}^{n\times m}\) respectively; for \(M\in\mathbb{R}^{n\times m}\), we let \(M^{T}\in\mathbb{R}^{m\times n}\) denote its transpose. Vectors \(e_{1},\ldots,e_{n}\) will denote the canonical basis vectors in \(\mathbb{R}^{n}\). We let \(\mathbb{N}\) denote the set of all natural numbers, \(\mathbb{Z}_{\geq 0}\) denote the set of non-negative integers, and \(GL(n)\) denote the set of invertible square matrices of dimension \(n\in\mathbb{N}\). Let \(\mathcal{S}\) be a set of points in \(\mathbb{R}^{n}\). Then \(\mathrm{Conv}(\mathcal{S})\) denotes the convex hull of \(\mathcal{S}\). Notation \(B\mathcal{X}\) where \(B\in\mathbb{R}^{n\times m}\) and \(\mathcal{X}\subset\mathbb{R}^{m}\) denotes the set \(B\mathcal{X}=\{Bx\ |\ x\in\mathcal{X}\}\). Given two sets \(\mathcal{A}\), \(\mathcal{B}\in\mathbb{R}^{n}\), we denote \(\mathcal{A}\oplus\mathcal{B}=\{a+b\ |\ a\in\mathcal{A}\), \(b\in\mathcal{B}\}\) as their Minkowski sum. Similarly, \(\mathcal{A}\ominus\mathcal{B}=\{c\in\mathbb{R}^{n}\ |\ c\oplus\mathcal{B}\subseteq\mathcal{A}\}\) denotes the Minkowski difference. We also define \(\mathcal{A}+b=\{a+b\ |\ a\in\mathcal{A}\}\) as the translation of \(\mathcal{A}\) by \(b\in\mathbb{R}^{n}\).
## II Problem Statement
We consider the discrete-time, single-input linear system
\[x[i+1]=Ax[i]+bu[i],\quad x[0]=0, \tag{1}\]
where all \(i\in\mathbb{Z}_{\geq 0}\), \(x\in\mathbb{R}^{n}\), \(A\in\mathbb{R}^{n\times n}\), \(b\in\mathbb{R}^{n}\) and \(u\ \in\ \mathcal{U}\ \subset\ \mathbb{R}\) where \(\mathcal{U}=[\underline{u},\overline{u}]\) such that \(\underline{u}\neq\overline{u}\). We assume \(b\neq 0\) since the system's reachable sets are trivial otherwise. We also assume \(x[0]=0\); by a shift in coordinates, the case of \(x[0]\neq 0\) is equivalent to that of an _affine_ system \(x[i+1]=Ax[i]+bu[i]+c\) with initial state at the origin. Solving the problem in this setting can likely be approached by reproducing similar calculations in subsequent sections, but we leave such an effort for future work.
Our goal is to establish whether the dynamics of (1), i.e., matrices \(A\) and \(b\), can be determined using the system's reachable sets. We now formally define said reachable sets.
**Definition 1**: _For \(i\in\mathbb{Z}_{\geq 0}\), the (forward) reachable set of system (1) at time \(i\) is_
\[\mathcal{R}(i,x[0])=\{\phi_{u}(i;x[0])\ |\ u:\mathbb{Z}_{\geq 0}\to\mathcal{U}\},\]
_where \(\phi_{u}(\cdot;x[0])\) denotes the controlled trajectory of system (1) with control signal \(u\)._
We present the problem of whether the system dynamics are uniquely determined by the system's reachable sets.
**Problem 1**: _Given a sequence of sets \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\) which is generated by (1) for some \((A,b)\), determine whether \((A,b)\) can be uniquely recovered from \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\)._
Notice that we explicitly assume the knowledge of all reachable sets at all times. Such an assumption might not always be realistic. We will show that we often need only the first \(n+1\) reachable sets to uniquely recover the dynamics. We leave the more general case - where only reachable sets at different time steps are available - for future work.
The first step to solving Problem 1 is to derive a simple relationship between the system matrices and \(\mathcal{R}(i,0)\). Given system (1), we naturally utilize Minkowski sums and the Minkowski difference [20] to produce such a relationship for all \(i\in\mathbb{N}\).
**Theorem 1**: _Let \(\mathcal{R}(i,0)\) be the reachable set at time \(i\) of (1). Then_
\[A^{i-1}b\mathcal{U}=\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0). \tag{2}\]
By (1) it is clear that \(\mathcal{R}(1,0)=b\mathcal{U}\). Since
\[x[i]=A^{i}x[0]+A^{i-1}bu[0]+\ldots+bu[i-1],\]
_clearly_
\[\mathcal{R}(i,0)=A^{i-1}b\mathcal{U}\oplus\ldots\oplus b\mathcal{U}\]
_and hence_
\[\mathcal{R}(i,0)=A^{i-1}b\mathcal{U}\oplus\mathcal{R}(i-1,0).\]
We recall that the Minkowski sum of two convex sets is also convex [21]. Since all sets \(A^{i-1}b\mathcal{U}\) are convex by the definition of \(\mathcal{U}\), all sets \(\mathcal{R}(i,0)\) are convex by induction. Hence, the appropriate Minkowski difference [22] can be calculated to arrive at (2).
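Because \(\mathcal{U}\) is an interval, every summand \(A^{i-1}b\,\mathcal{U}\) is a segment and each \(\mathcal{R}(i,0)\) is a zonotope. The following minimal sketch (plain Python with NumPy; all names are ours) makes the generator bookkeeping behind (2) explicit: the Minkowski difference simply removes the newest generator.

```python
# Reachable sets of (1) as zonotopes: R(i,0) = A^{i-1} b U (+) ... (+) b U,
# with U = [c, d].  Each segment A^k b U contributes one generator.
import numpy as np

def reach_zonotope(A, b, c, d, i):
    """Center and generators of R(i,0): one generator per segment A^k b U."""
    center, gens, v = np.zeros(len(b)), [], b.astype(float)
    for _ in range(i):
        center = center + (c + d) / 2 * v
        gens.append((d - c) / 2 * v)
        v = A @ v
    return center, gens

A = np.array([[0.9, 0.5], [-0.3, 0.8]]); b = np.array([1.0, 0.0])
c2, G2 = reach_zonotope(A, b, 0.0, 1.0, 2)
c3, G3 = reach_zonotope(A, b, 0.0, 1.0, 3)
# R(3,0) = A^2 b U (+) R(2,0): the Minkowski difference (2) recovers the
# segment A^2 b U, i.e. the one extra generator (and center offset) below.
print(G3[-1] - 0.5 * (A @ A @ b), c3 - c2 - 0.5 * (A @ A @ b))  # both ~ 0
```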
Theorem 1 implies that we can obtain \(\{A^{i-1}b\mathcal{U}\}_{i\in\mathbb{N}}\) using the reachable sets \(\mathcal{R}(i,0)\). We will prove that when \(\mathcal{U}\neq[-c,c]\) with \(c\in\mathbb{R}\), matrices \(A\) and \(b\) are indeed _generically_ uniquely defined from \(\{A^{i-1}b\mathcal{U}\}_{i\in\mathbb{N}}\), that is, uniquely defined under the assumptions formally written in Theorem 2 shown to be generic in a topological sense in Lemma 1. When \(\mathcal{U}=[-c,c]\) for some \(c\in\mathbb{R}\), we can show that \((A,b)\) are not uniquely defined, but conjecture that they are unique up to a change in sign. We prove that this property holds for \(n=2\). We shall refer to solutions for cases with such a set \(\mathcal{U}\) as \(\pm\)_-unique_, which is explicitly defined in the next section.
Following Problem 1, which seeks to determine whether system dynamics are uniquely defined from reachable sets, we present the second problem, which aims to explicitly determine such dynamics.
**Problem 2**: _Develop a method to recover at least one pair \((A,b)\) which generates \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\)._
Based on methods in [20] for calculating Minkowski differences, we can calculate \(\{A^{i-1}b\mathcal{U}\}_{i\in\mathbb{N}}\). We show in Section IV that the results of these Minkowski differences and knowledge of \(\mathcal{U}\) are sufficient for calculating \(\{A^{i-1}b\}_{i\in\mathbb{N}}\), which in turn can be utilized to calculate the matrix pair \((A,b)\) for controllable systems; a preview of this construction is sketched below.
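As a preview, here is a minimal sketch (plain Python with NumPy; the helpers `segment_to_vector` and `recover_dynamics` are our illustrative names, not the paper's algorithm). Each segment \(A^{i-1}b\,\mathcal{U}\) determines \(v_{i}=A^{i-1}b\) from its endpoints, and for a controllable pair the stacked vectors determine \((A,b)\):

```python
# Recover (A, b) from the segments S_i = A^{i-1} b U, U = [c, d], c != d.
# v_i = A^{i-1} b follows from the segment endpoints; controllability then
# gives A = [v_2 ... v_{n+1}] [v_1 ... v_n]^{-1} and b = v_1.
import numpy as np

def segment_to_vector(p, q, c, d):
    """Given the (unordered) endpoints {c*v, d*v} of S_i, return v."""
    v = (q - p) / (d - c)
    # For asymmetric U the orientation is fixed: check p == c*v, else swap.
    return v if np.allclose(p, c * v) else -v

def recover_dynamics(vs):
    """vs = [v_1, ..., v_{n+1}] with v_i = A^{i-1} b; needs controllability."""
    K = np.column_stack(vs[:-1])          # [b, Ab, ..., A^{n-1} b]
    M = np.column_stack(vs[1:])           # [Ab, ..., A^n b]
    return M @ np.linalg.inv(K), vs[0]    # (A, b)

# Illustrative test on a known system.
A = np.array([[0.0, 1.0], [-0.5, 1.0]]); b = np.array([1.0, 0.0]); c, d = 0.0, 1.0
vs_true = [np.linalg.matrix_power(A, i) @ b for i in range(3)]
segs = [(c * v, d * v) for v in vs_true]  # endpoints of each A^{i-1} b U
vs = [segment_to_vector(p, q, c, d) for (p, q) in segs]
A_hat, b_hat = recover_dynamics(vs)
print(np.allclose(A_hat, A), np.allclose(b_hat, b))
```

We first tackle Problem 1.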
## III Uniqueness of the Derived System Model
We wish to determine when any pair \((A,b)\) uniquely defines the dynamics of (1). It can be easily shown that the answer is generally negative. Consider an unknown system (1) where
\[A=\begin{bmatrix}0&0\\ 0&1\end{bmatrix},\quad b=\begin{bmatrix}0\\ 1\end{bmatrix} \tag{3}\]
and \(\mathcal{U}=[0,1]\). By equation (2) of Theorem 1, we see that if \(A^{\prime}=I\), then the reachable sets of (1) with matrix pairs \((A,b)\) and \((A^{\prime},b)\) are equivalent. Thus, we begin by determining sufficient conditions which guarantee whether \((A,b)\) can be uniquely recovered as stated in Problem 1. We will show uniqueness under several technical assumptions; Lemma 1 shows said assumptions are generic in a topological sense.
**Lemma 1**: _Let \(\mathcal{N}\subset\mathbb{R}^{n\times n}\) be the set of all matrices such that if \(A\in\mathcal{N}\), then \(A^{2}\) has distinct eigenvalues. Let \(b\in\mathbb{R}^{n}\backslash\{0\}\) and \(\mathcal{O}\subset\mathbb{R}^{n\times n}\) be the set of all matrices such that, if \(A\in\mathcal{O}\) and \(\eta\in\mathbb{C}^{n}\) is any left eigenvector of \(A\), \(b^{T}\eta\neq 0\). Then, \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) is an open and dense set._
It is a well known result that the set of all matrices with distinct eigenvalues and the set \(GL(n)\) are both open and dense [23]. Clearly, openness of the former set implies \(\mathcal{N}\) is open. To show \(\mathcal{N}\) is also dense, we would follow similar steps as part of the proof to show \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) is dense. For succinctness, we prove \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) is open and dense and leave the proof that \(\mathcal{N}\) is dense to the reader.
Openness of \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) can be trivially concluded by the continuity of eigenvectors [24], meaning if we consider a matrix \(A(t)\) whose elements are a continuous function of \(t\), any eigenvectors \(v_{i}(t)\) and left eigenvectors \(\eta_{i}(t)\) of norm \(1\) of \(A(t)\) are continuous function of \(t\).
We now prove denseness. In other words, we will show that for any arbitrary matrix \(A\) and any \(\epsilon>0\), there exists a matrix \(A^{\prime\prime}\in GL(n)\cap\mathcal{N}\cap\mathcal{O}\) such that \(\|A-A^{\prime\prime}\|<\epsilon\). Let \(\eta_{i}\) be the left eigenvectors of \(A\) so that \(\eta_{i}^{T}A=\lambda_{i}\eta_{i}^{T}\). By the denseness of \(GL(n)\), for any \(\delta>0\) we can find vectors \(\eta_{i}^{\prime T}\) such that \(\|\eta_{i}^{T}-\eta_{i}^{\prime T}\|<\delta\) for all \(i\) and \(\det([\eta_{1}^{\prime}\ \eta_{2}^{\prime}\ \cdots\ \eta_{n}^{\prime}])\neq 0\). By the continuity of determinants and because \(b\neq 0\), we can slightly perturb one element of \(\eta_{i}^{\prime T}\) to obtain \(\eta_{i}^{\prime\prime T}\) such that \(\det([\eta_{1}^{\prime\prime}\ \eta_{2}^{\prime\prime}\ \cdots\ \eta_{n}^{\prime\prime}])\neq 0\), \(\|\eta_{i}^{T}-\eta_{i}^{\prime\prime T}\|<\delta\) for all \(i\), and \(b^{T}\eta_{i}^{\prime\prime}\neq 0\). We now let \(\eta_{i}^{\prime\prime}\) form a basis in \(\mathbb{C}^{n}\), and define a matrix \(A^{\prime}\) such that \(\eta_{i}^{\prime\prime T}A^{\prime}=\lambda_{i}\eta_{i}^{\prime\prime T}\) and \(A^{\prime}\in\mathcal{O}\). If the perturbations above are performed in a way that ensure that perturbations of real eigenvectors remain real, and perturbations of complex conjugate vectors remain complex conjugates, matrix \(A^{\prime}\) is real [25].
Since \(\eta_{i}^{\prime\prime}\) form a basis in \(\mathbb{C}^{n}\), we can represent any vector \(x\in\mathbb{R}^{n}\) as \(x^{T}=\sum_{i=1}^{n}\beta_{i}(x)\eta_{i}^{\prime\prime T}\) where \(\beta_{i}(x)\in\mathbb{R}\). We can compute \(\beta_{i}(x)\) as a continuous function of \(x\). Recall that \(\|A\|=\max_{\|x\|=1}\|Ax\|\). We consider \(x\) such that \(\|x\|=1\). Then \(\beta_{i}(x)\) is a continuous function on a compact space and thus has a maximum. Let \(\alpha_{i}=\max\{|\beta_{i}(x)|\ |\ \|x\|=1\}\). Note that \(x^{T}A^{\prime}=\sum_{i=1}^{n}\lambda_{i}\beta_{i}(x)\eta_{i}^{\prime\prime T}\). It follows that
\[\|x^{T}A-x^{T}A^{\prime}\|\leq\sum_{i=1}^{n}\|(\beta_{i}(x)\eta_{i}^{\prime \prime T})A-\beta_{i}(x)\lambda_{i}\eta_{i}^{\prime\prime T}\|\]
\[=\sum_{i=1}^{n}\|\beta_{i}(x)(\eta_{i}^{T}A-(\eta_{i}^{T}-\eta_{i}^{\prime \prime T})A)-\beta_{i}(x)\lambda_{i}\eta_{i}^{\prime\prime T}\|\]
\[=\sum_{i=1}^{n}\|\beta_{i}(x)((\eta_{i}^{\prime\prime T}-\eta_{i}^{T})A+ \lambda_{i}(\eta_{i}^{T}-\eta_{i}^{\prime\prime T}))\|\]
\[<\sum_{i=1}^{n}\alpha_{i}(\|A\|+|\lambda_{i}|)\delta\]
and so if we set \(\delta=\epsilon/\big(2\sum_{i=1}^{n}\alpha_{i}(\|A\|+|\lambda_{i}|)\big)\), then \(\|x^{T}A-x^{T}A^{\prime}\|<\epsilon/2\).
Given \(\lambda_{1},\ldots,\lambda_{n}\), for any \(\rho>0\) we can clearly find a set \(\{\lambda_{1}^{\prime},\ldots,\lambda_{n}^{\prime}\}\) such that \(|\lambda_{i}-\lambda_{i}^{\prime}|<\rho\) for all \(i\), \(\lambda_{i}^{\prime}\neq 0\) for all \(i\), the \(\lambda_{i}^{\prime 2}\) are distinct, and \(\lambda_{i}=\overline{\lambda_{j}}\) implies \(\lambda_{i}^{\prime}=\overline{\lambda_{j}^{\prime}}\). Now, define \(A^{\prime\prime}\) by \(\eta_{i}^{\prime\prime T}A^{\prime\prime}=\lambda_{i}^{\prime}\eta_{i}^{\prime\prime T}\), so that \(A^{\prime\prime}\in GL(n)\cap\mathcal{N}\cap\mathcal{O}\). As before, if the perturbation of eigenvalues is performed in such a way that real eigenvalues remain real and complex conjugates remain conjugate, \(A^{\prime\prime}\) is real. It follows that
\[\|x^{T}A^{\prime}-x^{T}A^{\prime\prime}\|\leq\sum_{i=1}^{n}\|\beta_{i}(x)\lambda _{i}\eta_{i}^{\prime\prime T}-\beta_{i}(x)\lambda_{i}^{\prime}\eta_{i}^{\prime \prime T}\|\]
\[=\sum_{i=1}^{n}\|\beta_{i}(x)\eta_{i}^{\prime\prime T}(\lambda_{i}-\lambda_{i}^ {\prime})\|<\sum_{i=1}^{n}\|\alpha_{i}\eta_{i}^{\prime\prime T}\|\rho.\]
If we set \(\rho=\epsilon/(2\sum_{i=1}^{n}\|\alpha_{i}\eta_{i}^{\prime\prime T}\|)\), then \(\|x^{T}A^{\prime}-x^{T}A^{\prime\prime}\|<\epsilon/2\). Finally, we have \(\|x^{T}A-x^{T}A^{\prime\prime}\|=\|x^{T}A-x^{T}A^{\prime\prime}+x^{T}A^{\prime}-x^{T}A^{\prime}\|\leq\|x^{T}A-x^{T}A^{\prime}\|+\|x^{T}A^{\prime}-x^{T}A^{\prime\prime}\|<\epsilon/2+\epsilon/2=\epsilon\). Since this inequality holds for all \(x\) such that \(\|x\|=1\), indeed \(\|A-A^{\prime\prime}\|<\epsilon\), and the claim is proven.
We emphasize that many well-known linear controllable systems, such as the discrete double integrator, RLC circuit, and linearized pendulum [26], contain \(A\) matrices which satisfy the conditions of Lemma 1. Also, these generic assumptions are not necessary, but sufficient to guarantee uniqueness. For example, a row perturbation of \(A\) in (3) clearly does not satisfy the generic assumptions in Lemma 1, but the reachable sets of (1) with this new matrix can be used to uniquely generate the dynamics, which implies this method can be applied to a larger set of systems. Finding such non-generic assumptions which guarantee uniqueness is a highly involved problem and remains for future work. In the proof below, we will use the assumptions in Lemma 1 to prove that the dynamics derived from reachable sets are generically unique, at least for an asymmetric input set.
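To make the role of these assumptions concrete, the following is a minimal numerical sketch (our own illustration, not part of the identification method) that checks membership in \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) and shows that a small random perturbation of a degenerate matrix typically restores the assumptions, in line with the denseness claim of Lemma 1.

```python
import numpy as np

def is_generic(A, b, tol=1e-9):
    """Numerically check the assumptions of Lemma 1: A invertible,
    A^2 with distinct eigenvalues, and b^T eta != 0 for every left
    eigenvector eta of A."""
    if abs(np.linalg.det(A)) < tol:
        return False                              # A not in GL(n)
    lam2 = np.linalg.eigvals(A @ A)
    gaps = np.abs(lam2[:, None] - lam2[None, :]) + np.eye(len(lam2))
    if gaps.min() < tol:
        return False                              # A^2 has a repeated eigenvalue
    # Columns of W are right eigenvectors of A^T, i.e. left eigenvectors of A.
    _, W = np.linalg.eig(A.T)
    return bool(np.all(np.abs(b @ W) > tol))

rng = np.random.default_rng(0)
A_bad = np.diag([1.0, 1.0, 2.0])   # repeated eigenvalue, so not in N
b = np.array([1.0, 0.0, 0.0])      # b^T eta = 0 for some left eigenvector
print(is_generic(A_bad, b))        # False
A_pert = A_bad + 1e-3 * rng.standard_normal((3, 3))
print(is_generic(A_pert, b))       # typically True after the perturbation
```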
**Theorem 2**: _Let \(\mathcal{U}=[c,d]\), where \(c\neq\pm d\). Let \(\eta_{i}\ \in\ \mathbb{C}^{n}\) for \(i\in\{1,\ldots,n\}\) be the left eigenvectors of \(A\). Let the sequence \(\{\mathcal{R}(j,0)\}_{j\in\mathbb{N}}\) be generated by system (1) for system matrices \((A,b)\) and \((A^{\prime},b^{\prime})\), where \(A\), \(A^{\prime}\in GL(n)\), \(A\) and \(A^{\prime}\) have \(n\) distinct eigenvalues, and \(b^{T}\eta_{i}\neq 0\) for all \(i\). Then, \((A,b)=(A^{\prime},b^{\prime})\)._
If \((A,b)\) and \((A^{\prime},b^{\prime})\) for system (1) produce an identical sequence \(\{\mathcal{R}(j,0)\}_{j\in\mathbb{N}}\), then \(\mathcal{R}(1,0)=b\mathcal{U}=b^{\prime}\mathcal{U}\), i.e., there are two options: (i) \(bc=b^{\prime}c\) and \(bd=b^{\prime}d\) or (ii) \(bc=b^{\prime}d\) and \(bd=b^{\prime}c\). If the latter option is true, then \(bcd=b^{\prime}d^{2}\) and \(bcd=b^{\prime}c^{2}\), so \(b^{\prime}c^{2}=b^{\prime}d^{2}\) and hence \(c=\pm d\), contradicting our assumption. Thus, option (i) holds and \(b=b^{\prime}\). Now, let \(M\in GL(n)\) be a change of basis such that \(Mb=e_{1}\), and let \(\hat{A}=MAM^{-1}\), \(\hat{A}^{\prime}=MA^{\prime}M^{-1}\). Invertibility and distinctness of eigenvalues are preserved under this similarity transformation, and the condition \(b^{T}\eta_{i}\neq 0\) translates into
the first element of the left eigenvectors of \(\hat{A}\) being non-zero. To simplify the notation, by a standard abuse we now let \((A,e_{1})\), \((A^{\prime},e_{1})\) represent the system matrices after performing the above transformation. By the above discussion, we are then assuming that \(A\) and \(A^{\prime}\) are invertible, have distinct eigenvalues, and that \(\eta_{i1}\neq 0\) for all \(i\).
Noting that the two systems produce the same reachable sets, by (2) it follows that \(A^{k}e_{1}\mathcal{U}=A^{\prime k}e_{1}\mathcal{U}\) for all \(k\in\mathbb{N}\). By the same logic as in the first paragraph of the proof, since \(c\neq\pm d\), the relations \(A^{k}ce_{1}=A^{\prime k}ce_{1}\) and \(A^{k}de_{1}=A^{\prime k}de_{1}\) are satisfied for all \(k\in\mathbb{N}\), giving us the relation
\[A^{k}e_{1}=A^{\prime k}e_{1}\quad\forall\,k\in\mathbb{Z}_{\geq 0}. \tag{4}\]
Equation (4) implies \(A^{k-1}A^{\prime}e_{1}=A^{\prime k-1}Ae_{1}\) and \(A^{\prime k-2}Ae_{1}=A^{k-1}e_{1}\) for all \(k\geq 2\). We have
\[A^{k-1}A^{\prime}e_{1}=A^{\prime k-1}Ae_{1}=A^{\prime}A^{\prime k-2}Ae_{1}=A^{ \prime}A^{k-1}e_{1}.\]
Hence, \(A^{k-1}A^{\prime}e_{1}=A^{\prime}A^{k-1}e_{1}\); since \(A^{\prime}\) is invertible,
\[A^{k}e_{1}=A^{\prime(-1)}A^{k}A^{\prime}e_{1}\quad\forall\,k\in\mathbb{Z}_{ \geq 0}. \tag{5}\]
Let \(v_{i}\) denote the right eigenvectors of \(A\) and \(v^{\prime}_{i},\,\eta^{\prime}_{i}\) denote the right and left eigenvectors of \(A^{\prime(-1)}AA^{\prime}\), respectively. Since \(A\) and \(A^{\prime(-1)}AA^{\prime}\) are similar matrices, their eigenvalues are equal [25]. Let \(A=VDV^{-1}\) and \(A^{\prime(-1)}AA^{\prime}=V^{\prime}DV^{\prime(-1)}\), where the rows of \(V^{-1}\) and \(V^{\prime(-1)}\) are \(\eta_{i}^{T}\) and \(\eta_{i}^{\prime T}\) respectively, and the columns of \(V\) and \(V^{\prime}\) are \(v_{i}\) and \(v^{\prime}_{i}\) respectively. By our assumptions, \(\eta_{i1}\neq 0\), so we can scale the \(\eta_{i}\) (and correspondingly redefine the right eigenvectors \(v_{i}\)) so that \(\eta_{i1}=1\). Next, we write (5) in tensor notation [25] and get
\[\sum_{i}\lambda_{i}^{k}v_{i}=\sum_{i}\lambda_{i}^{k}v^{\prime}_{i}\eta^{\prime}_{i1}\quad\forall\,k\in\mathbb{Z}_{\geq 0}\]
which implies
\[\sum_{i}\lambda_{i}^{k}(v_{i}-v^{\prime}_{i}\eta^{\prime}_{i1})=0\quad\forall\,k\in\mathbb{Z}_{\geq 0}. \tag{6}\]
Taking \(k\in\{0,\ldots,n-1\}\) we have a series of \(n\) equations. For the \(j\)-th element of any \(v_{i}\) and \(v^{\prime}_{i}\), we have
\[\Lambda S_{j}=\begin{bmatrix}1&\ldots&1\\ \lambda_{1}&\ldots&\lambda_{n}\\ \vdots&\ddots&\vdots\\ \lambda_{1}^{n-1}&\ldots&\lambda_{n}^{n-1}\end{bmatrix}\begin{bmatrix}v_{1j}-v^{\prime}_{1j}\eta^{\prime}_{11}\\ v_{2j}-v^{\prime}_{2j}\eta^{\prime}_{21}\\ \vdots\\ v_{nj}-v^{\prime}_{nj}\eta^{\prime}_{n1}\end{bmatrix}=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\end{bmatrix}\]
for any \(j\in\{1,\ldots,n\}\). Notice that \(\Lambda\in\mathbb{C}^{n\times n}\) is the square Vandermonde matrix [27]. Recall that the Vandermonde matrix is invertible if the elements \(\lambda_{i}\) are distinct for all \(i\), which holds by assumption. If \(\eta^{\prime}_{i1}=0\) for any \(i\), then \(v_{i}=0\), which contradicts the assumption that \(A\) is diagonalizable. Consequently, \(\eta^{\prime}_{i1}\neq 0\) for all \(i\), so, similar to the previous step, we can scale \(v^{\prime}_{i}\) and \(\eta^{\prime}_{i}\) such that \(\eta^{\prime}_{i1}=1\) for all \(i\). It follows that \(v_{ij}=v^{\prime}_{ij}\) for all \(i,j\) since \(\Lambda\) is invertible. Therefore, \(A=A^{\prime(-1)}AA^{\prime}\).
Recall that we assumed that all eigenvalues of \(A\) are distinct. Thus, since \(A\) and \(A^{\prime}\) commute, we can conclude that \(A\) and \(A^{\prime}\) have the same eigenvectors [28]. Recall that \(A\) and \(A^{\prime}\) are both diagonalizable. If we take the eigenvalue expansion of \(A\) and \(A^{\prime}\) and multiply both on the left by \(V^{-1}\), then equation (4) implies
\[D^{k}\begin{bmatrix}\eta_{11}\\ \eta_{21}\\ \vdots\\ \eta_{n1}\end{bmatrix}=D^{\prime k}\begin{bmatrix}\eta_{11}\\ \eta_{21}\\ \vdots\\ \eta_{n1}\end{bmatrix}\quad\forall\,\,k\in\mathbb{N},\]
where \(D^{\prime}\) is the diagonal matrix with eigenvalues of \(A^{\prime}\) on the diagonal. Subtracting the right hand side from both sides reveals that (4) implies
\[(\lambda_{i}^{k}-\lambda_{i}^{\prime k})\eta_{i1}=0\quad\forall\,\,k\in \mathbb{N}.\]
By assumption, \(\eta_{i1}\neq 0\) for all \(i\), so
\[\lambda_{i}^{k}=\lambda_{i}^{\prime k}\quad\forall\,\,k\in\mathbb{N}.\]
Therefore, both \(A\) and \(A^{\prime}\) have the same eigenvectors and eigenvalues, hence \(A=A^{\prime}\).
Theorem 2 proves that, given the reachable sets of the generic system (1), the pair \((A,b)\), i.e., the system dynamics, is uniquely defined when the set of control inputs is not symmetric around \(0\). We now address the degenerate case where \(\mathcal{U}=[-c,c]\). It can be easily seen that in such a case, system (1) with \((A,b)\) and \((-A,-b)\) will produce the same reachable sets. To discuss a relaxed notion of system uniqueness, we provide a formal definition of \(\pm\)-uniqueness.
**Definition 2**: _The system dynamics \((A,b)\) of (1) are \(\pm\)-unique if \((A,b)\) and \(-(A,b)\) generate the same reachable sets, but there do not exist other pairs \((A^{\prime},b^{\prime})\) which generate the same reachable sets._
We conjecture that in the case when \(\mathcal{U}\) is symmetric around \(0\), a scenario common in many controls applications [29], the dynamics are \(\pm\)-unique.
**Conjecture 1**: _Let \(\mathcal{U}=[-c,c]\). Let the sequence \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\) be generated by \((A,b)\), where \(A^{2}\) has distinct eigenvalues and \((A,b)\) are known to satisfy the assumptions of Theorem 2. Then, \((A,b)\) is \(\pm\)-unique._
Proving the conjecture above requires extensive theoretical developments and remains for future work. As an illustration, we formally prove the conjecture to be true in the two-dimensional case.
**Theorem 3**: _Let \(n=2\). Then, Conjecture 1 is correct._
Similarly to the proof of Theorem 2, we have two options: \(bc=b^{\prime}c\) or \(bc=-b^{\prime}c\). In the former case, we reach the same result as before, namely \(b=b^{\prime}\). In the latter case, we obtain \(b=-b^{\prime}\). Altogether, we get \(b=(-1)^{p(0)}b^{\prime}\) where \(p(0)\in\{0,1\}\).
As in Theorem 2, through a coordinate transformation, we assume without loss of generality that \(b^{\prime}=e_{1}\). Then \(b\mathcal{U}=(-1)^{p(0)}b^{\prime}\mathcal{U}=(-1)^{p(0)}[-c,c]e_{1}\). Following the same steps as in the beginning of the proof in Theorem 2, with a standard abuse of notation, we let \(A\), \(A^{\prime}\) represent the system dynamics in this new basis where \(A\) and \(A^{\prime}\) satisfy our assumptions. Also, we find that if \(\mathcal{U}=[-c,c]\), then we arrive at the relation
\[A^{k}e_{1}=(-1)^{p(k)}A^{\prime k}e_{1}\quad\forall\,k\in\mathbb{Z}_{\geq 0}. \tag{7}\]
When \(k=2\), we see that regardless of \(p(1)\), \(AA^{\prime}e_{1}=(-1)^{p(2)}A^{\prime}Ae_{1}\). Using this fact along with equation (7) implies \(A^{k-1}A^{\prime}e_{1}=(-1)^{p(k)}A^{\prime k-1}Ae_{1}\) for all \(k\geq 1\) and \(A^{\prime k-2}Ae_{1}=(-1)^{p(k-1)}(-1)^{p(1)}A^{k-1}e_{1}\) for all \(k\geq 2\). We have
\[A^{k-1}A^{\prime}e_{1}=(-1)^{p(k)}A^{\prime k-1}Ae_{1}=(-1)^{p(k)}A^{\prime}A^{\prime k-2}Ae_{1}\]
\[=(-1)^{p(k)}(-1)^{p(k-1)}(-1)^{p(1)}A^{\prime}A^{k-1}e_{1}.\]
Hence, \(A^{k-1}A^{\prime}e_{1}=(-1)^{p(k)}(-1)^{p(k-1)}(-1)^{p(1)}A^{\prime}A^{k-1}e_{1}\); since \(A^{\prime}\) is invertible,
\[A^{k-1}e_{1}=\frac{A^{\prime(-1)}A^{k-1}A^{\prime}e_{1}}{(-1)^{p(k)}(-1)^{p(k- 1)}(-1)^{p(1)}}.\]
We define \(q(k)\in\{0,1\}\) by \((-1)^{q(k)}=((-1)^{p(k)}(-1)^{p(k-1)}(-1)^{p(1)})^{-1}=(-1)^{p(k)}(-1)^{p(k-1) }(-1)^{p(1)}\). We then have
\[A^{k-1}e_{1}=(-1)^{q(k)}A^{\prime(-1)}A^{k-1}A^{\prime}e_{1}\quad\forall\,k \in\mathbb{N}.\]
If \(A^{k-1}=A^{\prime(-1)}A^{k-1}A^{\prime}\), the two sides have the same eigenvalues, while if \(A^{k-1}=-A^{\prime(-1)}A^{k-1}A^{\prime}\), the eigenvalues must be of opposite sign. That is, if \(\lambda_{i}\) and \(\lambda_{i}^{\prime}\) are the eigenvalues of \(A\) and \(\pm A^{\prime(-1)}AA^{\prime}\) respectively, then \(\lambda_{i}=\pm\lambda_{i}^{\prime}\). Following the same steps as in the proof of Theorem 2 we get
\[\sum_{i}\lambda_{i}^{k-1}v_{i}=(-1)^{q(k)}\sum_{i}\lambda_{i}^{k-1}v_{i}^{\prime}\eta_{i1}^{\prime}\quad\forall\,k\in\mathbb{N}.\]
Subtracting the right hand side from both sides gives us
\[\sum_{i}\lambda_{i}^{k-1}(v_{i}-(-1)^{q(k)}v_{i}^{\prime}\eta_{i1}^{\prime})=0\quad\forall\,k\in\mathbb{N}. \tag{8}\]
We now show that if \((A,b)\in(\mathbb{R}^{2\times 2},\mathbb{R}^{2})\), then equation (8) implies \(A=\pm A^{\prime(-1)}AA^{\prime}\). Recall that \(q(k)\in\{0,1\}\) and so \((q(1),q(2))\in\{(0,0),\,(0,1),\,(1,0),\,(1,1)\}\). When \((q(1),q(2))=(0,0)\), equation (8) is the same as equation (6) for \(k=1\) and \(k=2\). If we write these equations in matrix form as in Theorem 2, we again have the Vandermonde matrix on the left-hand side. Following the same steps as Theorem 2, we see that \(v_{i}=v_{i}^{\prime}\) for all \(i\). Since \(\lambda_{i}=\pm\lambda_{i}^{\prime}\), then \(A=\pm A^{\prime(-1)}AA^{\prime}\). Similarly, if \((q(1),q(2))=(1,1)\), we follow the same procedure to find \(A=\pm A^{\prime(-1)}AA^{\prime}\).
The most interesting cases are when \((q(1),q(2))\in\{(0,1),(1,0)\}\). Let us first consider \((q(1),q(2))=(1,0)\). Recall \(q(k)\in\{0,1\}\), so if \(q(3)=0\), then \((q(2),q(3))=(0,0)\). If \((q(k),q(k+1))=(0,0)\) for some \(k\), we then have
\[\Lambda S_{j}=\begin{bmatrix}\lambda_{1}^{k}&\lambda_{2}^{k}\\ \lambda_{1}^{k+1}&\lambda_{2}^{k+1}\end{bmatrix}\begin{bmatrix}v_{1j}-v_{1j}^{\prime}\eta_{11}^{\prime}\\ v_{2j}-v_{2j}^{\prime}\eta_{21}^{\prime}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}. \tag{9}\]
We note
\[\det(\Lambda)=\lambda_{1}^{k}\lambda_{2}^{k}\begin{vmatrix}1&1\\ \lambda_{1}&\lambda_{2}\end{vmatrix}\neq 0\]
since we have two non-zero scalars multiplied by the non-zero Vandermonde determinant in the case of distinct eigenvalues. Hence, \(\Lambda\) as defined in (9) is invertible and we again conclude that \(A=\pm A^{\prime(-1)}AA^{\prime}\).
We lastly consider cases where \(q(k)\) is alternating, namely \(\{q(k)\}_{k=1}^{3}=(0,1,0)\) and \(\{q(k)\}_{k=1}^{3}=(1,0,1)\). In the former case, we have
\[\Lambda S_{j}=\begin{bmatrix}1&1\\ \lambda_{1}^{2}&\lambda_{2}^{2}\end{bmatrix}\begin{bmatrix}v_{1j}-v_{1j}^{\prime}\eta_{11}^{\prime}\\ v_{2j}-v_{2j}^{\prime}\eta_{21}^{\prime}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}.\]
The generic assumption that all eigenvalues are distinct modulo a sign implies \(\Lambda\) is invertible; we again find \(v_{i}=v_{i}^{\prime}\), and thus \(A=\pm A^{\prime(-1)}AA^{\prime}\). By following the same steps, we arrive at the same conclusion when \(\{q(k)\}_{k=1}^{3}=(1,0,1)\).
We now have that \(A=(-1)^{q(2)}A^{\prime(-1)}AA^{\prime}\). If \(q(2)=0\), then \(A\) and \(A^{\prime}\) commute. Using assumptions of the theorem statement, we can conclude that \(A\) and \(A^{\prime}\) have the same eigenvectors [28]. If \(q(2)=1\), then \(A=-A^{\prime(-1)}AA^{\prime}\) and so \(A^{2}=A^{\prime(-1)}A^{2}A^{\prime}\). Clearly, \(A^{2}\) and \(A^{\prime}\) commute, and by the theorem statement, \(A^{2}\) has distinct eigenvalues, which again implies that \(A\) and \(A^{\prime}\) share the same eigenvectors.
We now follow the same steps as in the latter part of the proof of Theorem 2. Namely, we can diagonalize \(A\) and \(A^{\prime}\); taking the eigenvalue expansion of equation (7) and multiplying both sides on the left by the matrix of left eigenvectors gives us the series of equations
\[\lambda_{i}^{k-1}=(-1)^{q(k)}\lambda_{i}^{\prime k-1}\quad\forall\,k\in\mathbb{ N}.\]
Since \(q(2)=0\) or \(q(2)=1\), then \(\lambda_{i}=\lambda_{i}^{\prime}\) or \(\lambda_{i}=-\lambda_{i}^{\prime}\) for all \(i\). Since \(A\) and \(A^{\prime}\) have the same eigenvectors and their eigenvalues are equal up to a sign, then \(A=\pm A^{\prime}\).
Theorem 2 solves Problem 1 in the generic case where \(\mathcal{U}\neq[-c,c]\) while Theorem 3 proves there exists a \(\pm\)-unique solution to Problem 1 in the two-dimensional case where \(\mathcal{U}=[-c,c]\). The proof of Theorem 3 drives our intuition for Conjecture 1 in general: intuitively, adding dimensions to the system should not make it more likely that multiple generic systems can produce the same reachable sets for all time, especially considering no two such systems exist when the input set is asymmetric. Formalizing this statement is left for future work.
We remark that if the system dynamics do not satisfy the assumptions of Theorem 2 or Theorem 3, they might not be (uniquely or \(\pm\)-uniquely) recoverable. However, using a slight perturbation of the reachable sets might recover a generic approximation of the true dynamics. Doing so, however, introduces challenges on the method of perturbing these sets. We leave such a discussion for future work.
## IV Solving for the System Dynamics
We ultimately want to use reachable sets to solve for the system dynamics. Equation (2) of Theorem 1 already gives us a formula for calculating \(A^{i-1}b\mathcal{U}\) for all \(i\in\mathbb{N}\), namely
\[A^{i-1}b\mathcal{U}=\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0).\]
In Theorem 2, we proved that the answer to Problem 1 is affirmative for generic, single-input linear systems, meaning that for cases where the linear system dynamics satisfy
the generic assumptions of Lemma 1, we can uniquely determine the true dynamics from the system's reachable sets. This motivates us to devise a procedure to calculate \((A,b)\).
We will determine \((A,b)\) from reachable sets through a two step procedure. First, we calculate \(A^{i-1}b\mathcal{U}\) for \(i=\{1,\ldots,n+1\}\). In the case where \(\mathcal{U}\neq[-c,c]\), the sequence of sets \(A^{i-1}b\mathcal{U}\) can be used to calculate \((A,b)\) directly. If \(\mathcal{U}=[-c,c]\), these same sets can be utilized to compute a number of candidate dynamics \((A,b)\) which satisfy \(\mathcal{R}(i,0)\) for all \(i\). To determine which candidate solutions are correct, we compute the forward reachable sets of (1) using all candidate \((A,b)\). By Theorem 3, in the two-dimensional case, only two solutions \((A,b)\) and \((A^{\prime},b^{\prime})\) such that \((A,b)=-(A^{\prime},b^{\prime})\) will satisfy \(\mathcal{R}(i,0)\) for all \(i\).
We begin our method by first using an algorithm that takes reachable sets of (1) and solves for \(A^{i-1}b\mathcal{U}\). By equation (2), we can utilize existing methods [20, 22, 30] for computing the Minkowski difference between two polygons to calculate \(A^{i-1}b\mathcal{U}\) given \(\mathcal{R}(i,0)\) for all \(i\in\mathbb{N}\). For this purpose, we adopt the method in [20]. By Lemma 1 of [20], if we let \(v^{(i)}\in\mathcal{V}\) be the vertices of \(\mathcal{R}(i-1,0)\), then the Minkowski difference \(\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0)\) may be computed by intersecting the translations of the set \(\mathcal{R}(i,0)\) by the vertices \(v^{(i)}\in\mathcal{V}\) of \(\mathcal{R}(i-1,0)\):
\[\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0)=\bigcap_{v^{(i)}\in\mathcal{V}}( \mathcal{R}(i,0)-v^{(i)}). \tag{10}\]
While computing the intersection in (10) is generally computationally difficult, calculations are made significantly easier as \(A^{i-1}b\mathcal{U}\) is a line segment; see [20] for details.
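As an illustration, below is a minimal sketch of Eq. (10) using the shapely library (an illustrative choice of ours, not the implementation of [20]), applied to the sets \(\mathcal{R}(1,0)\) and \(\mathcal{R}(2,0)\) of the symmetric-input example in Section V; their Minkowski difference is the segment \(Ab\,\mathcal{U}\) with endpoints \(\pm(1,3)\).

```python
from functools import reduce
from shapely.geometry import Polygon
from shapely.affinity import translate

def minkowski_difference(R_i, R_prev_vertices):
    """R(i,0) minus R(i-1,0) via Eq. (10): intersect the translates
    of R(i,0) by the negated vertices of R(i-1,0)."""
    shifted = [translate(R_i, xoff=-vx, yoff=-vy) for vx, vy in R_prev_vertices]
    return reduce(lambda p, q: p.intersection(q), shifted)

# R(2,0) of system (12) is the parallelogram conv(+/-(1,4), +/-(1,2));
# R(1,0) is the segment with endpoints +/-(0,1).
R2 = Polygon([(1, 2), (1, 4), (-1, -2), (-1, -4)])
segment = minkowski_difference(R2, [(0, 1), (0, -1)])
print(segment)  # degenerate (line-segment) geometry from (-1,-3) to (1,3)
```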
We now move to recover \(A^{i-1}b\) from \(A^{i-1}b\mathcal{U}\). We consider two cases: \(\mathcal{U}\neq[-c,c]\) and \(\mathcal{U}=[-c,c]\) for some \(c\in\mathbb{R}\). In the former case, taking the mean of the vertices of \(A^{i-1}b\mathcal{U}\) will provide \(A^{i-1}b\frac{c+d}{2}\). Multiplying this vector by \(\frac{2}{c+d}\) recovers \(A^{i-1}b\).
**Theorem 4**: _Let us assume the \(n\)-dimensional system (1) is controllable. Let \(C_{A,b}=\begin{bmatrix}b&Ab&\ldots&A^{n-1}b\end{bmatrix}\). For the single-input case, \(A=AC_{A,b}C_{A,b}^{-1}\)._
The proof of Theorem 4 is trivial, noting that \(C_{A,b}\) is full rank for controllable systems. We note that the assumption of controllability is generic [29].
In the case where \(\mathcal{U}=[-c,c]\), by multiplying the vertices of \(A^{i-1}b\mathcal{U}\) by \(c\), we can only recover \(A^{i-1}b\) up to a sign, generating two candidates for each \(i\). Substituting all possible candidates for \(A^{i-1}b\) into the columns of \(C_{A,b}\) and \(AC_{A,b}\) generates \(2^{n+1}\) candidate matrices \(A\).
To determine which candidate solutions yield the correct \(\pm\)-unique matrix pair \((A,b)\), we can plot the reachable sets of all \(2^{n+1}\) candidate solutions to solve for the desired unknown \(\pm\)-unique system dynamics. In the next section, we use the CORA toolkit [31] and adopt methods of computing the Minkowski difference detailed in [20] to numerically calculate the dynamics \((A,b)\) for an unknown band-pass filter circuit system and a two-dimensional unknown system with \(\mathcal{U}=[-1,1]\), validating the developed theory.
## V Numerical Examples
To validate the developed theory and demonstrate how to apply the proposed method, we first consider a scenario of reverse engineering an electric circuit from manufacturer specifications. At times, manufacturers will only release partial information about a system. For example, instead of providing a dynamic model of a manufactured part, manufacturers might convey the set of all voltages a circuit may output within a set amount of time given the set of all viable input frequencies. Such information can be interpreted as the minimum time in which a state can be reached, providing a picture of the system's reachable sets. Motivated by such an example, in this section, we provide an example of identifying the matrices \((A,b)\) of a band-pass filter circuit from its reachable sets. In a subsequent example, we identify the \(\pm\)-unique dynamics of an unknown two-dimensional system with an input set symmetric around zero. Both examples utilize the CORA toolkit [31] for set computations, namely to calculate convex hulls and Minkowski differences.
### _Band-Pass Filter Circuit_
We present the linear dynamic model of a band-pass filter circuit [32]. Let us assume \(x[0]=0\). The state-space controllable canonical representation [29] of this circuit is
\[x[j+1]=Ax[j]+bv_{c}[j] \tag{11}\] \[=\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ -a_{0}&-a_{1}&-a_{2}&-a_{3}\end{bmatrix}x[j]+\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}v_{c}[j]\]
such that \(v_{c}[j]\in[0,1]\) for all \(j\in\mathbb{Z}_{\geq 0}\).
Assume the reachable sets \(\{\mathcal{R}(j+1,0)\}_{j=0}^{\infty}\) of the controllable, dynamical system (11) are known. From controllability, we know the form of system (11), but not the parameters \(a_{0},\,a_{1},\,a_{2},\,a_{3}\). From this information, we want to recover the true parameters: \(a_{0}=3\), \(a_{1}=2\), \(a_{2}=3\), and \(a_{3}=6\). It can be easily shown that if \(a_{0}\neq 0\), the assumptions of Theorem 2 are satisfied; the matrix \(A\) from (11) clearly satisfies them. Moreover, the matrix \(M\) for which \(Mb=e_{1}\) is a simple row permutation; the assumptions of Theorem 2 are invariant under permutations, hence all assumptions are satisfied and the results of Theorem 2 apply when solving for the matrix pair \((A,b)\). That is, there exists a unique matrix pair which satisfies \(\{\mathcal{R}(j+1,0)\}_{j=0}^{\infty}\). Since the system is four-dimensional, Theorem 4 shows we need only consider the sets \(\{\mathcal{R}(j+1,0)\}_{j=0}^{4}\) to calculate \((A,b)\).
Assume that \(\{\mathcal{R}(j+1,0)\}_{j=0}^{4}\) are known to equal
\[\mathcal{R}(1,0)=\mathrm{conv}\left(\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}\right),\]
\[\mathcal{R}(2,0)=\mathrm{conv}\left(\begin{bmatrix}0\\ 0\\ 1\\ -5\end{bmatrix},\begin{bmatrix}0\\ 0\\ 1\\ -6\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}\right),\]
\[\mathcal{R}(3,0)=\mathrm{conv}\left(\begin{bmatrix}0\\ 0.86\\ -6.02\\ 33.00\end{bmatrix},\begin{bmatrix}0\\ -0.14\\ 0.98\\ -6.00\end{bmatrix},\begin{bmatrix}0\\ -0.14\\ -0.98\\ -5.00\end{bmatrix},\begin{bmatrix}0\\ 0.86\\ -6.02\\ 34.00\end{bmatrix}\right),\]
\[\mathcal{R}(4,0)=\mathrm{conv}\left(\begin{bmatrix}-0.15\\ -5.99\\ -5.99\\ 33.00\end{bmatrix},\begin{bmatrix}0.85\\ -5.95\\ -34.01\\ -188\end{bmatrix},\begin{bmatrix}0.85\\ -5.95\\ 34.01\\ -187\end{bmatrix},\begin{bmatrix}-0.15\\ 1.05\\ -5.99\\ 34.00\end{bmatrix}\right),\]
\[\mathcal{R}(5,0)=\mathrm{conv}\left(\begin{bmatrix}-5.93\\ 33.88\\ -188.02\\ 1035.00\end{bmatrix},\begin{bmatrix}1.07\\ -6.12\\ 33.98\\ -188.00\end{bmatrix},\begin{bmatrix}1.07\\ -6.12\\ 33.98\\ -187.00\end{bmatrix},\begin{bmatrix}-5.93\\ 33.88\\ -188.02\\ 1036.00\end{bmatrix}\right).\]
Based on Theorem 2, the knowledge of only these five sets is sufficient to reconstruct the true values of parameters \(a_{0}\), \(a_{1}\), \(a_{2}\), and \(a_{3}\).
Since \(\mathcal{R}(1,0)=b\mathcal{U}\) and \(\mathcal{U}=[0,1]\), \(b\) can be trivially computed to equal \(b=\begin{bmatrix}0&0&0&1\end{bmatrix}^{T}\). Next, by equation (2) of Theorem 1, \(Ab\mathcal{U}=\mathcal{R}(2,0)\ominus\mathcal{R}(1,0)\). Given \(\mathcal{U}=[0,1]\), by taking the Minkowski difference we get \(Ab=\begin{bmatrix}0&0&1&-6\end{bmatrix}^{T}\). Repeating the procedure, we have \(A^{2}b\mathcal{U}=\mathcal{R}(3,0)\ominus\mathcal{R}(2,0)\), \(A^{3}b\mathcal{U}=\mathcal{R}(4,0)\ominus\mathcal{R}(3,0)\), and \(A^{4}b\mathcal{U}=\mathcal{R}(5,0)\ominus\mathcal{R}(4,0)\). It follows that
\[A^{2}b=\begin{bmatrix}0\\ 1\\ -6\\ 33\end{bmatrix},\,A^{3}b=\begin{bmatrix}1\\ -6\\ 33\\ -182\end{bmatrix},\,A^{4}b=\begin{bmatrix}-6\\ 33\\ -182\\ 1002\end{bmatrix}.\]
Recall we assume the system is controllable, and thus the controllability matrix \(C_{A,b}\) is invertible. Finally, by Theorem 4,
\[A=AC_{A,b}C_{A,b}^{-1}\]
\[=\begin{bmatrix}A^{4}b&A^{3}b&A^{2}b&Ab\end{bmatrix}\begin{bmatrix}A^{3}b&A^{2 }b&Ab&b\end{bmatrix}^{-1}\]
which produces the correct matrix \(A\), accurately reconstructing the parameters \(a_{0}=3\), \(a_{1}=2\), \(a_{2}=3\), and \(a_{3}=6\).
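As a numerical check, this final step can be reproduced in a few lines of Python (a sketch of our own, using the vectors recovered above):

```python
import numpy as np

b   = np.array([0., 0., 0., 1.])
Ab  = np.array([0., 0., 1., -6.])
A2b = np.array([0., 1., -6., 33.])
A3b = np.array([1., -6., 33., -182.])
A4b = np.array([-6., 33., -182., 1002.])

C_Ab  = np.column_stack([b, Ab, A2b, A3b])     # controllability matrix C_{A,b}
AC_Ab = np.column_stack([Ab, A2b, A3b, A4b])   # A C_{A,b}
A = AC_Ab @ np.linalg.inv(C_Ab)
print(np.round(A, 6))
# The last row reads [-3, -2, -3, -6], i.e. a0 = 3, a1 = 2, a2 = 3, a3 = 6.
```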
### _Numerical Example with a Symmetric Input Set_
To validate Theorem 3, we present an example of a linear two-dimensional dynamical system
\[\begin{bmatrix}x_{1}[i+1]\\ x_{2}[i+1]\end{bmatrix}=\begin{bmatrix}2&1\\ 2&3\end{bmatrix}\begin{bmatrix}x_{1}[i]\\ x_{2}[i]\end{bmatrix}+\begin{bmatrix}0\\ 1\end{bmatrix}u[i] \tag{12}\]
with \(\mathcal{U}=[-1,1]\). Such a system satisfies the assumptions of Lemma 1. As in the previous example, we will show that we can reconstruct the values of system matrices in (12) from reachable sets, albeit up to a sign. Assume, thus, that we are given a sequence of reachable sets \(\{\mathcal{R}(i,0)\}_{i=1}^{4}\) as convex hulls of vertices:
\[\mathcal{R}(1,0)=\mathrm{conv}\left(\pm\begin{bmatrix}0\\ 1\end{bmatrix}\right),\]
\[\mathcal{R}(2,0)=\mathrm{conv}\left(\pm\begin{bmatrix}-1\\ -4\end{bmatrix},\pm\begin{bmatrix}1\\ 2\end{bmatrix}\right),\]
\[\mathcal{R}(3,0)=\mathrm{conv}\left(\pm\begin{bmatrix}6\\ 15\end{bmatrix},\pm\begin{bmatrix}4\\ 7\end{bmatrix},\pm\begin{bmatrix}6\\ 13\end{bmatrix}\right),\]
\[\mathcal{R}(4,0)=\mathrm{conv}\left(\pm\begin{bmatrix}27\\ 58\end{bmatrix},\pm\begin{bmatrix}15\\ 28\end{bmatrix},\pm\begin{bmatrix}25\\ 50\end{bmatrix},\pm\begin{bmatrix}27\\ 56\end{bmatrix}\right).\]
Clearly, \(b\mathcal{U}=\mathcal{R}(1,0)\). Equation (2) of Theorem 1 also shows that \(Ab\mathcal{U}=\mathcal{R}(2,0)\ominus\mathcal{R}(1,0)\), \(A^{2}b\mathcal{U}=\mathcal{R}(3,0)\ominus\mathcal{R}(2,0)\), and \(A^{3}b\mathcal{U}=\mathcal{R}(4,0)\ominus\mathcal{R}(3,0)\). Since \(\mathcal{U}=[-1,1]\), through the same calculations as in the previous example we thus obtain
\[b=\pm\begin{bmatrix}0\\ 1\end{bmatrix},\,Ab=\pm\begin{bmatrix}1\\ 3\end{bmatrix},\]
\[A^{2}b=\pm\begin{bmatrix}5\\ 11\end{bmatrix},\,A^{3}b=\pm\begin{bmatrix}21\\ 43\end{bmatrix}.\]
Let us denote \(b^{-}=\begin{bmatrix}0\\ -1\end{bmatrix}\), \(b^{+}=\begin{bmatrix}0\\ 1\end{bmatrix}\), \((Ab)^{-}=\begin{bmatrix}-1\\ -3\end{bmatrix}\), etc. We now consider the set of \(2^{3}\) possible candidate pairs of \((C_{A,b},AC_{A,b})\) matrices:
\[(C_{A,b},AC_{A,b})=\left\{\left(\begin{bmatrix}b^{s_{0}}&(Ab)^{s_{1}}\end{bmatrix},\begin{bmatrix}(Ab)^{s_{1}}&(A^{2}b)^{s_{2}}\end{bmatrix}\right):s_{0},s_{1},s_{2}\in\{+,-\}\right\}. \tag{13}\]
By Theorem 4, determining all candidate matrix pairs \((A,b)\) becomes a trivial calculation using all possible pairs from (13). Doing so provides two \(\pm\)-unique candidate pairs:
\[(A,b)=\pm\left(\begin{bmatrix}2&1\\ 2&3\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right),\,(A^{\prime},b^{\prime})=\pm\left(\begin{bmatrix}8&-1\\ 20&-3\end{bmatrix},\begin{bmatrix}0\\ -1\end{bmatrix}\right).\]
While the calculations above used only \(\mathcal{R}(1,0)\), \(\mathcal{R}(2,0)\), and \(\mathcal{R}(3,0)\), to distinguish between these two final candidates we need to employ \(\mathcal{R}(4,0)\). Fig. 1 shows the plots of the forward reachable sets for system (12) at time \(i=4\) with matrix pairs \((A,b)\) and \((A^{\prime},b^{\prime})\) on the left and right respectively.
Fig. 1(a) shows a reachable set that is identical to \(\mathcal{R}(4,0)\), while Fig. 1(b) illustrates the reachable set
\[\mathcal{R}^{\prime}(4,0)=\mathrm{conv}\left(\pm\begin{bmatrix}35\\ 82\end{bmatrix},\pm\begin{bmatrix}25\\ 60\end{bmatrix},\pm\begin{bmatrix}33\\ 74\end{bmatrix},\pm\begin{bmatrix}35\\ 80\end{bmatrix}\right),\]
which is not the same as \(\mathcal{R}(4,0)\). Therefore, we can identify the matrix pair \((A,b)\), up to a sign, as the true dynamics of the unknown linear system (12). As mentioned before, reachable sets in this case do not allow us to distinguish any further: dynamics that differ only in sign generate the same reachable sets.
Fig. 1: Reachable sets for candidate dynamics at \(i=4\).
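For completeness, the enumeration-and-filtering procedure of this example can be sketched in a few lines of Python (our own illustration; the hull comparison and sign conventions are implementation choices). The surviving candidates coincide with the true dynamics up to signs, while the \((A^{\prime},b^{\prime})\) family is rejected by \(\mathcal{R}(4,0)\):

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

# Endpoints of the segments A^{i-1} b U (U = [-1, 1]) recovered from the
# Minkowski differences; each vector is known only up to a sign.
b0, Ab0, A2b0 = np.array([0., 1.]), np.array([1., 3.]), np.array([5., 11.])

def hull_vertices(pts):
    pts = np.asarray(pts, dtype=float)
    return pts[ConvexHull(pts).vertices]

def zonotope_hull(gens):
    """Hull of { sum_i u_i g_i : u_i in {-1, 1} }, the reachable set
    at step i = len(gens) for a symmetric input set."""
    pts = [sum(s * g for s, g in zip(signs, gens))
           for signs in itertools.product((-1., 1.), repeat=len(gens))]
    return hull_vertices(pts)

def same_vertex_set(H1, H2, tol=1e-8):
    return len(H1) == len(H2) and all(
        np.linalg.norm(H1 - v, axis=1).min() < tol for v in H2)

R4 = np.array([[27., 58.], [15., 28.], [25., 50.], [27., 56.]])
R4_hull = hull_vertices(np.vstack([R4, -R4]))

# Enumerate the 2^3 sign choices of Eq. (13) and apply Theorem 4.
for s0, s1, s2 in itertools.product((1., -1.), repeat=3):
    b, Ab, A2b = s0 * b0, s1 * Ab0, s2 * A2b0
    C, AC = np.column_stack([b, Ab]), np.column_stack([Ab, A2b])
    A = AC @ np.linalg.inv(C)
    gens = [np.linalg.matrix_power(A, k) @ b for k in range(4)]
    if same_vertex_set(zonotope_hull(gens), R4_hull):
        print("consistent with R(4,0): A =", A.tolist(), ", b =", b.tolist())
```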
## VI Conclusion
This paper considers the problem of determining the dynamics of an unknown discrete-time linear system using its reachable sets. The theory developed in this paper proves that for input sets that are asymmetric around the origin, the derived system dynamics are, given some technical assumptions, unique. Thus, in such cases, we can determine the true dynamics of an unknown system using the sequence of the system's reachable sets. For the case where the input set is symmetric, we prove that the derived dynamics are unique up to a factor of \(\pm 1\) for two-dimensional systems and provide a conjecture that asserts the same result holds for the \(n\)-dimensional case. We then develop a method for deriving the dynamics of a system given the sequence of the system's reachable sets using Minkowski differences and proceed to illustrate by example how the method can be applied to identify the unknown linear model of a band-pass filter. Novel identification methods are also applied to an academic system with an input set symmetric around zero to detail how we can adapt said methods to uniquely identify the model of a linear system.
A natural next step is to prove the stated conjecture to show \(\pm\)-uniqueness for \(n\)-dimensional systems. Also, our current technical assumptions are consistent with generic properties of matrices, but ideally we want to relax these assumptions to identify necessary conditions for uniqueness. We also want to consider cases where the state's initial conditions are non-zero, where knowledge of the system's reachable sets is only available at non-consecutive time steps, and where we work with the more general framework of multi-input systems.
## Acknowledgements
We thank Jeffrey Stuart from Pacific Lutheran University for providing insights in combinatorial matrix theory that helped develop the scope of this project, namely addressing the question of uniqueness outlined in Problem 1.
This paper focuses on identifying linear system dynamics without knowledge of individual system trajectories, relying instead on knowledge of the system's reachable sets at different times. Motivated by settings in which reachable sets are known from manufacturer specifications or from observing the aggregate behavior of adversarial agents, we aim to use these sets to determine the dynamics of an unknown system. The paper makes two contributions. First, we show that a sequence of a system's reachable sets, under certain generic assumptions, uniquely determines the system's dynamics for an asymmetric input set, regardless of the system's dimension. We also prove that, for two-dimensional systems with an input set symmetric around zero, this property is preserved up to a sign change.
2309.10335 | Magnetic phase transitions in the triangular-lattice spin-1 dimer compound K2Ni2(SeO3)3 | In our study, we conduct magnetization and heat capacity measurements to investigate field induced magnetic phase transitions within the newly synthesized compound K2Ni2(SeO3)3, a spin-1 dimer system arranged on a triangular lattice. From our first-principles simulations, we determine that the spin system in K2Ni2(SeO3)3 can be represented as a two-dimensional triangular-lattice spin-1 dimer model, including an intra-dimer exchange of J1 = 0.32 meV, an inter-dimer exchange of J2 = 0.79 meV, and an easy-axis anisotropy of D = 0.14 meV. The presence of easy-axis magnetic anisotropy explains the distinct magnetic phase diagrams observed under c-axis directional and in-plane magnetic fields. Notably, our investigation unveils a two-step phase transition with the magnetic field aligned with the c direction. Our findings yield valuable insights into the magnetic phase transitions inherent to geometrically frustrated magnetic systems featuring dimer structures. | Lei Yue, Ziyou Lu, Kun Yan, Le Wang, Shu Guo, Ruixin Guo, Peng Chen, Xiaobin Chen, Jia-Wei Mei | 2023-09-19T05:42:00 | http://arxiv.org/abs/2309.10335v3 | Magnetic phase transitions in the triangular-lattice spin-1 dimer compound K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\)
###### Abstract
In our study, we conduct magnetization and heat capacity measurements to investigate field-induced magnetic phase transitions within the newly synthesized compound K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), a spin-1 dimer system arranged on a triangular lattice. The Ni-Ni dimers exhibit a ferromagnetic intra-dimer interaction, effectively behaving as an ensemble with a total spin of \(S=2\). In contrast, antiferromagnetic interactions manifest between these dimers on the triangular lattice. The trigonal distortion of the NiO\({}_{6}\) octahedra introduces easy-axis magnetic anisotropy, accounting for the distinct magnetic phase diagrams observed when applying \(c\)-axis directional and in-plane magnetic fields. Notably, our investigation unveils a two-step phase transition with the magnetic field aligned with the \(c\) direction. We propose that at the first transition the system passes from a paramagnetic state to an up-up-down state, characterized by \(Z_{3}\) lattice-symmetry breaking. Subsequently, a Berezinskii-Kosterlitz-Thouless transition, involving the breaking of the \(c\)-axis spin-rotation symmetry, leads to the formation of the "Y state" at low temperatures. These findings yield valuable insights into the magnetic phase transitions inherent to geometrically frustrated magnetic systems featuring dimer structures.
## I Introduction
The exploration of phase transitions holds profound significance in the realm of physics. These transitions reveal the fascinating world of symmetry breaking [1], a powerful concept that aids in organizing our comprehension of the fundamental laws governing the universe [2]. As time has progressed, our understanding of phase transitions has evolved significantly, introducing concepts like "categorical symmetry" through advancements in mathematics and physics [3; 4; 5; 6; 7].
In recent years, there has been a surge in interest in investigating field-induced magnetic phase transitions within quantum frustrated magnetic systems [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. These systems, marked by geometric frustration, exhibit ground states characterized by multiple degenerate configurations and strong quantum fluctuations [24; 25]. Consequently, they can manifest various distinct magnetic ground states when subjected to magnetic fields. The study of materials featuring frustrated magnetism in the presence of magnetic fields provides a unique and intricate context for delving into magnetic phase transitions.
In this study, we explore magnetic phase transitions within the newly synthesized compound K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), which is isostructural with the sister compound K\({}_{2}\)Co\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\)[26; 27]. This compound presents a triangular lattice framework hosting a spin-1 Ni-Ni dimer system, manifesting ferromagnetic intra-dimer interactions alongside antiferromagnetic inter-dimer exchange terms. The Ni-Ni dimers collectively exhibit behavior akin to an ensemble with a total spin of \(S=2\), distinguishing K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) from the previously studied triangular-lattice spin-1 dimer compound Ba\({}_{3}\)Mn\({}_{2}\)O\({}_{8}\), which has an antiferromagnetic intra-dimer interaction [9; 10; 11; 28; 29].
Our investigation probes the magnetic phase transitions in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) when subjected to magnetic fields applied both in-plane and out-of-plane. Notably, we uncover a successive two-step phase transition induced by fields for \(B\|c\). The first transition is from a paramagnetic state to an up-up-down (UUD) state, marked by the breaking of \(Z_{3}\) lattice symmetry. Subsequently, a Berezinskii-Kosterlitz-Thouless transition occurs, breaking the \(c\)-axis spin-rotation symmetry and leading to the "Y state" at low temperatures. The phase transition with the in-plane field \(B\|ab\) is also studied, and exhibits distinct behavior compared to the case with \(B\|c\), due to the presence of easy-axis magnetic anisotropy.
The subsequent sections of this paper are structured as follows. Section II provides details of our experimental
methods, encompassing sample synthesis, sample characterization, and magnetization as well as heat capacity measurements. Additionally, we provide the theoretical setup for first-principles simulations. Moving forward to Section III, we unveil the main outcomes of our investigation. In Section III.1, we delve into the crystal structure and thermodynamic properties. Section III.2 takes us into the realm of estimating exchange interactions within K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) by an analysis that combines Curie-Weiss fitting of magnetic susceptibility data with the first-principles simulations. In Section III.3, which is also the most significant part, we explore field-induced magnetic phase transitions for both the \(c\)-axis directional and in-plane magnetic fields. Finally, we summarize our results in Section IV.
## II Methods
The synthesis of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) single crystals was carried out using the flux method, a procedure akin to that for K\({}_{2}\)Co\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\)[27]. The starting materials, comprising NiO (Alfa Aesar, Ni: 78.5%), KOH (Alfa Aesar, 99.98%), and SeO\({}_{2}\) (Aladdin, 99.9%), were mixed in a molar ratio of \(1:4.8:4.8\). In the glovebox, the mixed materials were ground for 5 minutes and then transferred into an alumina crucible, which was subsequently placed inside a quartz tube. The tube was sealed under a vacuum pressure of \(10^{-3}\) Pa. This assembly was heated over 4 hours to 700 \({}^{\circ}\)C, held at that temperature for 8 hours, then cooled over 100 hours to 200 \({}^{\circ}\)C, maintained there for an additional 4 hours, and finally cooled gradually to room temperature. Immersing the resulting materials in de-ionized water yielded transparent yellow single crystals, as depicted in the inset of Fig. 2(b).
The crystal structure of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) was determined using single crystal X-ray diffraction, with the Bruker SMART APEX II 4K CCD diffractometer. Magnetization and heat capacity measurements were conducted utilizing the Quantum Design Magnetic Property Measurement System and Physical Property Measurement System, respectively.
First-principles calculations were performed using the Vienna Ab-initio Simulation Package (VASP) with projector-augmented wave (PAW) potentials [30; 31]. The Perdew-Burke-Ernzerhof (PBE) parametrization of the generalized gradient approximation was used for the exchange-correlation interaction [32]. Lattice parameters come from the crystallographic data in Table 2 (details in Section III). The initial positions of the Se(II) atoms in a unit cell are 0.78411 and 0.21589 in direct coordinates, where the randomness of the Se atoms was neglected. Structures were fully optimized until the force exerted on each atom was less than 0.01 eV/Å. The energy tolerance was set to \(1\times 10^{-5}\) eV and the energy cutoff was 500 eV. A \(9\times 9\times 3\) k-point mesh was used for the optimization of the unit cell structure. For the self-consistent collinear spin calculations, a 2\(\times\)2\(\times\)1 supercell structure was employed, along with a corresponding 6\(\times\)6\(\times\)4 k-point mesh. Furthermore, for calculations of total energies, the DFT+U approach [33] was employed with the Coulomb repulsion parameter \(U=3.5\) eV and exchange parameter \(J=0.95\) eV. These values of \(U\) and \(J\) were shown to reproduce the experimental magnetic moment and optical properties of NiO satisfactorily [34].
## III Results
### Sample characterization
Tables 1 and 2 present detailed crystal structure information for K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\). K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) is isostructural with its sister compound K\({}_{2}\)Co\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\)[26; 27], and crystallizes in the hexagonal space group \(P6_{3}/mmc\)
\begin{table}
\begin{tabular}{c c c c c c} Atom & Wyckoff site & \(x\) & \(y\) & \(z\) & Occupancy \\ \hline K(I) & \(4f\) & 0.33333 & 0.66667 & 0.0349(5) & 1 \\ Ni(I) & \(4f\) & 0.33333 & 0.66667 & 0.6666(7) & 1 \\ Se(I) & \(4e\) & 0 & 0 & 0.1413(4) & 1 \\ Se(II) & \(4f\) & 0.33333 & 0.66667 & 0.2158(9) & 0.5 \\ O(I) & \(12k\) & 0.16010 & 0.83990 & 0.5973(8) & 1 \\ O(II) & \(6h\) & 0.49930 & 0.50070 & 0.2500 & 1 \\ \end{tabular}
\end{table}
Table 2: Crystallographic data in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\).
\begin{table}
\begin{tabular}{c c} Formula & K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) \\ \hline Formula mass (amu) & 576.46 \\ Crystal system & hexagonal \\ Space group & \(P6_{3}/mmc\), No. 194 \\ a (Å) & 5.4475(3) \\ c (Å) & 17.4865(15) \\ V (Å\({}^{3}\)) & 449.40(6) \\ Z & 2 \\ T (K) & 299 \\ \(\rho\) (calc) (g/cm\({}^{3}\)) & 4.260 \\ \(\lambda\) (Å) & 0.71073 \\ F (000) & 536.0 \\ \(\theta\) (deg) & 4.475-29.715 \\ Crystal size (mm\({}^{3}\)) & 0.045 \(\times\) 0.04 \(\times\) 0.021 \\ \(\mu\) (mm\({}^{-1}\)) & 17.295 \\ Final R indices (R1/wR2) & 0.0297(285)/0.0622(306) \\ R indices (all data) (R1/wR2) & 0.0327 \\ Goodness of fit & 1.181 \\ \end{tabular}
\end{table}
Table 1: Structure refinement information.
(No. 194) with the lattice parameters \(a=b=5.4475(3)\) Å and \(c=17.4865(15)\) Å. While the K(I), Ni(I), O(I), O(II), and Se(I) atoms fully occupy their crystallographic positions, the Se(II) atoms, located on the Wyckoff position \(4f\), split into two sites with equal occupancy. The structural disorder due to the Se(II) random occupancy is the same as that in K\({}_{2}\)Co\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), which is thoroughly discussed in Ref. [27].
Figure 1 schematically shows the crystal structure of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), where two NiO\({}_{6}\) octahedra share a common O\({}_{3}\)-triangle face, forming a Ni\({}_{2}\)O\({}_{9}\) dimer. The NiO\({}_{6}\) octahedron exhibits a trigonal elongation along the \(c\)-axis, resulting in easy-axis magnetic anisotropy. Within the Ni\({}_{2}\)O\({}_{9}\) dimer, the bond angle \(\angle\)Ni-O-Ni between Ni\({}^{2+}\) ions at the bridging O(II) atoms measures 85.4\({}^{\circ}\), close to 90\({}^{\circ}\), suggesting a negligible metal-metal (Ni-Ni) bond. Consequently, Ni\({}_{2}\)O\({}_{9}\) constitutes an easy-axis spin-1 dimer. The dimers are interconnected through Se(I,II)O\({}_{3}\) tripods. While the Se(I)O\({}_{3}\) tripods establish connections between the Ni\({}_{2}\)O\({}_{9}\) dimers through the oxygen atoms located on the upper and lower O\({}_{3}\)-triangles, the Se(II)O\({}_{3}\) tripods form connections by utilizing the oxygen atoms shared on the common middle O\({}_{3}\)-triangle face.
Figure 2 presents the basic thermodynamic properties of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\). The temperature-dependent zero-field heat capacity divided by temperature \(C_{p}(T)/T\) in Fig. 2(a), and the magnetization \(M(T)\) with an applied magnetic field of \(B=0.1\) T in Fig. 2(b), reveal a well-defined magnetic phase transition occurring at the critical temperature \(T_{c}=5.75\) K. No additional discernible thermodynamic anomalies are observed above \(T_{c}\) in either \(C_{p}(T)\) or \(M(T)\). Below \(T_{c}\), the magnetization \(M\) exhibits a larger magnitude when subjected to a \(c\)-axis directional field in comparison to an in-plane field, thereby indicating the easy-axis magnetic anisotropy, which is further confirmed by the field dependent magnetization \(M(B)\) at 1.8 K as shown in Fig. 2(c).
### Exchange interactions
In K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), the Ni\({}^{2+}\) ions possess a spin-1 3\(d^{8}\) electronic configuration. Within a Ni\({}_{2}\)O\({}_{9}\) dimer, two Ni\({}^{2+}\) ions interact magnetically through the three shared oxygen atoms, resulting in the intra-dimer exchange interaction (\(J_{1}\) in Fig.1(b)). According to the Goodenough-Kanamori rule [35], the Ni-O-Ni super-exchange pathways with bond angle \(\angle\)Ni-O-Ni of 85.4\({}^{\circ}\) lead to a ferromagnetic interaction with \(J_{1}<0\).
Within the triangular lattice of Ni\({}_{2}\)O\({}_{9}\) dimers, the inter-dimer magnetic interactions, denoted as \(J_{2}\) and \(J_{3}\) in Fig. 1(b), are mediated through two distinct Se(I,II)O\({}_{3}\) tripods, involving the Ni-O-Se(I)-O-Ni and Ni-O-Se(II)-O-Ni paths, respectively. Se(I)O\({}_{3}\) facilitates the super-exchange interaction \(J_{2}\), while Se(II)O\({}_{3}\) plays a role in both \(J_{2}\) and \(J_{3}\). It's worth noting that the random occupation of Se(II) could potentially induce disorder effects to \(J_{2}\) and \(J_{3}\). However, the exploration of these disorder effects is beyond the scope of the present study, and thus we do not consider them in this paper.
The triangular-lattice layer of Ni\({}_{2}\)O\({}_{9}\) dimers is effectively isolated from neighboring layers by non-magnetic K\({}^{+}\) ions, resulting in minimal inter-layer interactions. Consequently, the spin system within K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) can be described as a two-dimensional spin-1 dimer model
Figure 1: Crystal structure of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\). (a) The unit cell comprises two layers of Ni\({}_{2}\)O\({}_{9}\) dimers (gray polyhedra). (b) Schematic representation of nearest-neighbor (\(J_{1}\)), next-nearest-neighbor (\(J_{2}\)) and third-nearest-neighbor (\(J_{3}\)) exchange interactions. (c) Top view of Ni\({}_{2}\)O\({}_{9}\) dimers in the \(ab\) plane, forming the triangular lattice.
pressed by the following Hamiltonian
\[H = J_{1}\sum_{i}\mathbf{S}_{i1}\cdot\mathbf{S}_{i2}+\sum_{\langle ij \rangle}\big{(}J_{2}(\mathbf{S}_{i1}\cdot\mathbf{S}_{j1}+\mathbf{S}_{i2}\cdot \mathbf{S}_{j2})\] \[+ J_{3}(\mathbf{S}_{i1}\cdot\mathbf{S}_{j2}+\mathbf{S}_{i2}\cdot \mathbf{S}_{j1})\big{)}-D\sum_{i}\big{(}(S_{i1}^{z})^{2}+(S_{i2}^{z})^{2}\big{)}.\]
Here, \(\mathbf{S}_{i,1/2}\) represents the spin-1 operator of the first/second Ni\({}^{2+}\) ion within the Ni\({}_{2}\)O\({}_{9}\) dimer on the \(i\)-th site, and \(\langle ij\rangle\) denotes the nearest-neighbor bond for the dimers. The parameters \(J_{1}\), \(J_{2}\), and \(J_{3}\) correspond to the exchange interactions as illustrated in Fig. 1(b), while \(D\) accounts for the easy-axis magnetic anisotropy.
To provide a preliminary estimate of the magnetic anisotropy parameter \(D\), we compare the magnetization magnitudes of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) for both \(c\)-axis directional and in-plane magnetic fields at 1.8 K in Fig. 2(c) and find that \((M_{c}-M_{ab})/M_{ab}\simeq 0.2\) under \(B=7\) T. This provides a rough determination of the magnetic anisotropy \(D\), which is on the order of 10% of the interactions between the Ni\({}_{2}\)O\({}_{9}\) dimers. The magnetic anisotropy plays a role in the different field-induced phase diagrams for the \(c\)-axis directional and in-plane magnetic fields, which we will delve into further in Figs. 8 and 11.
To determine the magnetic interactions in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), we performed fittings on the temperature-dependent magnetization data shown in Fig. 2(b). These fittings are conducted by utilizing the Curie-Weiss law \(M/H=\frac{C}{T-\Theta}+\chi_{0}\) within the temperature range of 200 K to 300 K, where \(C\) is the Curie constant, \(\Theta\) is the Curie-Weiss temperature, and \(\chi_{0}\) denotes the \(T\)-independent contribution. The fittings yield values of \(\Theta_{c}=-40\) K and \(\Theta_{ab}=-43\) K, indicating overall antiferromagnetic interactions in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\). Additionally, we determine the \(g\)-factors, with \(g_{c}=2.30\) and \(g_{ab}=2.24\), from the Curie constants.
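For concreteness, the fitting step can be sketched as follows (our own illustration; the synthetic arrays stand in for the measured \(M/H\) data of Fig. 2(b), and the constants are in CGS units with \(S=1\) for Ni\(^{2+}\) and two Ni\(^{2+}\) ions per formula unit):

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, Theta, chi0):
    """Curie-Weiss law: M/H = C / (T - Theta) + chi0."""
    return C / (T - Theta) + chi0

# Synthetic stand-in for the measured susceptibility between 200 and 300 K.
T = np.linspace(200.0, 300.0, 101)
chi = curie_weiss(T, 2.6, -40.0, 1e-4)

(C_fit, Theta_fit, chi0_fit), _ = curve_fit(curie_weiss, T, chi,
                                            p0=(1.0, -30.0, 0.0))

# g-factor from the Curie constant, C = n N_A g^2 mu_B^2 S(S+1) / (3 k_B).
N_A, mu_B, k_B = 6.02214e23, 9.27401e-21, 1.380649e-16  # CGS units
n, S = 2, 1.0
g = np.sqrt(3 * k_B * C_fit / (n * N_A * mu_B**2 * S * (S + 1)))
print(Theta_fit, g)
```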
The Curie-Weiss fitting, though informative, does not provide separate values for \(J_{1}\), \(J_{2}\), and \(J_{3}\). To obtain these individual exchange parameters, we turn to first-principles simulations, which offer a more detailed insight into the magnetic interactions. In our simulations, we considered four distinct magnetization configurations, as visualized in Fig. 3, and calculated their corresponding total energies. While our simulations do not include magnetic anisotropy due to the absence of spin-orbit interaction, they allow us to deduce specific values for the exchange parameters.
Our theoretical calculations yield \(J_{1}=-5.3\) meV, \(J_{2}=2.4\) meV, and \(J_{3}=1.2\) meV. These values, obtained through simulations, complement our experimental findings and provide a more comprehensive understanding of the magnetic interactions.
Figure 3: Four different spin configurations in the first-principles simulation. The arrows indicate spin directions on the Ni\({}^{2+}\) irons.
Figure 2: (a) Zero-field specific heat. (b) Temperature dependent magnetization with \(B=0.1\) T. (c) Field dependent magnetization at 1.8 K.
The theoretical Curie-Weiss temperature calculated using these parameters, approximately \(\Theta=-2(J_{1}+3J_{2}+3J_{3})/3\simeq-43\) K, closely aligns with the value derived from the experimental fitting, reinforcing the consistency of our results.
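Extracting \(J_{1}\), \(J_{2}\), and \(J_{3}\) from the four total energies amounts to a small linear solve. The sketch below illustrates this energy-mapping step under the assumption that each collinear configuration obeys \(E=E_{0}+n_{1}J_{1}S^{2}+n_{2}J_{2}S^{2}+n_{3}J_{3}S^{2}\); the bond counts \(n_{k}\) used here are hypothetical placeholders (the true values follow from counting parallel and antiparallel bonds in the \(2\times 2\times 1\) supercell for the configurations of Fig. 3), and the synthetic energies are generated from the quoted exchange values purely to demonstrate the inversion:

```python
import numpy as np

# Hypothetical bond-count coefficients (for E0, n1, n2, n3) of four
# collinear configurations; S = 1, so the S^2 factor is unity.
coeffs = np.array([
    [1.0,  8.0,  24.0,  24.0],
    [1.0,  8.0, -24.0, -24.0],
    [1.0, -8.0,  24.0, -24.0],
    [1.0, -8.0, -24.0,  24.0],
])
x_true = np.array([0.0, -5.3e-3, 2.4e-3, 1.2e-3])  # (E0, J1, J2, J3) in eV
E_tot = coeffs @ x_true                            # synthetic total energies

E0, J1, J2, J3 = np.linalg.solve(coeffs, E_tot)
print(J1 * 1e3, J2 * 1e3, J3 * 1e3)                # meV: -5.3, 2.4, 1.2
```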
### Field-induced magnetic transitions
As clarified in Section III.2, our DFT calculations provide valuable insights into the superexchange interactions in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\). Specifically, we find that \(J_{1}\), which governs the interaction between neighboring Ni\({}^{2+}\) ions within each dimer, exhibits a ferromagnetic-like behavior, in line with the Goodenough-Kanamori rule, given the nearly 90\({}^{\circ}\) bond angle \(\angle\)Ni-O-Ni. This ferromagnetic \(J_{1}\) interaction strongly encourages a parallel alignment of spins on the Ni\({}^{2+}\) ions within the Ni\({}_{2}\)O\({}_{9}\) dimer. Consequently, we can conceptualize the dimer as possessing a total spin, denoted as \(\mathbf{S}_{i1}^{\rm tot}=\mathbf{S}_{i1}+\mathbf{S}_{i2}\), with a total spin value of \(S^{\rm tot}=2\) when the temperature remains below the strength of the intra-dimer interaction, \(|J_{1}|=61\) K.
In terms of the total dimer spin, building upon Eq. (1), we can approximate the effective dimer interaction as follows
\[H\simeq\frac{J_{1}}{2}\sum_{i}(\mathbf{S}_{i}^{\rm tot})^{2}+J_{2}\sum_{\langle ij \rangle}\mathbf{S}_{i}^{\rm tot}\cdot\mathbf{S}_{j}^{\rm tot}-D\sum_{i}(S_{i}^ {\rm tot,z})^{2}, \tag{2}\]
where we employ the approximation \(J_{2}\simeq J_{3}\) based on our DFT estimates and add an additional term involving \(S_{i1}^{z}S_{i2}^{z}\) within the dimer, which is unlikely to significantly alter the underlying physics. By disregarding the spin fluctuations of \(\mathbf{S}_{i}^{\rm tot}\) with lower total spin values (\(S^{\rm tot}=1,0\)), we can treat K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) as a triangular-lattice antiferromagnet, with each Ni\({}_{2}\)O\({}_{9}\) unit contributing an overall spin-2. Consequently, K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) serves as a platform for exploring the magnetic phase transitions of an effective spin-2 triangular-lattice antiferromagnet at low temperatures.
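The reduction of the intra-dimer term of Eq. (1) to the first term of Eq. (2) follows from a one-line operator identity: since \(\mathbf{S}_{i1}^{2}=\mathbf{S}_{i2}^{2}=S(S+1)=2\) for spin-1,
\[\mathbf{S}_{i1}\cdot\mathbf{S}_{i2}=\tfrac{1}{2}\big{[}(\mathbf{S}_{i}^{\rm tot})^{2}-\mathbf{S}_{i1}^{2}-\mathbf{S}_{i2}^{2}\big{]}=\tfrac{1}{2}(\mathbf{S}_{i}^{\rm tot})^{2}-2,\]
so the constant merely shifts the energy. Likewise, for \(J_{2}=J_{3}\) the inter-dimer terms combine as \(J_{2}(\mathbf{S}_{i1}+\mathbf{S}_{i2})\cdot(\mathbf{S}_{j1}+\mathbf{S}_{j2})=J_{2}\,\mathbf{S}_{i}^{\rm tot}\cdot\mathbf{S}_{j}^{\rm tot}\).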
#### iii.3.1 \(B\|c\)
We can only resolve a single magnetic phase transition occurring at \(T_{c}=5.75\) K in the zero-field heat capacity measurement displayed in Fig. 2(a). The introduction of a magnetic field prompts us to explore the field-induced magnetic phase transition, which is depicted in the temperature-dependent specific heat divided by temperature \(C_{p}(T)/T\) for K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) with the magnetic field aligned along the \(c\)-axis, as presented in Fig. 4. As the applied magnetic field strength increases, the original transition at \(T_{c}\) gradually splits into two distinct transitions, labeled as \(T_{c1}\) and \(T_{c2}\). Notably, these transition points, \(T_{c1}\) and \(T_{c2}\), exhibit distinct behaviors with respect to the magnetic field strength. The black dashed line in Fig. 4 signifies the shift of \(T_{c1}\) to higher temperatures as the magnetic field is increased, whereas the pink dashed line indicates the opposite trend for \(T_{c2}\), shifting to lower temperatures. It is important to observe that while the peak at \(T_{c1}\) retains its sharpness, resembling the zero-field transition, the transition peak at \(T_{c2}\) exhibits a significantly broader profile. This contrasting behavior in the peak profiles highlights the distinct nature of these phase transitions occurring at \(T_{c1}\) and \(T_{c2}\).
The heat capacity measurements in Fig. 4 provide compelling evidence for the existence of two successive transitions, and hence the presence of an intermediate phase. This intermediate state is anticipated to manifest itself in the field-dependent specific heat \(C_{p}(B)\) when probed at various fixed temperatures. As the applied magnetic field induces a progressive upward shift of \(T_{c1}(B)\), as indicated by the black dashed line in Fig. 5, one would expect to identify the same phase transition point \(B_{c1}(T)\) within \(C_{p}(B)\) when maintaining a constant temperature above \(T_{c}=5.75\) K. Likewise, keeping the temperature fixed below \(T_{c}\) allows us to discern the phase transition occurring at \(B_{c2}(T)\), which corresponds to \(T_{c2}(B)\). To validate the existence of the intermediate phase, a comprehensive analysis of the detailed data for the field-dependent specific heat \(C_{p}(B)\) is presented in Fig. 5.
In Fig. 5(a), we examine the field-dependent specific heat \(C_{p}(B)\) while maintaining a constant temperature of \(T=6.75\) K above \(T_{c}\). At this temperature, K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) is in a paramagnetic state under low magnetic fields. As the magnetic field \(B\) increases, we observe an enhancement in \(C_{p}(B)\), particularly at low fields. By applying polynomial fitting to the data within the range of 0 T to 4 T, we find that the increase in \(C_{p}(B)\) approximately follows a relationship of \(\Delta C_{p}(B)\propto B^{2}\), where \(\Delta C_{p}(B)=C_{p}(B)-C_{p}(0)\). This behavior aligns with the field dependence of specific heat typically observed in paramagnetic states [36]. Upon reaching the critical value of \(B_{c1}=7.5\) T, a peak emerges in \(C_{p}(B)\), indicating a transition into the intermediate state for the system.
Figure 4: Specific heat \(C_{p}(T)/T\) and successive phase transition temperatures \(T_{c1}\) and \(T_{c2}\) indicated by dashed lines under different fields \(B\|c\).
Similar trends are observed in Figs. 5(b)-(g) for temperatures above \(T_{c}\). In this range, the critical magnetic field \(B_{c1}(T)\) decreases as the measurement temperature is lowered.
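The quadratic field dependence quoted above corresponds to a one-parameter least-squares fit; a minimal sketch with synthetic stand-in data (our own, not the original analysis) is:

```python
import numpy as np

B = np.linspace(0.0, 4.0, 21)        # applied field in T (0-4 T window)
Cp0 = 1.0                            # hypothetical zero-field C_p
Cp = Cp0 + 0.02 * B**2               # synthetic C_p(B) data
dCp = Cp - Cp0

# Least-squares fit of Delta C_p(B) = a * B^2.
a = np.linalg.lstsq(B[:, None] ** 2, dCp, rcond=None)[0][0]
print(a)                             # recovers 0.02 for the synthetic data
```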
Around \(T_{c}=5.75\) K (Figs. 5(f)-(k)), the transition peaks in \(C_{p}(B)\) become notably broad and challenging to discern clearly. However, as we continue to lower the temperature below \(T_{c}\), the transition peak in \(C_{p}(B)\) becomes pronounced again, allowing us to confidently identify the critical field \(B_{c2}(T)\). When we delve below 5.0 K in Figs. 5(l)-(o), a distinct pattern emerges in the field dependence of \(C_{p}(B)\), differing from that in Figs. 5(a)-(f). Here, the specific heat \(C_{p}(B)\) initially decreases as the field increases at low fields, then experiences an enhancement with further increases in \(B\), and reaches a peak at \(B_{c2}\) before transitioning into the intermediate phase. Notably, as the holding temperature decreases, the critical field \(B_{c2}(T)\) consistently increases. The distinct trends observed in \(B_{c1}(T)\) and \(B_{c2}(T)\) underscore the intricate interplay between temperature and magnetic field, providing validation for the existence of the intermediate phase.
The successive phase transitions in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) are not only evident in the specific heat measurements (as seen in \(C_{p}(T)\) in Fig. 4 and \(C_{p}(B)\) in Fig. 5) but also discernible in the magnetization data. Fig. 6 presents the temperature-dependent magnetization \(M(T)\) and its associated differential magnetization concerning temperature (\(dM/dT\)) under varying magnetic fields. At lower magnetic field strengths, as the temperature decreases to 2 K, the magnetization in Fig. 6(a) exhibits a steady increase. Conversely, at higher magnetic fields, the magnetization initially ascends as the temperature decreases, but it subsequently begins to decline.
Notably, the temperature-dependent differential magnetization (\(dM/dT\)) in Fig. 6(b) reveals two distinctive kinks. These kink temperatures, indicated by arrows, agree with the critical temperatures \(T_{c1}\) and \(T_{c2}\) as determined through specific-heat measurements. A sudden increase in magnetization becomes evident at \(T_{c1}\). At low magnetic fields, the rate of this enhancement diminishes upon reaching \(T_{c2}\); at high magnetic fields, the upward trend not only decelerates but ultimately reverses, leading to a decrease in magnetization.
To further underscore the coherent alignment between the heat capacity and magnetization measurements, we replot the data as \(C_{H}=-\mu_{0}H(\frac{\partial M}{\partial T})_{H}\) in Fig. 7. This quantity signifies the heat capacity contribution from the work performed by the magnetic field on the magnetization.
Figure 5: Field-dependent specific heat \(C_{p}(B)\) and critical magnetic fields \(B_{c1}\) and \(B_{c2}\) at selected temperatures with \(B\|c\). The solid red lines in (a)-(d) represent the fitting results of \(\Delta C_{p}(B)\propto B^{2}\).
According to the first law of thermodynamics,
\[dQ=dU-\mu_{0}HdM, \tag{3}\]
we know the specific heat at constant field
\[C=C_{M}+C_{H}, \tag{4}\]
where \(C_{M}=(\frac{\partial U}{\partial T})_{M}\) represents the specific heat at a constant magnetization, derived from the internal energy \(U\). Meanwhile, \(C_{H}=-\mu_{0}H(\frac{\partial M}{\partial T})_{H}\) accounts for the contribution arising from the work done by the magnetic field. It is important to note that when the magnetic field remains constant, the magnetization \(M\) varies with temperature, and consequently, \(C_{M}\) also depends on the magnetic field. Nonetheless, \(C_{H}\) specifically represents how changes in magnetization influence the heat capacity due to the work done by the magnetic field.
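Schematically, the split (4) can be read off by dividing (3) by \(dT\) at constant field (this rewriting is ours, identifying the first term with \(C_{M}\) in the notation above):

\[C=\Big(\frac{\delta Q}{dT}\Big)_{H}=\frac{\partial U}{\partial T}-\mu_{0}H\Big(\frac{\partial M}{\partial T}\Big)_{H}=C_{M}+C_{H}.\]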
Upon obtaining the heat capacity \(C_{H}\) arising from the magnetic field's work on the magnetization in Fig. 7, we conduct a comparative analysis by examining the temperature-dependent specific heat \(C_{p}(T)/T\) shown in Fig. 4. We observe that the first transition peak in \(C_{p}(T)/T\) at \(T_{c1}\) closely resembles the corresponding peak in \(C_{H}\). This resemblance suggests that at the transition point \(T_{c1}\), there is a substantial alteration in magnetization aligned with the field direction. However, the situation deviates significantly at the second transition \(T_{c2}\). Here, the behavior of \(C_{H}\) differs notably from that of \(C_{p}(T)/T\), and \(C_{H}\) even exhibits negative values under strong magnetic fields. These observations imply that, although the magnetization along the field direction does not undergo significant changes at \(T_{c2}\), the internal energy experiences substantial variations. This may indicate that the magnetization perpendicular to the field direction undergoes pronounced shifts during this phase transition. We can also draw parallels between the field-dependent heat capacity behavior \(C_{p}(B)\) at low magnetic fields in Fig. 5 and the patterns observed in \(C_{H}\) in Fig. 7.
Figure 8: Magnetic phase diagram for \(B\|c\) with the phase boundaries determined by the collected data from \(C_{p}(T)\) in Fig. 4, \(C_{p}(B)\) in Fig. 5 and \(M(T)\) in Fig. 6.
Figure 6: (a) Temperature-dependent magnetization with the magnetic field aligned along the \(c\)-axis. (b) Corresponding temperature derivative of the magnetization, \(dM/dT\).
Figure 7: Heat capacity contribution \(C_{H}=-\mu_{0}H(\frac{\partial M}{\partial T})_{H}\) from the work performed by the magnetic field on the magnetization.
It becomes evident that as we keep the temperature constant and increase the magnetic field strength, both \(C_{H}\) and \(C_{p}(B)\) exhibit similar trends in their values at low fields.
Upon gathering critical points from the successive magnetic transitions \(T_{c1}\) and \(T_{c2}\) in \(C_{p}(T)/T\) (Fig. 4), as well as \(M(T)\) (Fig. 6), and \(B_{c1}\) and \(B_{c2}\) in \(C_{p}(B)\) (Fig. 5), we can construct the magnetic phase diagram for \(B\|c\), as illustrated in Fig. 8. The color intensity in the diagram reflects the values of \(C_{p}/T\), and its features approximately align with the extracted critical points. The phase diagram is effectively divided into three distinct phases. As previously mentioned, we have argued that K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) exhibits triangular-lattice antiferromagnetic behavior with an effective spin of 2. Drawing insights from Monte Carlo simulations applied to the classical triangular-lattice antiferromagnet under magnetic fields [16; 37], we propose a sequence of phase transitions. Initially, the system undergoes a continuous phase transition from the paramagnetic state to the UUD state, which breaks the \(Z_{3}\) lattice symmetry. As the temperature continues to decrease, a Berezinskii-Kosterlitz-Thouless phase transition ensues, leading the system from the UUD state into the "Y state". This transition breaks the \(c\)-axis spin rotation symmetry.
#### III.3.2 \(B\|ab\)
To comprehensively explore the field-induced magnetic phases in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), we extend our investigations to encompass in-plane magnetic fields aligned with the crystallographic \(ab\) plane (\(B\|ab\)). Following a similar interpretation approach as outlined in Section III.3.1, the magnetization data presented in Fig. 9 and the heat capacity measurements depicted in Fig. 10 enabled us to extract the critical transition temperatures \(T_{c}(B)\) and fields \(B_{c}(T)\), respectively, under in-plane magnetic fields. These critical values are subsequently used to construct a magnetic phase diagram, which is presented in Fig. 11.
Figure 10: Specific heat under different in-plane fields \(B\|ab\).
Figure 9: (a) Temperature-dependent magnetization with the in-plane magnetic field. (b) Corresponding temperature derivative of the magnetization, \(dM/dT\).
This phase diagram serves as a visual representation of how the magnetic phases evolve under the influence of in-plane magnetic fields.
It is noteworthy that our findings reveal a notable disparity between the phase diagrams for \(B\|c\) and \(B\|ab\). This divergence can be attributed to the impact of on-site magnetic anisotropy, characterized by the parameter \(D\), which significantly influences the magnetic behavior of the Ni\({}^{2+}\) ions within the crystal structure of K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\). This observation underscores the critical importance of considering crystallographic orientations and the effects of magnetic anisotropy when investigating the magnetic properties of intricate materials such as K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\).
## IV Summary and conclusion
In our comprehensive investigation, we delved into the intriguing field-induced magnetic phase transitions within the newly synthesized compound K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\), characterized by a spin-1 dimer system arranged on a triangular lattice. Through an integrated approach encompassing magnetization and heat capacity measurements, coupled with Curie-Weiss fitting of the magnetization data and first-principles simulations, we elucidated the underlying exchange interactions governing the behavior of the spin-1 Ni\({}^{2+}\) ions in K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\).
Our study uncovered a fascinating interplay of magnetic interactions within this compound. We observed a ferromagnetic intra-dimer interaction between Ni-Ni dimers, effectively rendering them as an ensemble with a total spin of \(S=2\). In contrast, antiferromagnetic interactions emerged between these dimers on the triangular lattice. This intriguing magnetic behavior was further influenced by the trigonal distortion of the NiO\({}_{6}\) octahedra, introducing an easy-axis magnetic anisotropy. This anisotropy played a pivotal role in shaping the distinctive magnetic phase diagrams observed under both \(c\)-axis directional and in-plane magnetic fields.
One of the most notable findings of our investigation was the identification of a two-step phase transition when the magnetic field was aligned with the \(c\) direction. The first transition, from a paramagnetic state to an up-up-down (UUD) state, was characterized by the breaking of the \(Z_{3}\) lattice symmetry. Subsequently, a Berezinskii-Kosterlitz-Thouless transition ensued, marked by the breaking of the \(c\)-axis spin-rotation symmetry, leading to the formation of what we term the "Y state" at low temperatures.
In conclusion, our study provides valuable insights into the intricate magnetic phase transitions inherent to geometrically frustrated magnetic systems featuring dimer structures. The newly synthesized compound K\({}_{2}\)Ni\({}_{2}\)(SeO\({}_{3}\))\({}_{3}\) serves as an intriguing model system, shedding light on the rich and complex behavior of spin-1 systems arranged on triangular lattices under the influence of magnetic fields. These findings not only expand our fundamental understanding of quantum magnetism but also hold promise for potential applications in emerging technologies.
###### Acknowledgements.
This work is supported by the National Key Research and Development Program of China (Grant No. 2021YFA1400400), Shenzhen Fundamental Research Program (Grant No. JCYJ20220818100405013), the Guangdong Innovative and Entrepreneurial Research Team Program (Grants No. 2017ZT07C062), Shenzhen Key Laboratory of Advanced Quantum Functional Materials and Devices (Grant No. ZDSYS20190902092905285), Guangdong Basic and Applied Basic Research Foundation (Grant No. 2020B1515120100), Shenzhen Science and Technology Program (Grant No. RCYX20221008092848063).
|
In this study, we measured the magnetization and heat capacity of the newly synthesized compound K2Ni2(SeO3)3 and investigated its field-induced magnetic phase transitions; the compound realizes a spin-1 dimer system arranged on a triangular lattice. First-principles simulations allowed us to describe the spin system of K2Ni2(SeO3)3 as a two-dimensional triangular-lattice spin-1 dimer model with an intra-dimer exchange of J1 = 0.32 meV, an inter-dimer exchange of J2 = 0.79 meV, and an easy-axis anisotropy of D = 0.14 meV. This easy-axis magnetic anisotropy accounts for the distinct magnetic phase diagrams observed under c-axis and in-plane magnetic fields. In particular, our investigation unveils a two-step phase transition |
2309.07649 | Decay estimates for one Aharonov-Bohm solenoid in a uniform magnetic
field II: wave equation | This is the second of a series of papers in which we investigate the decay
estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform
magnetic field. In our first paper \cite{WZZ}, we studied the Strichartz estimates for the Schrödinger equation with one Aharonov-Bohm solenoid in a uniform magnetic field. The wave equation in this setting becomes more delicate, since a difficulty arises from the square root of the eigenvalues of the Schrödinger operator $H_{\alpha, B_0}$, so that we cannot directly construct the half-wave propagator. An independent interesting result concerning the Gaussian upper bounds of the heat kernel is proved using two different methods. The first one is based on establishing a Davies-Gaffney inequality in this setting, and the second one directly constructs the heat kernel (which efficiently captures the magnetic effects) based on the Schulman-Sunada formula. As byproducts, we prove optimal bounds for the heat kernel and show the Bernstein inequality and the square function inequality for the Schrödinger operator with one Aharonov-Bohm solenoid in a uniform magnetic field. | Haoran Wang, Fang Zhang, Junyong Zhang | 2023-09-14T12:15:42 | http://arxiv.org/abs/2309.07649v1 | # Decay estimates for one Aharonov-Bohm solenoid in a uniform magnetic field II: wave equation
###### Abstract.
This is the second of a series of papers in which we investigate the decay estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform magnetic field. In our first paper [36], we studied the Strichartz estimates for the Schrodinger equation with one Aharonov-Bohm solenoid in a uniform magnetic field. The wave equation in this setting becomes more delicate, since a difficulty arises from the square root of the eigenvalues of the Schrodinger operator \(H_{\alpha,B_{0}}\), so that we cannot directly construct the half-wave propagator. An independent interesting result concerning the Gaussian upper bounds of the heat kernel is proved using two different methods. The first one is based on establishing a Davies-Gaffney inequality in this setting, and the second one directly constructs the heat kernel (which efficiently captures the magnetic effects) based on the Schulman-Sunada formula. As byproducts, we prove optimal bounds for the heat kernel and show the Bernstein inequality and the square function inequality for the Schrodinger operator with one Aharonov-Bohm solenoid in a uniform magnetic field.
**Key Words:** Strichartz estimates, Davies-Gaffney inequality, wave equation, Aharonov-Bohm solenoids, uniform magnetic field. **AMS Classification:** 42B37, 35Q40.
## 1. Introduction
In this paper, as a sequel to the recent papers [19, 21, 36], we study the decay and Strichartz estimates for the wave equation on the plane pierced by one infinitesimally thin Aharonov-Bohm solenoid and subjected to a perpendicular uniform magnetic field of constant magnitude \(B_{0}\). More precisely, we study the wave equation
\[\begin{cases}\partial_{tt}u(t,x)+H_{\alpha,B_{0}}u(t,x)=0,\\ u(0,x)=u_{0}(x),\quad\partial_{t}u(0,x)=u_{1}(x),\end{cases} \tag{1.1}\]
where the magnetic Schrodinger operator
\[H_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))^{2}, \tag{1.2}\]
is the same as the one considered in [36]. Here, \(A_{B}(x)\) is the Aharonov-Bohm potential (initially introduced in [3])
\[A_{B}(x)=\alpha\Big{(}-\frac{x_{2}}{|x|^{2}},\frac{x_{1}}{|x|^{2}}\Big{)}, \quad x=(x_{1},x_{2})\in\mathbb{R}^{2}\setminus\{0\}, \tag{1.3}\]
where \(\alpha\in\mathbb{R}\) represents the circulation of \(A_{B}\) around the solenoid; \(A_{\rm hmf}(x)\) is given by
\[A_{\rm hmf}(x)=\frac{B_{0}}{2}(-x_{2},x_{1}),\quad B_{0}>0, \tag{1.4}\]
which generates the background uniform magnetic field.
We stress that the model is on \(\mathbb{R}^{2}\) and the magnetic field \(B\) is given by
\[B(x):=DA-DA^{t},\quad B_{ij}=\frac{\partial A^{i}}{\partial x_{j}}-\frac{ \partial A^{j}}{\partial x_{i}},\quad i,j=1,2. \tag{1.5}\]
Hence, the generated magnetic field \(B(x)=B_{0}+\alpha\delta(x)\) is actually a superposition of the uniform field and the Aharonov-Bohm field, where \(\delta\) is the usual Dirac delta. As mentioned in [36], the Aharonov-Bohm potential that produces the singular magnetic field has the same homogeneity as \(\nabla\) (homogeneous of degree \(-1\)), so that the perturbation from the Aharonov-Bohm potential (1.3) is critical; the potential \(A_{\rm hmf}(x)\) is unbounded at infinity, and the uniform magnetic field \(B(x)=B_{0}\) from (1.5) generates a trapping well. Moreover, due to the presence of the potential (1.4), the spectrum of the operator \(H_{\alpha,B_{0}}\) is pure point, and thus the dispersive behavior of the wave equation associated with \(H_{\alpha,B_{0}}\) will be distinguished from the models in [19, 21].
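As a quick check of this criticality (our computation), the Aharonov-Bohm potential scales exactly like the gradient: for \(\lambda>0\) and \(u_{\lambda}(x):=u(\lambda x)\),

\[A_{B}(\lambda x)=\lambda^{-1}A_{B}(x),\qquad\big(\nabla+iA_{B}\big)u_{\lambda}(x)=\lambda\big[(\nabla+iA_{B})u\big](\lambda x),\]

so that \(H_{\alpha,0}u_{\lambda}=\lambda^{2}(H_{\alpha,0}u)(\lambda\,\cdot)\), i.e. the perturbation (1.3) sits at the same scaling level as the Laplacian.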
The Hamiltonian \(H_{\alpha,B_{0}}\) can be defined as a self-adjoint operator on \(L^{2}\), via Friedrichs' Extension Theorem (see e.g. [22, Thm. VI.2.1] and [28, X.3]), with a natural form domain, which in 2D turns out to be equivalent to
\[\mathcal{D}(H_{\alpha,B_{0}})\simeq\mathcal{H}^{1}_{\alpha,B_{0}}:=\left\{f\in L^{2}(\mathbb{R}^{2};\mathbb{C}):\int_{\mathbb{R}^{2}}\big{|}\big{(}\nabla+i(A_{B}+A_{\rm hmf})\big{)}f\big{|}^{2}\ dx<+\infty\right\}.\]
We refer to [36, Section 2] for the Friedrichs' extension via quadratic forms and to [15] for more about the self-adjoint extension theory. In what follows and throughout, the operator \(H_{\alpha,B_{0}}\) should be regarded as the self-adjoint operator generated by the procedure of the Friedrichs' extension. Therefore, the half-wave propagator \(e^{it\sqrt{H_{\alpha,B_{0}}}}\) can be treated as a one-parameter group of operators on \(L^{2}(\mathbb{R}^{2})\). This allows us to study a large class of dispersive estimates, such as time-decay (perhaps local in time), Strichartz, and local smoothing estimates for dispersive evolutions such as (1.1). The validity of such properties has been a central object of deep investigation in dispersive equations over the last decades, due to their relevance in the description of linear and nonlinear dynamics. To better frame our results, let us briefly sketch the state of the art about these problems.
Due to the significance of dispersive and Strichartz estimates in harmonic analysis and partial differential equations, the literature is too vast to be cited exhaustively here; we refer to [6, 7, 8, 10, 11, 13, 14, 31] and the references therein for various dispersive equations with electromagnetic potentials in mathematics and physics. The dispersive equations with the Aharonov-Bohm potential, as a diffraction physical model, have attracted more and more researchers from the mathematical perspective. In [17, 18], the authors studied the validity of the time decay estimates for the Schrodinger equation with the Aharonov-Bohm potential. However, due to the lack of pseudo-conformal invariance (which plays a critical role in the Schrodinger case), the arguments of [17, 18] break down for the wave equation. Very recently, Fanelli, Zheng and the last author [19] established Strichartz estimates for the wave equation by constructing the odd sine propagator. To solve open problems raised in the survey [16] on the dispersive estimates for other equations (e.g. Klein-Gordon, Dirac, etc.), Gao, Yin, Zheng and the last author [21] constructed the spectral measure and then applied it to prove the time decay and Strichartz estimates for the Klein-Gordon equation. The potential models in [17, 18, 19, 21] are all scaling-invariant and contain no perturbations that are unbounded at infinity; they correspond to the special case \(B_{0}\equiv 0\) of our model (1.2). In this paper, as in [36], we proceed to consider the wave equation in the magnetic field mixing the Aharonov-Bohm and the uniform ones.
Before stating our main results, let us introduce some preliminary notations. We define the magnetic Besov spaces as follows. Let \(\varphi\in C_{c}^{\infty}(\mathbb{R}\setminus\{0\})\) satisfy \(0\leq\varphi\leq 1,\mathrm{supp}\,\varphi\subset[1/2,1]\), and
\[\sum_{j\in\mathbb{Z}}\varphi(2^{-j}\lambda)=1,\quad\varphi_{j}( \lambda):=\varphi(2^{-j}\lambda),\,j\in\mathbb{Z},\quad\phi_{0}(\lambda):=\sum _{j\leq 0}\varphi(2^{-j}\lambda). \tag{1.6}\]
**Definition 1.1** (Magnetic Besov spaces associated with \(H_{\alpha,B_{0}}\)).: For \(s\in\mathbb{R}\) and \(1\leq p,r<\infty\), the homogeneous Besov norm \(\|\cdot\|_{\dot{\mathcal{B}}^{s}_{p,r}(\mathbb{R}^{2})}\) is defined by
\[\|f\|_{\dot{\mathcal{B}}^{s}_{p,r}(\mathbb{R}^{2})}=\Big{(}\sum _{j\in\mathbb{Z}}2^{jsr}\|\varphi_{j}(\sqrt{H_{\alpha,B_{0}}})f\|_{L^{p}( \mathbb{R}^{2})}^{r}\Big{)}^{1/r}. \tag{1.7}\]
In particular, for \(p=r=2\), we have the Sobolev norm
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}:=\|f \|_{\dot{\mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}. \tag{1.8}\]
**Remark 1.2**.: Alternatively, the Sobolev space can be defined by
\[\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2}):=H^{-\frac{s}{2}}_{\alpha,B_{0}}L^{2}(\mathbb{R}^{2}),\]
with the norm
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}:=\big{\|}H^{\frac{s}{2}}_{\alpha,B_{0}}f\big{\|}_{L^{2}(\mathbb{R}^{2})}. \tag{1.9}\]
By the spectral theory of operators on \(L^{2}\), the norms in (1.8) and (1.9) are equivalent; see Proposition 2.5 below.
**Definition 1.3**.: A pair \((q,p)\in[2,\infty]\times[2,\infty)\) is said to be admissible, if \((q,p)\) satisfies
\[\frac{2}{q}\leq\frac{1}{2}-\frac{1}{p}. \tag{1.10}\]
For \(s\in\mathbb{R}\), we denote \((q,p)\in\Lambda_{s}^{W}\) if \((q,p)\) is admissible and satisfies
\[\frac{1}{q}+\frac{2}{p}=1-s. \tag{1.11}\]
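For instance, a quick check of the definitions (this example is ours) shows that for \(s=\frac{1}{2}\) the pair \((q,p)=(8,\frac{16}{3})\) belongs to \(\Lambda_{1/2}^{W}\):

\[\frac{1}{q}+\frac{2}{p}=\frac{1}{8}+\frac{3}{8}=\frac{1}{2}=1-s,\qquad\frac{2}{q}=\frac{1}{4}\leq\frac{1}{2}-\frac{1}{p}=\frac{5}{16}.\]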
Now we state our main theorem.
**Theorem 1.4**.: _Let \(H_{\alpha,B_{0}}\) be as in (1.2) and \(t\in I:=[0,T]\) with any finite \(T\). Then there exists a constant \(C_{T}\) depending on \(T\) such that_
\[\|e^{it\sqrt{H_{\alpha,B_{0}}}}f\|_{L^{\infty}(\mathbb{R}^{2})} \leq C_{T}|t|^{-1/2}\|f\|_{\dot{\mathcal{B}}^{3/2}_{1,1}(\mathbb{R}^{2})}, \quad t\in I,\quad t\neq 0. \tag{1.12}\]
_Let \(u(t,x)\) be the solution of (1.1) with initial data \((u_{0},u_{1})\in\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})\times \dot{\mathcal{H}}^{s-1}_{\alpha,B_{0}}(\mathbb{R}^{2})\), then the Strichartz estimates_
\[\|u(t,x)\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}\leq C_{T}\left(\|u_{0} \|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}+\|u_{1}\|_{\dot{ \mathcal{H}}^{s-1}_{\alpha,B_{0}}(\mathbb{R}^{2})}\right) \tag{1.13}\]
_hold for \((q,p)\in\Lambda_{s}^{W}\) and \(0\leq s<1\)._
**Remark 1.5**.: The local-in-time decay estimate (1.12) is quite different from the Schrodinger counterpart (see [36, Theorem 1.1])
\[\big{\|}e^{itH_{\alpha,B_{0}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\leq C|\sin(tB_{0})|^{-1}\big{\|}f\big{\|}_{L^{1}(\mathbb{R}^{2})},\quad t\neq\frac{k\pi}{B_{0}},\,k\in\mathbb{Z},\]
which is similar to the harmonic oscillators (see Koch and Tataru [24]). The period \(\pi/B_{0}\) is essentially the Larmor period. However, for the wave equation, provided that the data \(f=\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\) is localized at frequency scale \(2^{j}\) with \(j\in\mathbb{Z}\), we can prove (see (5.11) below)
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}}) e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\] \[\lesssim 2^{2j}\big{(}1+2^{j}t\big{)}^{-N}\|\varphi(2^{-j}\sqrt{H_ {\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})},\quad\text{for}\quad 2^{j}t\lesssim 1\]
and (see (5.9) below)
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\] \[\lesssim 2^{2j}\big{(}1+2^{j}t\big{)}^{-\frac{1}{2}}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})},\quad\text{for}\quad 2^{j}t\gtrsim 1,\quad 2^{-j}|t|\leq\frac{\pi}{8B_{0}}.\]
The decay estimates for waves thus depend on the frequency. The Strichartz estimates (1.13) are still local-in-time, but the endpoint \(T\) of the time interval can go beyond \(\frac{\pi}{2B_{0}}\), which is the upper bound of \(T\) for the Schrodinger Strichartz estimates. Since the unbounded potential generates a trapping well, the Strichartz estimates cannot be global-in-time (just as for dispersive equations on the sphere or torus), but they still capture the integrability and regularity behavior near \(t=0\).
Now let us figure out some points in our proof.
* As mentioned above, for the Schrodinger equation considered in [36], the explicit eigenvalues and eigenfunctions of the operator \(H_{\alpha,B_{0}}\) are the key ingredients. In particular, the eigenvalues are given by \[\lambda_{k,m}=(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0},\quad m,\,k\in\mathbb{Z},\,m\geq 0,\] see (2.1) below. One feature of \(\lambda_{k,m}\) is that \(k\) and \(m\) can be separated in the series convergence argument. However, for the half-wave propagator, this feature breaks down for the square root of \(\lambda_{k,m}\). Therefore, we cannot directly construct the wave propagator by following the argument of [19, 36].
* Since the uniform magnetic field generates a trapping well, the spectral measure involves a factor \(\sin(tB_{0})\) (which yields short-time but not long-time decay). This leads to the failure of the spectral measure argument in [21].
* To avoid constructing the spectral measure, we instead prove the Bernstein inequality to deal with the low frequencies. For the high frequencies, we use the classical subordination formula \[e^{-y\sqrt{H_{\alpha,B_{0}}}}=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-sH_{\alpha,B_{0}}}e^{-\frac{y^{2}}{4s}}s^{-\frac{3}{2}}ds,\quad y>0,\] which provides a bridge between the Schrodinger propagator and the half-wave propagator (a one-line verification of this identity is sketched after this list). This idea originates from [27] and [12]. The dispersive estimates proved in [36] are then used to address the high frequencies of the waves.
* The Littlewood-Paley theory (including Bernstein inequality and the square function inequality) associated with the Schrodinger operator \(H_{\alpha,B_{0}}\) are proved by establishing the Gaussian upper bounds for the heat kernel.
* The heat kernel estimates for magnetic Schrodinger operators are of independent interest, and we provide two methods to study the heat kernel. Unfortunately, due to the fact that \(A(x)=A_{\mathrm{hmf}}(x)+A_{B}(x)\notin L^{2}_{\mathrm{loc}}(\mathbb{R}^{2})\), Simon's diamagnetic pointwise inequality (see e.g. [29, Theorem B.13.2], [4]) cannot be directly used. Even though we cannot recover all the magnetic effects needed to prove the optimal heat kernel estimates, we can prove \[\left|e^{-tH_{\alpha,B_{0}}}(x,y)\right|\lesssim\frac{1}{t}e^{-\frac{|x-y|^{2}}{Ct}},\] which is enough for proving the Bernstein inequality and the square function inequality. We first prove the on-diagonal estimates and then extend them to the off-diagonal estimates by establishing the Davies-Gaffney inequality. The key point is to apply the arguments of [9] and [20] to the magnetic operator \(H_{\alpha,B_{0}}\). To recover more magnetic effects, we use the Schulman-Sunada formula from [34, 35] to construct the heat kernel and prove \[\left|e^{-tH_{\alpha,B_{0}}}(x,y)\right|\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}|x-y|^{2}}{4\sinh(tB_{0})}},\] which improves on the previous one. For more discussion on the heat kernel estimates, we refer to the remarks in Section 3.
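For the subordination formula mentioned above, a one-line verification at the level of a spectral parameter \(\lambda>0\) (our sketch) follows from the classical Gaussian integral \(\int_{0}^{\infty}s^{-3/2}e^{-a/s-bs}\,ds=\sqrt{\pi/a}\,e^{-2\sqrt{ab}}\) with \(a=\frac{y^{2}}{4}\) and \(b=\lambda^{2}\):

\[\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-s\lambda^{2}}e^{-\frac{y^{2}}{4s}}s^{-\frac{3}{2}}\,ds=\frac{y}{2\sqrt{\pi}}\cdot\frac{2\sqrt{\pi}}{y}\,e^{-y\lambda}=e^{-y\lambda},\]

and the operator identity then follows by the spectral theorem applied to \(\lambda=\sqrt{H_{\alpha,B_{0}}}\).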
The paper is organized as follows. In Section 2, as a preliminary step, we briefly recall the self-adjoint extension and the spectrum of the operator \(H_{\alpha,B_{0}}\), and prove the equivalence between Sobolev norm and a special Besov norm. In Section 3, we construct the heat kernel and prove the Gaussian upper bounds. In Section 4, we prove the Bernstein inequalities and the square function inequality by using the heat kernel estimates. Finally, in Section 5 and Section 6, we prove the dispersive estimate (1.12) and the Strichartz estimate (1.13) in Theorem 1.4 respectively.
**Acknowledgments:** The authors thank L. Fanelli, P. Šťovíček and P. D'Ancona for helpful discussions. This work is supported by National Natural Science Foundation of China (12171031, 11901041, 11831004).
## 2. preliminaries
In this section, we first recall, following the preliminary section of [36], two known results about the Friedrichs self-adjoint extension of the operator \(H_{\alpha,B_{0}}\) and the spectrum of \(H_{\alpha,B_{0}}\). Next, we use a spectral argument to prove the equivalence between the Sobolev norm and a special Besov norm.
### Quadratic form and self-adjoint extension
Define the space \(\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2})\) as the completion of \(\mathcal{C}^{\infty}_{c}(\mathbb{R}^{2}\setminus\{0\};\mathbb{C})\) with respect to the norm
\[\|f\|_{\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2})}=\Big{(}\int_{\mathbb{R }^{2}}|\nabla_{\alpha,B_{0}}f(x)|^{2}dx\Big{)}^{\frac{1}{2}}\]
where
\[\nabla_{\alpha,B_{0}}f(x)=\nabla f+i(A_{B}+A_{\rm hmf})f.\]
The quadratic form \(Q_{\alpha,B_{0}}\) associated with \(H_{\alpha,B_{0}}\) is defined by
\[Q_{\alpha,B_{0}}:\qquad\mathcal{H}^{1}_{\alpha,B_{0}}\to\mathbb{R}\] \[Q_{\alpha,B_{0}}(f)=\int_{\mathbb{R}^{2}}|\nabla_{\alpha,B_{0}}f (x)|^{2}dx.\]
Then the quadratic form \(Q_{\alpha,B_{0}}\) is positive definite, which implies that the operator \(H_{\alpha,B_{0}}\) is symmetric and semi-bounded from below, and thus admits a self-adjoint extension (Friedrichs extension) \(H^{F}_{\alpha,B_{0}}\) with the natural form domain
\[\mathcal{D}=\Big{\{}f\in\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2}):H^{F}_ {\alpha,B_{0}}f\in L^{2}(\mathbb{R}^{2})\Big{\}}\]
Even though the operator \(H_{\alpha,B_{0}}\) has many other self-adjoint extensions (see [15]) by the von Neumann extension theory, throughout this paper we use the simplest Friedrichs extension and simply write \(H_{\alpha,B_{0}}\) for its Friedrichs extension \(H^{F}_{\alpha,B_{0}}\).
### The spectrum of the operator \(H_{\alpha,B_{0}}\)
In this subsection, we exhibit the eigenvalues and eigenfunctions of the Schrodinger operator
\[H_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))^{2},\]
where the magnetic vector potentials are in (1.3) and (1.4).
**Proposition 2.1** (The spectrum for \(H_{\alpha,B_{0}}\)).: _Let \(H_{\alpha,B_{0}}\) be the self-adjoint Schrodinger operator in (1.2). Then the eigenvalues of \(H_{\alpha,B_{0}}\) are given by_
\[\lambda_{k,m}=(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0},\quad m,\,k\in\mathbb{Z}, \,m\geq 0, \tag{2.1}\]
_with (finite) multiplicity_
\[\#\Bigg{\{}j\in\mathbb{Z}:\frac{\lambda_{k,m}-(j+\alpha)B_{0}}{2B_{0}}-\frac {|j+\alpha|+1}{2}\in\mathbb{N}\Bigg{\}}.\]
_Furthermore, let \(\theta=\frac{x}{|x|}\), the corresponding eigenfunction is given by_
\[V_{k,m}(x)=|x|^{|k+\alpha|}e^{-\frac{B_{0}|x|^{2}}{4}}\,P_{k,m}\Bigg{(}\frac{ B_{0}|x|^{2}}{2}\Bigg{)}e^{ik\theta} \tag{2.2}\]
_where \(P_{k,m}\) is the polynomial of degree \(m\) given by_
\[P_{k,m}(r)=\sum_{n=0}^{m}\frac{(-m)_{n}}{(1+|k+\alpha|)_{n}}\frac{r^{n}}{n!}.\]
_with \((a)_{n}\) (\(a\in\mathbb{R}\)) the Pochhammer's symbol_
\[(a)_{n}=\begin{cases}1,&n=0;\\ a(a+1)\cdots(a+n-1),&n=1,2,\cdots\end{cases}\]
**Remark 2.2**.: One can verify that the orthogonality holds
\[\int_{\mathbb{R}^{2}}V_{k_{1},m_{1}}(x)\overline{V_{k_{2},m_{2}}(x)}\,dx=0,\quad\text{if}\quad(k_{1},m_{1})\neq(k_{2},m_{2}).\]
**Remark 2.3**.: Let \(L^{\alpha}_{m}(t)\) be the generalized Laguerre polynomials
\[L^{\alpha}_{m}(t)=\sum_{n=0}^{m}(-1)^{n}\Bigg{(}\begin{array}{c}m+\alpha\\ m-n\end{array}\Bigg{)}\frac{t^{n}}{n!},\]
then one has the well known orthogonality relation
\[\int_{0}^{\infty}x^{\alpha}e^{-x}L^{\alpha}_{m}(x)L^{\alpha}_{n}(x)\,dx=\frac{ \Gamma(n+\alpha+1)}{n!}\delta_{n,m},\]
where \(\delta_{n,m}\) is the Kronecker delta. Let \(\tilde{r}=\frac{B_{0}|x|^{2}}{2}\) and \(\alpha_{k}=|k+\alpha|\), then
\[P_{k,m}(\tilde{r})=\sum_{n=0}^{m}\frac{(-1)^{n}m(m-1)\cdots(m-(n-1))}{(\alpha_ {k}+1)(\alpha_{k}+2)\cdots(\alpha_{k}+n)}\frac{\tilde{r}^{n}}{n!}=\left( \begin{array}{c}m+\alpha_{k}\\ m\end{array}\right)^{-1}L^{\alpha_{k}}_{m}(\tilde{r}). \tag{2.3}\]
Therefore,
\[\|V_{k,m}(x)\|^{2}_{L^{2}(\mathbb{R}^{2})}=\pi\Big{(}\frac{2}{B_{0}}\Big{)}^{ \alpha_{k}+1}\Gamma(1+\alpha_{k})\Bigg{(}\begin{array}{c}m+\alpha_{k}\\ m\end{array}\Bigg{)}^{-1}. \tag{2.4}\]
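For completeness, here is how (2.4) follows (our computation) from (2.3) and the Laguerre orthogonality above, via the substitution \(t=\frac{B_{0}r^{2}}{2}\):

\[\|V_{k,m}\|_{L^{2}(\mathbb{R}^{2})}^{2}=2\pi\int_{0}^{\infty}r^{2\alpha_{k}}e^{-\frac{B_{0}r^{2}}{2}}P_{k,m}\Big(\frac{B_{0}r^{2}}{2}\Big)^{2}r\,dr=\frac{2\pi}{B_{0}}\Big(\frac{2}{B_{0}}\Big)^{\alpha_{k}}\binom{m+\alpha_{k}}{m}^{-2}\int_{0}^{\infty}t^{\alpha_{k}}e^{-t}\big(L_{m}^{\alpha_{k}}(t)\big)^{2}\,dt,\]

and the last integral equals \(\frac{\Gamma(m+\alpha_{k}+1)}{m!}=\binom{m+\alpha_{k}}{m}\Gamma(1+\alpha_{k})\), which gives (2.4).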
**Remark 2.4**.: Recall the Poisson kernel formula for Laguerre polynomials [2, (6.2.25)]: for \(a,b,c,\alpha>0\)
\[\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+\alpha+1)}L^{\alpha}_{m}(a)L^{\alpha}_{m}(b)=\frac{e^{\frac{\alpha c}{2}}}{(ab)^{\frac{\alpha}{2}}(1-e^{-c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{\alpha}\left(\frac{2\sqrt{ab}e^{-\frac{c}{2}}}{1-e^{-c}}\right)\]
then this together with (2.3) gives
\[\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+\alpha_{k}+1)}\Bigg{(}\begin{array}{c}m+\alpha_{k}\\ m\end{array}\Bigg{)}^{2}P_{k,m}(a)P_{k,m}(b) \tag{2.5}\] \[=\frac{e^{\frac{\alpha_{k}c}{2}}}{(ab)^{\frac{\alpha_{k}}{2}}(1-e^{-c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{\alpha_{k}}\left(\frac{2\sqrt{ab}e^{-\frac{c}{2}}}{1-e^{-c}}\right).\]
We refer to [36] for the proof.
### The Sobolev spaces
In this subsection, we will prove the equivalence of two norms.
**Proposition 2.5** (Equivalent norms).: _Let the Sobolev norm and Besov norm be defined in (1.9) and (1.7) respectively. For \(s\in\mathbb{R}\), then there exist positive constants \(c,C\) such that_
\[c\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}\leq\|f\|_{\dot{ \mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}\leq C\|f\|_{\dot{\mathcal{H}}^{s}_{ \alpha,B_{0}}(\mathbb{R}^{2})}, \tag{2.6}\]
_and_
\[c\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}\leq\|f\|_{\dot{ \mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}\leq C\|f\|_{\dot{\mathcal{H}}^{s}_{ \alpha,B_{0}}(\mathbb{R}^{2})}. \tag{2.7}\]
Proof.: Let \(\tilde{V}_{k,m}\) be the \(L^{2}\)-normalization of \(V_{k,m}\) in (2.2); then the eigenfunctions \(\left\{\tilde{V}_{k,m}\right\}_{k\in\mathbb{Z},m\in\mathbb{N}}\) form an orthonormal basis of \(L^{2}(\mathbb{R}^{2})\) consisting of eigenfunctions of \(H_{\alpha,B_{0}}\).
By the functional calculus, for any well-behaved function \(F\) (e.g. a bounded Borel measurable function) and \(f\in L^{2}\), we can write
\[F(H_{\alpha,B_{0}})f=\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}F(\lambda_{k,m})c _{k,m}\tilde{V}_{k,m}(x).\]
where
\[c_{k,m}=\int_{\mathbb{R}^{2}}f(y)\overline{\tilde{V}_{k,m}(y)}dy.\]
Then
\[\|F(H_{\alpha,B_{0}})f\|_{L^{2}(\mathbb{R}^{2})}=\Big{(}\sum_{k\in\mathbb{Z}, \atop m\in\mathbb{N}}\big{|}F(\lambda_{k,m})c_{k,m}\big{|}^{2}\Big{)}^{1/2}. \tag{2.8}\]
In particular, we have
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}=\|H^{\frac{s}{2}}_ {\alpha,B_{0}}f\|_{L^{2}(\mathbb{R}^{2})}=\Big{(}\sum_{k\in\mathbb{Z},\atop m \in\mathbb{N}}\big{|}\lambda^{\frac{s}{2}}_{k,m}c_{k,m}\big{|}^{2}\Big{)}^{1/2}.\]
Let \(\varphi\in C^{\infty}_{c}(\mathbb{R}\setminus\{0\})\) be as in (1.6). On the one hand, by the definition and (2.8), we have
\[\|f\|_{\dot{\mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})} =\Big{(}\sum_{j\in\mathbb{Z}}2^{2js}\|\varphi_{j}(\sqrt{H_{\alpha,B_{0}}})f\|^{2}_{L^{2}(\mathbb{R}^{2})}\Big{)}^{1/2}\] \[=\Big{(}\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z},\atop m\in \mathbb{N}}2^{2js}\big{|}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}}\Big{)} c_{k,m}\big{|}^{2}\Big{)}^{1/2}\] \[\leq\Big{(}\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z},\atop m\in \mathbb{N}}\lambda^{s}_{k,m}\big{|}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2 ^{j}}\Big{)}c_{k,m}\big{|}^{2}\Big{)}^{1/2}\] \[\lesssim\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}| \lambda^{\frac{s}{2}}_{k,m}c_{k,m}|^{2}\sum_{j\in\mathbb{Z}}\big{|}\varphi \Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}}\Big{)}\big{|}^{2}\Big{)}^{1/2}\] \[\lesssim\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}| \lambda^{\frac{s}{2}}_{k,m}c_{k,m}|^{2}\Big{)}^{1/2}=\|f\|_{\dot{\mathcal{H}}^ {s}_{\alpha,B_{0}}(\mathbb{R}^{2})}.\]
On the other hand, we have
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})} =\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}|\lambda^{\frac {s}{2}}_{k,m}c_{k,m}|^{2}\Big{)}^{1/2}\] \[=\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}\Big{|}\sum_{j \in\mathbb{Z}}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}}\Big{)}\lambda ^{\frac{s}{2}}_{k,m}c_{k,m}\Big{|}^{2}\Big{)}^{1/2}\] \[\leq C\Big{(}\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z},\atop m\in \mathbb{N}}2^{2js}\big{|}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}} \Big{)}c_{k,m}\big{|}^{2}\Big{)}^{1/2}\] \[\lesssim\Big{(}\sum_{j\in\mathbb{Z}}2^{2js}\|\varphi_{j}(\sqrt{H_ {\alpha,B_{0}}})f\|^{2}_{L^{2}(\mathbb{R}^{2})}\Big{)}^{1/2}=\|f\|_{\dot{ \mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}.\]
In the above inequality, we have used the fact that, for a fixed \(\lambda>0\), only finitely many terms are nonzero in the summation
\[1=\sum_{j\in\mathbb{Z}}\varphi\big{(}\frac{\lambda}{2^{j}}\big{)}.\]
Altogether, we have proved (2.6). One can prove (2.7) similarly.
## 3. Heat kernel estimates
In this section, with a view toward the Littlewood-Paley theory associated with \(H_{\alpha,B_{0}}\), we study the heat kernel estimates for the magnetic Schrodinger operator \(H_{\alpha,B_{0}}\). We provide two methods to study the heat kernel. In the first method, we combine the strategies of [18, 21, 19] to construct the heat kernel by using the spectral properties in Proposition 2.1. We then use the representation of the heat kernel to obtain the on-diagonal estimates. Finally, we extend the on-diagonal bounds by adding the Gaussian factor \(\exp(-d^{2}(x,y)/Ct)\) to obtain the off-diagonal Gaussian bounds. In the second method, we directly construct the heat kernel by using the Schulman-Sunada formula in [34, 35] and then optimize the established bounds.
### Method I
More precisely, we will first prove the following result.
**Proposition 3.1**.: _Let \(H_{\alpha,B_{0}}\) be the operator in (1.2) and suppose \(x=r_{1}(\cos\theta_{1},\sin\theta_{1})\) and \(y=r_{2}(\cos\theta_{2},\sin\theta_{2})\). Let \(u(t,x)\) be the solution of the heat equation_
\[\begin{cases}\big{(}\partial_{t}+H_{\alpha,B_{0}}\big{)}u(t,x)=0,\\ u(0,x)=f(x).\end{cases} \tag{3.1}\]
_Then_
\[u(t,x)=e^{-tH_{\alpha,B_{0}}}f=\int_{\mathbb{R}^{2}}K_{H}(t;x,y)f(y)\,dy,\quad t >0,\]
_where the kernel of the heat propagator \(e^{-tH_{\alpha,B_{0}}}\) is given by_
\[K_{H}(t;x,y)=\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}}\Big{)} e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})\cosh tB_{0}}{4\sinh tB_{0}}}\sum_{k\in \mathbb{Z}}e^{ik(\theta_{1}-\theta_{2}+itB_{0})}I_{\alpha_{k}}\left(\frac{B_ {0}r_{1}r_{2}}{2\sinh tB_{0}}\right). \tag{3.2}\]
_Furthermore, there exists a constant \(C\) such that_
\[|K_{H}(t;x,y)|\leq C\frac{B_{0}e^{(1-\alpha)B_{0}t}}{\sinh tB_{0}}e^{-\frac{B _{0}(r_{1}-r_{2})^{2}}{4\tanh tB_{0}}}. \tag{3.3}\]
**Remark 3.2**.: The argument is a bit different from the proof for the Schrodinger propagator. In particular, at first glance, the factor \(e^{-tB_{0}k}\) appears troublesome in the summation of the formula (3.2) when \(k\in-\mathbb{N}\), but the series converges due to the factor \(\sinh(tB_{0})\) in the argument of the modified Bessel function.
Proof.: We construct the representation formula (3.2) of the heat flow \(e^{-tH_{\alpha,B_{0}}}\) by combining the arguments of [18] and [19, 21]. This is close to the construction of the Schrodinger flow in our previous paper [36]; however, we provide the details again to keep the paper self-contained.
Our starting point is Proposition 2.1. Let \(\tilde{V}_{k,m}\) be the \(L^{2}\)-normalization of \(V_{k,m}\) in (2.2); then the eigenfunctions \(\left\{\tilde{V}_{k,m}\right\}_{k\in\mathbb{Z},m\in\mathbb{N}}\) form an orthonormal basis of \(L^{2}(\mathbb{R}^{2})\) consisting of eigenfunctions of \(H_{\alpha,B_{0}}\).
We expand the initial data \(f(x)\in L^{2}\) as
\[f(x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}c_{k,m}\tilde{V}_{k,m}(x)\]
where
\[c_{k,m}=\int_{\mathbb{R}^{2}}f(x)\overline{\tilde{V}_{k,m}(x)}\,dx. \tag{3.4}\]
The solution \(u(t,x)\) of (3.1) can be written as
\[u(t,x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}u_{k,m}(t)\tilde{V}_{k,m}(x), \tag{3.5}\]
where \(u_{k,m}(t)\) satisfies the ODE
\[\begin{cases}&u_{k,m}^{\prime}(t)=-\lambda_{k,m}u_{k,m}(t),\\ &u_{k,m}(0)=c_{k,m},\quad k\in\mathbb{Z},\,m\in\mathbb{N}.\end{cases}\]
Thus we obtain \(u_{k,m}(t)=c_{k,m}e^{-t\lambda_{k,m}}\). Therefore the solution (3.5) becomes
\[u(t,x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}c_{k,m}e^{-t\lambda_{k,m}}\tilde{V}_{k,m}(x).\]
Plugging (3.4) into the above expression yields
\[u(t,x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}e^{-t\lambda_{k,m}}\left(\int_{\mathbb{R}^{2}}f(y )\overline{\tilde{V}_{k,m}(y)}dy\right)\tilde{V}_{k,m}(x).\]
We expand \(f\) in spherical harmonics,
\[f(y)=\sum_{k\in\mathbb{Z}}f_{k}(r_{2})e^{ik\theta_{2}},\]
where
\[f_{k}(r_{2})=\frac{1}{2\pi}\int_{0}^{2\pi}f(r_{2},\theta_{2})e^{-ik\theta_{2} }\,d\theta_{2},\quad r_{2}=|y|, \tag{3.6}\]
we thus have
\[u(t,x) =\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}e^{-t\lambda_{k,m}}\frac{V_{k,m}(x)}{\|V_{k,m}\|_{L^{2}}^{2}}\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})e^{-\frac{B_{0}r_{2}^{2}}{4}}\,P_{k,m}\Big{(}\frac{B_{0}r_{2}^{2}}{2}\Big{)}r_{2}^{1+\alpha_{k}}\mathrm{d}r_{2}\Bigg{)}\] \[=\Big{(}\frac{B_{0}}{2\pi}\Big{)}\sum_{k\in\mathbb{Z}}e^{ik\theta_{1}}\frac{B_{0}^{\alpha_{k}}e^{-t\beta_{k}}}{2^{\alpha_{k}}\Gamma(1+\alpha_{k})}\Bigg{[}\sum_{m=0}^{\infty}\left(\begin{array}{c}m+\alpha_{k}\\ m\end{array}\right)e^{-2tmB_{0}}\] \[\times\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})(r_{1}r_{2})^{\alpha_{k}}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4}}P_{k,m}\left(\frac{B_{0}r_{2}^{2}}{2}\right)P_{k,m}\left(\frac{B_{0}r_{1}^{2}}{2}\right)r_{2}\mathrm{d}r_{2}\Bigg{)}\Bigg{]},\]
where \(\alpha_{k}=|k+\alpha|\) and we use (2.1),(2.2),(2.4) and
\[\lambda_{k,m} =(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0}\] \[:=2mB_{0}+\beta_{k}\]
with \(\beta_{k}=(1+|k+\alpha|)B_{0}+(k+\alpha)B_{0}\geq B_{0}>0\).
Using the formula (2.5) and (3.6), we obtain
\[u(t,x) =\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}}\Big{)} \int_{0}^{\infty}\int_{0}^{2\pi}\sum_{k\in\mathbb{Z}}e^{ik(\theta_{1}-\theta_{2 })}\frac{B_{0}^{\alpha_{k}}e^{-t\beta_{k}}}{2^{\alpha_{k}}\Gamma(1+\alpha_{k}) }(r_{1}r_{2})^{\alpha_{k}}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4}}f(r_{2}, \theta_{2})\] \[\times\frac{2^{\alpha_{k}}e^{t\alpha_{k}B_{0}}}{(B_{0}r_{1}r_{2} )^{\alpha_{k}}}\exp\left(-\frac{\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{2}e^{-2tB_{ 0}}}{1-e^{-2tB_{0}}}\right)I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}e^{-tB_{0 }}}{1-e^{-2tB_{0}}}\right)r_{2}\mathrm{d}r_{2}\mathrm{d}\theta_{2}\] \[=\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}} \Big{)}\int_{0}^{\infty}\int_{0}^{2\pi}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2}) \cosh tB_{0}}{4\sinh tB_{0}}}f(r_{2},\theta_{2})\] \[\quad\times\sum_{k\in\mathbb{Z}}e^{ik(\theta_{1}-\theta_{2}+itB_{ 0})}I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}}{2\sinh tB_{0}}\right)r_{2} \mathrm{d}r_{2}\mathrm{d}\theta_{2}.\]
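Here, in the last step, the radial exponentials combine via the elementary identity (a step we spell out for the reader)

\[\frac{1}{4}+\frac{e^{-2tB_{0}}}{2(1-e^{-2tB_{0}})}=\frac{1+e^{-2tB_{0}}}{4(1-e^{-2tB_{0}})}=\frac{\cosh tB_{0}}{4\sinh tB_{0}},\]

applied to the coefficient of \(-B_{0}(r_{1}^{2}+r_{2}^{2})\) in the exponent.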
Therefore, we obtain the heat kernel
\[K_{H}(t;x,y)=\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}}\Big{)} e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})\cosh tB_{0}}{4\sinh tB_{0}}}\sum_{k\in \mathbb{Z}}e^{ik(\theta_{1}-\theta_{2}+itB_{0})}I_{\alpha_{k}}\left(\frac{B_ {0}r_{1}r_{2}}{2\sinh tB_{0}}\right),\]
which gives (3.2).
Now we need to verify the inequality (3.3). To this end, it suffices to show
\[|K_{H}(t;x,y)|\leq\frac{B_{0}e^{-\alpha B_{0}t}}{4\pi^{2}\sinh tB_{0}}e^{- \frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh tB_{0}}}\sum_{k\in\mathbb{Z}}e^{-kB_{ 0}t}I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}}{2\sinh tB_{0}}\right). \tag{3.7}\]
Let \(z=\frac{B_{0}r_{1}r_{2}}{2\sinh tB_{0}}>0\) and notice the monotonicity of the modified Bessel function \(I_{\mu}(z)\) with respect to the order, in other words, for fixed \(z>0\),
\[I_{\mu}(z)\leq I_{\nu}(z),\quad\mu\geq\nu.\]
Recall \(\alpha_{k}=|k+\alpha|\) and \(\alpha\in(0,1)\), thus we show
\[\sum_{k\in\mathbb{Z}}e^{-kB_{0}t}I_{\alpha_{k}}\left(z\right)= \sum_{k\geq 0}e^{-kB_{0}t}I_{k+\alpha}(z)+\sum_{k\geq 1}e^{kB_{0}t}I_{k-\alpha}(z)\] \[\leq \sum_{k\geq 0}e^{-kB_{0}t}I_{k}(z)+e^{B_{0}t}\sum_{k\geq 0}e^{kB_{0}t}I_{k}(z)\] \[\leq e^{B_{0}t}\Big{(}\sum_{k\geq 0}e^{-kB_{0}t}I_{k}(z)+\sum_{k\geq 0}e^{kB_{0}t}I_{k}(z)\Big{)}\] \[\leq e^{B_{0}t}\Big{(}\sum_{k\in\mathbb{Z}}e^{kB_{0}t}I_{|k|}(z)+I_{0}(z)\Big{)}\] \[\leq Ce^{B_{0}t}(e^{z}+e^{z\cosh(B_{0}t)})\] \[\leq Ce^{B_{0}t}e^{z\cosh(B_{0}t)}\] \[=Ce^{B_{0}t}e^{\frac{B_{0}r_{1}r_{2}}{2\tanh(tB_{0})}},\]
where we use the formula [5, Eq. (9.6.19)]
\[\sum_{k\in\mathbb{Z}}e^{kt}I_{|k|}(z)=e^{z\cosh(t)},\quad I_{0}(z)\leq Ce^{z}.\]
Combining with (3.7), we have verified (3.3).
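Explicitly, inserting this bound into (3.7) and completing the square in the exponent yields

\[|K_{H}(t;x,y)|\leq C\,\frac{B_{0}e^{(1-\alpha)B_{0}t}}{4\pi^{2}\sinh tB_{0}}\,e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh tB_{0}}+\frac{B_{0}r_{1}r_{2}}{2\tanh tB_{0}}}=C\,\frac{B_{0}e^{(1-\alpha)B_{0}t}}{4\pi^{2}\sinh tB_{0}}\,e^{-\frac{B_{0}(r_{1}-r_{2})^{2}}{4\tanh tB_{0}}}.\]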
We next extend our result of the "on-diagonal" kernel estimate
\[|K_{H}(t;x,x)|\leq C\frac{B_{0}e^{(1-\alpha)B_{0}t}}{|\sinh tB_{0}|}\]
to the "off-diagonal". Let \(p_{t}(x,y)\) denote the heat kernel corresponding to a second-order differential elliptic or sub-elliptic operator, then the usual theory says that one can automatically improve on-diagonal bounds
\[p_{t}(x,x)\leq\frac{C}{V(x,\sqrt{t})}\]
to the typical Gaussian heat kernel upper bound
\[p_{t}(x,y)\leq\frac{C}{V(x,\sqrt{t})}\exp\Big{(}-\frac{d^{2}(x,y)}{Ct}\Big{)}\]
for all \(t>0\) and \(x,y\) ranging in the space where the operator acts, for an appropriate function \(V\).
For our specific operator \(H_{\alpha,B_{0}}\), we prove that
**Proposition 3.3**.: _Let \(K_{H}(t;x,y)\) be in Proposition 3.1, then there exists a constant \(C\) such that_
\[|K_{H}(z;x,y)|\leq C(\operatorname{Re}z)^{-1}\exp\Big{(}-\operatorname{Re} \frac{d^{2}(x,y)}{Cz}\Big{)},\]
_for all \(z\in\mathbb{C}_{+}\) and \(x,y\in\mathbb{R}^{2}\). In particular, \(z=t>0\), then_
\[|K_{H}(t;x,y)|\leq Ct^{-1}\exp\Big{(}-\frac{d^{2}(x,y)}{Ct}\Big{)},\quad t>0. \tag{3.8}\]
**Remark 3.4**.: One usual way to prove the Gaussian bounds for the magnetic Schrodinger operator is to apply the important diamagnetic inequality
\[\Big{|}\Big{(}e^{t(\nabla+iA(x))^{2}}f\Big{)}(x)\Big{|}\leq\Big{(}e^{t\Delta} |f|\Big{)}(x), \tag{3.9}\]
which relates estimates on the magnetic Schrodinger semigroup to estimates on the free heat semigroup. The obvious disadvantage of using (3.9) is that all the effects of the magnetic field are completely eliminated. To the best of our knowledge, (3.9) is available for \(A(x)\in L^{2}_{\operatorname{loc}}\); see [29]. Unfortunately, our magnetic potential \(A(x)=A_{B}(x)+A_{\operatorname{hmf}}(x)\notin L^{2}_{\operatorname{loc}}(\mathbb{R}^{2})\).
**Remark 3.5**.: To recover some magnetic effects, it would be tempting to prove
\[\Big{|}\Big{(}e^{-tH_{\alpha,B_{0}}}f\Big{)}(x)\Big{|}\leq\Big{(}e^{-tH_{0,B_{ 0}}}|f|\Big{)}(x), \tag{3.10}\]
or
\[\Big{|}\Big{(}e^{-tH_{\alpha,B_{0}}}f\Big{)}(x)\Big{|}\leq\Big{(}e^{-tH_{ \alpha,0}}|f|\Big{)}(x). \tag{3.11}\]
If (3.10) were available, then
\[\big{|}e^{-tH_{\alpha,B_{0}}}(x,y)\big{|}\lesssim\big{|}e^{-tH_{0,B_{0}}}(x,y )\big{|}\lesssim\frac{1}{\sinh(B_{0}t)}e^{-\frac{B_{0}|x-y|^{2}}{4\tanh(B_{0}t )}}, \tag{3.12}\]
where we use the Mehler heat kernel of \(e^{-tH_{0,B_{0}}}\) (e.g. [30, P168])
\[e^{-tH_{0,B_{0}}}(x,y)=\frac{B_{0}}{4\pi\sinh(B_{0}t)}e^{-\frac{B_{0}|x-y|^{2} }{4\tanh(B_{0}t)}-\frac{iB_{0}}{2}(x_{1}y_{2}-x_{2}y_{1})}. \tag{3.13}\]
If (3.11) were available, we would obtain
\[\left|e^{-tH_{\alpha,B_{0}}}(x,y)\right|\lesssim\left|e^{-tH_{\alpha,0}}(x,y) \right|\lesssim\frac{1}{t}e^{-\frac{|x-y|^{2}}{4t}}. \tag{3.14}\]
We refer to [19, Proposition 3.2] for the latter Gaussian upper bound for \(e^{-tH_{\alpha,0}}(x,y)\). Since for \(t\geq 0\) one has
\[\sinh(t)\geq t,\quad t/\tanh(t)\geq 1,\]
hence (3.14) is weaker than (3.12). Unfortunately, as pointed out in [26], the semigroup generated by the magnetic Schrodinger operator is not Markovian, in fact not even positivity preserving, which is important in the theory of comparison of heat semigroups. So the validity of (3.10) and (3.11) is not known; we refer to [26].
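As a sanity check (our remark), the Mehler kernel (3.13) reduces to the free heat kernel in the weak-field limit: as \(B_{0}\to 0\),

\[\frac{B_{0}}{4\pi\sinh(B_{0}t)}\to\frac{1}{4\pi t},\qquad\frac{B_{0}|x-y|^{2}}{4\tanh(B_{0}t)}\to\frac{|x-y|^{2}}{4t},\]

which is consistent with the comparison between (3.12) and (3.14) above.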
**Remark 3.6**.: The Gaussian decay on the right-hand side of (3.8) is that of the free heat kernel, which is considerably weaker than the decay of the Mehler kernel (3.12). Similarly to [26], one may ask how to prove
\[|K_{H}(t;x,y)|\lesssim\frac{B_{0}e^{-\alpha B_{0}t}}{|\sinh tB_{0}|}e^{-\frac{ |x-y|^{2}}{C\tanh(B_{0}t)}},\quad t>0.\]
The truth of this estimate would reveal a robust dependence of the magnetic heat kernel on the magnetic field. In our case, we give a positive answer to this problem by proving (3.30) in the subsequent subsection.
Proof.: We prove (3.8) by using [9, Theorem 4.2]. That theorem claims that if \((M,d,\mu,L)\) satisfies the Davies-Gaffney estimates, that is,
\[|\langle e^{-tL}f,g\rangle|\leq\|f\|_{L^{2}(U_{1})}\|g\|_{L^{2}(U_{2})}e^{- \frac{d^{2}(U_{1},U_{2})}{4t}}\]
for all \(t>0\), \(U_{i}\subset M\) with \(i=1,2\) and \(f\in L^{2}(U_{1},d\mu)\), \(g\in L^{2}(U_{2},d\mu)\) and \(d(U_{1},U_{2})=\inf\{\rho=|x-y|:x\in U_{1},y\in U_{2}\}\). If, for some \(K\) and \(D>0\),
\[e^{-tL}(x,x)\leq Kt^{-\frac{D}{2}},\qquad\forall t>0,\quad x\in M,\]
then
\[|e^{-zL}(x,y)|\leq K(\operatorname{Re}z)^{-\frac{D}{2}}\Big{(}1+\operatorname {Re}\frac{d^{2}(x,y)}{4z}\Big{)}^{\frac{D}{2}}\exp\Big{(}-\operatorname{Re} \frac{d^{2}(x,y)}{4z}\Big{)}\]
for all \(z\in\mathbb{C}_{+}\) and \(x,y\in M\).
For our model \(M=\mathbb{R}^{2}\) and \(L=H_{\alpha,B_{0}}\), we need to verify the on-diagonal estimates
\[e^{-tH_{\alpha,B_{0}}}(x,x)\leq Kt^{-1},\qquad\forall t>0,\quad x\in M, \tag{3.15}\]
and the Davies-Gaffney estimates
\[|\langle e^{-tH_{\alpha,B_{0}}}f,g\rangle|\leq\|f\|_{L^{2}(U_{1})}\|g\|_{L^{2} (U_{2})}e^{-\frac{d^{2}(U_{1},U_{2})}{4t}}. \tag{3.16}\]
If this has been done, for \(z=t>0\) and \(D=2\) and \(\epsilon>0\), then
\[|e^{-tH_{\alpha,B_{0}}}(x,y)| \leq Ct^{-1}\Big{(}1+\frac{|x-y|^{2}}{4t}\Big{)}\exp\Big{(}-\frac {|x-y|^{2}}{4t}\Big{)}\] \[\leq Ct^{-1}\exp\Big{(}-\frac{|x-y|^{2}}{(4+\epsilon)t}\Big{)}.\]
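In the last step we used the elementary inequality (valid for every \(\epsilon>0\) with some constant \(C_{\epsilon}\); a step we spell out for the reader)

\[(1+r)e^{-r}\leq C_{\epsilon}\,e^{-\frac{4r}{4+\epsilon}},\qquad r:=\frac{|x-y|^{2}}{4t}\geq 0,\]

which holds since \((1+r)e^{-\frac{\epsilon r}{4+\epsilon}}\) is bounded on \([0,\infty)\).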
Therefore, it suffices to verify (3.15) and (3.16). Taking \(x=y\) in (3.3) and using \(\frac{B_{0}e^{(1-\alpha)B_{0}t}}{|\sinh tB_{0}|}\leq Ct^{-1}\), the estimate (3.15) is a direct consequence of (3.3). The inequality (3.16) is more delicate; it is a consequence of (3.17) below.
**Proposition 3.7** (Davies-Gaffney inequality).: _Let \(A\) and \(B\) be two disjoint measurable sets in \(\mathbb{R}^{2}\) and suppose that \(f\in L^{2}(A)\) and \(g\in L^{2}(B)\) such that \(\operatorname{supp}(f)\subset A\) and \(\operatorname{supp}(g)\subset B\). Then_
\[|\langle e^{-tH_{\alpha,B_{0}}}f,g\rangle|\leq\|f\|_{L^{2}(A)}\|g\|_{L^{2}(B)} e^{-\frac{d^{2}(A,B)}{4t}} \tag{3.17}\]
_where \(d(A,B)=\inf\{\rho=|x-y|:x\in A,y\in B\}\)._
Proof.: Let \(\rho=d(A,B)\) and define
\[A_{\rho}=\{x\in\mathbb{R}^{2}:d(x,A)<\rho\},\quad A_{\rho}^{c}=\mathbb{R}^{2} \setminus A_{\rho}\]
where \(d(x,A)=\inf\{|x-y|:y\in A\}\). Then \(B\subset A_{\rho}^{c}\); furthermore, by the Cauchy-Schwarz inequality, we have
\[|\langle e^{-tH_{\alpha,B_{0}}}f,g\rangle| \leq\Big{(}\int_{B}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\Big{)}^{1/2}\| g\|_{L^{2}(B)}\] \[\leq\Big{(}\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx \Big{)}^{1/2}\|g\|_{L^{2}(B)}.\]
Therefore, (3.17) follows if we could prove
\[\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\leq\|f\|_{L^{2}(A)}^{2}e^{- \frac{d^{2}(A,B)}{2t}}. \tag{3.18}\]
To this end, for any fixed \(s>t\) and \(x\in\mathbb{R}^{2}\) and \(\tau\in[0,s)\), we define the function
\[\xi(\tau,x):=\frac{d^{2}(x,A_{\rho}^{c})}{2(\tau-s)},\]
and set
\[J(\tau):=\int_{\mathbb{R}^{2}}\big{|}e^{-\tau H_{\alpha,B_{0}}}f\big{|}^{2}e^{ \xi(\tau,x)}\,dx. \tag{3.19}\]
**Lemma 3.8**.: _For the function defined in (3.19), we have that_
\[J(t)\leq J(0). \tag{3.20}\]
We assume (3.20) and prove (3.18), postponing the proof of the lemma for a moment. For \(x\in A_{\rho}^{c}\), one has \(\xi(\tau,x)=0\); thus
\[\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\leq\int_{\mathbb{R}^{2}} \big{|}e^{-tH_{\alpha,B_{0}}}f\big{|}^{2}e^{\xi(t,x)}\,dx=J(t). \tag{3.21}\]
For \(\tau=0\), since
\[e^{\xi(0,x)}\leq\begin{cases}1,&x\in A_{\rho}^{c};\\ e^{-\frac{d^{2}(x,A_{\rho}^{c})}{2s}},&x\in A,\end{cases}\]
we see
\[J(0)=\int_{\mathbb{R}^{2}}f(x)^{2}e^{\xi(0,x)}\,dx\leq\int_{A_{\rho}^{c}}|f(x )|^{2}\,dx+\exp\Big{(}-\frac{\rho^{2}}{2s}\Big{)}\int_{A}|f(x)|^{2}\,dx. \tag{3.22}\]
By using (3.21), (3.20) and (3.22) and taking \(s\to t+\), we obtain
\[\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\leq J(t)\leq J(0)\] \[\leq C\left(\int_{A_{\rho}^{c}}|f(x)|^{2}\,dx+\exp\Big{(}-\frac{ \rho^{2}}{2s}\Big{)}\int_{A}|f(x)|^{2}\,dx\right)\] \[\leq\int_{A_{\rho}^{c}}|f(x)|^{2}\,dx+\exp\Big{(}-\frac{\rho^{2}}{ 2t}\Big{)}\|f\|_{L^{2}(A)}^{2}.\]
Since \(f\in L^{2}(A)\) and \(\operatorname{supp}(f)\subset A\), then \(\int_{A_{\rho}^{c}}|f(x)|^{2}\,dx=0\) which implies (3.18).
Now it remains to prove (3.20) in Lemma 3.8.
The proof of Lemma 3.8.: Indeed, we need to prove that the function \(J(\tau)\) defined in (3.19) is non-increasing in \(\tau\in[0,s)\). We closely follow the argument of the integrated maximum principle [20, Theorem 12.1]. More precisely, we will show that for all \(\tau,\tau_{0}\in[0,s)\) with \(\tau>\tau_{0}\),
\[J(\tau)\leq J(\tau_{0}). \tag{3.23}\]
which shows (3.20) by taking \(\tau=t\) and \(\tau_{0}=0\). Without loss of generality, we assume \(f\geq 0\) in (3.19). Indeed, if \(f\) changes sign, we set \(g=|e^{-\tau_{0}H_{\alpha,B_{0}}}f|\geq 0\); then
\[|e^{-\tau H_{\alpha,B_{0}}}f|=|e^{-(\tau-\tau_{0})H_{\alpha,B_{0}}}e^{-\tau_{0 }H_{\alpha,B_{0}}}f|\leq e^{-(\tau-\tau_{0})H_{\alpha,B_{0}}}g.\]
Assume that (3.23) holds for \(g\geq 0\), then
\[J(\tau) =\int_{\mathbb{R}^{2}}\big{(}e^{-\tau H_{\alpha,B_{0}}}f\big{)}^{ 2}e^{\xi(\tau,x)}\,dx\] \[\leq\int_{\mathbb{R}^{2}}\big{(}e^{-(\tau-\tau_{0})H_{\alpha,B_{0 }}}g\big{)}^{2}e^{\xi(\tau,x)}\,dx\] \[\leq\int_{\mathbb{R}^{2}}g^{2}(x)e^{\xi(\tau_{0},x)}dx\] \[=\int_{\mathbb{R}^{2}}\big{(}e^{-\tau_{0}H_{\alpha,B_{0}}}f\big{)} ^{2}e^{\xi(\tau_{0},x)}dx\] \[=J(\tau_{0}).\]
From now on, we assume \(f\geq 0\). By using [20, Theorem 5.23] (which claims that \(e^{-\tau H_{\alpha,B_{0}}^{\Omega_{i}}}f\to e^{-\tau H_{\alpha,B_{0}}}f\) in \(L^{2}\) as \(i\to+\infty\), where \(\Omega_{0}\subset\Omega_{1}\subset\cdots\subset\Omega_{i}\subset\cdots\) exhausts \(\mathbb{R}^{2}\) as \(i\to+\infty\)), it suffices to show that, for any relatively compact open set \(\Omega\subset\mathbb{R}^{2}\), the function
\[J_{\Omega}(\tau):=\int_{\Omega}\big{|}e^{-\tau H_{\alpha,B_{0}}^{\Omega}}f \big{|}^{2}e^{\xi(\tau,x)}\,dx\]
is non-increasing in \(\tau\in[0,s)\), where \(H_{\alpha,B_{0}}^{\Omega}\) is the Dirichlet realization of \(H_{\alpha,B_{0}}\) in \(\Omega\)
\[H_{\alpha,B_{0}}^{\Omega}=H_{\alpha,B_{0}}\big{|}_{W_{0}^{2}(\Omega)\cap D(H_{ \alpha,B_{0}})}.\]
To this end, we need to prove that the derivative of \(J_{\Omega}(\tau)\) with respect to \(\tau\) is non-positive.
By using [20, Theorem 4.9], the function \(u(t,\cdot)=e^{-tH^{\Omega}_{\alpha,B_{0}}}f\) is strongly differentiable in \(L^{2}(\Omega)\) and its strong derivative \(\frac{du}{dt}\) in \(L^{2}(\Omega)\) is given by
\[\frac{du}{dt}=-H^{\Omega}_{\alpha,B_{0}}u.\]
Then we have
\[\begin{split}\frac{dJ_{\Omega}(\tau)}{d\tau}&=\operatorname {Re}\frac{d}{d\tau}\langle u,ue^{\xi(\tau,x)}\rangle\\ &=\operatorname{Re}\langle\frac{du}{d\tau},ue^{\xi(\tau,x)} \rangle+\operatorname{Re}\langle u,\frac{d(ue^{\xi(\tau,x)})}{d\tau}\rangle \\ &=2\operatorname{Re}\langle\frac{du}{d\tau},ue^{\xi(\tau,x)} \rangle+\langle|u|^{2},\frac{d(e^{\xi(\tau,x)})}{d\tau}\rangle\\ &=2\operatorname{Re}\langle-H^{\Omega}_{\alpha,B_{0}}u,ue^{\xi( \tau,x)}\rangle+\langle|u|^{2},\frac{d(e^{\xi(\tau,x)})}{d\tau}\rangle.\end{split} \tag{3.24}\]
Since \(e^{\xi(\tau,\cdot)}\in\operatorname{Lip}_{\operatorname{loc}}(\mathbb{R}^{2})\), one has \(e^{\xi(\tau,\cdot)}\in\operatorname{Lip}(\Omega)\). The solution \(u(t,\cdot)\in W^{1}_{0}(\Omega)\), hence \(e^{\xi(\tau,\cdot)}u(t,\cdot)\in W^{1}_{0}(\Omega)\). On the one hand, recall the operator
\[H^{\Omega}_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x)))^{2},\]
by using the Green formula, we obtain
\[2\operatorname{Re}\langle-H^{\Omega}_{\alpha,B_{0}}u,ue^{\xi( \tau,x)}\rangle=2\langle(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x)))^{2}u,ue ^{\xi(\tau,x)}\rangle \tag{3.25}\] \[=-2\int_{\Omega}|(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x))) u|^{2}e^{\xi(\tau,x)}\,dx-2\operatorname{Re}\int_{\Omega}\nabla u\cdot\nabla \xi e^{\xi(\tau,x)}\bar{u}\,dx.\]
On the other hand, we observe that
\[\frac{d(e^{\xi(\tau,x)})}{dt}=e^{\xi(\tau,x)}\frac{\partial\xi}{\partial\tau},\]
then
\[\begin{split}\langle|u|^{2},\frac{d(e^{\xi(\tau,x)})}{dt}\rangle& =\int_{\Omega}|u|^{2}e^{\xi(\tau,x)}\frac{\partial\xi(\tau,x)}{ \partial\tau}\,dx\\ &\leq-\frac{1}{2}\int_{\Omega}|u|^{2}e^{\xi(\tau,x)}|\nabla\xi|^{ 2}\,dx.\end{split} \tag{3.26}\]
This is because the function \(\xi(\tau,x)\) satisfies
\[\frac{\partial\xi}{\partial\tau}+\frac{1}{2}|\nabla\xi|^{2}=-\frac{d^{2}(x,A^ {c}_{\rho})}{2(\tau-s)^{2}}+\frac{1}{2}\Bigg{(}\frac{d(x,A^{c}_{\rho})|\nabla d (x,A^{c}_{\rho})|}{2(\tau-s)}\Bigg{)}^{2}\leq-\frac{3}{4}\frac{d^{2}(x,A^{c}_{ \rho})}{2(\tau-s)^{2}}\leq 0,\]
since \(\|\nabla f\|_{L^{\infty}}\leq\|f\|_{\operatorname{Lip}}\) and the function \(x\mapsto d(x,E)\) is a Lipschitz function with Lipschitz norm \(1\), see [20, Lemma 11.2 and Theorem 11.3]. Therefore, by collecting (3.24), (3.25) and (3.26), we finally show
\[\begin{split}\frac{dJ_{\Omega}(\tau)}{d\tau}&\leq-2 \int_{\Omega}|(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x)))u|^{2}e^{\xi( \tau,x)}\,dx\\ &\quad-2\int_{\Omega}\Big{(}\operatorname{Re}\big{(}\nabla u \cdot\nabla\xi\bar{u}\big{)}+\frac{1}{4}|u|^{2}|\nabla\xi|^{2}\Big{)}e^{\xi( \tau,x)}\,dx.\end{split}\]
On the one hand, we notice that
\[\operatorname{Re}(\nabla u\cdot\nabla\xi\bar{u})=\frac{1}{2}\big{(}(\nabla u \cdot\nabla\xi\bar{u})+(\nabla\bar{u}\cdot\nabla\xi u)\big{)}=\frac{1}{2} \big{(}\nabla|u|^{2}\cdot\nabla\xi\big{)}=(\nabla|u|)\cdot\nabla\xi|u|.\]
Therefore, we have
\[\frac{dJ_{\Omega}(\tau)}{d\tau} \leq-2\int_{\Omega}\big{(}|(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))u|^{2}- |\nabla|u||^{2}\big{)}e^{\xi(\tau,x)}\,dx\] \[\quad-2\int_{\Omega}\Big{(}|\nabla|u||^{2}+(\nabla|u|)\cdot\nabla \xi|u|+\frac{1}{4}|u|^{2}|\nabla\xi|^{2}\Big{)}e^{\xi(\tau,x)}\,dx\] \[=-2\int_{\Omega}\big{(}|(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))u|^{2} -|\nabla|u||^{2}\big{)}e^{\xi(\tau,x)}\,dx\] \[\quad-2\int_{\Omega}\big{|}\nabla|u|+\frac{1}{2}|u|\nabla\xi\big{|} ^{2}e^{\xi(\tau,x)}\,dx.\]
On the other hand, the diamagnetic inequality shows that
\[|\nabla|u|(x)|=\Big{|}\operatorname{Re}\big{(}\frac{\bar{u}(x)}{|u (x)|}\nabla u(x)\big{)}\Big{|} =\Big{|}\operatorname{Re}\Big{(}\big{(}\nabla u(x)+i(A_{B}+A_{\rm hmf })u(x)\big{)}\frac{\bar{u}(x)}{|u(x)|}\Big{)}\Big{|}\] \[\leq\Big{|}\big{(}\nabla+i(A_{B}+A_{\rm hmf})\big{)}u(x)\Big{|}.\]
Therefore, we finally prove that
\[\frac{dJ_{\Omega}(\tau)}{d\tau}\leq 0.\]
Consequently, (3.23) follows, which implies Lemma 3.8.
### Method II
In this subsection, we will use the Schulman–Sunada formula (which is the second method we used in [36] to construct the Schrödinger propagator) to reconstruct the heat propagator. For more about the Schulman–Sunada formula, we refer the reader to [34, 35]. The resulting representation and estimates of the heat kernel capture more of the magnetic effects.
Let \(M=\mathbb{R}^{2}\setminus\{\vec{0}\}=(0,+\infty)\times\mathbb{S}^{1}\) where \(\mathbb{S}^{1}\) is the unit circle. The universal covering space of \(M\) is \(\tilde{M}=(0,+\infty)\times\mathbb{R}\), then \(M=\tilde{M}/\Gamma\) where the structure group \(\Gamma=2\pi\mathbb{Z}\) acts in the second factor of the Cartesian product. Then Schulman's ansatz (see [34, 35]) enables us to compute the heat propagator \(e^{-tH_{\alpha,B_{0}}}\) on \(M\) by using the heat propagator \(e^{-t\tilde{H}_{\alpha,B_{0}}}\) (see the operator \(\tilde{H}_{\alpha,B_{0}}\) in (3.31) below) on \(\tilde{M}\). More precisely, see [35, (1)], we have
\[e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2})=\sum_{j\in\mathbb{Z }}e^{-t\tilde{H}_{\alpha,B_{0}}}(r_{1},\theta_{1}+2j\pi;r_{2},\theta_{2}), \tag{3.27}\]
which is similar to the construction of the wave propagator on \(\mathbb{T}^{n}\), see [32, (3.5.12)]. In the following subsections, we will use it to construct the heat kernel.
**Proposition 3.9**.: _Let \(K_{H}(t;x,y)\) be the heat kernel in (3.2) of Proposition 3.1. Then_
\[e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2}) \tag{3.28}\] \[=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{ 2})}{4\tanh(tB_{0})}}\] \[\quad\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh (tB_{0}+i(\theta_{1}-\theta_{2}))}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}- \theta_{2})}\varphi(\theta_{1},\theta_{2})\] \[\quad-\frac{\sin(\alpha\pi)}{\pi}\int_{-\infty}^{\infty}e^{-\frac {B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\frac{e^{s\alpha}}{e^{i(\theta_{1}- \theta_{2})}e^{s+tB_{0}}+1}\,ds\bigg{)},\]
_where_
\[\varphi(\theta_{1},\theta_{2})=\begin{cases}1,\qquad|\theta_{1}-\theta_{2}|\leq\pi \\ e^{-2\pi\alpha i},\quad-2\pi<\theta_{1}-\theta_{2}\leq-\pi\\ e^{2\pi\alpha i},\quad\pi<\theta_{1}-\theta_{2}\leq 2\pi.\end{cases} \tag{3.29}\]
_Furthermore, we obtain the estimate_
\[\Big{|}e^{-tH_{\alpha,B_{0}}}(x,y)\Big{|}\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}|x-y|^{2}}{4\sinh(tB_{0})}}. \tag{3.30}\]
Proof.: Recall (3.2) and \(\alpha_{k}=|k+\alpha|\), we have
\[K_{H}(t;x,y)=\frac{B_{0}}{4\pi^{2}\sinh tB_{0}}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^ {2})\cosh tB_{0}}{4\sinh tB_{0}}}\sum_{k\in\mathbb{Z}}e^{ik(\theta_{1}-\theta _{2})}e^{-(k+\alpha)tB_{0}}I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}}{2\sinh tB _{0}}\right).\]
The main obstacle is to take the summation in \(k\). If \(\alpha=0\), by using the formula [5, Eq. 9.6.19]
\[\sum_{k\in\mathbb{Z}}e^{kz}I_{|k|}(x)=e^{x\cosh(z)},\]
we will see
\[K_{H}(t;x,y)=\frac{B_{0}}{4\pi\sinh(B_{0}t)}\exp\left(-\frac{B_{0}(r_{1}^{2}+r_ {2}^{2})}{4\tanh(B_{0}t)}+\frac{B_{0}r_{1}r_{2}}{2\sinh(B_{0}t)}\cosh(B_{0}t+i (\theta_{1}-\theta_{2}))\right),\quad\text{if}\quad\alpha=0,\]
which is exactly the same as the result (3.13) obtained from the Mehler formula. Heuristically, if we can replace the above summation in \(k\) by an integration in \(k\), then one can use the translation invariance of the integration to obtain further results. To this end, as we did in [36, Section 4], we consider the operator
\[\tilde{H}_{\alpha,B_{0}}=-\partial_{r}^{2}-\frac{1}{r}\partial_{r}+\frac{1}{ r^{2}}\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}, \tag{3.31}\]
which acts on \(L^{2}(\tilde{M},rdr\,d\theta)\). We emphasize that the variable \(\theta\) ranges over \(\mathbb{R}\) rather than over the compact manifold \(\mathbb{S}^{1}\). Then we choose \(\varphi(\theta)=e^{i(\tilde{k}-\alpha)\theta}\) as an eigenfunction of the operator \(\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\) on \(L^{2}_{\theta}(\mathbb{R})\), which satisfies
\[\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi( \theta)=\Big{(}\tilde{k}+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi(\theta). \tag{3.32}\]
It is worth pointing out that \(\tilde{k}\in\mathbb{R}\) is a real number while \(k\in\mathbb{Z}\). More importantly, we informally move the \(\alpha\) on the right-hand side of (3.32) into the eigenfunction \(e^{i(\tilde{k}-\alpha)\theta}\), which simplifies the eigenfunctions. Hence, similarly to [36] for the Schrödinger kernel, we obtain the heat kernel of \(e^{-t\tilde{H}_{\alpha,B_{0}}}\)
\[\tilde{K}_{H}(t;x,y)= \frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2 })}{4\tanh(tB_{0})}}\] \[\times\int_{\mathbb{R}}e^{i(\tilde{k}-\alpha)(\theta_{1}-\theta_ {2}-itB_{0})}I_{|\tilde{k}|}\bigg{(}\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})} \bigg{)}\,d\tilde{k},\]
where \(x=(r_{1},\theta_{1})\in\tilde{M}\) and \(y=(r_{2},\theta_{2})\in\tilde{M}\). A key identity comes from [34, (2.11)]; we state it in the following lemma.
**Lemma 3.10**.: _For any \(z\in\mathbb{C}\), one has_
\[\begin{split}\int_{\mathbb{R}}e^{z\tilde{k}}I_{|\tilde{k}|}(x)\,d\tilde{k}&=e^{x\cosh(z)}H(\pi-|\operatorname{Im}z|)\\ &+\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{z+s+i\pi}-\frac{1}{z+s-i\pi}\right)\,ds.\end{split} \tag{3.33}\]
_where_
\[H(x)=\begin{cases}1,&x>0,\\ 0,&x\leq 0.\end{cases}\]
_is the Heaviside step function._
Proof.:1 To prove (3.33), we recall the representation of the modified Bessel function of order \(\nu\)

Footnote 1: We are grateful to Prof. P. Šťovíček for his helpful discussion.
\[I_{\nu}(x)=\frac{1}{\pi}\int_{0}^{\pi}e^{x\cos s}\cos(s\nu)\,ds-\frac{\sin(\nu \pi)}{\pi}\int_{0}^{\infty}e^{-x\cosh s-\nu s}\,ds,\quad x>0. \tag{3.34}\]
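As a quick numerical sanity check (not part of the proof), the representation (3.34) can be compared against scipy's built-in modified Bessel function; the test values of \(\nu\) and \(x\) below are arbitrary.

```python
# Numerical check of (3.34) against scipy.special.iv (assuming scipy is available).
import numpy as np
from scipy.special import iv
from scipy.integrate import quad

nu, x = 0.7, 2.0
osc = quad(lambda s: np.exp(x * np.cos(s)) * np.cos(nu * s), 0, np.pi)[0] / np.pi
# the tail of the second integral beyond s = 30 is far below machine precision
tail = np.sin(nu * np.pi) / np.pi * quad(
    lambda s: np.exp(-x * np.cosh(s) - nu * s), 0, 30)[0]
print(osc - tail, iv(nu, x))  # the two values agree to quadrature accuracy
```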
For fixed \(x>0\), one has that
\[I_{\nu}(x)\sim\frac{1}{\sqrt{2\pi\nu}}\Big{(}\frac{ex}{2\nu}\Big{)}^{\nu}, \quad\nu\to+\infty\]
decays very rapidly in \(\nu\). Due to this fact, the LHS of (3.33) is absolutely convergent, hence the dominated convergence theorem implies that the LHS of (3.33) represents an entire function in \(z\) (holomorphic everywhere on \(\mathbb{C}\)). The RHS of (3.33) is an entire function in \(z\) as well, but this is less obvious. The RHS of (3.33) is surely holomorphic in \(z\) everywhere on \(\mathbb{C}\) except on the lines \(\operatorname{Im}z=\pm\pi\); on these lines, there is in fact no discontinuity of the RHS of (3.33). For example, we consider \(\operatorname{Im}z=\pi\). In fact, if we set
\[F(z)= e^{x\cosh(z)}H(\pi-|\operatorname{Im}z|)\] \[+\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac {1}{z+s+i\pi}-\frac{1}{z+s-i\pi}\right)\,ds,\]
then one can prove
\[\lim_{\operatorname{Im}z\to\pi-}F(z)=\lim_{\operatorname{Im}z\to\pi+}F(z).\]
Indeed, for \(a\in\mathbb{R}\) and \(\epsilon>0\), we need to prove
\[\lim_{\epsilon\to 0}F(a+i\pi-i\epsilon)=\lim_{\epsilon\to 0}F(a+i\pi+i \epsilon),\]
that is,
\[\lim_{\epsilon\to 0}\left(e^{x\cosh(a+i\pi-i\epsilon)}+\frac{1}{2\pi i} \int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{a+i\pi-i\epsilon+s+i\pi} -\frac{1}{a+i\pi-i\epsilon+s-i\pi}\right)\,ds\right)\] \[=\lim_{\epsilon\to 0}\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x \cosh(s)}\left(\frac{1}{a+i\pi+i\epsilon+s+i\pi}-\frac{1}{a+i\pi+i\epsilon+s- i\pi}\right)\,ds.\]
By direct computation, we obtain
\[\lim_{\epsilon\to 0}\frac{1}{2\pi i}\Big{[}\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\left(\frac{1}{a+i\epsilon+s+2i\pi}-\frac{1}{a-i\epsilon+s +2i\pi}\right)\,ds\] \[\qquad+\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{a+s-i \epsilon}-\frac{1}{a+s+i\epsilon}\right)\,ds\Big{]}\] \[=\lim_{\epsilon\to 0}\frac{1}{2\pi i}\Big{[}\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\frac{-2i\epsilon}{(a+s+2i\pi)^{2}+\epsilon^{2}}\,ds+ \int_{-\infty}^{\infty}e^{-x\cosh(s)}\frac{2i\epsilon}{(a+s)^{2}+\epsilon^{2}} \,ds\Big{]}\] \[=\lim_{\epsilon\to 0}\frac{1}{\pi}\Big{[}-\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\frac{\epsilon}{(a+s+2i\pi)^{2}+\epsilon^{2}}\,ds+\int_ {-\infty}^{\infty}e^{-x\cosh(s)}\frac{\epsilon}{(a+s)^{2}+\epsilon^{2}}\,ds \Big{]}\] \[=e^{-x\cosh(a)},\]
where we use the fact that the Poisson kernel is an approximation to the identity, implying that, for any reasonable function \(m(x)\)
\[m(x)=\lim_{\epsilon\to 0}\frac{1}{\pi}\int_{-\infty}^{\infty}m(y)\frac{\epsilon}{(x-y)^{2}+\epsilon^{2}}\,dy.\]
Obviously, since \(\cosh(a+i\pi)=-\cosh a\), we have
\[e^{x\cosh(a+i\pi)}=e^{-x\cosh(a)}\] \[= \lim_{\epsilon\to 0}\frac{1}{2\pi i}\Big{[}\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\left(\frac{1}{a+i\epsilon+s+2i\pi}-\frac{1}{a-i\epsilon+ s+2i\pi}\right)\,ds\] \[\qquad+\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{a+s-i \epsilon}-\frac{1}{a+s+i\epsilon}\right)\,ds\Big{]}.\]
Therefore the RHS of (3.33), \(F(z)\), is an entire function in \(z\) as well. As a consequence, it suffices to verify the formula (3.33) for purely imaginary values of \(z\) only. Let \(z=ib\) and recall (3.34); then
\[\int_{\mathbb{R}}e^{ib\tilde{k}}I_{|\tilde{k}|}(x)\,d\tilde{k} =\frac{1}{\pi}\int_{\mathbb{R}}e^{ib\tilde{k}}\int_{0}^{\pi}e^{x \cos s}\cos(s|\tilde{k}|)\,ds\,d\tilde{k}\] \[\qquad-\int_{\mathbb{R}}e^{ib\tilde{k}}\frac{\sin(|\tilde{k}|\pi) }{\pi}\int_{0}^{\infty}e^{-x\cosh s-|\tilde{k}|s}\,ds\,d\tilde{k}.\]
The first term becomes that
\[\frac{1}{\pi}\int_{\mathbb{R}}e^{ib\tilde{k}}\int_{0}^{\pi}e^{x \cos s}\cos(s\tilde{k})\,ds\,d\tilde{k} =\frac{1}{2\pi}\int_{\mathbb{R}}e^{ib\tilde{k}}\int_{\mathbb{R}}H (\pi-|s|)e^{x\cos s}e^{-is\tilde{k}}\,ds\,d\tilde{k}\] \[=e^{x\cos b}H(\pi-|b|)=e^{x\cosh(z)}H(\pi-|\operatorname{Im}z|).\]
The second term gives
\[-\int_{\mathbb{R}}e^{ib\tilde{k}}\frac{\sin(|\tilde{k}|\pi)}{\pi} \int_{0}^{\infty}e^{-x\cosh s-|\tilde{k}|s}\,ds\,d\tilde{k}\] \[=-\frac{1}{2\pi i}\int_{0}^{\infty}e^{-x\cosh s}\Big{(}\int_{0}^{ \infty}\left(e^{[i(b+\pi)-s]\tilde{k}}-e^{[i(b-\pi)-s]\tilde{k}}\right)d \tilde{k}\] \[\qquad+\int_{-\infty}^{0}\left(e^{[i(b-\pi)+s]\tilde{k}}-e^{[i(b +\pi)+s]\tilde{k}}\right)d\tilde{k}\Big{)}\,ds\] \[=\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac {1}{ib+s+i\pi}-\frac{1}{ib+s-i\pi}\right)\,ds.\]
Therefore, we have proved (3.33).
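Before applying the lemma, a direct numerical check of (3.33) may be reassuring; the sketch below (assuming scipy is available) integrates real and imaginary parts separately, at an arbitrary test point \(z\) with \(|\operatorname{Im}z|<\pi\) so that the Heaviside term contributes.

```python
# Numerical verification of the identity (3.33) at one (arbitrary) complex point z.
import numpy as np
from scipy.special import iv
from scipy.integrate import quad

def cquad(f, a, b):
    # integrate a complex-valued function by splitting real and imaginary parts
    re = quad(lambda t: f(t).real, a, b, limit=400)[0]
    im = quad(lambda t: f(t).imag, a, b, limit=400)[0]
    return re + 1j * im

x, z = 1.2, 0.4 + 2.0j  # |Im z| < pi, so H(pi - |Im z|) = 1
# both integrands are negligible outside the finite windows used below
lhs = cquad(lambda k: np.exp(z * k) * iv(abs(k), x), -60, 60)
rhs = np.exp(x * np.cosh(z)) + cquad(
    lambda s: np.exp(-x * np.cosh(s))
    * (1 / (z + s + 1j * np.pi) - 1 / (z + s - 1j * np.pi)),
    -30, 30) / (2j * np.pi)
print(abs(lhs - rhs))  # close to quadrature precision
```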
Let \(z=tB_{0}+i(\theta_{1}-\theta_{2})\), by using Lemma 3.10, we obtain
\[\tilde{K}_{H}(t;x,y)=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}( r_{1}^{2}+r_{2}^{2})}{4\tanh(tB_{0})}}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}- \theta_{2})}\] \[\qquad\qquad\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0 })}\cosh(z)}H\big{(}\pi-|\theta_{1}-\theta_{2}|\big{)}\] \[\qquad+\,\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-\frac{B_{0}r _{1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\left(\frac{1}{z+s+i\pi}-\frac{1}{z+s-i\pi} \right)\,ds\bigg{)}.\]
By using (3.27) and letting \(z_{j}=tB_{0}+i(\theta_{1}+2\pi j-\theta_{2})\), we further show
\[\begin{split}& e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2}, \theta_{2})=\sum_{j\in\mathbb{Z}}e^{-t\tilde{H}_{\alpha,B_{0}}}(r_{1},\theta_ {1}+2j\pi;r_{2},\theta_{2}),\\ &=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^ {2})}{4\tanh(tB_{0})}}\sum_{j\in\mathbb{Z}}e^{-\alpha z_{j}}\\ &\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z_{ j})}H\big{(}\pi-|\theta_{1}+2\pi j-\theta_{2}|\big{)}\\ &\quad+\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-\frac{B_{0}r_{ 1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\left(\frac{1}{z_{j}+s+i\pi}-\frac{1}{z_{j}+s- i\pi}\right)\,ds\bigg{)}.\end{split} \tag{3.35}\]
In the following, we consider the two summations. Since \(\theta_{1},\theta_{2}\in[0,2\pi)\), then \(\theta_{1}-\theta_{2}\in(-2\pi,2\pi)\), recall (3.29), hence we obtain
\[\begin{split}&\sum_{j\in\mathbb{Z}}e^{-\alpha z_{j}}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z_{j})}H\big{(}\pi-|\theta_{1}+2\pi j-\theta_{2}|\big{)}\\ &=e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z)}e^{-\alpha tB_{0}}\sum_{j\in\mathbb{Z}}e^{-i\alpha(\theta_{1}-\theta_{2}+2\pi j)}H\big{(}\pi-|\theta_{1}+2\pi j-\theta_{2}|\big{)}\\ &=e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z)}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}-\theta_{2})}\varphi(\theta_{1},\theta_{2}).\end{split}\]
For the second summation term, we use the formula
\[\sum_{j\in\mathbb{Z}}\frac{e^{-2\pi i\alpha j}}{\sigma+2\pi j}=\frac{ie^{i \alpha\sigma}}{e^{i\sigma}-1},\quad\alpha\in(0,1),\quad\sigma\in\mathbb{C} \setminus 2\pi\mathbb{Z},\]
to obtain
\[\begin{split}&\sum_{j\in\mathbb{Z}}e^{-\alpha z_{j}}\left(\frac{1} {z_{j}+s+i\pi}-\frac{1}{z_{j}+s-i\pi}\right)\\ &=e^{-\alpha(tB_{0}+i(\theta_{1}-\theta_{2}))}\sum_{j\in\mathbb{Z} }\big{(}\frac{e^{-2\pi i\alpha j}}{i(\theta_{1}-\theta_{2}+2j\pi+\pi)+s+tB_{0 }}-\frac{e^{-2\pi i\alpha j}}{i(\theta_{1}-\theta_{2}+2j\pi-\pi)+s+tB_{0}} \big{)}\\ &=-ie^{-\alpha(tB_{0}+i(\theta_{1}-\theta_{2}))}\sum_{j\in\mathbb{ Z}}\big{(}\frac{e^{-2\pi i\alpha j}}{\sigma_{1}+2j\pi}-\frac{e^{-2\pi i \alpha j}}{\sigma_{2}+2j\pi}\big{)}\\ &=e^{-\alpha(tB_{0}+i(\theta_{1}-\theta_{2}))}\big{(}\frac{e^{i \alpha\sigma_{1}}}{e^{i\sigma_{1}}-1}-\frac{e^{i\alpha\sigma_{2}}}{e^{i\sigma_ {2}}-1}\big{)}=\frac{e^{s\alpha}}{e^{i(\theta_{1}-\theta_{2}+\pi)}e^{s+tB_{0 }}-1}\big{(}e^{i\alpha\pi}-e^{-i\alpha\pi}\big{)}\\ &=-2i\sin(\alpha\pi)\frac{e^{s\alpha}}{e^{i(\theta_{1}-\theta_{2}) }e^{s+tB_{0}}+1},\end{split}\]
where \(\sigma_{1}=(\theta_{1}-\theta_{2}+\pi)-i(s+tB_{0})\) and \(\sigma_{2}=(\theta_{1}-\theta_{2}-\pi)-i(s+tB_{0})\). Therefore, by using (3.35), we show (3.28)
\[\begin{split}& e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2})\\ &=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh(tB_{0})}}\\ &\qquad\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(tB_{0}+i(\theta_{1}-\theta_{2}))}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}-\theta_{2})}\varphi(\theta_{1},\theta_{2})\\ &\quad-\frac{\sin(\alpha\pi)}{\pi}\int_{-\infty}^{\infty}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\frac{e^{s\alpha}}{e^{i(\theta_{1}-\theta_{2})}e^{s+tB_{0}}+1}\,ds\bigg{)}.\end{split} \tag{3.36}\]
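The partial-fraction summation formula used above can also be checked numerically; the following is a minimal sketch assuming numpy, with arbitrary test values of \(\alpha\) and \(\sigma\). Symmetric truncation is essential, since the series converges only conditionally.

```python
# Check of sum_{j in Z} e^{-2 pi i alpha j}/(sigma + 2 pi j) = i e^{i alpha sigma}/(e^{i sigma} - 1).
import numpy as np

alpha = 0.3            # alpha in (0, 1)
sigma = 1.1 - 0.5j     # sigma in C \ 2 pi Z (mirroring sigma_1, sigma_2 above)
J = 200_000            # symmetric truncation |j| <= J
js = np.arange(-J, J + 1)
lhs = np.sum(np.exp(-2j * np.pi * alpha * js) / (sigma + 2 * np.pi * js))
rhs = 1j * np.exp(1j * alpha * sigma) / (np.exp(1j * sigma) - 1)
print(lhs, rhs)  # agree to roughly five digits at this truncation
```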
To prove (3.30), we first note that
\[\begin{split}\cosh(tB_{0}+i\theta)&=\cos\theta \cosh(tB_{0})+i\sin\theta\sinh(tB_{0}),\\ |x-y|^{2}&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos( \theta_{1}-\theta_{2}),\end{split}\]
Then the first term is controlled by
\[\begin{split}&\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh(tB_{0})}}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(tB_{0})\cos(\theta_{1}-\theta_{2})}e^{-\alpha tB_{0}}\\ &\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}|x-y|^{2}}{4\sinh(tB_{0})}}.\end{split} \tag{3.37}\]
For the second term, we aim to prove
\[\Big{|}\int_{\mathbb{R}}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh s} \frac{e^{\alpha s}}{1+e^{s+i(\theta_{1}-\theta_{2})+tB_{0}}}\,ds\Big{|}\leq C, \tag{3.38}\]
where \(C\) is a constant independent of \(t\), \(r_{1},r_{2}\) and \(\theta_{1},\theta_{2}\). To this end, letting \(\theta=\theta_{1}-\theta_{2}\), we write
\[\begin{split}&\int_{\mathbb{R}}e^{-\frac{B_{0}r_{1}r_{2}}{2 \sinh(tB_{0})}\cosh s}\frac{e^{\alpha s}}{1+e^{s+i(\theta_{1}-\theta_{2})+tB_{0 }}}\,ds\\ &=e^{-\alpha tB_{0}}\int_{0}^{\infty}\Big{(}e^{-\frac{B_{0}r_{1}r _{2}}{2\sinh(tB_{0})}\cosh(-s-tB_{0})}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+ e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(s-tB_{0})}\frac{e^{\alpha s}}{1+e^{s+i \theta}}\Big{)}\,ds\\ &=e^{-\alpha tB_{0}}\Big{(}\int_{0}^{\infty}e^{-\frac{B_{0}r_{1}r _{2}}{2\sinh(tB_{0})}\cosh(-s-tB_{0})}\big{(}\frac{e^{-\alpha s}}{1+e^{-s+i \theta}}+\frac{e^{\alpha s}}{1+e^{s+i\theta}}\big{)}\,ds\\ &\quad+\int_{0}^{\infty}\Big{(}e^{-\frac{B_{0}r_{1}r_{2}}{2 \sinh(tB_{0})}\cosh(s-tB_{0})}-e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})} \cosh(-s-tB_{0})}\Big{)}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\,ds\Big{)},\end{split}\]
then we just need to verify that
\[\int_{0}^{\infty}\Big{|}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s }}{1+e^{s+i\theta}}\Big{|}\,ds\lesssim 1, \tag{3.39}\]
and
\[\Big{|}\int_{0}^{\infty}\Big{(}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})} \cosh(s-tB_{0})}-e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(-s-tB_{0})} \Big{)}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\,ds\Big{|}\lesssim 1, \tag{3.40}\]
where the implicit constant is independent of \(\theta\). We first prove (3.39). In fact,
\[\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s}}{1+e^{s+i \theta}}\] \[=\frac{\cosh(\alpha s)e^{-i\theta}+\cosh((1-\alpha)s)}{\cos\theta+ \cosh s}\] \[=\frac{\cosh(\alpha s)\cos\theta+\cosh((1-\alpha)s)-i\sin\theta \cosh(\alpha s)}{2(\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2}))}\] \[=\frac{2\cos^{2}(\frac{\theta}{2})\cosh(\alpha s)+(\cosh((1- \alpha)s)-\cosh(\alpha s))-2i\sin(\frac{\theta}{2})\cos(\frac{\theta}{2}) \cosh(\alpha s)}{2(\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2}))}.\]
Since \(\cosh x-1\sim\frac{x^{2}}{2},\sinh x\sim x\), as \(x\to 0\); \(\cosh x\sim e^{x},\sinh x\sim e^{x}\), as \(x\to\infty\), we have
\[\int_{0}^{\infty}\Big{|}\frac{\cos^{2}(\frac{\theta}{2})\cosh(\alpha s)}{\cos ^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\lesssim\int_{0}^{1} \frac{2|\cos(\frac{\theta}{2})|}{s^{2}+(2|\cos(\frac{\theta}{2})|)^{2}}ds+\int _{1}^{\infty}\ e^{(\alpha-1)s}ds\lesssim 1.\]
Similarly, we obtain
\[\int_{0}^{\infty}\Big{|}\frac{\sin(\frac{\theta}{2})\cos(\frac{\theta}{2}) \cosh(\alpha s)}{\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds \lesssim 1.\]
Finally, we verify that
\[\int_{0}^{\infty}\Big{|}\frac{\cosh((1-\alpha)s)-\cosh(\alpha s)} {\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\] \[\lesssim\int_{0}^{1}\frac{|\frac{(1-\alpha)^{2}}{2}-\frac{\alpha ^{2}}{2}|s^{2}}{s^{2}}ds+\int_{1}^{\infty}\big{(}e^{-\alpha s}+e^{(\alpha-1)s }\big{)}ds\lesssim 1.\]
We next prove (3.40). For convenience, we denote
\[f(s)=e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cdot\cosh(s-tB_{0})}\]
Then, by noting that \(|f(\pm s)|\leq 1\), we obtain
\[\Big{|}\int_{1}^{\infty}\big{(}f(s)-f(-s)\big{)}\frac{e^{\alpha s }}{1+e^{s+i\theta}}\,ds\Big{|} \leq\int_{1}^{\infty}\Big{|}\frac{e^{\alpha s}}{1+e^{s+i\theta}} \Big{|}ds\] \[=\int_{1}^{\infty}\Big{|}\frac{e^{(\alpha-1)s}}{e^{-s}+e^{i \theta}}\Big{|}ds\] \[\leq\frac{1}{1-e^{-1}}\int_{1}^{\infty}e^{(\alpha-1)s}ds\] \[\lesssim 1,\]
hence, for (3.40), it suffices to prove
\[\int_{0}^{1}|f(s)-f(-s)|\Big{|}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\Big{|}\, ds\lesssim 1.\]
Since
\[f^{\prime}(\pm s) =\mp\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\sinh(\pm s-tB_{0})f( \pm s)\] \[=\mp\frac{B_{0}r_{1}r_{2}}{2}\Big{(}\frac{\pm\sinh s\cosh(tB_{0}) }{\sinh(tB_{0})}-\cosh s\Big{)}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh (\pm s-tB_{0})},\]
then for \(0<s<1\), there exists a constant \(C\) such that \(|f^{\prime}(s)|\leq C\). Hence, by the mean value theorem, \(|f(s)-f(-s)|\leq Cs\) for \(0<s<1\); thus we have
\[\int_{0}^{1}|f(s)-f(-s)|\Big{|}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\Big{|}\,ds \leq C\int_{0}^{1}\frac{se^{\alpha s}}{e^{s}-1}ds\lesssim 1.\]
Therefore, we prove (3.38). By collecting (3.36), (3.37) and (3.38), we finally obtain
\[\Big{|}e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2}) \Big{|}\] \[\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}\left(e^{- \frac{B_{0}|x-y|^{2}}{4\tanh(tB_{0})}}+e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{ 4\tanh(tB_{0})}}\right).\]
This implies (3.30) and completes the proof.
## 4. Bernstein inequalities and square function inequalities
In this section, we prove the Bernstein inequalities and the square function inequality associated with the Schrödinger operator \(H_{\alpha,B_{0}}\) by using the heat kernel estimates established in the previous section.
**Proposition 4.1** (Bernstein inequalities).: _Let \(\varphi(\lambda)\) be a \(C_{c}^{\infty}\) bump function on \(\mathbb{R}\) with support in \([\frac{1}{2},2]\), then it holds for any \(f\in L^{q}(\mathbb{R}^{2})\) and \(j\in\mathbb{Z}\)_
\[\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{p}(\mathbb{R}^{2})}\lesssim 2 ^{2j\left(\frac{1}{q}-\frac{1}{p}\right)}\|f\|_{L^{q}(\mathbb{R}^{2})},\quad 1 \leq q\leq p\leq\infty. \tag{4.1}\]
Proof.: Let \(\psi(x)=\varphi(\sqrt{x})\) and \(\psi_{e}(x):=\psi(x)e^{2x}\). Then \(\psi_{e}\) is a \(C_{c}^{\infty}\)-function on \(\mathbb{R}\) with support in \([\frac{1}{4},4]\), and its Fourier transform \(\hat{\psi}_{e}\) belongs to the Schwartz class. We write
\[\varphi(\sqrt{x}) =\psi(x)=e^{-2x}\psi_{e}(x)\] \[=e^{-2x}\int_{\mathbb{R}}e^{ix\cdot\xi}\hat{\psi}_{e}(\xi)\,d\xi\] \[=e^{-x}\int_{\mathbb{R}}e^{-x(1-i\xi)}\hat{\psi}_{e}(\xi)\,d\xi.\]
Therefore, by the functional calculus, we obtain
\[\varphi(\sqrt{H_{\alpha,B_{0}}})=\psi(H_{\alpha,B_{0}})=e^{-H_{\alpha,B_{0}}} \int_{\mathbb{R}}e^{-(1-i\xi)H_{\alpha,B_{0}}}\hat{\psi}_{e}(\xi)\,d\xi,\]
furthermore,
\[\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})=\psi(2^{-2j}H_{\alpha,B_{0}})=e^{-2^{-2 j}H_{\alpha,B_{0}}}\int_{\mathbb{R}}e^{-(1-i\xi)2^{-2j}H_{\alpha,B_{0}}}\hat{ \psi}_{e}(\xi)\,d\xi.\]
By using (3.8) in Proposition 3.3 with \(t=2^{-2j}\), we have
\[\Big{|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})(x,y)\Big{|} \lesssim 2^{4j}\int_{\mathbb{R}^{2}}e^{-\frac{2^{2j}|x-z|^{2}}{C}} e^{-\frac{2^{2j}|y-z|^{2}}{C}}\,dz\int_{\mathbb{R}}\hat{\psi}_{e}(\xi)\,d\xi\] \[\lesssim 2^{2j}\int_{\mathbb{R}^{2}}e^{-\frac{|2^{j}x-z|^{2}}{C}} e^{-\frac{|2^{j}y-z|^{2}}{C}}\,dz\] \[\lesssim 2^{2j}(1+2^{j}|x-y|)^{-N},\quad\forall N\geq 1.\]
where we use the fact that \(|\alpha-z|^{2}+|\beta-z|^{2}\geq\frac{1}{2}|\alpha-\beta|^{2}\) with \(\alpha,\beta\in\mathbb{R}^{2}\) and
\[\int_{\mathbb{R}^{2}}e^{-\frac{|\alpha-z|^{2}}{C}}e^{-\frac{|\beta-z|^{2}}{C}}\,dz\lesssim e^{-\frac{|\alpha-\beta|^{2}}{4C}}\int_{\mathbb{R}^{2}}e^{-\frac{|\alpha-z|^{2}}{2C}}e^{-\frac{|\beta-z|^{2}}{2C}}\,dz\lesssim e^{-\frac{|\alpha-\beta|^{2}}{4C}}\lesssim(1+|\alpha-\beta|)^{-N},\quad\forall N\geq 1.\]
By Young's inequality, (4.1) follows.
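As an aside, the Gaussian integral estimated above can even be computed in closed form: completing the square gives \(|\alpha-z|^{2}+|\beta-z|^{2}=2|z-\frac{\alpha+\beta}{2}|^{2}+\frac{1}{2}|\alpha-\beta|^{2}\), so the integral equals \(\frac{\pi C}{2}e^{-\frac{|\alpha-\beta|^{2}}{2C}}\). A quick numerical confirmation (a sketch assuming scipy; the points \(a\), \(b\) and the constant \(C\) below are arbitrary):

```python
# Closed form of the Gaussian convolution integral used in the proof of Proposition 4.1.
import numpy as np
from scipy.integrate import dblquad

C = 2.0
a, b = np.array([0.3, -0.5]), np.array([1.2, 0.7])
val, _ = dblquad(
    lambda y, x: np.exp(-((x - a[0])**2 + (y - a[1])**2) / C)
               * np.exp(-((x - b[0])**2 + (y - b[1])**2) / C),
    -12, 12, lambda x: -12, lambda x: 12)   # the Gaussians are negligible outside
exact = np.pi * C / 2 * np.exp(-np.sum((a - b)**2) / (2 * C))
print(val, exact)  # the two values agree
```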
**Proposition 4.2** (The square function inequality).: _Let \(\{\varphi_{j}\}_{j\in\mathbb{Z}}\) be a Littlewood-Paley sequence given by (1.6). Then for \(1<p<\infty\), there exist constants \(c_{p}\) and \(C_{p}\) depending on \(p\) such that_
\[c_{p}\|f\|_{L^{p}(\mathbb{R}^{2})}\leq\Big{\|}\Big{(}\sum_{j\in \mathbb{Z}}|\varphi_{j}(\sqrt{H_{\alpha,B_{0}}})f|^{2}\Big{)}^{\frac{1}{2}} \Big{\|}_{L^{p}(\mathbb{R}^{2})}\leq C_{p}\|f\|_{L^{p}(\mathbb{R}^{2})}. \tag{4.2}\]
Proof.: By using (3.8) in Proposition 3.3, Proposition 4.2 follows from the Rademacher functions argument in [33]. We also refer the reader to [1] for the result that the square function inequality (4.2) can be derived from heat kernels with Gaussian upper bounds.
## 5. The decay estimates
In this section, we mainly prove the decay estimate (1.12). The first key ingredient is the following proposition about the subordination formula from [27, 12].
**Proposition 5.1**.: _If \(\varphi(\lambda)\in C_{c}^{\infty}(\mathbb{R})\) is supported in \([\frac{1}{2},2]\), then, for all \(j\in\mathbb{Z},t,x>0\) with \(2^{j}t\geq 1\), we can write_
\[\begin{split}&\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}\\ &=\rho\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}+\varphi(2^{-j}\sqrt{x })\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{ i2^{j}t}{4s}}e^{i2^{-j}tsx}\,ds,\end{split} \tag{5.1}\]
_where \(\rho(s,\tau)\in\mathcal{S}(\mathbb{R}\times\mathbb{R})\) is a Schwartz function and \(\chi\in C^{\infty}(\mathbb{R}\times\mathbb{R})\) with \(\text{supp}\,\chi(\cdot,\tau)\subseteq[\frac{1}{16},4]\) such that_
\[\sup_{\tau\in\mathbb{R}}\big{|}\partial_{s}^{\alpha}\partial_{\tau}^{\beta} \chi(s,\tau)\big{|}\lesssim_{\alpha,\beta}(1+|s|)^{-\alpha},\quad\forall\alpha,\beta\geq 0. \tag{5.2}\]
If this has been done, then by the spectral theory for the non-negative self-adjoint operator \(H_{\alpha,B_{0}}\), we obtain the representation of the microlocalized half-wave propagator
\[\begin{split}&\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_ {\alpha,B_{0}}}}\\ &=\rho\big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j}t\big{)}+\varphi (2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{ \infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}\,ds. \end{split} \tag{5.3}\]
Proof.: This result originally comes from [27], and the authors of [12] provided an independent proof of the key formula. To keep the paper self-contained and for the convenience of the reader, we follow the idea of [12] and provide the details of the proof.
The starting point of the proof is the subordination formula
\[e^{-y\sqrt{x}}=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-sx-\frac{y^{2}}{4s}}s ^{-\frac{3}{2}}ds,\quad x,y>0. \tag{5.4}\]
Indeed, letting \(|\xi|=\sqrt{x}\), we have
\[e^{-y\sqrt{x}} =e^{-y|\xi|}=\int_{\mathbb{R}}e^{-2\pi it\cdot\xi}\int_{\mathbb{R}}e^{2\pi it\cdot\eta}e^{-y|\eta|}\,d\eta\,dt\] \[=\int_{\mathbb{R}}e^{-2\pi it\cdot\xi}\Big{(}\int_{-\infty}^{0}e^{2\pi it\cdot\eta}e^{y\eta}\,d\eta+\int_{0}^{\infty}e^{2\pi it\cdot\eta}e^{-y\eta}\,d\eta\Big{)}\,dt\] \[=2\int_{\mathbb{R}}e^{-2\pi it\cdot\xi}\frac{y}{y^{2}+(2\pi t)^{2}}\,dt=2\int_{\mathbb{R}}e^{-2\pi iyt\cdot\xi}\frac{1}{1+(2\pi t)^{2}}\,dt\] \[=2\int_{\mathbb{R}}e^{-2\pi iyt\cdot\xi}\int_{0}^{\infty}e^{-r(1+(2\pi t)^{2})}\,dr\,dt\] \[=2\int_{0}^{\infty}e^{-r}\int_{\mathbb{R}}e^{-2\pi iyt\cdot\xi}e^{-r(2\pi t)^{2}}\,dt\,dr\] \[=2\int_{0}^{\infty}\frac{e^{-r}e^{-\frac{\xi^{2}y^{2}}{4r}}}{\sqrt{4\pi r}}\,dr=2\int_{0}^{\infty}\frac{e^{-r}e^{-\frac{xy^{2}}{4r}}}{\sqrt{4\pi r}}\,dr,\quad s=\frac{y^{2}}{4r},\,\xi=\sqrt{x},\] \[=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-\frac{y^{2}}{4s}}e^{-xs}s^{-\frac{3}{2}}\,ds\]
where we use
\[\alpha^{-\frac{n}{2}}e^{-\pi|\xi|^{2}/\alpha}=\int_{\mathbb{R}^{n}}e^{-2\pi ix\xi}e^{-\pi\alpha|x|^{2}}dx.\]
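The subordination formula (5.4) itself is easy to test numerically; the following is a minimal sketch assuming scipy is available, with arbitrary \(x,y>0\):

```python
# Numerical check of (5.4): e^{-y sqrt(x)} = y/(2 sqrt(pi)) int_0^inf e^{-s x - y^2/(4 s)} s^{-3/2} ds.
import numpy as np
from scipy.integrate import quad

x, y = 2.0, 1.3
# the integrand vanishes to machine zero below the small lower cutoff
integral = quad(lambda s: np.exp(-s * x - y**2 / (4 * s)) * s**-1.5, 1e-8, np.inf)[0]
print(y / (2 * np.sqrt(np.pi)) * integral, np.exp(-y * np.sqrt(x)))  # both values agree
```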
To obtain \(e^{it\sqrt{x}}\), we extend (5.4) by setting \(y=\epsilon-it\) with \(\epsilon>0\)
\[\begin{split} e^{it\sqrt{x}}&=\lim_{\epsilon\to 0^{+}}e^{-(\epsilon-it)\sqrt{x}}\\ &=\lim_{\epsilon\to 0}\frac{\epsilon-it}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-xs}e^{-\frac{(\epsilon-it)^{2}}{4s}}s^{-\frac{3}{2}}ds,\quad s=r(\epsilon-it)\\ &=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}\int_{0}^{\infty}e^{rx(it-\epsilon)}e^{\frac{it-\epsilon}{4r}}r^{-\frac{3}{2}}dr\\ &=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}\int_{0}^{\infty}e^{itrx}e^{-\epsilon rx}e^{\frac{it}{4r}}e^{-\frac{\epsilon}{4r}}r^{-\frac{3}{2}}dr\\ &=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}I_{\epsilon,\epsilon x}(tx,t),\end{split} \tag{5.5}\]
where
\[I_{\epsilon,\delta}(a,t):=\int_{0}^{\infty}e^{ira}e^{-\delta r}e^{\frac{it}{4r }}e^{-\frac{\epsilon}{4r}}r^{-\frac{3}{2}}dr.\]
By the dominated convergence theorem, we have that
\[e^{it\sqrt{x}}=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}I_{ \epsilon,\epsilon x}(tx,t)=\sqrt{\frac{t}{4\pi}}e^{-\frac{\pi}{4}i}\lim_{ \epsilon\to 0}I_{\epsilon,\epsilon x}(tx,t).\]
Thus it suffices to consider the oscillatory integral
\[\lim_{\epsilon\to 0}I_{\epsilon,\epsilon x}(a,t)=I_{0,0}(a,t)=\int_{0}^{ \infty}e^{ira}e^{\frac{it}{4r}}r^{-\frac{3}{2}}dr. \tag{5.6}\]
**Lemma 5.2**.: _Let_
\[I(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}r^{-\frac{3}{2}}dr.\]
_Then we can write_
\[I(a,t)=\tilde{\rho}(a,t)+\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\tilde{\chi}(r)\,dr, \tag{5.7}\]
_where \(\tilde{\chi}\in C_{0}^{\infty}(\mathbb{R})\) with \(\operatorname{supp}\tilde{\chi}\subset[\frac{1}{16},4]\), and \(\tilde{\rho}(a,t)\) satisfies_
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}\tilde{\rho}(a,t)\big{|}\leq C _{N,\alpha,\beta}(a+t)^{-N},\quad\frac{1}{4}\leq\frac{a}{t}\leq 4,\,t\geq 1, \forall N\geq 1. \tag{5.8}\]
We now assume this lemma to prove (5.1). By (5.5) and (5.6) and noticing
\[I(a,t)=2^{\frac{j}{2}}I(2^{-j}a,2^{j}t),\]
we have that
\[\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}=\sqrt{\frac{t}{4\pi}}e^{-\frac{\pi}{4}i }\varphi(2^{-j}\sqrt{x})2^{\frac{j}{2}}I\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}.\]
By the support of \(\varphi\), one has \(2^{2j-2}\leq x\leq 2^{2j+2}\), hence \(\frac{1}{4}\leq\frac{tx}{2^{j}}/2^{j}t=x/2^{2j}\leq 4\). Note the condition \(2^{j}t\geq 1\). Therefore, by using this lemma, we prove
\[\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}\] \[= \frac{1}{\sqrt{4\pi}}e^{-\frac{\pi}{4}i}\big{(}2^{j}t\big{)}^{ \frac{1}{2}}\varphi(2^{-j}\sqrt{x})\Big{(}\tilde{\rho}\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}+\int_{0}^{\infty}\tilde{\chi}(s)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j }txs}\,ds\Big{)}.\]
We need to consider this expression when \(2^{j}t\geq 1\). To this end, let \(\phi\in C^{\infty}([0,+\infty))\) satisfy \(\phi(t)=1\) if \(t\geq 1\) and \(\phi(t)=0\) if \(0\leq t\leq\frac{1}{2}\), then set
\[\rho\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}=e^{-\frac{\pi}{4}i}\big{(}2^{j}t \big{)}^{\frac{1}{2}}\varphi(2^{-j}\sqrt{x})\tilde{\rho}\big{(}\frac{tx}{2^{j }},2^{j}t\big{)}\phi(2^{j}t).\]
This together with (5.8) shows
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}\rho(a,t)\big{|}\leq C_{N, \alpha,\beta}(1+(a+t))^{-N},\quad\forall N\geq 0.\]
which implies \(\rho(a,t)\in\mathcal{S}(\mathbb{R}_{+}\times\mathbb{R}_{+})\). Set
\[\chi\big{(}s,2^{j}t\big{)}=e^{-\frac{\pi}{4}i}\tilde{\chi}\big{(}s\big{)}\phi (2^{j}t),\]
then \(\chi\) satisfies (5.2). Then we finally write
\[\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}\] \[= \rho\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}+\big{(}2^{j}t\big{)}^{ \frac{1}{2}}\varphi(2^{-j}\sqrt{x})\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2 ^{j}t}{4s}}e^{i2^{-j}txs}\,ds,\]
which proves (5.1) as desired.
_The proof of Lemma 5.2._ To prove (5.7), we divide the integral into three pieces. Let \(\beta(r)\) be a function in \(C^{\infty}(\mathbb{R})\) compactly supported in \([\frac{1}{2},2]\) such that
\[1=\sum_{j\in\mathbb{Z}}\beta_{j}(r),\quad\beta_{j}(r)=\beta(2^{-j}r).\]
Corresponding to \(\beta_{j}\), we decompose
\[I(a,t)=\sum_{j\in\mathbb{Z}}I_{j}(a,t)=I_{l}(a,t)+I_{m}(a,t)+I_{h}(a,t)\]
where
\[I_{l}(a,t)=\sum_{j\leq-5}I_{j}(a,t), I_{m}(a,t)=\sum_{-4\leq j\leq 1}I_{j}(a,t),\quad I_{h}(a,t)=\sum_{j\geq 2 }I_{j}(a,t),\] \[I_{j}(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\beta_{j}(r)r ^{-\frac{3}{2}}dr.\]
Define the phase function \(\phi_{a,t}(r)=ra+\frac{t}{4r}\), then
\[\phi^{\prime}_{a,t}(r)=a-\frac{t}{4r^{2}}.\]
Define
\[\tilde{\rho}(a,t)=I_{l}(a,t)+I_{h}(a,t),\]
we aim to prove (5.8). We first consider \(I_{h}(a,t)\).
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{h}(a,t)\big{|} \leq\sum_{j\geq 2}\Big{|}\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\beta_{j}(r)r^{-\frac{3}{2}+\alpha}\big{(}\frac{1}{4r}\big{)}^{\beta}dr\Big{|}\] \[\leq C\sum_{j\geq 2}\int_{0}^{\infty}\Big{|}\frac{d}{dr}\Big{[}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\frac{d}{dr}\Big{)}^{N-1}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^{-\frac{3}{2}+\alpha-\beta}\Big{)}\Big{]}\Big{|}\,dr.\]
Since \(\operatorname{supp}\beta_{j}\subset[2^{j-1},2^{j+1}]\) with \(j\geq 2\) implies \(r\in[2,\infty)\), and since \(\frac{a}{t}\geq\frac{1}{4}\) by assumption, we see that
\[\big{|}\phi^{\prime}_{a,t}(r)\big{|}=\big{|}a-\frac{t}{4r^{2}}\big{|}\geq\frac {a+t}{16}.\]
Choosing \(N>\alpha\), we notice the fact that
\[\Big{|}\Big{(}\frac{d}{dr}\Big{)}^{K}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)} \Big{)}\Big{|}\leq C_{K}(a+t)^{-1}r^{-K},\quad t,r\in[1,+\infty)\]
to obtain
\[\Big{|}\frac{d}{dr}\Big{[}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\frac{d}{dr} \Big{)}^{N-1}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^{-\frac{3}{ 2}+\alpha-\beta}\Big{)}\Big{]}\Big{|}\leq C_{N}(a+t)^{-N}2^{-\frac{3}{2}j}.\]
Therefore, we prove
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{h}(a,t)\big{|}\leq C_{N}(a +t)^{-N}\sum_{j\geq 2}2^{-\frac{1}{2}j}\,\leq C_{N}(a+t)^{-N},\quad N\geq\alpha.\]
Next we consider \(I_{l}(a,t)\).
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{l}(a,t)\big{|}\leq\sum_{j\leq-5}\Big{|}\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\beta_{j}(r)r^{-\frac{3}{2}+\alpha}\big{(}\frac{1}{4r}\big{)}^{\beta}dr\Big{|}\leq C_{N}\sum_{j\leq-5}\int_{0}^{\infty}\Big{|}\frac{d}{dr}\Big{[}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\frac{d}{dr}\Big{)}^{N-1}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^{-\frac{3}{2}+\alpha-\beta}\Big{)}\Big{]}\Big{|}\,dr.\]
Since \(\operatorname{supp}\beta_{j}\subset[2^{j-1},2^{j+1}]\) with \(j\leq-5\) implies \(r\in(0,\frac{1}{16}]\), and since \(\frac{a}{t}\leq 4\) by assumption, we see that
\[\big{|}\phi^{\prime}_{a,t}(r)\big{|}=\big{|}a-\frac{t}{4r^{2}}\big{|}=\frac{|4r ^{2}a-t|}{4r^{2}}\geq\frac{a+t}{32r^{2}}.\]
Choosing \(N>\alpha\), we notice the fact that
\[\Big{|}\Big{(}\frac{d}{dr}\Big{)}^{K}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)} \Big{)}\Big{|}\leq C_{K}(a+t)^{-1}r^{2-K},\quad t\in[1,+\infty),r\in(0,\frac{1}{ 16}]\]
to obtain
\[\Big{|}\frac{d}{dr}\Big{[}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)} \frac{d}{dr}\Big{)}^{N-1}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^ {-\frac{3}{2}+\alpha-\beta}\Big{)}\Big{]}\Big{|}\] \[\leq C_{N}(a+t)^{-N}2^{j\left(-\frac{3}{2}+\alpha-\beta+N\right)}.\]
Therefore, for large enough \(N\) such that \(-\frac{1}{2}+\alpha-\beta+N>0\), we prove
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{l}(a,t)\big{|} \leq C_{N}(a+t)^{-N}\sum_{j\leq-5}2^{j\left(-\frac{1}{2}+\alpha-\beta+N\right)}\] \[\leq C_{N}(a+t)^{-N},\]
where we use the assumption that \(\frac{a}{t}\leq 4\) and \(t\geq 1\). In sum, we prove (5.8). Let
\[\tilde{\chi}(r)=\sum_{j=-4}^{1}\beta_{j}(r)r^{-\frac{3}{2}},\]
then \(\tilde{\chi}\in C_{0}^{\infty}(\mathbb{R})\) and \(\operatorname{supp}\tilde{\chi}\subset[\frac{1}{16},4]\). Hence we have
\[I_{m}(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\tilde{\chi}(r)dr.\]
Therefore, we complete the proof of Lemma 5.2.
### Decay estimates for the microlocalized half-wave propagator
In this subsection, we mainly prove the following result.
**Proposition 5.3**.: _Let \(2^{-j}|t|\leq\frac{\pi}{8B_{0}}\) and \(\varphi\) be in (1.6), then_
\[\begin{split}\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})& e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\\ &\lesssim 2^{2j}\big{(}1+2^{j}|t|\big{)}^{-\frac{1}{2}}\|\varphi(2^{-j} \sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}.\end{split} \tag{5.9}\]
_In particular, for \(0<t<T\) with any finite \(T\), there exists a constant \(C_{T}\) depending on \(T\) such that_
\[\begin{split}\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})& e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\\ &\leq C_{T}2^{2j}\big{(}1+2^{j}|t|\big{)}^{-\frac{1}{2}}\|\varphi (2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}.\end{split} \tag{5.10}\]
**Remark 5.4**.: The finite \(T\) can be chosen beyond \(\frac{\pi}{B_{0}}\). If we can prove (5.10), then (1.12) follows:
\[\begin{split}&\big{\|}e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{ \infty}(\mathbb{R}^{2})}\leq\sum_{j\in\mathbb{Z}}\big{\|}\varphi(2^{-j}\sqrt{H _{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R} ^{2})}\\ &\leq C_{T}|t|^{-\frac{1}{2}}\sum_{j\in\mathbb{Z}}2^{\frac{3}{2}j }\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}\leq C_{T} |t|^{-\frac{1}{2}}\|f\|_{\mathcal{B}^{3/2}_{1,1}(\mathbb{R}^{2})}.\end{split}\]
We estimate the microlocalized half-wave propagator
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|} _{L^{\infty}(\mathbb{R}^{2})}\]
by considering two cases: \(|t|2^{j}\geq 1\) and \(|t|2^{j}\lesssim 1\). In the following argument, we can choose \(\tilde{\varphi}\in C_{c}^{\infty}((0,+\infty))\) such that \(\tilde{\varphi}(\lambda)=1\) if \(\lambda\in\operatorname{supp}\varphi\) and \(\tilde{\varphi}\varphi=\varphi\). Since \(\tilde{\varphi}\) has the same properties as \(\varphi\), we drop the tilde above \(\varphi\) for brevity when there is no risk of confusion. Without loss of generality, in the following argument, we assume \(t>0\).
**Case 1: \(t2^{j}\lesssim 1\).** We remark that we consider \(t2^{j}\lesssim 1\) rather than \(t2^{j}\leq 1\); this will be used to extend the time interval. By the spectral theorem, one has
\[\|e^{it\sqrt{H_{\alpha,B_{0}}}}\|_{L^{2}(\mathbb{R}^{2})\to L^{2}( \mathbb{R}^{2})}\leq C.\]
Indeed, by the functional calculus, for \(f\in L^{2}\), we can write
\[e^{it\sqrt{H_{\alpha,B_{0}}}}f=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}e^{it\sqrt{\lambda_{k,m}}}c_{k,m}\tilde{V}_{k,m}( x).\]
where
\[c_{k,m}=\int_{\mathbb{R}^{2}}f(y)\overline{\tilde{V}_{k,m}(y)}dy.\]
Then
\[\|e^{it\sqrt{H_{\alpha,B_{0}}}}f\|_{L^{2}(\mathbb{R}^{2})}=\Big{(}\sum_{ \begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}\big{|}e^{it\sqrt{\lambda_{k,m}}}c_{k,m}\big{|}^{ 2}\Big{)}^{1/2}=\Big{(}\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}\big{|}c_{k,m}\big{|}^{2}\Big{)}^{1/2}=\|f\|_{L^{2} (\mathbb{R}^{2})}.\]
Together with this, we use the Bernstein inequality (4.1) to prove
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\] \[\lesssim 2^{j}\|e^{it\sqrt{H_{\alpha,B_{0}}}}\varphi(2^{-j} \sqrt{H_{\alpha,B_{0}}})f\|_{L^{2}(\mathbb{R}^{2})}\] \[\lesssim 2^{j}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{2}( \mathbb{R}^{2})}\lesssim 2^{2j}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}( \mathbb{R}^{2})}.\]
In this case, since \(0<t\lesssim 2^{-j}\), we have
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})} \tag{5.11}\] \[\quad\lesssim 2^{2j}(1+2^{j}t)^{-N}\|\varphi(2^{-j}\sqrt{H_{ \alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})},\quad\forall N\geq 0.\]
**Case 2: \(t2^{j}\geq 1\).** In this case, we can use (5.3) to obtain the microlocalized half-wave propagator
\[\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}\] \[=\rho\big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j}t\big{)}+\varphi (2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{ \infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}\,ds.\]
We first use the spectral theorem and the Bernstein inequality again to estimate
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\rho\big{(}\frac{tH_{\alpha,B_{0 }}}{2^{j}},2^{j}t\big{)}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}.\]
Indeed, since \(\rho\in\mathcal{S}(\mathbb{R}\times\mathbb{R})\), we have
\[\big{|}\rho\big{(}\frac{t\lambda_{k,m}}{2^{j}},2^{j}t\big{)}\big{|}\leq C(1+2^ {j}t)^{-N},\quad\forall N\geq 0.\]
Therefore, we use the Bernstein inequality and the spectral theorem to show
\[\begin{split}&\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\rho \big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j}t\big{)}f\big{\|}_{L^{\infty}( \mathbb{R}^{2})}\\ &\lesssim 2^{j}\big{\|}\rho\big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j }t\big{)}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\big{\|}_{L^{2}(\mathbb{R}^{2} )}\\ &\lesssim 2^{j}(1+2^{j}t)^{-N}\Big{\|}\varphi(2^{-j}\sqrt{H_{ \alpha,B_{0}}})f\Big{\|}_{L^{2}(\mathbb{R}^{2})}\\ &\lesssim 2^{2j}(1+2^{j}t)^{-N}\Big{\|}\varphi(2^{-j}\sqrt{H_{ \alpha,B_{0}}})f\Big{\|}_{L^{1}(\mathbb{R}^{2})}.\end{split}\]
Next we use the dispersive estimate for the Schrödinger propagator (see [36, Theorem 1.1])
\[\big{\|}e^{itH_{\alpha,B_{0}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\leq C|\sin(tB_{0})|^{-1}\big{\|}f\big{\|}_{L^{1}(\mathbb{R}^{2})},\quad t\neq\frac{k\pi}{B_{0}},\,k\in\mathbb{Z},\]
to estimate
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2 }}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_ {0}}}f\,ds\big{\|}_{L^{\infty}(\mathbb{R}^{2})}.\]
For \(0<t<T_{0}<\frac{\pi}{2B_{0}}\), we have \(\sin(tB_{0})\sim tB_{0}\), hence
\[\begin{split}&\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}}) \big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2^{ j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}f\,ds\big{\|}_{L^{\infty}(\mathbb{R}^{2})} \\ &\lesssim\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi (s,2^{j}t)|\sin(2^{-j}tsB_{0})|^{-1}\,ds\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha, B_{0}}})f\big{\|}_{L^{1}(\mathbb{R}^{2})}.\end{split}\]
Since \(s\in[\frac{1}{16},4]\) (the compact support of \(\chi\) in \(s\)) and \(B_{0}>0\), if \(2^{-j}t\leq\frac{\pi}{8B_{0}}\), then \(\sin(2^{-j}tsB_{0})\sim 2^{-j}tsB_{0}\), and therefore

\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}f\,ds\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\lesssim 2^{2j}\big{(}1+2^{j}t\big{)}^{-\frac{1}{2}}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}. \tag{5.12}\]
Collecting (5.11) and (5.12) gives (5.9). To prove (5.10), we consider \(0<t<T\). For any \(T>0\), there exists \(j_{0}\in\mathbb{Z}_{+}\) such that \(2^{-j_{0}}T\leq\frac{\pi}{8B_{0}}\). For \(j\leq j_{0}\), one has \(2^{j}t\lesssim 1\), so (5.10) follows from the first case. For \(j\geq j_{0}\), if \(2^{j}t\lesssim 1\), one still has (5.10) from the first case; otherwise, i.e. if \(2^{j}t\geq 1\), one has (5.10) from the second case, since we always have \(2^{-j}t\leq\frac{\pi}{8B_{0}}\) for \(j\geq j_{0}\) and \(0<t\leq T\).
## 6. Strichartz estimate
In this section, we prove the Strichartz estimates (1.13) in Theorem 1.4 by using (5.10). To this end, we need a variant of the abstract Keel–Tao Strichartz estimate theorem ([23]).
**Proposition 6.1**.: _Let \((X,\mathcal{M},\mu)\) be a \(\sigma\)-finite measured space and \(U:I=[0,T]\to B(L^{2}(X,\mathcal{M},\mu))\) be a weakly measurable map satisfying, for some constant \(C\) possibly depending on \(T\), and \(\alpha\geq 0\), \(\sigma,h>0\),_
\[\begin{split}\|U(t)\|_{L^{2}\to L^{2}}&\leq C,\quad t \in\mathbb{R},\\ \|U(t)U(s)^{*}f\|_{L^{\infty}}&\leq Ch^{-\alpha}(h+| t-s|)^{-\sigma}\|f\|_{L^{1}}.\end{split} \tag{6.1}\]
_Then for every pair \(q,p\in[1,\infty]\) such that \((q,p,\sigma)\neq(2,\infty,1)\) and_
\[\frac{1}{q}+\frac{\sigma}{p}\leq\frac{\sigma}{2},\quad q\geq 2,\]
_there exists a constant \(\tilde{C}\) only depending on \(C\), \(\sigma\), \(q\) and \(p\) such that_

\[\Big{(}\int_{I}\|U(t)u_{0}\|_{L^{p}}^{q}dt\Big{)}^{\frac{1}{q}}\leq\tilde{C}\Lambda(h)\|u_{0}\|_{L^{2}}\]
_where \(\Lambda(h)=h^{-(\alpha+\sigma)(\frac{1}{2}-\frac{1}{p})+\frac{1}{q}}\)._
Proof.: This is an analogue of the semiclassical Strichartz estimates for the Schrödinger equation in [25, 38]. We refer to [37] for the proof.
Now we prove the Strichartz estimates (1.13). Recall \(\varphi\) in (1.6) and the Littlewood–Paley frequency cutoff \(\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})\); for each \(k\in\mathbb{Z}\), we define
\[u_{k}(t,\cdot)=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})u(t,\cdot).\]
where \(u(t,x)\) is the solution of (1.1). Then, for each \(k\in\mathbb{Z}\), \(u_{k}(t,x)\) solves the Cauchy problem
\[\partial_{t}^{2}u_{k}+H_{\alpha,B_{0}}u_{k}=0,\quad u_{k}(0)=f_{k}(z),\ \partial_{t}u_{k}(0)=g_{k}(z),\]
where \(f_{k}=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})u_{0}\) and \(g_{k}=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})u_{1}\). Since \((q,p)\in\Lambda_{s}^{W}\) in Definition 1.3, we have \(q,p\geq 2\). Thus, by using the square-function estimates (4.2) and the Minkowski inequality, we obtain
\[\|u(t,x)\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}\lesssim\Big{(}\sum_{k\in\mathbb{Z }}\|u_{k}(t,x)\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}^{2}\Big{)}^{\frac{1}{2}}, \tag{6.2}\]
where \(I=[0,T]\). Denote the half-wave propagator \(U(t)=e^{it\sqrt{H_{\alpha,B_{0}}}}\), then we write
\[u_{k}(t,z)=\frac{U(t)+U(-t)}{2}f_{k}+\frac{U(t)-U(-t)}{2i\sqrt{H_{\alpha,B_{0} }}}g_{k}. \tag{6.3}\]
By using (6.2) and (6.3), we complete the proof of (1.13) after taking the summation in \(k\in\mathbb{Z}\), provided we can prove the following proposition.
**Proposition 6.2**.: _Let \(f=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})f\) for \(\varphi_{k}\) in (1.6) and \(k\in\mathbb{Z}\). Then_
\[\|U(t)f\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}\leq C_{T}2^{ks}\|f\|_{L^{2}(\mathbb{ R}^{2})}, \tag{6.4}\]
_where the admissible pair \((q,p)\in[2,+\infty]\times[2,+\infty)\) and \(s\) satisfy (1.10) and (1.11)._
Proof.: Since \(f=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})f\), then

\[U(t)f=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f:=U_{k}(t)f.\]
By using the spectral theorem, we see
\[\|U_{k}(t)f\|_{L^{2}(\mathbb{R}^{2})}\leq C\|f\|_{L^{2}(\mathbb{R}^{2})}.\]
By using (5.10), we obtain
\[\|U_{k}(t)U_{k}^{*}(s)f\|_{L^{\infty}(\mathbb{R}^{2})} =\|U_{k}(t-s)f\|_{L^{\infty}(\mathbb{R}^{2})}\] \[\leq C_{T}2^{\frac{3}{2}k}\big{(}2^{-k}+|t-s|\big{)}^{-\frac{1}{2}}\|f\|_{L^{1}(\mathbb{R}^{2})}.\]

Then the estimates (6.1) hold for \(U_{k}(t)\) with \(\alpha=3/2\), \(\sigma=1/2\) and \(h=2^{-k}\). Hence, Proposition 6.1 gives
\[\|U(t)f\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}=\|U_{k}(t)f\|_{L^{q}(I;L^{p}( \mathbb{R}^{2}))}\leq C_{T}2^{k[2(\frac{1}{2}-\frac{1}{p})-\frac{1}{q}]}\|f\|_ {L^{2}(\mathbb{R}^{2})}.\]
This implies (6.4), since \(s=2(\frac{1}{2}-\frac{1}{p})-\frac{1}{q}\).
This paper is the second in a series concerning decay estimates for dispersive equations with an Aharonov-Bohm solenoid immersed in a uniform magnetic field. In our first paper \cite{WZZ}, we studied Strichartz estimates for the Schr\"odinger equation with one Aharonov-Bohm solenoid in a uniform magnetic field. The wave equation in this setting is harder because of the difficulty of computing the square roots of the eigenvalues of the Schr\"odinger operator $H_{\alpha, B_0}$, so the half-wave propagator cannot be constructed directly. Using two different methods, we prove Gaussian upper bounds for the heat kernel, a result of independent interest: the first method proceeds by establishing a Davies-Gaffney inequality in this setting, and the second relies on the Schulman-Sunada formula.
2309.00126 | QS-TTS: Towards Semi-Supervised Text-to-Speech Synthesis via
Vector-Quantized Self-Supervised Speech Representation Learning | This paper proposes a novel semi-supervised TTS framework, QS-TTS, to improve
TTS quality with lower supervised data requirements via Vector-Quantized
Self-Supervised Speech Representation Learning (VQ-S3RL) utilizing more
unlabeled speech audio. This framework comprises two VQ-S3R learners: first,
the principal learner aims to provide a generative Multi-Stage Multi-Codebook
(MSMC) VQ-S3R via the MSMC-VQ-GAN combined with the contrastive S3RL, while
decoding it back to the high-quality audio; then, the associate learner further
abstracts the MSMC representation into a highly-compact VQ representation
through a VQ-VAE. These two generative VQ-S3R learners provide profitable
speech representations and pre-trained models for TTS, significantly improving
synthesis quality with the lower requirement for supervised data. QS-TTS is
evaluated comprehensively under various scenarios via subjective and objective
tests in experiments. The results powerfully demonstrate the superior
performance of QS-TTS, winning the highest MOS over supervised or
semi-supervised baseline TTS approaches, especially in low-resource scenarios.
Moreover, comparing various speech representations and transfer learning
methods in TTS further validates the notable improvement of the proposed
VQ-S3RL to TTS, showing the best audio quality and intelligibility metrics. The
trend of slower decay in the synthesis quality of QS-TTS with decreasing
supervised data further highlights its lower requirements for supervised data,
indicating its great potential in low-resource scenarios. | Haohan Guo, Fenglong Xie, Jiawen Kang, Yujia Xiao, Xixin Wu, Helen Meng | 2023-08-31T20:25:44 | http://arxiv.org/abs/2309.00126v1 | QS-TTS: Towards Semi-Supervised Text-to-Speech Synthesis via Vector-Quantized Self-Supervised Speech Representation Learning
###### Abstract
This paper proposes a novel semi-supervised TTS framework, QS-TTS, to improve TTS quality with lower supervised data requirements via Vector-Quantized Self-Supervised Speech Representation Learning (VQ-S3RL) utilizing more unlabeled speech audio. This framework comprises two VQ-S3R learners: first, the principal learner aims to provide a generative Multi-Stage Multi-Codebook (MSMC) VQ-S3R via the MSMC-VQ-GAN combined with the contrastive S3RL, while decoding it back to the high-quality audio; then, the associate learner further abstracts the MSMC representation into a highly-compact VQ representation through a VQ-VAE. These two generative VQ-S3R learners provide profitable speech representations and pre-trained models for TTS, significantly improving synthesis quality with the lower requirement for supervised data. QS-TTS is evaluated comprehensively under various scenarios via subjective and objective tests in experiments. The results powerfully demonstrate the superior performance of QS-TTS, winning the highest MOS over supervised or semi-supervised baseline TTS approaches, especially in low-resource scenarios. Moreover, comparing various speech representations and transfer learning methods in TTS further validates the notable improvement of the proposed VQ-S3RL to TTS, showing the best audio quality and intelligibility metrics. The trend of slower decay in the synthesis quality of QS-TTS with decreasing supervised data further highlights its lower requirements for supervised data, indicating its great potential in low-resource scenarios.
## I Introduction
Text-to-Speech (TTS) synthesis is a technology aiming to convert text to speech signals with correct pronunciation, natural prosody, and high audio fidelity. It has been widely used in various intelligent products, e.g. Human-Computer Interaction (HCI) [1, 2], Speech-to-Speech Translation (S2ST) [3, 4], and Artificial Intelligence Generated Content (AIGC) [5]. Meanwhile, as AI technology spreads into more fields, the personalization and customization capabilities of TTS systems have also received increasing attention, so as to better serve people around the world. However, the high cost of creating a TTS dataset with sufficient high-quality speech audio and accurate transcripts hinders the development of TTS in this direction. Hence, reducing the supervised-data requirement for training a high-quality TTS system is becoming more urgent.
Fortunately, a TTS system does not rely on supervised data entirely. It is feasible to utilize more unlabeled speech data that is easier to collect to compensate for insufficient supervised data in TTS training. For example, in a mainstream modular TTS framework, as shown in Fig. 1, we can employ unlabeled speech data to
* Enhance the analysis module to provide practical data-derived speech representations with sufficient phonetic information and easier to predict from the text.
* Enhance the synthesis and prediction modules by pre-training them on relevant tasks.
In this way, we may reduce supervised data requirements by building a semi-supervised TTS system using more unlabeled speech data. The key is to seek an appropriate approach
Fig. 1: A mainstream TTS framework: the acoustic feature or speech representation is extracted from the waveform via the analysis module as the output of the prediction module, and decoded to the waveform by the synthesis module.
or task that utilizes unlabeled speech audio well to provide the expected features and pre-trained models for TTS.
In this regard, Self-Supervised Speech Representation Learning (S3RL) has shown excellent capability in providing profitable speech representations and pre-trained models for various supervised speech tasks. Especially in speech synthesis [6, 7], vector-quantization-based generative S3RL [8] has already demonstrated its eminent performance, providing a compact, discrete speech representation that reduces modeling complexity while keeping high-quality speech reconstruction. Inspired by it, in this paper, we propose QS-TTS, a novel semi-supervised TTS framework based on Vector-Quantized Self-Supervised Speech Representation Learning (VQ-S3RL).
This framework conducts VQ-S3RL on high-quality unlabeled speech data for two goals:
* Providing profitable speech representations for TTS.
* Providing effective pre-trained models to enhance the acoustic model and the vocoder.
It comprises two learners. First, we train a principal VQ-S3R learner, which combines the contrastive S3RL model HuBERT [9] and the proposed generative S3RL model, Multi-Stage Multi-Codebook (MSMC) VQ-GAN, in cascade. It converts the waveform into the MSMCR, a compact generative S3R comprising multiple sequences at different time resolutions and quantized by multiple codebooks, which is then decoded to high-quality speech audio via adversarial speech generation. Then, we train an associate VQ-S3R learner to abstract the MSMCR into a highly-compact VQ representation through a VQ-VAE-based model with multi-sequence encoding and decoding. These two generative VQ-S3R learners are applied in TTS training using supervised data: they play the role of the analysis module providing the MSMCR, and provide pre-trained models for training the prediction and synthesis modules.
In experiments, in addition to MOS tests to evaluate QS-TTS subjectively, we also measure the performance of QS-TTS objectively on audio quality using Fréchet Distances (FD) in various embedding spaces and on intelligibility using the Character Error Rate (CER) and Phoneme Error Rate (PER). We first evaluate the overall performance of QS-TTS by comparing it with mainstream TTS approaches, e.g. FastSpeech, VITS, and their semi-supervised versions, in standard-resource and various low-resource scenarios. Then, we investigate the effect of the proposed VQ-S3RL on TTS by comparing it with different speech representations and transfer learning methods. Finally, we evaluate the performance of QS-TTS with different sizes of supervised data to further validate its effectiveness in reducing supervised data requirements.
In the rest of this paper, Section II introduces the background of this work. Section III illustrates the framework of QS-TTS in detail. Experiments are described in Sections IV and V. Finally, Section VI gives the conclusion to this paper.1
Footnote 1: Audio samples are available at [https://hhguo.github.io/DemoQSTTS/](https://hhguo.github.io/DemoQSTTS/)
## II Background
### _Semi-Supervised Text-to-Speech Synthesis_
Semi-supervised TTS aims to utilize both supervised and unsupervised data in training to improve TTS quality. It is usually achieved by transfer learning, i.e. pre-training TTS modules with only audio or text, and fine-tuning them using supervised data. For example, in Tacotron-based TTS [10, 11], the auto-regressive decoder can be pre-trained with unlabeled audio [12, 13], then applied in supervised training to achieve better prediction quality. In [14], the phoneme encoder is also pre-trained using only text in a BERT-like [15] way. Besides, we can also tag unlabeled speech audio using limited supervised data. For example, the speech chain [16], i.e. back-translation [17], can train Automatic Speech Recognition (ASR) and TTS iteratively using only a small amount of paired data and a large amount of unpaired data. In [18, 19], a weak ASR system is trained with a few minutes of supervised data, and then decodes all unlabeled audio to generate transcripts, creating a low-accuracy supervised dataset. TTS is directly pre-trained on this dataset, and then fine-tuned with the target supervised data. Besides, we can also create pseudo-labels via unsupervised acoustic unit discovery [20, 21] to form a pseudo-supervised dataset for TTS pre-training. Eventually, employing more unlabeled data in training can enhance the TTS system with lower requirements for supervised data.
However, with the development of representation learning, data-derived speech representations show better performance than conventional, signal-processing-based acoustic features in TTS, which can be better predicted from the text while keeping sufficient speech information for high-quality audio reconstruction. It effectively reduces the requirement for supervised data, indicating a new direction for semi-supervised TTS.
### _Self-Supervised Speech Representation Learning_
S3RL [22] aims to learn useful representations from unlabeled speech data to serve downstream supervised speech tasks. It is usually divided into two categories: contrastive and generative [23]. Contrastive S3RL models, e.g. Wav2Vec [24, 25, 26], HuBERT [9], WavLM [27], are usually encoder-based models trained with contrastive loss functions [28]. This kind of model is more robust to noisy training data, hence can be applied to massive low-quality speech data to learn a general speech representation [29] that enhances various downstream speech classification [30, 31, 32, 33, 34] and synthesis [35, 36] tasks. Generative S3RL aims to reconstruct speech in training while applying restrictions to the latent space, which is intuitively more compatible with TTS due to their shared goal of speech generation. Hence, it is widely applied in TTS, e.g. the auto-regressive model, VAE [37], and VQ-VAE [8]. It can not only provide an effective speech representation [38], but also provide a good pre-trained model for TTS training [12]. However, this kind of approach is more sensitive to noisy data due to the reconstruction objective [23], hence has a higher requirement for audio quality and lacks good generalization to low-quality audio.
### _Vector-Quantized Representation Learning_
As a generative self-supervised learning method, vector-quantized representation learning aims to compress the target via vector quantization into a compact, discrete representation, while keeping high-quality reconstruction, e.g. VQ-VAE [8] and its enhanced version, VQ-GAN [39] and VQ-Diffusion [40]. It has been widely applied in speech generation, e.g. speech coding [41, 42, 43], Voice Conversion (VC) [44], and TTS [45, 46, 7]. To extract a better VQ speech representation with a balance between compactness and completeness, MSMC-TTS [7] proposes a Multi-Stage Multi-Codebook Representation (MSMCR), comprising multiple sequences with different time resolutions and quantized by multiple codebooks. It can be better predicted from the text via multi-stage modeling, significantly improving TTS performance with lower supervised data requirements, further showing the great potential of VQ representations in semi-supervised TTS.
## III QS-TTS
QS-TTS is a semi-supervised TTS framework based on vector-quantized self-supervised speech representation learning (VQ-S3RL). As shown in Fig. 2, it trains two VQ-S3R learners on unlabeled speech data to provide more-profitable speech representations and effective pre-trained models to enhance supervised TTS training, thereby improving synthesis quality while reducing the supervised data requirement. In this section, we will illustrate each module in detail.
### _The Principal VQ-S3R Learner_
We first propose a principal VQ-S3R learner combining contrastive and generative S3RL models to extract an effective speech representation, which is easier to predict from the text and can be well reconstructed into high-quality audio. It first employs a contrastive S3RL model, HuBERT [9], trained with massive speech audio to extract an effective general speech representation \(\mathbf{z}_{c}\) from the speech signal \(\mathbf{s}\). Then, it conducts the generative VQ-S3RL based on MSMC-VQ-GAN using only high-quality speech data to convert \(\mathbf{z}_{c}\) into the generative VQ-S3R, the MSMCR, while decoding it to high-quality audio \(\mathbf{\hat{s}}\). In this section, we will introduce the model architecture and training method of MSMC-VQ-GAN.
#### III-A1 Model architecture
Fig. 2: The framework of QS-TTS. In vector-quantized self-supervised speech representation learning (VQ-S3RL), the principal VQ-S3R learner first converts the speech signal \(\mathbf{s}\) into \(\mathbf{z}_{c}\) via a pre-trained contrastive S3RL model, and then feeds it to the Multi-Stage Multi-Codebook (MSMC) VQ-GAN to obtain the MSMCR \(\mathbf{z}_{p}\) and the reconstructed speech waveform \(\mathbf{\hat{s}}\). The associate VQ-S3R learner compresses \(\mathbf{z}_{p}\) into the VQ sequence \(\mathbf{z}_{a}\) via a VQ-VAE model. In TTS training, the vocoder and acoustic model are trained based on the pre-trained speech decoder and multi-stage decoder to map the text to the MSMCR \(\mathbf{\hat{z}}_{p}\), and then synthesize the waveform \(\mathbf{\hat{s}}\).

The model architecture of MSMC-VQ-GAN is composed of an MSMC-VQ encoder and a speech decoder. Fig. 3 shows an example of a two-stage four-codebook VQ-GAN model. In the MSMC-VQ encoder, the input \(\mathbf{z}_{c}\) is first processed by a Transformer encoder composed of a linear layer and a feedforward Transformer block, then quantized in two stages. In the higher stage, the input sequence is down-sampled 4 times along the time axis via the down-sample module, which has an average pooling layer and a feedforward Transformer block. Then, \(\mathbf{\hat{z}}_{p}^{(2)}\) is quantized by a four-head codebook \(\mathbf{c}_{p}^{(2)}\) via Multi-Head Vector Quantization (MHVQ) [7], i.e. product quantization [47], which chunks the codebook into multiple sub-codebooks to quantize the correspondingly chunked input vector. The quantized output \(\mathbf{z}_{p}^{(2)}\) is further processed by an up-sample block comprising two MLP layers with a LeakyReLU activation function in between, a repetition operation for up-sampling, and four residual convolutional layers, to aid the following quantization and predict the lower-stage quantized sequence \(\mathbf{\hat{z}}_{p}^{(1)}\). In stage 1, we obtain the quantized sequence \(\mathbf{z}_{p}^{(1)}\) with the guidance of the high-stage information, and add it to the high-stage residual output as the encoder output, which is then fed to the speech decoder to generate the waveform. The speech decoder is composed of a frame decoder and a waveform generator. First, we employ the frame decoder with a feedforward Transformer block to process the whole input sequence, and then generate the speech waveform \(\mathbf{\hat{s}}\) via the waveform generator based on the Hifi-GAN model. Meanwhile, like MSMC-VQ-VAE, we still predict the Mel spectrogram from the output of the frame decoder using a Mel linear layer.
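To make the MHVQ operation concrete, the following is a minimal PyTorch sketch of multi-head (product) vector quantization with a straight-through gradient estimator, using the 4-head, 64-codeword, 64-dim configuration stated in Section IV-C; all tensor names and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch

def multi_head_vq(z, codebooks):
    """Multi-head (product) vector quantization.
    z: (T, D) input sequence; codebooks: (H, K, D // H).
    Each head quantizes its own chunk of the feature dimension."""
    H, K, d = codebooks.shape
    chunks = z.view(z.shape[0], H, d).transpose(0, 1)        # (H, T, d)
    dist = torch.cdist(chunks, codebooks)                    # (H, T, K)
    idx = dist.argmin(dim=-1)                                # nearest codeword per head
    quant = torch.stack([codebooks[h][idx[h]] for h in range(H)], dim=1)
    quant = quant.reshape(z.shape)                           # (T, D)
    # Straight-through estimator: gradients flow back to the encoder output z.
    return z + (quant - z).detach(), idx

# Example: a 4-head codebook, each head with 64 codewords of dimension 64.
z_q, idx = multi_head_vq(torch.randn(100, 256), torch.randn(4, 64, 64))
```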
#### III-A2 Loss function
The training objective of MSMC-VQ-GAN is composed of multiple loss terms. First, due to the non-differentiable VQ operations, we back-propagate the gradient to the MSMC-VQ encoder via the following loss:
\[\mathcal{L}_{vq}=\frac{1}{S}\sum_{i=1}^{S}||\mathbf{\hat{z}}_{p}^{(i)}-sg( \mathbf{z}_{p}^{(i)})||_{2}^{2} \tag{1}\]
where \(S\) denotes the number of stages. And we adopt the exponential moving average-based method [48] to update codebooks in training. For effective multi-stage representation learning, we also enhance the relationship between adjacent stages with the following loss term:
\[\mathcal{L}_{ms}=\frac{1}{S-1}\sum_{j=1}^{S-1}||\mathbf{\hat{z}}_{p}^{(j)}-sg( \mathbf{z}_{p}^{(j)})||_{2}^{2} \tag{2}\]
It can help the higher stage learn an effective representation stably [7, 49].
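A minimal PyTorch sketch of how Eqs. (1) and (2) could be computed, with the stop-gradient \(sg(\cdot)\) realized via `.detach()`; the list-based inputs and the element-wise mean normalization are our assumptions, not the paper's exact implementation.

```python
import torch

def vq_losses(z_pre, z_q, z_pred):
    """z_pre[i]: pre-quantized sequence of stage i (input to the codebook);
    z_q[i]: its quantized counterpart; z_pred[j]: the lower-stage sequence
    predicted from stage j+1, for j = 0 .. S-2."""
    S = len(z_pre)
    # Eq. (1): commit the encoder outputs to the (frozen) codewords.
    l_vq = sum(((z_pre[i] - z_q[i].detach()) ** 2).mean() for i in range(S)) / S
    # Eq. (2): tie adjacent stages by regressing lower-stage quantized targets.
    l_ms = sum(((z_pred[j] - z_q[j].detach()) ** 2).mean()
               for j in range(S - 1)) / max(S - 1, 1)
    return l_vq, l_ms
```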
To reconstruct high-quality speech audio, we apply GAN training to the model with a UnivNet discriminator [50], composed of multiple sub-discriminators for multi-resolution spectrogram discriminating and multi-period waveform discriminating, to capture more discriminative information in frequency and time domains. The loss function for the discriminator is written as:
\[\mathcal{L}_{d}=\frac{1}{K}\sum_{k=1}^{K}[(D_{k}(\mathbf{s})-1)^{2}+D_{k}( \mathbf{\hat{s}})^{2}] \tag{3}\]
where \(K\) denotes the number of sub-discriminators. And the adversarial loss for the MSMC-VQ-GAN is written as:
\[\mathcal{L}_{adv}=\frac{1}{K}\sum_{k=1}^{K}[(D_{k}(\mathbf{\hat{s}})-1)^{2}] \tag{4}\]
To enhance GAN training quality, the Mel-spectrogram loss and feature matching loss, widely used in GAN-based neural vocoder training, are also employed in MSMC-VQ-GAN training [51]. Mel-spectrogram loss is the L1 distance between two waveforms in the Mel-scale frequency domain, which can improve the perceptual quality of the generated audio. It is written as follows:
\[\mathcal{L}_{mel}=||\phi(\mathbf{s})-\phi(\mathbf{\hat{s}})||_{1} \tag{5}\]
where \(\phi\) denotes the operation converting the waveform into the log-scale Mel spectrogram. Feature matching loss can further improve GAN training quality by reducing the differences between the ground-truth and generated waveforms in the hidden feature space of the discriminator as follows:
\[\mathcal{L}_{fm}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}||D_{k}^{(i)}(\mathbf{s})-D_{k}^{(i)}(\mathbf{\hat{s}})||_{1} \tag{6}\]
where \(N_{k}\) denotes the number of hidden layers of \(k\)-th sub-discriminator, and \(D_{k}^{(i)}(*)\) denotes the output feature of \(i\)-th layer of \(k\)-th sub-discriminator.
In addition, to avoid the negative impact of unstable GAN performance on VQ-S3RL in early-stage training, we still apply the frame-level reconstruction loss, an L2 distance between the predicted and ground-truth Mel spectrograms:
\[\mathcal{L}_{frame}=||\mathbf{x}-\mathbf{\hat{x}}||_{2}^{2} \tag{7}\]
where \(\mathbf{x}\) and \(\mathbf{\hat{x}}\) denote the ground-truth and predicted Mel spectrograms. Finally, the loss function for MSMC-VQ-GAN is written as follows:
\[\begin{split}\mathcal{L}_{g}&=\mathcal{L}_{adv}+ \lambda_{fm}*\mathcal{L}_{fm}+\lambda_{mel}*\mathcal{L}_{mel}\\ &+\lambda_{vq}*\mathcal{L}_{vq}+\lambda_{ms}*\mathcal{L}_{ms}+ \lambda_{frame}*\mathcal{L}_{frame}\end{split} \tag{8}\]
where \(\lambda_{fm},\lambda_{mel},\lambda_{vq},\lambda_{ms},\lambda_{frame}\) are weight coefficients.
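Below is a minimal PyTorch sketch of the adversarial objectives in Eqs. (3)-(8); `discs` is an assumed list of sub-discriminators that each return a (score, hidden-feature-list) pair, and the weight values follow Section IV-C. This illustrates the loss structure only, not the authors' code.

```python
import torch

def d_loss(discs, s, s_hat):
    """Eq. (3): least-squares GAN loss averaged over K sub-discriminators."""
    total = 0.0
    for D in discs:
        score_real, _ = D(s)
        score_fake, _ = D(s_hat.detach())
        total += ((score_real - 1) ** 2).mean() + (score_fake ** 2).mean()
    return total / len(discs)

def g_loss(discs, s, s_hat, x, x_hat, mel_fn, l_vq, l_ms):
    """Eqs. (4)-(8). x / x_hat: ground-truth and frame-decoder-predicted
    Mel spectrograms; mel_fn maps a waveform to its log-Mel spectrogram."""
    l_adv, l_fm = 0.0, 0.0
    for D in discs:
        score_fake, feats_fake = D(s_hat)
        _, feats_real = D(s)
        l_adv += ((score_fake - 1) ** 2).mean()                     # Eq. (4)
        l_fm += sum((fr.detach() - ff).abs().mean()                 # Eq. (6)
                    for fr, ff in zip(feats_real, feats_fake)) / len(feats_real)
    l_adv, l_fm = l_adv / len(discs), l_fm / len(discs)
    l_mel = (mel_fn(s) - mel_fn(s_hat)).abs().mean()                # Eq. (5)
    l_frame = ((x - x_hat) ** 2).mean()                             # Eq. (7)
    # Eq. (8), with the weights reported in Section IV-C:
    return l_adv + 2 * l_fm + 45 * l_mel + 10 * l_vq + 1 * l_ms + 450 * l_frame
```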
### _The Associate VQ-S3R Learner_
To better predict the MSMCR from the text, we also propose an associate VQ-S3R learner to provide an effective pre-trained model for training the acoustic model. It imitates the process of converting the text, a highly-compact discrete sequence, into the MSMCR, by employing a VQ-VAE-based model to abstract the MSMCR into a more compact VQ sequence and reconstruct it back.
#### III-B1 Model architecture
Fig. 4 shows the VQ-VAE model for the two-stage representation, where the second-stage sequence \(\mathbf{z}_{p}^{(2)}\) has a down-sample rate of 4. It is composed of a VQ encoder and a multi-stage decoder. In the VQ encoder, all sequences are first up-sampled to the same length as the lowest-stage sequence \(\mathbf{z}_{p}^{(1)}\), e.g. repeating all vectors in \(\mathbf{z}_{p}^{(2)}\) 4 times. Then, aligned sequences are concatenated together to be processed by a feedforward Transformer block, and quantized by one codebook to obtain a highly-compact VQ sequence \(\mathbf{z}_{a}\). To force it to capture more phonetics-related information, we also employ a global encoder based on ECAPA-TDNN [52] to extract an utterance-level embedding representing some global attributes, e.g. speaker information, from the MSMCR. This embedding is then up-sampled to the frame level by repetition, and added with the quantized sequence \(\mathbf{z}_{a}\) for decoding.

Fig. 3: The model architecture of a two-stage four-codebook VQ-GAN generator, where the marked symbols denote multi-head vector quantization, addition, and concatenation operations. The red arrows emphasize the process of converting the MSMCR into the waveform at inference.
The multi-stage decoder aims to reconstruct the MSMCR from \(\mathbf{z}_{a}\). It predicts sequences from high to low stages in a cascaded way. For this two-stage representation, the decoder first down-samples the encoder output by a factor of 4, then feeds it to the decoder block comprising a feedforward Transformer block followed by a linear layer. The output of the linear layer directly refers to the predicted sequence \(\mathbf{\hat{z}}_{p}^{(2)}\) in training, but is quantized by the codebook \(\mathbf{c}_{p}^{(2)}\) from the MSMC-VQ-GAN as the output. Similarly, in stage 1, \(\mathbf{\hat{z}}_{p}^{(1)}\) is predicted given the encoder output and the higher-stage outputs. Notably, in training, we replace \(\mathbf{\hat{z}}_{p}^{(2)}\) with the ground-truth quantized sequence \(\mathbf{z}_{p}^{(2)}\) as the input.
#### III-B2 Loss function
The model is trained with the loss function \(\mathcal{L}_{a}\) written as follows:
\[\mathcal{L}_{vq} =||\mathbf{\tilde{z}}_{a}-sg(\mathbf{z}_{a})||_{2}^{2} \tag{9}\] \[\mathcal{L}_{rec} =\frac{1}{S}\sum_{i=1}^{S}||\mathbf{\hat{z}}_{p}^{(i)}-\mathbf{ z}_{p}^{(i)}||_{2}^{2}\] \[\mathcal{L}_{a} =\mathcal{L}_{vq}+\lambda_{rec}*\mathcal{L}_{rec}\]
where \(\mathcal{L}_{vq}\) is still the VQ loss for the VQ encoder training, \(\mathcal{L}_{rec}\) is the reconstruction loss between the ground-truth and reconstructed MSMCR, and \(\lambda_{rec}\) is a weight coefficient. In training, we still use the exponential moving average-based method to update the codebook \(\mathbf{c}_{a}\).
### _TTS Synthesis_
In TTS synthesis, we aim to convert the text to its corresponding MSMCR via the acoustic model, and then generate the waveform from the MSMCR via the vocoder. The acoustic model has the same architecture as MSMC-TTS [7], based on FastSpeech [53]. It encodes the text sequence using a Transformer encoder, then up-samples it via repetition according to the predicted duration, finally generating the MSMCR \(\mathbf{\hat{z}}_{p}\) via a multi-stage decoder. In training, it inherits the parameters of the pre-trained multi-stage decoder in the associate VQ-S3R learner, and is then trained with the supervised data. The training loss function is written as follows:
\[\mathcal{L}_{dur} =||\mathbf{\hat{d}}-\mathbf{d}||_{2}^{2} \tag{10}\] \[\mathcal{L}_{am} =\mathcal{L}_{rec}+\lambda_{dur}*\mathcal{L}_{dur}\]
where \(\mathcal{L}_{dur}\) denotes the duration loss between the ground-truth duration \(\mathbf{d}\) and the predicted duration \(\mathbf{\hat{d}}\), and \(\lambda_{dur}\) is a weight coefficient.
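As a concrete illustration of the repetition-based up-sampling and the duration loss in Eq. (10), here is a small PyTorch sketch in the spirit of FastSpeech; function and variable names are our own assumptions.

```python
import torch

def length_regulate(h, dur):
    """Repeat each phoneme encoding according to its integer frame count.
    h: (N, C) encoded phoneme sequence; dur: (N,) durations in frames."""
    return torch.repeat_interleave(h, dur, dim=0)   # (sum(dur), C)

def acoustic_model_loss(l_rec, d_hat, d, lambda_dur=0.1):
    """Eq. (10): reconstruction loss plus weighted duration loss,
    with lambda_dur = 0.1 as stated in Section IV-C."""
    l_dur = ((d_hat - d.float()) ** 2).mean()
    return l_rec + lambda_dur * l_dur
```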
In the vocoder, to convert the predicted \(\mathbf{\hat{z}}_{p}\) composed of multiple sequences to the waveform, we first feed it to the MSMC-VQ encoder in the pre-trained MSMC-VQ-GAN to obtain the encoder output sequence, as indicated by the red arrows shown in Fig. 3. Then, we synthesize the waveform via the speech decoder, which is fine-tuned with the audio in the supervised dataset to better adapt to the target speaker. In fine-tuning, we only update the parameters of the speech decoder using the same training configuration as MSMC-VQ-GAN.
## IV Experimental Protocol
### _Dataset_
In VQ-S3RL, we employ AIShell-3, a Mandarin multi-speaker high-quality speech dataset, as the training set. This dataset contains roughly 85 hours of emotion-neutral recordings spoken by 218 native Chinese Mandarin speakers, showing a rich coverage of phonetic and speaker information. In TTS training, we employ multiple supervised TTS datasets to evaluate TTS systems under various scenarios. The first dataset is _CSMSC_, a single-speaker Mandarin TTS corpus with 10 hours of high-quality supervised data, which is widely applied in training standard single-speaker Mandarin TTS systems. We also construct low-resource scenarios by extracting subsets from CSMSC. Besides, a test set with 200 utterances is also extracted from this dataset but has no overlap with any training set. Then, we construct a more challenging low-resource scenario using only 10 minutes of child speech data spoken by a five-year-old girl in Mandarin. It has a test set with 24 utterances outside the training set. Finally, we use an internal Cantonese dataset with 15 minutes of supervised data to evaluate the performance of TTS systems in low-resource languages. The Cantonese test set comprises 134 utterances outside the training set.
Fig. 4: The model structure of VQ-VAE for the two-stage four-codebook representation, where the second-stage sequence has a down-sample rate of 4 along the time axis.
### _Feature_
First, in audio processing, all single-channel audio used in our work is down-sampled to the sample rate of 16kHz. Then, we extract Mel spectrograms for all datasets in the following way: first, pre-emphasize the audio with the coefficient of 0.97, then convert it to 1025-dim magnitude spectrograms by STFT with a window length of 50ms, a frameshift of 12.5ms, and an FFT size of 2048, and finally compress the spectrogram into the 80-dim log-scale Mel spectrogram. We employ HuBERT4 [9], a contrastive S3RL model, pre-trained on WenetSpeech [54], a Mandarin dataset with around 10,000 hours of speech audio, to extract the general speech representation. The HuBERT feature is a sequence of 1024-dim vectors with a frameshift of 20ms. To align it to the Mel spectrogram, we up-sample it via nearest neighbor interpolation to a frameshift of 12.5ms.
Footnote 4: The pre-trained HuBERT model is available at [https://huggingface.co/TencentGameMate/chinese-hubert-large](https://huggingface.co/TencentGameMate/chinese-hubert-large).
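A sketch of this feature pipeline using librosa and NumPy, under the stated settings (16 kHz audio, 0.97 pre-emphasis, 50 ms window, 12.5 ms shift, FFT size 2048, 80 Mel bins); the exact normalization and the log floor below are our assumptions.

```python
import numpy as np
import librosa

def log_mel(wav, sr=16000):
    wav = np.append(wav[0], wav[1:] - 0.97 * wav[:-1])           # pre-emphasis
    mag = np.abs(librosa.stft(wav, n_fft=2048,
                              hop_length=int(0.0125 * sr),       # 12.5 ms shift
                              win_length=int(0.05 * sr)))        # 50 ms window
    mel = librosa.feature.melspectrogram(S=mag, sr=sr, n_mels=80)
    return np.log(np.clip(mel, 1e-5, None)).T                    # (T, 80)

def align_hubert(h, n_mel_frames):
    """Nearest-neighbor up-sampling of 20 ms HuBERT frames (h: (T', 1024))
    onto the 12.5 ms Mel frame grid."""
    src = np.clip(np.round(np.arange(n_mel_frames) * 12.5 / 20.0).astype(int),
                  0, len(h) - 1)
    return h[src]
```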
In text processing, we convert the text to phonemes as the input of the acoustic model. For CSMSC and child datasets, we directly employ the phonemes and their corresponding duration labeled in the dataset for training. For the Cantonese dataset, we use an open-source G2P tool [55] to obtain phonemes, and train a Montreal Forced Aligner5 (MFA) model to obtain the phoneme-level duration for training.
Footnote 5: The tool is available at [https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner).
### _Model Configuration_
In VQ-S3RL, we apply the MSMC-VQ-GAN with 2 stages, where the second stage has a down-sample rate of 4 along the time axis. And in each stage, a 4-head codebook is used for vector quantization, where each head is composed of 64 codewords with a dimension of 64. In the MSMC-VQ encoder, we apply the 4-layer 256-dim feedforward Transformer block with 2-head self-attention to the Transformer encoder and the down-sample block. The MLP block has two 256-dim linear layers with a Tanh activation function. The residual CNN block comprises 4 1-D residual convolutional layers with a kernel size of 5. In the speech decoder, the frame decoder is also implemented with the 4-layer 256-dim feedforward Transformer block. The waveform generator is a Hifi-GAN-V1 [51] generator, which upsamples the input sequence 200 times to the 16kHz waveform via 4 CNN-based upsampling blocks with the upsample rates of \([5,5,4,2]\) and the kernel sizes of \([11,11,8,4]\).
In GAN training based on the UnivNet discriminator (UnivNet-c32) [50], we extract three magnitude spectrograms from the waveform using three STFT parameter sets, FFT size \([256,512,1024]\), frameshift \([40,80,160]\), and frame length \([120,320,640]\), for multi-resolution spectrogram discriminating. And we also reshape the 1-D waveform into five 2-D sequences with the period of \([2,3,5,7,11]\) for multi-period waveform discriminating6. In this work, MSMC-VQ-GAN is trained on AIShell-3 for 400k iterations using the AdamW optimizer (\(\beta_{1}=0.8,\beta_{2}=0.99\)) with a batch size of 16 utterances. Similar to random window discriminating [56], we also randomly select a segment with a length of 0.75 seconds from each utterance for adversarial training of the waveform generator to improve training efficiency. The learning rate of \(2\times 10^{-4}\) exponentially decays with the rate \(2^{-\frac{1}{200,000}}\) after 200k warm-up iterations. The weight coefficients \(\lambda_{fm},\lambda_{mel},\lambda_{vq},\lambda_{ms},\lambda_{frame}\) are set to 2, 45, 10, 1, 450, respectively. To stabilize the training process, we also apply warm-up training here, i.e. no GAN training in the first 50k iterations.
Footnote 6: The implementations of the Hifi-GAN generator and UnivNet discriminator are available at [https://github.com/jaywalnut310/vits](https://github.com/jaywalnut310/vits).
The associate VQ-S3R learner compresses the MSMCR with one codebook with 64 256-dim codewords. The VQ encoder is a Transformer encoder with the same configuration as MSMC-VQ-GAN. And the global encoder is an ECAPA-TDNN [52] model with 128 channels for hidden layers. This model is trained on AIShell-3 for 200k iterations using Adam [57] optimizer (\(\beta_{1}=0.9,\beta_{2}=0.98\)) with the batch size of 64 utterances, and the learning rate of \(2\times 10^{-4}\) exponentially decayed with the rate \(2^{-\frac{1}{20,000}}\) after 20k warm-up iterations.
Finally, in TTS training, we use the supervised TTS dataset to fine-tune the speech decoder of MSMC-VQ-GAN with the same training configuration, and train the acoustic model with \(\lambda_{\textit{dur}}=0.1\) based on the pre-trained multi-stage decoder in the associate VQ-S3R learner. For a standard supervised dataset with sufficient audio, i.e. 10-hour CSMSC, we fine-tune the vocoder for 400k iterations and the acoustic model for 200k iterations. Otherwise, for all low-resource datasets, we only fine-tune the vocoder for 100k iterations, and the acoustic model for 50k iterations to avoid over-fitting.
### _Baselines_
In our experiments, we implement multiple fully-supervised and semi-supervised TTS approaches following the training configuration of QS-TTS.
#### IV-D1 FastSpeech
It is a mainstream non-autoregressive neural TTS system based on the Mel spectrogram. It comprises an acoustic model based on Transformer blocks and a Hifi-GAN vocoder, having the same model hyperparameters as QS-TTS. We upgrade it to two semi-supervised versions: FastSpeech-S and FastSpeech-SS. In FastSpeech-S, the Mel-spectrogram-based Hifi-GAN vocoder is pre-trained on AIShell-3, and then fine-tuned with the supervised dataset. In FastSpeech-SS, the Mel spectrogram is replaced with HuBERT features, and the HuBERT-based Hifi-GAN vocoder is also pre-trained on AIShell-3.
#### IV-D2 VITS [58]
It is the SOTA end-to-end TTS system based on VAE representations. It applies a VAE-GAN model based on Hifi-GAN to learn a generative speech representation, which can be predicted from the text by a Glow-based module [59]. We implement it based on the official model configuration, and apply the waveform generator and the discriminator with the same configuration as that of QS-TTS for a fair comparison. We also implement its semi-supervised version, VITS-SS [21], in our experiments. It first applies k-means clustering with 512 centroids on HuBERT features of AIShell-3 to create pseudo-labels for all unlabeled audio, then pre-trains VITS on this created paired dataset, and finally fine-tunes VITS using the supervised dataset.
#### IV-D3 MSMC-TTS
It can be seen as the fully-supervised version of QS-TTS, which learns the MSMCR from the Mel spectrogram via the MSMC-VQ-GAN model, and is trained with only the supervised dataset. Besides, we also implement another semi-supervised version of MSMC-TTS, MSMC-TTS-SS, which pre-trains the Mel-spectrogram-based MSMC-VQ-GAN on AIShell-3, and only fine-tunes the speech decoder in TTS training.
#### IV-D4 Back Translation
We also implement this semi-supervised training approach to pre-train the acoustic model. It first employs the supervised TTS dataset to train an ASR system, and then transcribes the unlabeled speech audio of AIShell-3 to create a low-precision but large-scale paired dataset. The acoustic model is pre-trained on this dataset, and finally fine-tuned with the supervised TTS dataset. This approach relies heavily on the quality of ASR to ensure precise transcription, so we train a CTC-ASR model based on the pre-trained Mandarin HuBERT model to convert the audio into phonemes and their corresponding durations extracted from CTC alignments. Besides, we also implement the k-means-based approach proposed in VITS-SS to avoid training ASR models. It directly quantizes HuBERT features of AIShell-3 into 512 codewords by k-means to obtain pseudo-labels and corresponding durations, which are then applied to pre-train the acoustic model.
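The k-means pseudo-labeling step could be sketched as follows with scikit-learn; the 512 clusters match the description above, while the run-length collapsing into (unit, duration) pairs is our assumption about how pseudo-labels and durations are derived.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def kmeans_pseudo_labels(feats, n_units=512):
    """Cluster frame-level HuBERT features (N, D) into discrete units, then
    collapse consecutive repeats into (unit, duration) pairs."""
    km = MiniBatchKMeans(n_clusters=n_units, random_state=0).fit(feats)
    ids = km.predict(feats)
    units, durs = [], []
    for u in ids:
        if units and units[-1] == u:
            durs[-1] += 1          # extend the current unit by one frame
        else:
            units.append(int(u))
            durs.append(1)
    return units, durs
```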
### _Evaluation_
In this work, we conduct objective and subjective tests to evaluate the proposed TTS approach comprehensively.
#### IV-E1 Objective Metrics
In addition to Mel Cepstral Distortion (MCD) [60], which computes the perceptual difference between two fully-aligned audio files, we also propose evaluating the synthesis quality using Frechet distances [61] in various embedding spaces. The Frechet distance measures the distance between two sets of samples by calculating the difference between their distributions as follows:
\[\mathbb{F}(\mathcal{N}_{s},\mathcal{N}_{t})=||\mu_{s}-\mu_{t}||^{2}+tr(\Sigma _{s}+\Sigma_{t}-2\sqrt{\Sigma_{s}\Sigma_{t}}) \tag{11}\]
where \(\mathcal{N}_{s}\) and \(\mathcal{N}_{t}\) denote the normal distributions of the synthesized audio set and the target ground-truth audio set in the embedding space. We can use different audio classification models to extract embeddings and calculate their mean vectors (\(\mu_{s},\mu_{t}\)) and covariance matrices (\(\Sigma_{s},\Sigma_{t}\)). This work employs three embedding spaces: acoustic, speaker, and phonetic. The Frechet distance in acoustic space, i.e. Frechet Audio Distance (FAD) [62], has been well applied in evaluating the synthesis quality of neural vocoders; it extracts an embedding for each 4-second audio segment from an audio classification model. Similarly, we extract utterance-level speaker embeddings using an ECAPA-TDNN-based speaker verification model, and extract utterance-level phonetic embeddings using a Transformer-based ASR model by averaging its encoder output sequence into one vector. These three distances are denoted as FD-AC, FD-SV, and FD-ASR in the following sections8. Notably, the Frechet distance in speaker space is multiplied by 10 to align with other distances.
Footnote 8: The pre-trained audio classification, speaker verification, and ASR models are available at [https://github.com/harritaylor/torchvggish](https://github.com/harritaylor/torchvggish), [https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb), and [https://github.com/openai/whisper](https://github.com/openai/whisper) (multi-lingual base version).
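Eq. (11) can be computed from two sets of embeddings as in the sketch below (the same computation used for FID/FAD); the complex-part cleanup after the matrix square root is a standard numerical safeguard.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_s, emb_t):
    """Eq. (11) between two embedding sets of shape (N, D)."""
    mu_s, mu_t = emb_s.mean(axis=0), emb_t.mean(axis=0)
    cov_s = np.cov(emb_s, rowvar=False)
    cov_t = np.cov(emb_t, rowvar=False)
    covmean = sqrtm(cov_s @ cov_t)
    if np.iscomplexobj(covmean):      # discard tiny imaginary numerical noise
        covmean = covmean.real
    return float(((mu_s - mu_t) ** 2).sum()
                 + np.trace(cov_s + cov_t - 2.0 * covmean))
```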
We also evaluate intelligibility, the most crucial factor in evaluating a TTS system, by transcribing the synthesized audio using ASR tools9 and calculating the Character Error Rate (CER) and Phoneme Error Rate (PER). We use G2P tools to convert the transcribed and ground-truth text into phonemes to calculate PER, which focuses more on the pronunciation accuracy of phonemes.
Footnote 9: The ASR tools for Mandarin and Cantonese are available at [https://github.com/wenet-e2e/wenet/tree/main/runtime/binding/python](https://github.com/wenet-e2e/wenet/tree/main/runtime/binding/python) and [https://huggingface.co/Scrya/whisper-large-v2-cantonese](https://huggingface.co/Scrya/whisper-large-v2-cantonese).
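CER is the total Levenshtein edit distance between reference and hypothesis character sequences divided by the total number of reference characters; a minimal self-contained implementation is sketched below (PER is computed the same way over phoneme token lists).

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (r != h))    # substitution
    return dp[-1]

def error_rate(refs, hyps):
    """CER (or PER) over a test set: total edits / total reference tokens."""
    edits = sum(edit_distance(list(r), list(h)) for r, h in zip(refs, hyps))
    return edits / sum(len(r) for r in refs)
```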
#### IV-E2 Subjective Metrics
We conduct MOS (mean opinion score) tests to subjectively evaluate TTS systems on synthesis quality. In each MOS test, 10 native speakers are hired to rate each audio sample in 20 test cases, where each test case contains multiple audio samples synthesized by different TTS approaches from the same text. The rating ranges from 1 to 5 with an increment of 0.5, where a higher score indicates better quality. Finally, we aggregate the scores of each method to obtain the MOS with a 95% confidence interval.
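The MOS aggregation can be sketched as a sample mean with a normal-approximation 95% confidence interval; whether a z- or t-interval is used is not stated, so the 1.96 factor here is an assumption.

```python
import numpy as np

def mos_with_ci(scores, z=1.96):
    """Mean opinion score with a 95% confidence half-width."""
    scores = np.asarray(scores, dtype=float)
    half = z * scores.std(ddof=1) / np.sqrt(len(scores))
    return scores.mean(), half     # report as "mean ± half"
```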
## V Results
### _TTS Comparison: Semi-Supervised vs. Supervised_
First, we compare the proposed semi-supervised TTS system with supervised TTS systems: FastSpeech, VITS, and MSMC-TTS. Table I shows the MOS test result of standard single-speaker TTS systems on the 10-hour CSMSC. FastSpeech, the modular TTS system based on the Mel spectrogram, shows the worst synthesis quality among all TTS approaches, obtaining the lowest MOS of 3.55 and the highest FD-AC and FD-SV of 0.79 and 1.19. But it performs satisfactorily in pronunciation accuracy, showing the lowest PER of 0.70%. VITS, the end-to-end approach based on VAE, significantly improves audio quality with much lower FD-AC and FD-SV, and shows a higher MOS of 3.74. But its intelligibility also degrades seriously, with increased CER and PER. Instead, MSMC-TTS performs well in both intelligibility and synthesis quality, obtaining much lower metrics than FastSpeech and a higher MOS of 3.99, which indicates the effectiveness of this approach based on VQ speech representations. QS-TTS inherits this approach, enhanced via the proposed VQ-S3RL using more unlabeled speech data. It further improves the overall performance, even though sufficient supervised data is given in training.
Then, we conduct this comparison in a more challenging TTS task, a low-resource scenario with only 15 minutes of supervised data. We randomly select 257 utterances from the full training set of CSMSC to form a 15-minute training set. As shown in Table II, all supervised approaches degrade seriously in audio quality and intelligibility under this scenario, achieving very low subjective scores. VITS and MSMC-TTS still achieve higher audio quality than FastSpeech, showing lower FD-AC and FD-SV. However, due to insufficient data for learning effective speech representations, their performance on intelligibility degrades more severely, where the end-to-end approach, VITS, shows the highest CER and PER. QS-TTS addresses this problem by conducting VQ-S3RL on more unlabeled speech data, significantly improving audio quality and intelligibility. It shows significantly improved audio quality with an FD-AC of 0.25 and an FD-SV of 1.00, even lower than FastSpeech trained with 10 hours of supervised data. The intelligibility is also enhanced with a CER of 8.98% and a PER of 1.46%. Finally, its MOS of 3.75 further validates that QS-TTS surpasses all supervised methods greatly in this low-resource scenario.
As shown in Fig. 5, we also visualize the magnitude spectrograms of samples synthesized by MSMC-TTS and QS-TTS to investigate their differences further. High-quality audio usually presents clear and smooth harmonics in the middle- and low-frequency parts to produce perceptually accurate pronunciation. However, MSMC-TTS trained with only 15 minutes of supervised data cannot synthesize the expected harmonics stably. It often presents a jittered pitch and fuzzy middle-frequency harmonics, as shown in the red boxes, which leads to a degradation of audio quality and intelligibility. In QS-TTS, these issues are alleviated significantly. The synthesized audio shows clearer and smoother harmonics in the low- and middle-frequency parts, and has a higher variance in the high-frequency part, finally leading to a higher MOS in the subjective test.
### _TTS Comparison: Semi-Supervised Approaches_
Fig. 5: The magnitude spectrograms of audio samples synthesized from the same input text by MSMC-TTS (left) and QS-TTS (right) trained with 15-minute CSMSC. Red boxes highlight areas with significant differences.

In this experiment, we compare QS-TTS with the semi-supervised TTS systems FastSpeech-SS and VITS-SS to further validate the effectiveness of QS-TTS. First, we compare different semi-supervised TTS systems under an intra-lingual cross-style low-resource scenario using the 10-minute child speech dataset. It shares the same language as AIShell-3, but has the unique timbre of a five-year-old girl unseen in AIShell-3. As shown in Table III, in this more challenging task with less supervised data, all approaches show higher Frechet distances and error rates. First, the SOTA supervised approach, MSMC-TTS, shows the worst intelligibility with a high CER and PER of 27.53% and 13.02%, and also performs poorly on audio quality with the highest FD-AC of 1.76. The semi-supervised version of FastSpeech, FastSpeech-SS, is significantly enhanced over its fully-supervised version, achieving comparable performance to MSMC-TTS. And VITS-SS enhanced by semi-supervised training outperforms MSMC-TTS significantly, achieving much lower Frechet distances and error rates. Finally, although QS-TTS has a higher FD-AC than VITS-SS, it obtains the best performance on all other metrics, especially with a CER of 14.61% and a PER of 5.35%, roughly half those of MSMC-TTS. This indicates the universal effectiveness of QS-TTS in extreme scenarios.
Then, we further validate its effectiveness in a cross-lingual scenario, which builds a TTS system for a low-resource language, Cantonese, using a 15-minute supervised dataset, where the language is also unseen in the unlabeled speech dataset. As shown in Table IV, this dataset has relatively lower audio fidelity and lower prosodic expressiveness, hence its recordings achieve a MOS of only 3.86. FastSpeech-SS shows a significant improvement over MSMC-TTS in this task, obtaining a higher MOS of 3.58 and lower metrics in audio quality and intelligibility. But VITS-SS fails to keep a stable performance, showing seriously degraded intelligibility with the much higher CER and PER of 42.35% and 19.92%. The pseudo-labels extracted from the Mandarin dataset using k-means cannot adapt well to Cantonese, providing a seriously biased pre-trained model. This leads to the lowest MOS of 3.34, showing the limitation of this semi-supervised approach in cross-lingual scenarios. However, QS-TTS still performs best with the highest MOS of 3.66. It also has the lowest CER and PER, and keeps high audio quality with comparable or lower Frechet distances than other approaches.
In conclusion, through these challenging low-resource experiments, QS-TTS is validated as a more effective and stable approach in improving both audio quality and intelligibility of TTS over other supervised and semi-supervised approaches.
### _The Principal VQ-S3R Learner_
We conduct experiments for two VQ-S3R learners respectively to investigate the impact of the proposed VQ-S3RL on TTS. The principal learner combines HuBERT-based contrastive S3RL and MSMC-VQ-GAN-based generative S3RL to benefit TTS maximally. To validate the effectiveness of these two components, we implement QS-TTS-P, i.e. QS-TTS with only the principal learner, and compare it with the following semi-supervised TTS systems: FastSpeech-S, FastSpeech-SS, and MSMC-TTS-SS. All of these systems use AIShell-3 for pre-training and the 15-minute supervised data of CSMSC for TTS training.
As shown in Table V, first, the Mel-spectrogram-based FastSpeech-S, which does not use HuBERT or MSMC-VQ, performs best in analysis-synthesis, since the Mel spectrogram contains sufficient acoustic information for complete speech reconstruction. However, this feature with abundant information is hard to predict by the acoustic model without sufficient supervised training data, leading to the worst TTS performance on audio quality. In MSMC-TTS-SS, MSMC-VQ compresses the Mel spectrogram into a more compact representation with less information. This lossy compression degrades the analysis-synthesis quality, but makes the feature easier to predict by a model trained with less supervised data. It leads to a smaller gap between the ground-truth and predicted features, making TTS synthesis closer to analysis-synthesis in audio quality. However, the information loss also degrades intelligibility, causing a trade-off between audio quality and intelligibility in TTS synthesis. Hence, it is not advisable to overly compress features to enhance TTS unless we can keep sufficient phonetic information in compression. In FastSpeech-SS, HuBERT, a contrastive speech representation learned from massive speech audio, discards more acoustic information, showing lower audio quality in analysis-synthesis, but keeps richer phonetic information; its lower PER of 0.58% compared with the Mel spectrogram validates its completeness in representing speech.
This feature also significantly improves TTS in both audio quality and intelligibility. Finally, QS-TTS-P applies MSMC-VQ with HuBERT to learn the compact representation with rich phonetic information, and achieves the best performance in TTS, strongly verifying the effectiveness of the principal VQ-S3R learner in TTS.
### _The Associate VQ-S3R Learner_
The associate VQ-S3R learner aims to provide a practical pre-trained model to enhance the acoustic model to better predict the MSMCR. To validate its effectiveness on TTS, as shown in Table VI, we compare it with another pre-training approach, back translation, under low-resource scenarios using 15 minutes of supervised data in Mandarin or Cantonese.
First, the baseline approach, ASR-based back-translation, does not achieve consistent performance in these two scenarios. In Mandarin TTS, the ASR enhanced by Mandarin HuBERT and trained with the Mandarin TTS dataset can transcribe the Mandarin unlabeled speech dataset well, providing a pre-training paired dataset with sufficient transcription precision to support model pre-training, and achieving the best TTS quality among all methods. However, in Cantonese TTS, the ASR trained with the Cantonese TTS dataset cannot transcribe Mandarin speech audio well into Cantonese phonemes, leading to a low-quality paired dataset. The acoustic model pre-trained on this dataset contaminates the following fine-tuning with Cantonese supervised data instead, showing higher CER and PER. This indicates the limitation of ASR-based back-translation in cross-lingual applications. Although the k-means-based back-translation avoids ASR training and shows consistent performance in both scenarios, it only improves the audio quality while degrading intelligibility. Different from these back-translation-based approaches, the associate VQ-S3R learner can enhance TTS in both audio quality and intelligibility under intra-lingual and cross-lingual scenarios, and is hence validated as an effective and general pre-training approach.
Besides, we also investigate the impact of the codebook size of the associate VQ-S3R learner on TTS. As shown in Fig. 6, we train three VQ-VAE models with codebook sizes of 4, 64, and 1024, then apply them for acoustic model training in Mandarin and Cantonese. The results in these two scenarios show opposite trends: the Mandarin TTS system, sharing the same language as the pre-training set, prefers a smaller codebook size, while the Cantonese TTS system, in a different language from the pre-training set, prefers a larger codebook size. The highly-compact VQ sequence can abstract phonetic information in the Mandarin training set well, but lacks the generalization to represent cross-lingual speech. Hence, the larger codebook size benefits Cantonese TTS instead. In practice, we suggest training multiple learners with different codebook sizes, and selecting the suitable learner in TTS training according to the difference in language between the pre-training set and the supervised dataset.
### _Requirements for Supervised Data_
Finally, we investigate the supervised data requirements of QS-TTS by comparing MSMC-TTS and QS-TTS trained with different sizes of CSMSC, and drawing the line charts of FD-ASR and CER, as shown in Fig. 7. First, the fully-supervised MSMC-TTS and the semi-supervised QS-TTS achieve similarly good TTS performance with sufficient supervised data. With 10 hours of supervised data, the gap between these two systems is just 0.04 on FD-ASR and 0.06% on CER. However, as data size decreases, the synthesis quality of MSMC-TTS degrades significantly, showing rapidly increasing FD-ASR and CER. In the low-resource scenario with only 15 minutes of supervised data, it reaches an FD-ASR of 4.44 and a CER of 16.80%, which are 193% and 175% higher than those of the system trained with 10 hours of supervised data. Instead, QS-TTS shows a much slower decay in audio quality and intelligibility as the supervised data size decreases. The QS-TTS trained with 15-minute supervised data achieves an FD-ASR of 2.2 and a CER of 8.98%, which are only 50% and 48% higher than QS-TTS trained with 10-hour supervised data. The gap between these two approaches also widens to 2.22 on FD-ASR and 7.82% on CER, nearly a hundred times larger. Hence, under the low-resource scenario, MSMC-TTS produces serious pronunciation issues, while QS-TTS still keeps good performance. This result strongly validates that the proposed semi-supervised TTS framework, QS-TTS, has a lower requirement for supervised data, and indicates its great potential in low-resource scenarios.

Fig. 6: The impact of different codebook sizes on the associate VQ-S3R learner in Mandarin and Cantonese low-resource TTS.

Fig. 7: The FD-ASRs and CERs of MSMC-TTS and QS-TTS trained with different data sizes of CSMSC.
## VI Conclusion
This paper proposes QS-TTS, a novel semi-supervised TTS framework based on VQ-S3RL to effectively utilize more unlabeled speech audio to improve TTS quality while reducing its requirements for supervised data. The VQ-S3RL is conducted through two learners: the principal learner combines the Multi-Stage Multi-Codebook (MSMC) VQ-GAN with contrastive S3RL to learn a high-quality generative MSMC VQ-S3R while decoding it to high-quality audio; the associate learner further compresses the MSMCR into a highly-compact VQ representation via a VQ-VAE-based model. Then, TTS is implemented based on the MSMCR and applied with VQ-S3R learners via transfer learning to achieve higher synthesis quality with lower supervised data requirements. The proposed framework can synthesize high-quality speech with lower supervised data requirements, significantly outperforming mainstream supervised and semi-supervised TTS approaches, especially in low-resource scenarios. Besides, the proposed VQ-S3RL also shows its effectiveness in providing better speech representations and pre-trained models for TTS, as validated by comparisons with TTS systems using different speech representations and transfer learning methods. Finally, the slowly decaying performance of QS-TTS as supervised data decreases further validates its lower requirement for supervised data, and indicates its great potential in low-resource scenarios.
This paper proposes QS-TTS, a novel semi-supervised TTS framework that uses vector-quantized self-supervised speech representation learning (VQ-S3RL) to improve TTS quality while reducing the amount of supervised data required. The framework contains two VQ-S3R learners. First, the principal learner combines contrastive S3RL with the Multi-Stage Multi-Codebook (MSMC) VQ-GAN to provide a generative multi-stage multi-codebook VQ-S3R that is decoded to high-quality audio. Then, the associate learner compresses the MSMC representation into a highly-compact VQ representation. These two generative VQ-S3R learners provide speech representations and pre-trained models that benefit TTS, significantly improving synthesis quality while reducing the requirement for supervised data.
2309.07891 | HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene from a
Single RGB Image | This paper presents a method to learn hand-object interaction prior for
reconstructing a 3D hand-object scene from a single RGB image. The inference as
well as training-data generation for 3D hand-object scene reconstruction is
challenging due to the depth ambiguity of a single image and occlusions by the
hand and object. We turn this challenge into an opportunity by utilizing the
hand shape to constrain the possible relative configuration of the hand and
object geometry. We design a generalizable implicit function, HandNeRF, that
explicitly encodes the correlation of the 3D hand shape features and 2D object
features to predict the hand and object scene geometry. With experiments on
real-world datasets, we show that HandNeRF is able to reconstruct hand-object
scenes of novel grasp configurations more accurately than comparable methods.
Moreover, we demonstrate that object reconstruction from HandNeRF ensures more
accurate execution of downstream tasks, such as grasping and motion planning
for robotic hand-over and manipulation. Homepage:
https://samsunglabs.github.io/HandNeRF-project-page/ | Hongsuk Choi, Nikhil Chavan-Dafle, Jiacheng Yuan, Volkan Isler, Hyunsoo Park | 2023-09-14T17:42:08 | http://arxiv.org/abs/2309.07891v5 | # HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene
###### Abstract
This paper presents a new method to learn hand-object interaction prior for reconstructing a 3D hand-object scene from a single RGB image. The inference as well as training-data generation for 3D hand-object scene reconstruction is challenging due to the depth ambiguity of a single image and occlusions by the hand and object. We turn this challenge into an opportunity by utilizing the hand shape to constrain the possible relative configuration of the hand and object geometry. We design a generalizable implicit function, HandNeRF, that explicitly encodes the correlation of the 3D hand shape features and 2D object features to predict the hand and object scene geometry. With experiments on real-world datasets, we show that HandNeRF can reconstruct hand-object scenes of novel grasp configurations more accurately than comparable methods. Moreover, we demonstrate that object reconstruction from HandNeRF ensures more accurate execution of downstream tasks, such as grasping and motion planning for robotic hand-over and manipulation. The code will be released here: [https://github.com/SamsungLabs/HandNeRF](https://github.com/SamsungLabs/HandNeRF)
## I Introduction
The understanding of 3D hand-object interactions, i.e., semantic reconstruction of hand and object geometry, is key to applications such as human-to-robot object handover, and augmented and virtual reality. Most of the current methods are primarily based on template-based approaches, where a known 3D CAD model of a hand and an object is assumed. The major focus is predicting the transformation that fits the known 3D CAD model to input observation [1, 2, 3]. Even with these assumptions, the hand and object reconstruction from a single RGB image are both difficult tasks due to depth ambiguity, partial observation, and mutual occlusions.
The 3D hand reconstruction methods have seen significant advances [4, 5, 6] due to large-scale hand datasets and automated reliable 3D hand annotations [7, 8, 9, 10, 11]. In contrast, progress in the reconstruction of grasped objects from a single RGB image is relatively limited due to the lack of data. Generating 3D CAD models for a large set of objects and labeling 6D poses in hand-object interaction scenes are labor-intensive and challenging. The sparsity of views in real-world data collection settings makes the labeling ambiguous, often requiring manual initialization and post-processing to refine 6D object pose annotations [9, 12].
In this paper, we present a new method, named HandNeRF, that estimates a semantic neural radiance field of a hand-object interaction scene from a single RGB image without using an object template. HandNeRF predicts the density (occupancy), color, and semantic label (hand, object, or background) for points in 3D space, which can be used for 3D semantic reconstruction and novel view synthesis. The key technical contribution of HandNeRF is that it alleviates the ill-posed 2D-to-3D reconstruction problem by utilizing the hand shape to constrain the possible relative configuration of hand and object geometry. In particular, HandNeRF explicitly learns the correlation between hand and object geometry to regularize their semantic reconstruction.
HandNeRF is trained on multiple hand-object interaction scenes to learn the correlation between hand and object geometry. Each scene has synchronized sparse view RGB images, 3D hand mesh annotation, and 2D semantic segmentation. At inference time, a single RGB image with a novel grasp configuration is given. Fig. 1 illustrates HandNeRF, which is trained with sparse view RGB images and generates high-quality 3D reconstruction and rendering of images from an unseen single RGB input. Note that we do not use depth information in the whole process, which is a much more unconstrained setting for both training and testing.
Fig. 1: Given a single RGB image of a hand-object interaction scene, HandNeRF predicts the hand and object's density, color, and semantics, which can be converted to reconstruction of 3D hand and object meshes and rendered to novel view images (RGB, depth, and semantic segmentation). HandNeRF learns the correlation between hand and object geometry from different types of hand-object interactions, supervised by sparse view images. HandNeRF is tested on a novel scene with an unseen hand-object interaction.

We evaluate HandNeRF on real-world datasets including DexYCB [9] and HO-3D v3 [13] in terms of novel view synthesis and 3D reconstruction. We compare with the state-of-the-art template-free baselines [14, 15, 16], which we adapt to the task of reconstructing hand-object interaction without 3D object ground truth during training. Following the previous works [15, 17], we first keep the object used in training and testing the same, but the grasp configuration at testing is chosen to be significantly different from those during training. We further evaluate the generalization capability of HandNeRF by testing the model trained on 15 DexYCB objects on 4 unseen DexYCB objects. By learning the hand-object interaction prior with the explicit hand and object correlation, HandNeRF outperforms the baselines in generalization to novel hand grasps, which entail unseen occlusions and unseen object poses, and to novel object shapes. Furthermore, we present qualitative results demonstrating HandNeRF's ability to reconstruct objects using in-house data. The annotation process for this data is fully automated in a casual environment, which uses only 7 sparse view RGB cameras, without the need for 3D CAD model generation or 6D object pose labeling. Finally, we experimentally demonstrate that HandNeRF enables more accurate execution of a downstream task of grasping for robotic hand-over and collision-free motion planning.
## II Related Work
Our work, HandNeRF, lies at the intersection of understanding 3D hand-object interaction and implicit neural representations. In this section, we first review the current approaches for 3D hand-object interaction reconstruction from a single RGB camera. Then, we discuss recent methods for sparse view-specific implicit neural representations, relevant to our work.
**3D reconstruction of hand-object interaction:** The study of 3D hand-object interaction understanding [12, 18, 19, 20] concerns the semantic reconstruction of hand and object geometry. In the context of this task, the existing methods for hand and object reconstruction are primarily template-based, where the template is a known 3D CAD model of a hand and an object. The 3D hand reconstruction focuses on predicting mesh parameters, such as MANO [21], and has seen significant advances due to large-scale datasets [7, 9, 10] and the success of deep learning-based approaches [22, 6, 23]. In contrast, 3D object reconstruction is approached as 6D pose estimation [1, 2, 3], which predicts the transformation that fits the known 3D CAD model to the input observation.
The template-based approach for object reconstruction has two main limitations regarding collection of training data in the real world. First, it is costly and labor-intensive to obtain every object's 3D CAD model, requiring 3D laser scanning or a dense multi-view camera setup. Second, labeling 6D object poses in hand-object scenes is challenging due to hand occlusions and becomes more ambiguous if the captured views are not dense enough. In contrast, for training HandNeRF we require only a few sparse RGB views of hand-object interaction scenes and hand-pose annotations which can be automated [10, 11].
Recently, [19, 24, 15] proposed methods that reconstruct a hand-held object without a known template. The work of Hasson et al. [19] jointly estimated the MANO hand parameters and genus-0 object mesh by leveraging AtlasNet [25]. Karunratanakul et al. [24] characterized the surface of the hand and object with a signed distance field. Ye et al. [15] conditioned object reconstruction on the hand articulation and also predicted a signed distance field of an object. While these methods are template-free at the inference time, they still require 3D object meshes for training. Therefore, they suffer with the same data collection problems as the template-based methods.
**Implicit neural representation from sparse view RGB images:** The sparse view-specific NeRF (Neural Radiance Field) representations have gained attention in object reconstruction [26, 27, 14, 28] and 3D human body reconstruction [16, 29, 30, 31]. Without any 3D annotation, they reconstruct a plausible 3D scene when optimized over a single scene with only sparse views. These methods address the limitations of multi-view reconstruction approaches such as vanilla NeRF [32] and SfM (Structure from Motion), which require a dense capture setup and fail when given sparse views [33]. These representations have also been explored for generalization by learning object appearance and geometry priors from multiple scenes. When tested on novel scenes with unseen object poses or unseen objects, a partial 3D reconstruction is achieved, although with blurry textures and noisy geometry. This limited performance is inevitable given the sparsity of input views, but the practical value of these methods is significant. Nevertheless, scenes with a single view or hand-held objects are less studied.
Our work is most relevant to the work of Choi et al. [16], MonoNHR. It reconstructs a neural radiance field of a clothed human body from a single RGB image without ground truth 3D scans of clothed humans, by conditioning on a human body mesh [34]. While the task and approach are analogous to ours, MonoNHR does not explicitly encode correlation between the object (clothes) and hand (body).
## III Method
The motivation for HandNeRF is to tackle the challenges of 3D scene reconstruction from a 2D RGB image, such as depth uncertainty and partial observation. HandNeRF achieves this by learning a hand-object interaction feature that correlates the hand and object geometry, given a 3D hand shape and 2D object segmentation.
### _Modeling Hand-object Interaction_
Consider a point on a 3D object, \(\mathbf{x}_{o}\in\mathds{R}^{3}\) where its occupancy or density is \(\sigma\in[0,1]\), i.e., one if occupied, and zero otherwise. The problem of 3D reconstruction of the object can be cast as learning a function that predicts the density given the location and associated 3D feature \(\mathbf{f}_{o}\):
\[f(\mathbf{x}_{o},\mathbf{f}_{o})=\sigma, \tag{1}\]
where \(f\) is an implicit function whose zero-level set defines the surface of the object. Despite the success of representing objects [14] and humans [35], Equation (1) has a limited capability to express complex scenes such as hand-object interaction scenes.
We extend Equation (1) by incorporating the interactions between the object and hand. Consider a 3D hand mesh \(\mathcal{M}=\{\mathbf{m}_{i}\}\) that is made of a set of faces, where \(\mathbf{m}_{i}\) is the \(i^{\mathrm{th}}\) face of the mesh. Each face in the mesh is associated with a 3D feature \(\mathbf{f}_{h}\). We marginalize the density of the object over the density predicted by the hand mesh:
\[f(\mathbf{x}_{o},\mathbf{f}_{o})=\sum_{\mathbf{x}_{h}\in\mathcal{M}}f( \mathbf{x}_{o},\mathbf{f}_{o}|\mathbf{x}_{h},\mathbf{f}_{h})f(\mathbf{x}_{h}, \mathbf{f}_{h}), \tag{2}\]
where \(\mathbf{x}_{h}\) is the centroid of the vertices of the face \(\mathbf{m}_{i}\), \(f(\mathbf{x}_{o},\mathbf{f}_{o}|\mathbf{x}_{h},\mathbf{f}_{h})\) is the conditional density given the hand pose and its feature, and \(f(\mathbf{x}_{h},\mathbf{f}_{h})=\{0,1\}\) is the hand occupancy provided by 3D hand mesh estimation.
Learning \(f(\mathbf{x}_{o},\mathbf{f}_{o}|\mathbf{x}_{h},\mathbf{f}_{h})\) is challenging due to the quadratic complexity of pairwise relationship between all possible pairs of hand and object points \((\mathbf{x}_{h},\mathbf{x}_{o})\). Instead, we propose to learn an interaction feature \(\mathcal{F}\), a correlation between \(\mathbf{f}_{o}\) and \(\mathbf{f}_{h}\) through a series of 3D convolutions:
\[\mathcal{F}=\phi_{n}\circ\cdots\circ\phi_{1}\circ\mathcal{V}, \tag{3}\]
where \(\mathcal{F}\in\mathds{R}^{w\times h\times d\times c}\) is the volume of the interaction features with \(w\) width, \(h\) height, \(d\) depth, \(c\) feature dimension, and \(\phi_{1},\cdots,\phi_{n}\) are the 3D convolutional filters. The interaction feature \(\mathcal{F}|_{\mathbf{x}_{o}}\) evaluated at an object point \(\mathbf{x}_{o}\) is expected to encode how hand surface points contribute to predicting the occupancy of the point \(\mathbf{x}_{o}\) of the object. The input to the 3D CNN is \(\mathcal{V}\in\mathds{R}^{w\times h\times d\times c^{\prime}}\), which is the feature volume with the \(c^{\prime}\) feature dimension that includes both hand and object features:
\[\mathcal{V}_{\mathbf{x}}=\left\{\begin{array}{ll}\mathbf{f}_{h}&\mathrm{if}\quad\mathbf{x}\in\{\overline{\mathbf{m}}_{i}\}\\ \mathbf{f}_{o}&\mathrm{else\ if}\quad\Pi\mathbf{x}\in\mathcal{O}\\ \mathbf{0}&\mathrm{otherwise}\end{array}\right., \tag{4}\]
where \(\mathcal{V}_{\mathbf{x}}\) is the feature at \(\mathbf{x}\), \(\{\overline{\mathbf{m}}_{i}\}\) is a set of the centroids of the hand mesh faces, \(\mathrm{II}\mathbf{x}\) is the camera projection of \(\mathbf{x}\) to the input image, and \(\mathcal{O}\) is the 2D input object mask.
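To make Equation (4) concrete, the following is a minimal sketch, in our own notation rather than the authors' released code, of how the mixed hand/object feature volume \(\mathcal{V}\) could be assembled; the grid parameters and helper names are illustrative assumptions.

```python
import numpy as np

def build_feature_volume(grid_shape, voxel_size, origin,
                         hand_centroids, hand_feats,     # (F, 3), (F, c')
                         object_points, object_feats):   # (N, 3), (N, c')
    """Scatter hand-face features and mask-lifted object features into a
    shared voxel grid, mirroring Eq. (4); untouched voxels stay zero."""
    c = hand_feats.shape[1]
    V = np.zeros((*grid_shape, c), dtype=np.float32)

    def to_index(pts):
        idx = np.floor((pts - origin) / voxel_size).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
        return idx[ok], ok

    # Hand faces take priority: V_x = f_h if x is a face centroid.
    idx, ok = to_index(hand_centroids)
    V[idx[:, 0], idx[:, 1], idx[:, 2]] = hand_feats[ok]

    # Object points fill the remaining voxels: V_x = f_o if Pi(x) is in O.
    idx, ok = to_index(object_points)
    empty = np.all(V[idx[:, 0], idx[:, 1], idx[:, 2]] == 0, axis=1)
    V[idx[empty, 0], idx[empty, 1], idx[empty, 2]] = object_feats[ok][empty]
    return V
```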
With the interaction feature \(\mathcal{F}\), we extend Equation (1) to include the color, \(\mathbf{c}\in\mathds{R}^{3}\), and semantic label \(\mathbf{l}\in[0,1]^{L}\) where \(L=3\) is the number of semantic classes (i.e., hand, object, and background):
\[f(\mathbf{x}_{o},\mathbf{d},\mathbf{f}_{\mathrm{2D}},\mathcal{F}|_{\mathbf{x} _{o}})=(\sigma,\mathbf{c},\mathbf{l}), \tag{5}\]
where \(\mathbf{d}\) is the rendering viewing direction, and \(\mathbf{f}_{\mathrm{2D}}\) is the pixel-aligned image feature of \(\mathbf{x}_{o}\). With the prediction of the volume density, color radiance, and semantic label, we render each pixel with its label by integrating the density field. Please refer to Semantic-NeRF [36] for more technical detail of semantic neural rendering.
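For readers less familiar with semantic neural rendering, the sketch below shows the standard NeRF-style quadrature that the prediction of Equation (5) feeds into; `field` is a stand-in for the learned implicit function, and the uniform sampling is our simplifying assumption, not necessarily the paper's exact scheme.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def render_ray(field, ray_o, ray_d, t_near, t_far, n_samples=64):
    """Integrate density, color and semantic logits along one ray.
    `field(x, d) -> (sigma, rgb, label_logits)` plays the role of Eq. (5)."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.append(np.diff(t), 1e10)
    xs = ray_o + t[:, None] * ray_d

    sigma, rgb, logits = zip(*[field(x, ray_d) for x in xs])
    sigma, rgb, logits = map(np.asarray, (sigma, rgb, logits))

    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # transmittance
    w = alpha * trans                                     # rendering weights

    color = (w[:, None] * rgb).sum(axis=0)                # C_hat(p)
    label = (w[:, None] * softmax(logits)).sum(axis=0)    # L_hat(p)
    return color, label
```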
Fig. 2 shows the impact of the interaction feature \(\mathcal{F}\). Our method and MonoNHR [16] both use 3D CNNs to encode features volumetrically based on an estimated 3D hand mesh. Unlike MonoNHR, ours explicitly learns hand-object interactions as elaborated above, enabling robust object geometry reconstruction even for unobserved and occluded surfaces.
### _Implementation of HandNeRF_
We learn the representation of the hand-object interaction by minimizing the following loss:
\[\mathcal{L}=\sum_{\mathbf{p}\in\mathcal{R}}\left(\left\|\hat{C}(\mathbf{p})- C(\mathbf{p})\right\|_{2}^{2}-\sum_{i=1}^{L}L_{i}(\mathbf{p})\log(\hat{L}_{i}( \mathbf{p}))\right)\]
where \(\mathcal{R}\) is a set of pixels in multiview images, \(\hat{C}(\mathbf{p})\) and \(C(\mathbf{p})\) are the predicted and ground truth color of pixel \(\mathbf{p}\), respectively, and \(\hat{L}_{i}(\mathbf{p})\) and \(L_{i}(\mathbf{p})\) are the predicted and ground truth semantic label at pixel \(\mathbf{p}\).
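A compact sketch of this loss over a batch of rays, assuming one-hot ground truth labels and predicted per-class probabilities; variable names are ours.

```python
import numpy as np

def handnerf_loss(pred_color, gt_color, pred_label, gt_label_onehot, eps=1e-8):
    """Squared color error plus semantic cross-entropy, summed over the
    sampled pixels R, mirroring the displayed loss."""
    photo = ((pred_color - gt_color) ** 2).sum(axis=-1)   # ||C_hat - C||^2
    ce = -(gt_label_onehot * np.log(pred_label + eps)).sum(axis=-1)
    return (photo + ce).sum()
```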
We design a novel network called _HandNeRF_ that predicts a semantic neural radiance field from a single RGB image, as shown in Fig. 3. It is composed of ResNet-18 [37] for feature extraction, sparse 3D convolution layers [38] for volume feature encoding, and linear layers for the neural field estimation. During training, the estimated semantic neural radiance field is validated by projecting on sparse views.
**Input Features:** We deproject a 2D image feature extracted from the image to the points in the 3D volume \(\mathcal{V}\) to compose the 3D hand feature \(\mathbf{f}_{h}\) and the 2D object feature \(\mathbf{f}_{o}\) in Equation (4). The 3D hand feature has three components: \(\mathbf{f}_{h}=\begin{bmatrix}\mathbf{h}^{\mathsf{T}}&\phi(\overline{\mathbf{m}}_{i})^{\mathsf{T}}&\psi(i)^{\mathsf{T}}\end{bmatrix}^{\mathsf{T}}\), where \(\mathbf{h}\) encodes the visual context of hand-object interaction, obtained from the 2D image feature at the projection of \(\overline{\mathbf{m}}_{i}\), \(\phi(\overline{\mathbf{m}}_{i})\) is a positional encoding of the centroid's coordinates, and \(\psi(i)\) is a positional encoding of the face index. \(\psi(i)\) semantically distinguishes a 3D hand point \(\mathbf{x}_{h}\) from points that are empty, belong to a different hand face, or belong to the object. The 2D object feature is designed in a similar fashion: \(\mathbf{f}_{o}=\begin{bmatrix}\mathbf{o}^{\mathsf{T}}&\phi(\mathbf{x}_{o})^{\mathsf{T}}&\mathbf{e}^{\mathsf{T}}\end{bmatrix}^{\mathsf{T}}\), where \(\mathbf{o}\) is the 2D image feature at the projection of \(\mathbf{x}_{o}\), and \(\mathbf{e}\) is a constant value shared by all \(\mathbf{x}_{o}\in\{\mathbf{x}\mid\Pi\mathbf{x}\in\mathcal{O}\text{ and }\mathbf{x}\notin\{\overline{\mathbf{m}}_{i}\}\}\).
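As a sketch of how these per-point input features could be assembled, the snippet below uses a standard NeRF-style sinusoidal encoding; the number of frequency bands and the helper names are assumptions of ours, not values from the paper.

```python
import numpy as np

def posenc(x, n_freq=4):
    """Sinusoidal positional encoding of a scalar or a coordinate vector."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float32))
    bands = 2.0 ** np.arange(n_freq) * np.pi
    return np.concatenate([f(b * x) for b in bands for f in (np.sin, np.cos)])

def hand_point_feature(img_feat_at_proj, centroid, face_index):
    """f_h = [h ; phi(centroid) ; psi(face index)], as in the text."""
    return np.concatenate([img_feat_at_proj,
                           posenc(centroid),
                           posenc(float(face_index))])

def object_point_feature(img_feat_at_proj, x, e_const):
    """f_o = [o ; phi(x) ; e], with e a constant tag shared by object points."""
    return np.concatenate([img_feat_at_proj, posenc(x), e_const])
```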
**3D CNN Design:** We correlate the 3D hand feature \(\mathbf{f}_{h}\) and the 2D object feature \(\mathbf{f}_{o}\) with a sparse 3D CNN [38] that takes the feature volume \(\mathcal{V}\) as input, to learn the interaction feature. \(\mathcal{V}\) rasterizes 3D point coordinates in the neural radiance field with a voxel size of 5mm\(\times\)5mm\(\times\)5mm. Before the rasterization, the 3D coordinates of object points are perturbed by random Gaussian noise during training, for augmentation.
Fig. 2: We visualize object reconstruction with the hand estimation from HandOccNet [5]. Using explicit hand-object interaction features, HandNeRF generates more accurate reconstruction.
The sparse 3D CNN produces multi-scale feature volumes, which conceptually add up to the interaction feature volume \(\mathcal{F}\) by concatenation along the feature channel dimension. In practice, we keep the feature volumes separated and extract the interaction feature \(\mathcal{F}|_{\mathbf{x}}\) of a query point \(\mathbf{x}\) per volume with tri-linear interpolation.
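The multi-scale query could look as follows; this is a sketch assuming `scipy`, where `map_coordinates` with `order=1` performs the tri-linear interpolation and the per-channel loop is written for clarity rather than speed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def query_interaction_feature(volumes, x, origin, voxel_sizes):
    """Tri-linearly interpolate each multi-scale feature volume at point x
    and concatenate along channels, giving F|_x; volumes[k] is (w,h,d,c_k)."""
    feats = []
    for vol, vs in zip(volumes, voxel_sizes):
        coord = (np.asarray(x) - origin) / vs        # continuous voxel coords
        per_ch = [map_coordinates(vol[..., c], coord[:, None], order=1)[0]
                  for c in range(vol.shape[-1])]
        feats.append(np.array(per_ch))
    return np.concatenate(feats)
```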
## IV Experiments
In this section, we first validate our design of HandNeRF by conducting detailed ablation studies. Then, we evaluate against the state-of-the-art baselines [14, 15, 16], which are adapted to be trained on sparse-view images without an object template and to be tested on a single image. We adapted IHOI [15] to training with sparse-view images, instead of the template-based object annotation used in the original paper, and named it IHOINeRF. For IHOINeRF, training with semantic labels fails to converge, so reconstruction accuracy cannot be measured separately for hand and object.
**Metrics:** To assess the rendering quality, we use four metrics computed against the ground truth images: peak signal-to-noise ratio (PSNR), semantic segmentation intersection over union (IoU), structural similarity index (SSIM), and LPIPS [39]. For the 3D reconstruction accuracy, we compare the 3D distance to the ground truth by converting the reconstructed neural radiance field to a 3D mesh using the Marching Cubes algorithm [40]. F-scores at 5mm and 10mm thresholds, and Chamfer distance (CD) in millimeters, are used. We evaluate hand and object separately using 3D segmentation from the predicted semantics.
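For reference, the 3D metrics can be computed from sampled surface point sets as sketched below; the Chamfer normalization shown (sum of the two directed means) is one common convention and is our assumption, not necessarily the paper's exact choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def fscore_and_chamfer(pred_pts, gt_pts, tau=5.0):
    """F-score at threshold tau (mm) and symmetric Chamfer distance (mm)."""
    d_pg = cKDTree(gt_pts).query(pred_pts)[0]   # pred -> gt distances
    d_gp = cKDTree(pred_pts).query(gt_pts)[0]   # gt -> pred distances
    precision = (d_pg < tau).mean()
    recall = (d_gp < tau).mean()
    f = 2 * precision * recall / max(precision + recall, 1e-8)
    chamfer = d_pg.mean() + d_gp.mean()
    return f, chamfer
```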
**Datasets:** We use the DexYCB [9] and HO-3D v3 [13] datasets for comparison. In DexYCB, a hand performing object handover is captured from 8 views, and 5 sequences per object are recorded, where each sequence shows a distinct grasp pattern. Per object, we keep 4 sequences for training and 1 sequence for testing to validate generalization to novel hand grasps. In HO-3D v3, an object grasped in a hand is captured from 5 views and 1 sequence per object is recorded, where the grasping hand pose changes over time during the sequence. For every object, we split the data into training and testing sets such that the testing set has grasping hand poses significantly different from those in the training set.
### _Ablation Study_
In Table I, we summarize our ablation study to measure the impact of our design choices. To focus the evaluation on generalization to novel grasp configurations, we train each method per object and average the metrics in the table. We use 4 objects from the DexYCB dataset with distinct shapes: 'Cracker Box', 'Banana', 'Power Drill', and 'Coffee Can'. At inference time, all methods use the same 3D hand mesh and 2D object segmentation provided in DexYCB.
**Effect of explicit interaction encoding:** Our main hypothesis is that learning the correlation between hand and object geometry explicitly can regulate the 3D reconstruction of the grasped object given the 3D hand shape. We compare two methods to validate this hypothesis: (M2: \(\mathbf{f}_{\mathrm{2D}}\)) PixelNeRF [14] adapted to a single input image that uses the 2D image feature without the 3D hand feature and (M3: \(\mathbf{f}_{h},\mathbf{f}_{\mathrm{o}}\)) a method that uses the 3D hand and 2D object feature without the 2D image feature. As shown in Table I, by exploiting the 3D hand feature, M3 successfully imposes constraints on the relative geometries of hand and object, and provides
\begin{table}
\begin{tabular}{l|c|c c c c|c c c|c c c|c c c} \hline Method: features & Architect. & PSNR\(\uparrow\) & IoU\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & F\({}_{\mathrm{w}}\)-5 \(\uparrow\) & F\({}_{\mathrm{w}}\)-10 \(\uparrow\) & CD\({}_{\mathrm{w}}\)\(\downarrow\) & F\({}_{\mathrm{o}}\)-5 \(\uparrow\) & F\({}_{\mathrm{o}}\)-10 \(\uparrow\) & CD\({}_{\mathrm{o}}\)\(\downarrow\) & F\({}_{\mathrm{h}}\)-5 \(\uparrow\) & F\({}_{\mathrm{h}}\)-10 \(\uparrow\) & CD\({}_{\mathrm{h}}\)\(\downarrow\) \\ \hline M1: \(\mathbf{f}_{h},\mathbf{f}_{o}\)+\(\mathbf{f}_{\mathrm{2D}}\) & Transf. & 19.40 & 0.61 & 0.63 & 0.30 & 0.36 & 0.64 & 0.85 & 0.32 & 0.56 & 1.57 & 0.27 & 0.55 & 1.42 \\ M2: \(\mathbf{f}_{\mathrm{2D}}\) & & 19.09 & 0.61 & 0.60 & 0.31 & 0.30 & 0.56 & 1.17 & 0.28 & 0.47 & 2.90 & 0.19 & 0.49 & 1.56 \\ M3: \(\mathbf{f}_{h},\mathbf{f}_{o}\) & & 20.11 & 0.77 & 0.65 & 0.27 & 0.47 & 0.78 & 0.30 & 0.41 & 0.68 & 0.62 & **0.54** & **0.92** & 0.12 \\ M4: \(\mathbf{f}_{h}\)+\(\mathbf{f}_{\mathrm{2D}}\) & & 20.31 & 0.72 & 0.68 & 0.26 & 0.40 & 0.68 & 0.79 & 0.30 & 0.53 & 1.73 & **0.54** & **0.92** & **0.09** \\
**M5 (ours): \(\mathbf{f}_{h},\mathbf{f}_{o}\)+\(\mathbf{f}_{\mathrm{2D}}\)** & & **21.66** & **0.79** & **0.70** & **0.24** & **0.47** & **0.79** & **0.27** & **0.43** & **0.70** & **0.56** & 0.53 & 0.91 & 0.10 \\ \hline \end{tabular}
\end{table} TABLE I: Ablation study. Our model M5 provides the highest rendering quality, F-scores, and lowest Chamfer Distances (CD) for novel hand-object interaction scenes’ object geometry. Subscripts \({}_{w}\), \({}_{o}\), and \({}_{h}\) indicate whole, object, and hand evaluation respectively.
Fig. 3: HandNeRF takes a single RGB image and predicts the volume density, color radiance, and semantic label of each query point in a neural field. Different from the comparable works of Ye et al. [15] and Choi et al. [16], which implicitly learn the interaction between hand and object, it explicitly encodes the correlation between hand and object features in 3D space, which provides more accurate 3D reconstruction and novel view synthesis.
better generalization to novel hand-object interactions. The results support that our learned interaction feature \(\mathcal{F}\), which explicitly encodes hand-object correlations, is effective for inferring 3D object geometry without requiring its 3D ground truth during training. The benefit of the 3D feature is more pronounced for our method (M5: \(\mathbf{f}_{h},\mathbf{f}_{o}+\mathbf{f}_{2\mathrm{D}}\)), which leverages both the 2D image feature and the 3D feature.
**Significance of 2D object feature:** HandNeRF differs from the existing approaches [15, 16] by using explicit representation of an object with respect to a 3D hand. To verify the effectiveness of the 2D object feature, we compare two methods: (M4: \(\mathbf{f}_{h}+\mathbf{f}_{2\mathrm{D}}\)) a method that implicitly learns the hand-object interactions similar to MonoNHR [16] and (M5: \(\mathbf{f}_{h},\mathbf{f}_{o}+\mathbf{f}_{2\mathrm{D}}\)) our method that explicitly models the
\begin{table}
\begin{tabular}{l|c|c c c c|c c c|c c c|c c c} \hline Method & Dataset & PSNR\(\uparrow\) & IoU\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & F\({}_{\mathrm{w}}\)-5\(\uparrow\) & F\({}_{\mathrm{w}}\)-10\(\uparrow\) & CD\({}_{\mathrm{w}}\)\(\downarrow\) & F\({}_{\mathrm{o}}\)-5\(\uparrow\) & F\({}_{\mathrm{o}}\)-10\(\uparrow\) & CD\({}_{\mathrm{o}}\)\(\downarrow\) & F\({}_{\mathrm{h}}\)-5\(\uparrow\) & F\({}_{\mathrm{h}}\)-10\(\uparrow\) & CD\({}_{\mathrm{h}}\)\(\downarrow\) \\ \hline \hline PixelNeRF & & 19.09 & 0.61 & 0.60 & 0.31 & 0.30 & 0.56 & 1.17 & 0.28 & 0.47 & 2.90 & 0.19 & 0.49 & 1.56 \\ IHOINeRF & & 18.49 & - & 0.60 & 0.31 & 0.31 & 0.60 & 1.15 & - & - & - & - & - & - \\ IHOINeRF\(\dagger\) & & 19.82 & - & 0.64 & 0.27 & 0.38 & 0.69 & 0.54 & - & - & - & - & - & - \\ MonoNHR & DexYCB & 19.36 & 0.63 & 0.64 & 0.30 & 0.37 & 0.62 & 3.19 & 0.25 & 0.43 & 8.59 & 0.49 & 0.89 & 0.11 \\ MonoNHR\(\dagger\) & & 19.66 & 0.68 & 0.66 & 0.29 & 0.40 & 0.64 & 3.05 & 0.26 & 0.45 & 8.58 & **0.55** & **0.92** & **0.08** \\ HandNeRF & & 21.19 & 0.75 & 0.68 & 0.25 & 0.44 & 0.77 & 0.31 & 0.42 & 0.68 & 0.59 & 0.46 & 0.88 & 0.12 \\ HandNeRF\(\dagger\) & & **21.66** & **0.79** & **0.70** & **0.24** & **0.47** & **0.79** & **0.27** & **0.43** & **0.70** & **0.56** & 0.53 & 0.91 & 0.10 \\ \hline PixelNeRF & & 18.82 & 0.69 & 0.65 & 0.23 & 0.38 & 0.69 & 0.75 & 0.36 & 0.58 & 1.14 & 0.29 & 0.68 & 0.41 \\ IHOINeRF & & 18.55 & - & 0.65 & 0.23 & 0.28 & 0.56 & 1.15 & - & - & - & - & - & - \\ IHOINeRF\(\dagger\) & & 19.40 & - & 0.68 & 0.21 & 0.41 & 0.73 & 0.65 & - & - & - & - & - & - \\ MonoNHR & HO-3D v3 & 16.98 & 0.66 & 0.60 & 0.21 & 0.28 & 0.53 & 2.09 & 0.19 & 0.37 & 3.49 & 0.33 & 0.65 & 0.64 \\ MonoNHR\(\dagger\) & & 19.34 & 0.75 & 0.70 & 0.22 & 0.45 & 0.74 & 0.97 & 0.36 & 0.59 & 1.50 & 0.52 & 0.92 & **0.08** \\ HandNeRF & & 18.04 & 0.68 & 0.63 & 0.23 & 0.38 & 0.70 & 0.35 & 0.40 & 0.66 & 0.41 & 0.45 & 0.51 & 0.78 \\ HandNeRF\(\dagger\) & & **20.54** & **0.82** & **0.74** & **0.18** & **0.51** & **0.83** & **0.19** & **0.47** & **0.74** & **0.31** & **0.54** & **0.94** & **0.08** \\ \hline \end{tabular}
\end{table} TABLE II: Comparison with state-of-the-art baselines on DexYCB and HO-3D v3. Subscripts \(w\), \(o\), and \(h\) indicate whole, object, and hand evaluation, respectively. \(\dagger\) indicates use of ground truth 3D hand meshes for inputs, otherwise HandOccNet [5]’s estimation is used. MPJPEs (mean per joint position error) of the estimation are 12mm and 34mm in DexYCB and HO3D v3, respectively.
Fig. 4: Qualitative results of novel view synthesis (image, depth, and semantic segmentation) and 3D mesh on DexYCB and HO3D v3. Ground truth hand meshes are used as input.
Fig. 5: Qualitative results of novel view synthesis (image, depth, and semantic segmentation) and 3D mesh on DexYCB [9] and HO3D v3 [41], given hand mesh estimation of HandOccNet [5]. The bottom results for scissors are using ground truth hand mesh for reference.
interactions through the 2D object feature. As shown in Table I, M4 tends to overfit to hand reconstruction and produces poor results for object reconstruction. This implies that without the explicitly defined 2D object feature and its correlation with the hand pose, a strong prior coming from the given hand pose information dominates the prediction while ignoring object information from an input image.
**Effect of 3D CNN:** Learning hand-object interaction from Equation (2) is challenging due to the complex quadratic pairwise relationship, which would require a large amount of data. Instead, we approximate the quadratic relationship in 3D space using a 3D CNN, as in Equation (3). We compare against a method (M1) that directly learns all pairwise relationships between hand and object points using a Transformer [43]. As shown in Table I, using the 3D CNN (M5) outperforms M1 in all metrics. Considering the larger gap between training and testing PSNR of M1 (e.g., M1: 4.55 vs. M5: 3.6), the result indicates that our method based on a 3D CNN is resilient to overfitting. Moreover, the model complexity of M1 is over 10 times that of ours, comparing the numbers of model parameters (M1: 27.1K vs. M5: 2.1K).
### _Comparison with State-of-the-art Methods_
We first evaluate the generalization to novel grasps on the objects seen during training. We assess the generalization to novel object shape by training on 15 DexYCB objects and testing on 4 unseen ones. Finally, we demonstrate the use of reconstruction for grasp planning for robotic handover.
**Generalization to novel grasps:** Table II and Fig. 4 present quantitative and qualitative evaluation on DexYCB and
\begin{table}
\begin{tabular}{l|l|c} \hline Input & Reconstruction & Grasp proposal success ratio\(\uparrow\) \\ \hline \multirow{4}{*}{RGB} & PixelNeRF & 0.46 \\ & IHOINeRF & 0.42 \\ & MonoNHR & 0.36 \\ & HandNeRF & **0.63** \\ \hline RGBD & - & 0.26 \\ \hline GT mesh & - & 0.77 \\ \hline \end{tabular}
\end{table} TABLE IV: Downstream grasping: We compare Contact-GraspNet [42] grasp quality on HandNeRF versus baseline reconstructions for DexYCB handover scenes. HandOccNet hand estimation is used for the methods.
Fig. 8: Qualitative results of Contact-GraspNet [42]’s grasp proposals on reconstructed meshes of HandNeRF and the baselines. The ground truth MANO hand mesh, which is used as input and grasp collision filtering, is also visualized.
\begin{table}
\begin{tabular}{l|c c c c|c c c|c c c|c c c} \hline Method & PSNR\(\uparrow\) & IoU\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & F\({}_{\mathrm{w}}\)-5\(\uparrow\) & F\({}_{\mathrm{w}}\)-10\(\uparrow\) & CD\({}_{\mathrm{w}}\)\(\downarrow\) & F\({}_{\mathrm{o}}\)-5\(\uparrow\) & F\({}_{\mathrm{o}}\)-10\(\uparrow\) & CD\({}_{\mathrm{o}}\)\(\downarrow\) & F\({}_{\mathrm{h}}\)-5\(\uparrow\) & F\({}_{\mathrm{h}}\)-10\(\uparrow\) & CD\({}_{\mathrm{h}}\)\(\downarrow\) \\ \hline PixelNeRF & 18.85 & 0.58 & 0.58 & 0.34 & 0.21 & 0.47 & 1.09 & 0.22 & 0.44 & 1.31 & 0.10 & 0.31 & 2.09 \\ IHOINeRF & 17.89 & - & 0.58 & 0.34 & 0.21 & 0.45 & 571.55 & - & - & - & - & - & - \\ IHOINeRF\(\dagger\) & 19.94 & - & 0.64 & 0.30 & 0.43 & 0.67 & 1.03 & - & - & - & - & - & - \\ MonoNHR & 17.14 & 0.45 & 0.53 & 0.37 & 0.27 & 0.51 & 1.70 & 0.17 & 0.33 & 54.89 & 0.40 & 0.73 & 0.73 \\ MonoNHR\(\dagger\) & 19.27 & 0.67 & 0.63 & 0.31 & 0.47 & 0.71 & 1.22 & 0.41 & 0.62 & 1.77 & **0.54** & **0.82** & **0.50** \\ HandNeRF & 18.85 & 0.56 & 0.55 & 0.33 & 0.30 & 0.61 & 0.62 & 0.25 & 0.49 & 0.85 & 0.36 & 0.69 & 0.88 \\ HandNeRF\(\dagger\) & **20.83** & **0.72** & **0.66** & **0.27** & **0.51** & **0.75** & **0.72** & **0.46** & **0.68** & **1.11** & 0.51 & 0.80 & 0.52 \\ \hline \end{tabular}
\end{table} TABLE III: Comparison with state-of-the-art baselines on unseen objects of DexYCB [9]. Subscripts \(w\), \(o\), and \(h\) indicate whole, object, and hand evaluation, respectively. The mesh estimate inputs are from HandOccNet [5]. \(\dagger\) indicates using ground truth 3D hand meshes.
Fig. 6: HandNeRF generalizes to significantly different grasp configurations at the inference time on in-house data. The reconstructed object meshes are visualized with the input hand mesh.
Fig. 7: Qualitative results of generalization to novel object shapes. The reconstructed 3D object mesh is visualized along with the ground truth MANO hand mesh, which is used as input.
HO3D v3. Given the ground truth grasping hand shape, HandNeRF shows the highest rendering quality scores and the highest F-scores, and the lowest CD (mm) for the whole and object geometry of the novel hand-object interaction scenes. For example, HandNeRF achieves approximately 1.5 times higher F-scores and significantly lower CD for object reconstruction than those of PixelNeRF [14] and MonoNHR [16]. This demonstrates HandNeRF's effective hand-object interaction priors compared to the baselines.
We also demonstrate HandNeRF's robustness to erroneous input hand meshes in Fig. 5, which is quantitatively verified in Table II. When using the hand pose estimation from HandOccNet [5], instead of the ground truth, small pose errors in the input hand mesh significantly impact IHOINeRF and MonoNHR outputs. These methods fail to recover half the scissors, implying limitations of implicit interaction encoding. In contrast, given inaccurate 3D hand input, HandNeRF retains reasonable reconstruction quality and renders more accurate novel views far from the input.
We further qualitatively demonstrate HandNeRF's generalization capability on in-house data in Fig. 6. Only 7 RGB cameras are used for data collection, and the annotation of the 3D hand mesh and 2D object segmentation is fully automated. Without ground truth 3D object geometry during training, HandNeRF exhibits good generalization to significantly different grasping poses and reasonably reconstructs the grasped object from a single RGB image, leveraging the learned hand-object interaction prior.
**Generalization to novel object shape:** As shown in Table III and Fig. 7, HandNeRF consistently outperforms baselines on most metrics, especially reconstructing robust object meshes despite substantial shape dissimilarity between training and test sets. Due to depth ambiguity in a single 2D RGB image, baselines fail to recover overall object shape. For instance, the 'Banana' is only partially reconstructed as its length is unclear from the input. These results demonstrate HandNeRF's superior generalization, likely due to the explicit hand and object geometry encoding effectively regularizing plausible novel object geometry.
**Application to grasp planning for handover:** We evaluate grasp proposals from Contact-GraspNet [42] on reconstructed meshes of HandNeRF and the baselines [14, 15, 16], RGBD pointcloud of the input image, and ground truth meshes. Grasps colliding with the hand mesh are filtered out before evaluation. We measure the ratio of successful grasp proposals, where a grasp is counted as successful if it envelops a part of the ground truth object mesh without colliding with it. Unseen scenes of two DexYCB objects ('Banana', 'Power Drill') are used, as Contact-GraspNet performed reliably on their ground truth meshes.
HandNeRF's object reconstruction enables a 1.5 times higher grasp proposal success ratio compared to baselines, as shown in Table IV. Without depth information, HandNeRF achieves a 63% grasp success ratio, approaching the 77% achieved by Contact-GraspNet using ground truth meshes and far exceeding the 26% from the input image pointcloud. Fig. 8 visually demonstrates how HandNeRF's more accurate reconstruction increases successful grasp proposals. Although the surface is locally coarse, HandNeRF's reconstructed global geometry, including the unobserved regions, enables more accurate grasp planning.
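To make the success criterion concrete, here is a crude geometric stand-in for the test described above; Contact-GraspNet's own grasp frames and the hand-collision filtering are abstracted away, and the box dimensions are made-up placeholders.

```python
import numpy as np

def grasp_succeeds(T_grasp, object_pts, close_box=(20, 10, 40), body_box=(60, 30, 40)):
    """Approximate the success test on point samples of the GT object mesh:
    succeed if some object points lie inside the closing region between the
    fingers (envelopment) while none penetrate the gripper body outside that
    region (collision). Boxes are (x, y, z) extents in mm in the grasp frame."""
    R, t = T_grasp[:3, :3], T_grasp[:3, 3]
    p = (object_pts - t) @ R                   # world -> grasp frame
    inside = lambda pts, box: np.all(np.abs(pts) < np.array(box) / 2, axis=1)
    enveloped = inside(p, close_box)
    colliding = inside(p, body_box) & ~enveloped
    return bool(enveloped.any() and not colliding.any())
```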
## V Limitation and Future work
The main practical limitation of our method is that it strongly depends on the hand mesh estimation of off-the-shelf methods. Despite the advance of recent methods [5, 6, 23], when the hand is severely occluded by the object, the estimated mesh is not accurate enough for inferring further correlation between the hand and object geometry. In such cases, the wrongly estimated hand information can instead hurt the object reconstruction. In the future, we will explore integrating the hand mesh estimation into our system along with uncertainty modeling to adjust the hand mesh's impact on the final output.
Despite outperforming the baselines, our synthesized RGB images are still blurry when rendered from views significantly different from the input view. Inspired by recent progress on 3D scene generation via language grounding [44], another avenue for future research is to leverage self-supervised perceptual supervision, such as CLIP [45] feature consistency and object coherency.
## VI Conclusion
This work investigates representation learning for hand-object interactions from a single RGB image. We propose HandNeRF, a method that predicts the semantic neural radiance field of the interaction scene. The key novelty is the utilization of the hand shape to constrain the relative 3D configuration of hands and objects, encoding their correlation explicitly. Unlike existing works, HandNeRF does not require object templates for training and testing, avoiding expensive 3D labeling. Instead, it is supervised with sparse-view RGB images, where conventional multi-view reconstruction methods, such as SfM (Structure from Motion), do not apply. HandNeRF outperforms state-of-the-art baselines in rendering and reconstruction on real-world data. Further, we demonstrate improved performance on downstream tasks resulting from HandNeRF's more accurate object meshes, both quantitatively and qualitatively.
| This paper proposes a method for learning a hand-object interaction prior to reconstruct 3D hand-object scenes from a single RGB image. Inferring 3D hand-object scenes and generating training data are difficult because of the depth ambiguity of a single image and the mutual occlusion of the hand and the object. We turn this challenge into an opportunity by using the hand shape to constrain the relative geometric configuration of the hand and the object. HandNeRF's generic design explicitly encodes the correlation between 3D hand shape features and 2D object features to predict the hand and object scene geometry. In experiments on real-world datasets, HandNeRF reconstructs hand-object scenes with novel grasp configurations more accurately than comparable methods. Furthermore, object reconstruction by HandNeRF enables grasping |
2302.14221 | Gröbner-Shirshov bases and linear bases for free multi-operated algebras over algebras with applications to differential Rota-Baxter algebras and integro-differential algebras | Much recent study has been attracted to operated algebras, since they unify various notions such as the differential algebra and the Rota-Baxter algebra. An $\Omega$-operated algebra is an (associative) algebra equipped with a set $\Omega$ of linear operators which may satisfy certain operator identities, such as the Leibniz rule. A free $\Omega$-operated algebra $B$ can be generated on an algebra $A$, similar to a free algebra generated on a set. If $A$ has a Gr\"{o}bner-Shirshov basis $G$ and the linear operators $\Omega$ satisfy a set $\Phi$ of operator identities, it is natural to ask when the union $G\cup \Phi$ is a Gr\"{o}bner-Shirshov basis of $B$. A previous work answers this question affirmatively under a mild condition, and thereby obtains a canonical linear basis of $B$.
In this paper, we answer this question in the general case of multiple linear operators. As applications, we obtain operated Gr\"{o}bner-Shirshov bases for free differential Rota-Baxter algebras and free integro-differential algebras over algebras, as well as their linear bases. One of the key technical difficulties is to introduce new monomial orders for the case of two operators, which might be of independent interest. | Zuan Liu, Zihao Qi, Yufei Qin, Guodong Zhou | 2023-02-28T00:54:36 | http://arxiv.org/abs/2302.14221v3 | # Gröbner-Shirshov bases and linear bases for free multi-operated algebras over algebras with applications to differential Rota-Baxter algebras and integro-differential algebras
###### Abstract.
Much recent study has been attracted to operated algebras, since they unify various notions such as the differential algebra and the Rota-Baxter algebra. An \(\Omega\)-operated algebra is an (associative) algebra equipped with a set \(\Omega\) of linear operators which may satisfy certain operator identities, such as the Leibniz rule. A free \(\Omega\)-operated algebra \(B\) can be generated on an algebra \(A\), similar to a free algebra generated on a set. If \(A\) has a Gröbner-Shirshov basis \(G\) and the linear operators \(\Omega\) satisfy a set \(\Phi\) of operator identities, it is natural to ask when the union \(G\cup\Phi\) is a Gröbner-Shirshov basis of \(B\). A previous work answers this question affirmatively under a mild condition, and thereby obtains a canonical linear basis of \(B\).
In this paper, we answer this question in the general case of multiple linear operators. As applications, we obtain operated Gröbner-Shirshov bases for free differential Rota-Baxter algebras and free integro-differential algebras over algebras, as well as their linear bases. One of the key technical difficulties is to introduce new monomial orders for the case of two operators, which might be of independent interest.
2010 Mathematics Subject Classification: 16Z10 03C05 08B20 12H05 16S10 17B38
###### Contents
* Introduction
* 0.1 Operated GS basis theory: from a single operator to multiple operators
* 0.2 Free operated algebras over algebras
* 0.3 Differential Rota-Baxter algebras and integro-differential algebras
* 0.4 Outline of the paper
* 1 New monomial orders on free multi-operated semigroups and monoids
* 2 Gröbner-Shirshov bases for free multi-operated algebras over algebras
* 3 Free differential Rota-Baxter algebras over algebras
* 4 Free integro-differential algebras over algebras
* 4.2 Case of nonunital algebras with \(\lambda=0\)
* 4.3 Case of unital algebras
* 4.4 Differential Rota-Baxter algebras vs integro-differential algebras
## Introduction
This paper extends the results of [17] to algebras endowed with several operators, with applications to differential Rota-Baxter algebras and integro-differential algebras.
### Operated GS basis theory: from a single operator to multiple operators
Since its introduction by Shirshov [20] and Buchberger [4] in the sixties of the last century, Gröbner-Shirshov (=GS) basis theory has become one of the main tools of computational algebra; see for instance [10, 1, 3]. In order to deal with algebras endowed with operators, Guo and his coauthors introduced a GS basis theory in a series of papers [11, 23, 15, 6] (see also [2]), with the goal of attacking Rota's program [19] to classify "interesting" operators on algebras. Guo et al. considered operators satisfying certain polynomial identities, hence called operated polynomial identities (a.k.a. OPIs) [11, 23, 15, 6]. Via GS basis theory and the somewhat equivalent theory of rewriting systems, one can define when OPIs are GS. They are mainly interested in two classes of OPIs: differential type OPIs and Rota-Baxter type OPIs, which are carefully studied in [15, 23, 6]. For the state of the art, we refer the reader to the survey paper [8]; for recent developments, see [22, 12, 17, 21].
In these papers [11, 23, 15, 6], the operated GS theory, and hence Rota's classification program, has been carried out only for algebras endowed with a single operator. It would be very interesting to carry Rota's program further to the general case of multiple linear operators.
The paper [2] contains a first step of this program by developing the GS basis theory in this generalised setup. We will review and update the GS basis theory in the multi-operated setup in Section 2.
Another direction is to generalise from operated algebras over a base field to operated algebras over a base ring. While previous papers [17, 18] considered this aspect in the single-operator case, this paper aims to deal with it in the case of multiple linear operators. In particular, some new monomial orders for the two-operator case will be constructed, which enable us to study operated GS bases for free operated algebras generated by algebras; it seems that the monomial orders that appeared in previous papers cannot be applied directly when the base ring is no longer a field.
### Free operated algebras over algebras
Recently, there has been a need to develop free operated algebras satisfying some OPIs over a fixed algebra and to construct GS bases and linear bases for these free algebras, as long as a GS basis is known for the given algebra. Ebrahimi-Fard and Guo [5] used rooted trees and forests to give explicit constructions of free noncommutative Rota-Baxter algebras on modules and sets; Lei and Guo [16] constructed the linear basis of free Nijenhuis algebras over associative algebras; Guo and Li [12] gave a linear basis of the free differential algebra over associative algebras by introducing the notion of differential GS bases.
In a previous paper [17], the authors considered a question which can be roughly stated as follows:
**Question 0.1**.: Given a (unital or nonunital) algebra \(A\) with a GS basis \(G\) and a set \(\Phi\) of OPIs, assume that these OPIs \(\Phi\) are GS in the sense of [2, 15, 23, 6]. Let \(B\) be the free operated algebra satisfying \(\Phi\) over \(A\). When will \(\Phi\cup G\) be a GS basis for \(B\)?
They answer this question in the affirmative under a mild condition in [17, Theorem 5.9]. When this condition is satisfied, \(\Phi\cup G\) is a GS basis for \(B\) and as a consequence, we also get a linear basis of \(B\). This result has been applied to all Rota-Baxter type OPIs, a class of differential type OPIs, averaging OPIs and Reynolds OPI in [17]. It was also applied to differential type OPIs by introducing some new monomial orders [18].
In this paper, we consider a similar question for multi-operated algebras.
Let \(\Omega\) be a nonempty set which will be the index set of operators. Algebras endowed with operators indexed by \(\Omega\) will be called \(\Omega\)-algebras. OPIs can be extended to the multi-operated setup and one can introduce the notion of \(\Omega\)-GS for OPIs.
**Question 0.2**.: Let \(\Phi\) be a set of OPIs of a set of operators indexed by \(\Omega\). Let \(A\) be a (unital) algebra together with a GS basis \(G\). Assume that these OPIs \(\Phi\) are GS in the sense of Section 2. Let \(B\) be the free \(\Omega\)-algebra over \(A\) such that the operators satisfy \(\Phi\). When will \(\Phi\cup G\) be an \(\Omega\)-GS basis for \(B\)?
We extend the main result of [17] to multi-operated cases; see Theorem 2.12 for unital algebras and Theorem 2.13 for nonunital algebras.
### Differential Rota-Baxter algebras and integro-differential algebras
The main motivation of this paper comes, in fact, from differential Rota-Baxter algebras and integro-differential algebras.
Differential Rota-Baxter algebras were introduced by Guo and Keigher [13] which reflect the relation between the differential operator and the integral operator as in the First Fundamental Theorem of Calculus. Free differential Rota-Baxter algebras were constructed by using various tools including angularly decorated rooted forests and GS basis theory [13, 2].
Integro-differential algebras (of zero weight) were defined for the algebraic study of boundary problems for linear systems of linear ordinary differential equations. Guo, Regensburger and Rosenkranz [14] introduced integro-differential algebras with weight. Free objects and their linear bases were constructed by using GS basis theory [14, 9, 7].
The main goal of this paper is to study free differential Rota-Baxter algebras and free integro-differential algebras over algebras from the viewpoint of operated GS basis theory. In particular, when the base algebra is reduced to \(\mathbf{k}\), our results also give GS bases and linear bases for free differential Rota-Baxter algebras and free integro-differential algebras.
However, the original monomial orders used in [2, 14, 9, 7] do not satisfy the hypothesis in Theorems 2.12 and 2.13 for free multi-operated algebras over algebras, and we have to introduce a new monomial order \(\leq_{\text{PD}}\) (resp. \(\leq_{\text{uPD}}\)) to overcome the problem; see Section 1.3.
In contrast to the use of different monomial orders when dealing with free differential Rota-Baxter algebras and free integro-differential algebras in [2] and [7] respectively, we will demonstrate that our monomial order \(\leq_{\text{PD}}\) can be applied to both types of algebras simultaneously, as we shall see in Sections 3 and 4. Moreover, since the case of unital algebras was not discussed in [2], this aspect is addressed in Subsection 3.3 by using our monomial order \(\leq_{\text{uPD}}\).
### Outline of the paper
This paper is organized as follows.
The first section contains a reminder on free objects in the multi-operated setting and on the construction of free \(\Omega\)-semigroups and related structures, and introduces some new monomial orders for the case of two operators, which will be the key technical tool of this paper.
In the second section, we recall the theory of GS bases in the multi-operated setting. After introducing OPIs, the GS property for OPIs and \(\Omega\)-GS bases for multi-operated algebras are defined; after giving some facts about free multi-operated \(\Phi\)-algebras on algebras, answers to Question 0.2 are presented.
In the third section, multi-operated GS bases and linear bases for free differential Rota-Baxter algebras on algebras are studied, and the fourth section contains our investigation of free integro-differential algebras on algebras.
**Notation:** Throughout this paper, \(\mathbf{k}\) denotes a base field. All the vector spaces and algebras are over \(\mathbf{k}\).
## 1. New monomial orders on free multi-operated semigroups and monoids
In this section, we recall free objects in the multi-operated setting and the construction of free \(\Omega\)-semigroups and related structures, and we define two new monomial orders \(\leq_{\text{PD}}\) and \(\leq_{\text{uPD}}\) on free multi-operated semigroups and monoids. The main results of this paper will depend heavily on these new monomial orders.
For a set \(Z\), denote by \(\mathbf{k}Z\) (resp. \(\mathcal{S}(Z)\), \(\mathcal{M}(Z)\)) the free \(\mathbf{k}\)-vector space (resp. free semigroup, free monoid) generated by \(Z\). Denote the category of sets (resp. semigroups, monoids) by \(\mathfrak{Set}\) (resp. \(\mathfrak{Sem}\), \(\mathfrak{Mon}\)). Denote the categories of \(\mathbf{k}\)-algebras and unital \(\mathbf{k}\)-algebras by \(\mathfrak{Alg}\) and \(\mathfrak{uAlg}\) respectively.
Throughout this section, let \(\Omega\) be a nonempty set which will be the index set of operators.
### Free objects in the multi-operated setup
**Definition 1.1**.: An operated set with an operator index set \(\Omega\), or simply an \(\Omega\)-set, is a set \(S\) endowed with a family of maps \(P_{\omega}:S\to S\) indexed by \(\omega\in\Omega\). Morphisms between \(\Omega\)-sets are defined in the obvious way. Denote the category of \(\Omega\)-sets by \(\Omega\)-\(\mathfrak{Set}\).
Similarly, we can define \(\Omega\)-semigroups and \(\Omega\)-monoids. Their categories are denoted by \(\Omega\)-\(\mathfrak{Sem}\) and \(\Omega\)-\(\mathfrak{Mon}\) respectively.
\(\Omega\)-vector spaces and nonunital or unital \(\Omega\)-algebras can be defined in a similar way, except asking, moreover, that all the operators be \(\mathbf{k}\)-linear maps. Denote the categories of \(\Omega\)-vector spaces, nonunital \(\Omega\)-algebras and unital \(\Omega\)-algebras by \(\Omega\)-\(\mathfrak{Vect}\), \(\Omega\)-\(\mathfrak{Alg}\) and \(\Omega\)-\(\mathfrak{uAlg}\) respectively, with the obvious morphisms.
As in [17], there exists the following diagram of functors:
In this diagram, all functors from right to left, from below to above and from southwest to northeast are the obvious forgetful functors. The other functors are free object functors which are left adjoint to the forgetful functors.
Our notations for free object functors are analogous to those in [17]. For instance, \(\mathcal{F}_{\mathfrak{Alg}}^{\Omega\text{-}\mathfrak{Alg}}\) denotes the free object functor from the category of algebras to that of nonunital \(\Omega\)-algebras.
We could give constructions of these free object functors similar to those in Sections 1-3 of [17]. However, as we do not need the details, we will not repeat them; the curious reader may consult [17] and extend the constructions there without essential difficulty.
### Free multi-operated semigroups and monoids
Now we explain the construction of the free \(\Omega\)-semigroup generated by a set \(Z\).
For \(\omega\in\Omega\), denote by \(\lfloor Z\rfloor_{\omega}\) the set of all formal elements \(\lfloor z\rfloor_{\omega}\), \(z\in Z\), and put \(\lfloor Z\rfloor_{\Omega}=\sqcup_{\omega\in\Omega}\,\lfloor Z\rfloor_{\omega}\). The inclusion into the first component \(Z\hookrightarrow Z\sqcup\lfloor Z\rfloor_{\Omega}\) induces an injective semigroup homomorphism
\[i_{0,1}:\,\mathfrak{S}_{\Omega,0}(Z):=\mathcal{S}(Z)\hookrightarrow\mathfrak{S}_{\Omega,1}(Z):=\mathcal{S}(Z\sqcup\lfloor Z\rfloor_{\Omega}).\]
For \(n\geq 2\), assume that we have constructed \(\mathfrak{S}_{\Omega,n-2}(Z)\) and \(\mathfrak{S}_{\Omega,n-1}(Z)=\mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n-2}(Z)\rfloor_{\Omega})\), endowed with an injective homomorphism of semigroups \(i_{n-2,n-1}:\,\mathfrak{S}_{\Omega,n-2}(Z)\hookrightarrow\mathfrak{S}_{\Omega,n-1}(Z).\) We define the semigroup
\[\mathfrak{S}_{\Omega,n}(Z):=\mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n- 1}(Z)\rfloor_{\Omega})\]
and the natural injection
\[\operatorname{Id}_{Z}\sqcup\lfloor i_{n-2,n-1}\rfloor_{\Omega}:Z\sqcup \lfloor\mathfrak{S}_{\Omega,n-2}(Z)\rfloor_{\Omega}\hookrightarrow Z\sqcup \lfloor\mathfrak{S}_{\Omega,n-1}(Z)\rfloor_{\Omega}\]
induces an injective semigroup homomorphism
\[i_{n-1,n}:\,\mathfrak{S}_{\Omega,n-1}(Z)=\mathcal{S}(Z\sqcup\lfloor\mathfrak{ S}_{\Omega,n-2}(Z)\rfloor_{\Omega})\hookrightarrow\mathfrak{S}_{\Omega,n}(Z)= \mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n-1}(Z)\rfloor_{\Omega}).\]
Define \(\mathfrak{S}_{\Omega}(Z)=\varinjlim\mathfrak{S}_{\Omega,n}(Z)\); the maps sending \(u\in\mathfrak{S}_{\Omega,n}(Z)\) to \(\lfloor u\rfloor_{\omega}\in\mathfrak{S}_{\Omega,n+1}(Z)\) induce a family of operators \(\mathcal{P}_{\omega},\omega\in\Omega\), on \(\mathfrak{S}_{\Omega}(Z)\).
The construction of the free \(\Omega\)-monoid \(\mathfrak{M}_{\Omega}(Z)\) over a set \(Z\) is similar, by just replacing \(\mathcal{S}(Z)\) by \(\mathcal{M}(Z)\) everywhere in the construction.
**Remark 1.2**.: We will use another construction of \(\mathfrak{M}_{\Omega}(Z)\). In fact, add some symbols \(\lfloor 1\rfloor_{\Omega}=\{\lfloor 1\rfloor_{\omega},\omega\in\Omega\}\) to \(Z\) and form \(\mathfrak{S}_{\Omega}(Z\sqcup\lfloor 1\rfloor_{\Omega})\), then \(\mathfrak{M}_{\Omega}(Z)\) can be obtained from \(\mathfrak{S}_{\Omega}(Z\sqcup\lfloor 1\rfloor_{\Omega})\) by just adding the empty word \(1\).
It is easy to see that \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\)(resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) is the free nonunital (resp. unital) \(\Omega\)-algebra generated by \(Z\).
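Concretely, elements of \(\mathfrak{S}_{\Omega}(Z)\) can be modelled as nested lists, as in the following small sketch; this encoding is our own illustration, not notation from the paper.

```python
# A word is a nonempty list of letters; a letter is a generator (a string)
# or a pair (omega, word) representing the bracket |word|_omega.
def apply_op(omega, word):
    """The operator P_omega on S_Omega(Z): u -> |u|_omega."""
    return [(omega, word)]

def concat(u, v):
    """The semigroup product: concatenation of words."""
    return u + v

x, y = ["x"], ["y"]
u = concat(apply_op("D", x), apply_op("P", concat(x, y)))  # |x|_D |xy|_P
assert u == [("D", ["x"]), ("P", ["x", "y"])]
```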
### Monomial orders
In this subsection, we introduce some new monomial orders on free \(\Omega\)-semigroups and free \(\Omega\)-monoids. We only consider the case of two operators, say \(\Omega=\{P,D\}\) as the main examples in mind are differential Rota-Baxter algebras and integro-differential algebras following the convention from [7].
We first recall the definitions of well orders and monomial orders.
**Definition 1.3**.: Let \(Z\) be a nonempty set.
1. A preorder \(\leq\) is a binary relation on \(Z\) that is reflexive and transitive, that is, for all \(x,y,z\in Z\), we have (i) \(x\leq x\); and (ii) if \(x\leq y\) and \(y\leq z\), then \(x\leq z\). In the presence of a preorder \(\leq\), we write \(x=_{Z}y\) if \(x\leq y\) and \(x\geq y\); if \(x\leq y\) but \(x\neq y\), we write \(x<y\) or \(y>x\).
2. A pre-linear order \(\leq\) on \(Z\) is a preorder \(\leq\) such that either \(x\leq y\) or \(x\geq y\) for all \(x,y\in Z\).
3. A linear order or a total order \(\leq\) on \(Z\) is a pre-linear order \(\leq\) that is moreover antisymmetric, that is, \(x\leq y\) and \(y\leq x\) imply \(x=y\).
4. A preorder \(\leq\) on \(Z\) is said to satisfy the descending chain condition, if for each descending chain \(x_{1}\geq x_{2}\geq x_{3}\geq\cdots\), there exists \(N\geq 1\) such that \(x_{N}=_{Z}x_{N+1}=_{Z}\cdots\). A linear order satisfying the descending chain condition is called a well order.
Before giving the definition of monomial orders, we need to introduce the following notions generalising the case of one operator.
**Definition 1.4**.: Let \(Z\) be a set and \(\star\) a symbol not in \(Z\).
1. Define \(\mathfrak{M}_{\Omega}^{\star}(Z)\) to be the subset of \(\mathfrak{M}_{\Omega}(Z\cup\star)\) consisting of elements with \(\star\) occurring only once.
2. For \(q\in\mathfrak{M}_{\Omega}^{\star}(Z)\) and \(u\in\mathfrak{M}_{\Omega}(Z)\), we define \(q|_{u}\in\mathfrak{M}_{\Omega}(Z)\) to be the element obtained by replacing the symbol \(\star\) in \(q\) by \(u\). In this case, we say \(u\) is a subword of \(q|_{u}\).
3. For \(q\in\mathfrak{M}_{\Omega}^{\star}(Z)\) and \(s=\sum_{i}c_{i}u_{i}\in\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with \(c_{i}\in\mathbf{k}\) and \(u_{i}\in\mathfrak{M}_{\Omega}(Z)\), we define \[q|_{s}:=\sum_{i}c_{i}q|_{u_{i}}.\]
4. Define \(\mathfrak{S}_{\Omega}^{\star}(Z)\) to be the subset of \(\mathfrak{S}_{\Omega}(Z\cup\star)\) consisting of elements with \(\star\) occurring only once. It is easy to see \(\mathfrak{S}_{\Omega}^{\star}(Z)\) is a subset of \(\mathfrak{M}_{\Omega}^{\star}(Z)\), so we also have notations in (a)-(c) for \(\mathfrak{S}_{\Omega}^{\star}(Z)\) by restriction.
**Definition 1.5**.: Let \(Z\) be a set.
1. A monomial order on \(\mathcal{S}(Z)\) is a well-order \(\leq\) on \(\mathcal{S}(Z)\) such that \[u<v\Rightarrow uw<vw\text{ and }wu<wv\text{ for any }u,v,w\in\mathcal{S}(Z);\]
2. a monomial order on \(\mathcal{M}(Z)\) is a well-order \(\leq\) on \(\mathcal{M}(Z)\) such that \[u<v\Rightarrow wuz<wvz\text{ for any }u,v,w,z\in\mathcal{M}(Z);\]
3. a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) is a well-order \(\leq\) on \(\mathfrak{S}_{\Omega}(Z)\) such that \[u<v\Rightarrow q|_{u}<q|_{v}\quad\text{for all }u,v\in\mathfrak{S}_{\Omega}(Z)\text{ and }q\in\mathfrak{S}_{\Omega}^{\star}(Z);\]
4. a monomial order on \(\mathfrak{M}_{\Omega}(Z)\) is a well-order \(\leq\) on \(\mathfrak{M}_{\Omega}(Z)\) such that \[u<v\Rightarrow q|_{u}<q|_{v}\quad\text{for all }u,v\in\mathfrak{M}_{\Omega}(Z)\text{ and }q\in\mathfrak{M}_{\Omega}^{\star}(Z).\]
Let us recall some known preorders.
**Definition 1.6**.: For two elements \(u,v\in\mathfrak{S}_{\Omega}(Z)\),
1. define \[u\leq_{\mathrm{D}}v\Leftrightarrow\deg_{D}(u)\leq\deg_{D}(v),\] where the \(D\)-degree \(\deg_{D}(u)\) of \(u\) is the number of occurrence of \(\lfloor\ \rfloor_{D}\) in \(u\);
2. define \[u\leq_{\mathrm{P}}v\Leftrightarrow\deg_{P}(u)\leq\deg_{P}(v),\] where the \(P\)-degree \(\deg_{P}(u)\) of \(u\) is the number of occurrence of \(\lfloor\ \rfloor_{P}\) in \(u\);
3. define \[u\leq_{\mathrm{d}Z}v\Leftrightarrow\deg_{Z}(u)\leq\deg_{Z}(v),\] where the \(Z\)-degree \(\deg_{Z}(u)\) is the number of elements of \(Z\) occurring in \(u\) counting the repetitions;
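Under the list encoding sketched earlier, these degree functions admit a direct recursive implementation, as below; the encoding and the names are ours.

```python
def deg_op(word, op):
    """deg_D / deg_P: number of occurrences of the bracket |.|_op in a word,
    counting nested occurrences."""
    total = 0
    for letter in word:
        if isinstance(letter, tuple):
            o, sub = letter
            total += (o == op) + deg_op(sub, op)
    return total

def deg_Z(word):
    """deg_Z: number of generators occurring in a word, with repetitions."""
    return sum(1 if isinstance(l, str) else deg_Z(l[1]) for l in word)

u = [("D", ["x"]), ("P", ["x", "y"])]          # |x|_D |xy|_P
assert deg_op(u, "D") == 1 and deg_op(u, "P") == 1 and deg_Z(u) == 3
```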
**Definition 1.7**.: Let \(Z\) be a set endowed with a well order \(\leq_{Z}\). Introduce the degree-lexicographical order \(\leq_{\mathrm{dlex}}\) on \(\mathcal{S}(Z)\) by imposing, for any \(u\neq v\in\mathcal{S}(Z)\), \(u<_{\mathrm{dlex}}v\) if
1. either \(\deg_{Z}(u)<\deg_{Z}(v)\), or
2. \(\deg_{Z}(u)=\deg_{Z}(v)\), and \(u=mu_{i}n\), \(v=mv_{i}n^{\prime}\) for some \(m,n,n^{\prime}\in\mathcal{M}(Z)\) and \(u_{i},v_{i}\in Z\) with \(u_{i}<_{Z}v_{i}\).
It is obvious that the degree-lexicographic order \(\leq_{\mathrm{dlex}}\) on \(\mathcal{S}(Z)\) is a well order.
We now define a preorder \(\leq_{\mathrm{Dlex}}\) on \(\mathfrak{S}_{\Omega}(Z)\), by the following recursion process:
1. For \(u,v\in\mathfrak{S}_{\Omega,0}(Z)=\mathcal{S}(Z)\), define \[u\leq_{\mathrm{Dlex}_{0}}v\Leftrightarrow u\leq_{\mathrm{dlex}}v.\]
2. Assume that we have constructed a well order \(\leq_{\mathrm{Dlex}_{n}}\) on \(\mathfrak{S}_{\Omega,n}(Z)\) for \(n\geq 0\) extending all \(\leq_{\mathrm{Dlex}_{i}}\) for \(0\leq i\leq n-1\). The well order \(\leq_{\mathrm{Dlex}_{n}}\) on \(\mathfrak{S}_{\Omega,n}(Z)\) induces a well order on \(\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\) (resp. \(\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\)), by imposing \(\lfloor u\rfloor_{P}\leq\lfloor v\rfloor_{P}\) (resp. \(\lfloor u\rfloor_{D}\leq\lfloor v\rfloor_{D}\)) whenever \(u\leq_{\mathrm{Dlex}_{n}}v\in\mathfrak{S}_{\Omega,n}(Z)\). By setting \(u<v<w\) for all \(u\in Z\), \(v\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\), and \(w\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\), we obtain a well order on \(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\). Let \(\leq_{\mathrm{Dlex}_{n+1}}\) be the degree-lexicographic order on \(\mathfrak{S}_{\Omega,n+1}(Z)=\mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D})\) induced by that on \(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\).
Obviously \(\leq_{\mathrm{Dlex}_{n+1}}\) extends \(\leq_{\mathrm{Dlex}_{n}}\). By a limit process, we get a preorder on \(\mathfrak{S}_{\Omega}(Z)\) which will be denoted by \(\leq_{\mathrm{Dlex}}\). As is readily seen, \(\leq_{\mathrm{Dlex}}\) is a linear order.
**Remark 1.8**.: It is easy to see that the above construction of \(\leq_{\mathrm{Dlex}}\) can be extended to the case of more than two operators.
In fact, for a given well order \(\leq_{\Omega}\) in the index set \(\Omega\), the defining process of \(\leq_{\mathrm{Dlex}}\) on \(\mathfrak{S}_{\Omega}(Z)\) is the same as above except one detail in step (b), where we need to put \(u<v<w\) for all \(u\in Z\), \(v\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{\omega_{1}}\) and \(w\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{\omega_{2}}\) with \(\omega_{1}\leq_{\Omega}\omega_{2}\in\Omega\).
**Definition 1.9**.: For any \(u\in\mathfrak{S}_{\Omega}(Z)\), let \(u_{1},\ldots,u_{n}\in Z\) be all the elements of \(Z\) occurring in \(u\), from left to right. If a right half bracket \(\rfloor_{D}\) lies in the gap between \(u_{i}\) and \(u_{i+1}\), where \(1\leq i<n\), the GD-degree of this right half bracket is defined to be \(n-i\); if a right half bracket \(\rfloor_{D}\) appears on the right of \(u_{n}\), we define the GD-degree of this half bracket to be \(0\). We define the GD-degree of \(u\), denoted by \(\deg_{GD}(u)\), to be the sum of the GD-degrees of all the right half brackets \(\rfloor_{D}\) in \(u\).
For example, the GD-degrees of the half right brackets in \(u=\lfloor x\rfloor_{D}\lfloor y\rfloor_{D}\) with \(x,y\in Z\) are respectively \(1\) and \(0\) from left to right, so \(\deg_{GD}(u)=1\) by definition.
For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), define the GD-degree order \(\leq_{\mathrm{GD}}\) by
\[u\leq_{\mathrm{GD}}v\Leftrightarrow\deg_{GD}(u)\leq\deg_{GD}(v).\]
**Definition 1.10**.: For any \(u\in\mathfrak{S}_{\Omega}(Z)\), let \(u_{1},\ldots,u_{n}\in Z\) be all the elements occurring in \(u\) from left to right. If there are \(i\) elements of \(Z\) contained in a bracket \(\lfloor\ \rfloor_{P}\), the GP-degree of this bracket is defined to be \(n-i\). We denote by \(\deg_{GP}(u)\) the sum of the GP-degrees of all the brackets \(\lfloor\ \rfloor_{P}\) in \(u\).
For example, the GP-degrees of the brackets \(\lfloor\ \rfloor_{P}\) in \(u=\lfloor xy\rfloor_{P}\lfloor z\rfloor_{P}\) with \(x,y,z\in Z\) are respectively \(1\) and \(2\) from left to right, so \(\deg_{GP}(u)=3\) by definition.
For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), define the GP-degree order \(\leq_{\mathrm{GP}}\) by
\[u\leq_{\mathrm{GP}}v\Leftrightarrow\deg_{GP}(u)\leq\deg_{GP}(v).\]
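The two bracket-position degrees can also be computed recursively; the sketch below reproduces the worked examples from Definitions 1.9 and 1.10 as sanity checks (the encoding is the illustrative one used earlier).

```python
def _n_gens(word):
    """Number of generators in a word, with repetitions (deg_Z)."""
    return sum(1 if isinstance(l, str) else _n_gens(l[1]) for l in word)

def deg_GD(word):
    """Sum over all right half brackets ]_D of the number of generators
    strictly to their right (Definition 1.9)."""
    n = _n_gens(word)
    closings = []  # generators consumed when each ]_D closes

    def walk(w, consumed):
        for letter in w:
            if isinstance(letter, str):
                consumed += 1
            else:
                op, sub = letter
                consumed = walk(sub, consumed)
                if op == "D":
                    closings.append(consumed)
        return consumed

    walk(word, 0)
    return sum(n - c for c in closings)

def deg_GP(word):
    """Sum over all brackets |.|_P of n minus the number of generators
    inside the bracket (Definition 1.10)."""
    n = _n_gens(word)
    total = 0

    def walk(w):
        nonlocal total
        for letter in w:
            if isinstance(letter, tuple):
                op, sub = letter
                if op == "P":
                    total += n - _n_gens(sub)
                walk(sub)

    walk(word)
    return total

assert deg_GD([("D", ["x"]), ("D", ["y"])]) == 1       # |x|_D |y|_D
assert deg_GP([("P", ["x", "y"]), ("P", ["z"])]) == 3  # |xy|_P |z|_P
```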
It is easy to obtain the following lemma whose proof is thus omitted.
**Lemma 1.11**.: _The orders \(\leq_{\mathrm{D}},\ \leq_{\mathrm{P}},\ \leq_{\mathrm{dZ}},\ \leq_{\mathrm{GD}}\) and \(\leq_{\mathrm{GP}}\) are pre-linear orders satisfying the descending chain condition._
Combining all the orders above, we can now construct an order \(\leq_{\mathrm{PD}}\) on \(\mathfrak{S}_{\Omega}(Z)\):
\[u\leq_{\mathrm{PD}}v\Leftrightarrow\left\{\begin{array}{l}u\leq_{\mathrm{D} }v,\text{or}\\ u=_{\mathrm{D}}v\text{ and }u\leq_{\mathrm{P}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v\text{ and }u\leq_{\mathrm{dZ}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v,u=_{\mathrm{dZ}}v\text{ and }u\leq_{\mathrm{GD}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v,u=_{\mathrm{dZ}}v,u=_{\mathrm{GD}}v\text{ and }u\leq_{\mathrm{GP}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v,u=_{\mathrm{dZ}}v,u=_{\mathrm{GD}}v,u=_{ \mathrm{GP}}v\text{ and }u\leq_{\mathrm{Dlex}}v.\end{array}\right.\]
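In computational terms, \(\leq_{\mathrm{PD}}\) is a lexicographic cascade of the five degree comparisons with \(\leq_{\mathrm{Dlex}}\) as the final tie-breaker; a sketch of a corresponding sort key, reusing the helpers from the previous snippets and leaving the recursive \(\leq_{\mathrm{Dlex}}\) comparison as an abstract `dlex_key`, is given below.

```python
def pd_sort_key(word, dlex_key):
    """Sort key realizing <=_PD: compare words by
    (deg_D, deg_P, deg_Z, deg_GD, deg_GP), breaking remaining ties
    with any total key realizing <=_Dlex."""
    return (deg_op(word, "D"), deg_op(word, "P"), deg_Z(word),
            deg_GD(word), deg_GP(word), dlex_key(word))

# Usage: sorted(words, key=lambda w: pd_sort_key(w, my_dlex_key))
# lists words in increasing <=_PD order.
```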
To prove that the \(\leq_{\mathrm{PD}}\) is a well-order, we need some preparation.
**Definition 1.12**.:
1. Given some preorders \(\leq_{\alpha_{1}},\ldots,\leq_{\alpha_{k}}\) on a set \(Z\) with \(k\geq 2\), introduce another preorder \(\leq_{\alpha_{1},\ldots,\alpha_{k}}\) by imposing recursively \[u\leq_{\alpha_{1},\ldots,\alpha_{k}}v\Leftrightarrow\left\{\begin{array}{l}u <_{\alpha_{1}}v,\text{ or}\\ u=_{\alpha_{1}}v\text{ and }u\leq_{\alpha_{2},\ldots,\alpha_{k}}v.\end{array}\right.\]
2. Let \(k\geq 2\) and let \(\leq_{\alpha_{i}}\) be a pre-linear order on \(Z_{i},\ 1\leq i\leq k\). Define the lexicographical product order \(\leq_{\mathrm{clex}}\) on the cartesian product \(Z_{1}\times Z_{2}\times\cdots\times Z_{k}\) by defining \[(x_{1},\cdots,x_{k})\leq_{\mathrm{clex}}(y_{1},\cdots,y_{k})\Leftrightarrow \left\{\begin{array}{l}x_{1}<_{\alpha_{1}}y_{1},\text{or}\\ x_{1}=_{\alpha_{1}}y_{1}\text{ and }(x_{2},\cdots,x_{k})\leq_{\mathrm{clex}}(y_{2}, \cdots,y_{k})\,,\end{array}\right.\] where \((x_{2},\cdots,x_{k})\leq_{\mathrm{clex}}(y_{2},\cdots,y_{k})\) is defined by induction, with the convention that \(\leq_{\mathrm{clex}}\) is the trivial relation when \(k=1\).
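For instance, the degree-lexicographical order \(\leq_{\mathrm{dlex}}\), used below for GS bases of ideals \(I_{A}\), is of this shape: it is \(\leq_{\alpha_{1},\alpha_{2}}\), where \(\leq_{\alpha_{1}}\) compares two words by their lengths and \(\leq_{\alpha_{2}}\) is the left-to-right lexicographical comparison induced by the well order on the alphabet.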
**Lemma 1.13** ([18, Lemma 1.7]).:
1. _For_ \(k\geq 2\)_, let_ \(\leq_{\alpha_{1}},\ldots,\leq_{\alpha_{k-1}}\) _be pre-linear orders on_ \(Z\)_, and_ \(\leq_{\alpha_{k}}\) _a linear order on_ \(Z\)_. Then_ \(\leq_{\alpha_{1},\ldots,\alpha_{k}}\) _is a linear order on_ \(Z\)_._
2. _Let_ \(\leq_{\alpha_{i}}\) _be a well order on_ \(Z_{i}\)_,_ \(1\leq i\leq k\)_. Then the lexicographical product order_ \(\leq_{\mathrm{clex}}\) _is a well order on the cartesian product_ \(Z_{1}\times Z_{2}\times\cdots\times Z_{k}\)_._
**Proposition 1.14**.: _The order \(\leq_{\mathrm{PD}}\) is a well order on \(\mathfrak{S}_{\Omega}(Z)\)._
Proof.: Since \(\leq_{\rm Dlex}\) is a linear order, so is \(\leq_{\rm PD}\) by Lemma 1.11 and Lemma 1.13(a).
It suffices to verify that \(\leq_{\rm PD}\) satisfies the descending chain condition. Let
\[v_{1}\geq_{\rm PD}v_{2}\geq_{\rm PD}v_{3}\geq_{\rm PD}\cdots\in\mathfrak{S}_{ \Omega}(Z)\]
be a descending chain. By Lemma 1.11, there exists \(N\geq 1\) such that
\[\deg_{D}(v_{N})=\deg_{D}(v_{N+1})=\deg_{D}(v_{N+2})=\cdots=:k,\] \[\deg_{P}(v_{N})=\deg_{P}(v_{N+1})=\deg_{P}(v_{N+2})=\cdots=:p,\] \[\deg_{Z}(v_{N})=\deg_{Z}(v_{N+1})=\deg_{Z}(v_{N+2})=\cdots,\] \[\deg_{GD}(v_{N})=\deg_{GD}(v_{N+1})=\deg_{GD}(v_{N+2})=\cdots,\]
and
\[\deg_{GP}(v_{N})=\deg_{GP}(v_{N+1})=\deg_{GP}(v_{N+2})=\cdots.\]
Thus all \(v_{i}\) with \(i\geq N\) belong to \(\mathfrak{S}_{\Omega,k+p}(Z)\). The restriction of the order \(\leq_{\rm Dlex}\) to \(\mathfrak{S}_{\Omega,k+p}(Z)\) coincides with the well order \(\leq_{\rm Dlex_{k+p}}\), which by definition satisfies the descending chain condition, so the chain \(v_{1}\geq_{\rm PD}v_{2}\geq_{\rm PD}v_{3}\geq_{\rm PD}\cdots\) stabilizes after finitely many steps.
**Definition 1.15** ([23, Definition 5.6]).: A preorder \(\leq_{\alpha}\) on \(\mathfrak{S}_{\Omega}(Z)\) is called bracket compatible (resp. left compatible, right compatible) if
\[u\leq_{\alpha}v\Rightarrow\lfloor u\rfloor_{D}\leq_{\alpha}\lfloor v\rfloor_{D} \text{ and }\lfloor u\rfloor_{P}\leq_{\alpha}\lfloor v\rfloor_{P},\text{ (resp. }wu\leq_{\alpha}wv,\text{ }uw\leq_{\alpha}vw,\text{ }\text{ for all }w\in\mathfrak{S}_{\Omega}(Z))\]
**Lemma 1.16** ([23, Lemma 5.7]).: _A well order \(\leq\) is a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) if and only if \(\leq\) is bracket compatible, left compatible and right compatible._
Now we can prove the main result of this section which is the main technical point of this paper.
**Theorem 1.17**.: _The well order \(\leq_{\rm PD}\) is a monomial order on \(\mathfrak{S}_{\Omega}(Z)\)._
Proof.: Let \(u\leq_{\rm PD}v\). It is obvious that the preorders \(\leq_{\rm D}\), \(\leq_{\rm P}\) and \(\leq_{\rm dZ}\) are bracket compatible, left compatible and right compatible. This solves the three cases \(u<_{\rm D}v\); \(u=_{\rm D}v\) and \(u<_{\rm P}v\); \(u=_{\rm D}v\), \(u=_{\rm P}v\) and \(u<_{\rm dZ}v\).
If \(u=_{\rm D}v\), \(u=_{\rm P}v\), \(u=_{\rm dZ}v\) and \(u<_{\rm GD}v\), obviously \(\lfloor u\rfloor_{D}<_{\rm GD}\lfloor v\rfloor_{D}\), \(\lfloor u\rfloor_{P}<_{\rm GD}\lfloor v\rfloor_{P}\), \(uw<_{\rm GD}vw\) and \(wu<_{\rm GD}wv\) for \(w\in\mathfrak{S}_{\Omega}(Z)\). So \(\lfloor u\rfloor_{D}<_{\rm PD}\lfloor v\rfloor_{D}\), \(\lfloor u\rfloor_{P}<_{\rm PD}\lfloor v\rfloor_{P}\), \(uw<_{\rm PD}vw\) and \(wu<_{\rm PD}wv\).
The case that \(u=_{\rm D}v\), \(u=_{\rm P}v\), \(u=_{\rm dZ}v\), \(u=_{\rm GD}v\) and \(u<_{\rm GP}v\) is similar to the above one.
It remains to consider the case that \(u=_{\rm D}v\), \(u=_{\rm P}v\), \(u=_{\rm dZ}v\), \(u=_{\rm GD}v\), \(u=_{\rm GP}v\) and \(u<_{\rm Dlex}v\). Let \(n\geq\deg_{D}(u)+\deg_{P}(u)\). Since \(u,v\in\mathfrak{S}_{\Omega,n}(Z)\), we have \(u\leq_{\rm Dlex_{n}}v\). As the restriction of \(\leq_{\rm Dlex_{n+1}}\) to \(\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\) is induced by \(\leq_{\rm Dlex_{n}}\), we get \(\lfloor u\rfloor_{D}\leq_{\rm Dlex}\lfloor v\rfloor_{D}\), and hence \(\lfloor u\rfloor_{D}\leq_{\rm PD}\lfloor v\rfloor_{D}\). Similarly \(\lfloor u\rfloor_{P}\leq_{\rm PD}\lfloor v\rfloor_{P}\). Let \(w\in\mathfrak{S}_{\Omega,m}(Z)\). One can obtain \(uw\leq_{\rm Dlex_{r}}vw\) and \(wu\leq_{\rm Dlex_{r}}wv\) for \(r=\max\{m,n\}\), so \(uw\leq_{\rm PD}vw\) and \(wu\leq_{\rm PD}wv\).
We are done.
Let us now move to the unital case: we extend \(\leq_{\rm PD}\) from \(\mathfrak{S}_{\Omega}(Z)\) to \(\mathfrak{M}_{\Omega}(Z)\) by using Remark 1.2.
**Definition 1.18**.: Let \(Z\) be a set with a well order. Let \(\dagger_{P}\) (resp. \(\dagger_{D}\) ) be a symbol which is understood to be \(\lfloor 1\rfloor_{P}\) (resp. \(\lfloor 1\rfloor_{D}\)) and write \(Z^{\prime}=Z\sqcup\{\dagger_{P},\dagger_{D}\}\). Consider the free operated semigroup \(\mathfrak{S}_{\Omega}(Z^{\prime})\) over the set \(Z^{\prime}\). The well order on \(Z\) extends to a well order \(\leq\) on \(Z^{\prime}\) by setting \(\dagger_{P}>z>\dagger_{D}\), for any \(z\in Z\). Besides, we impose \(\deg_{P}(\dagger_{P})=1\) and \(\deg_{GP}(\dagger_{P})=0\). Then the monomial order \(\leq_{\rm PD}\) on \(\mathfrak{S}_{\Omega}(Z^{\prime})\) induces a well order \(\leq_{\rm uPD}\) on \(\mathfrak{M}_{\Omega}(Z)=\mathfrak{S}_{\Omega}(Z^{\prime})\sqcup\{1\}\) (in which \(\lfloor 1\rfloor_{P}\) and \(\lfloor 1\rfloor_{D}\) are identified with \(\dagger_{P}\) and \(\dagger_{D}\) respectively), by setting \(u>_{\rm uPD}1\) for any \(u\in\mathfrak{S}_{\Omega}(Z^{\prime})\).
**Theorem 1.19**.: _The well order \(\leq_{\mathrm{uPD}}\) is a monomial order on \(\mathfrak{M}_{\Omega}(Z)\)._
Proof.: Obviously, the well order \(\leq_{\mathrm{uPD}}\) is bracket compatible on \(\mathfrak{M}_{\Omega}(Z)\backslash\{1\}\). Let \(x\in\mathfrak{M}_{\Omega}(Z)\backslash\{1\}\). By definition, \(x>_{\mathrm{uPD}}1\). We have \([x]_{P}>_{\mathrm{Dlex}}[1]_{P}\) which implies \([x]_{P}>_{\mathrm{uPD}}\dagger_{P}\). It is easy to see that \([x]_{D}>_{\mathrm{uPD}}x>_{\mathrm{uPD}}\dagger_{D}\). Thus \(\leq_{\mathrm{uPD}}\) is bracket compatible.
Clearly, \(\leq_{\mathrm{uPD}}\) is left and right compatible.
We record several important conclusions which will be useful later.
**Proposition 1.20**.: _For any \(u,v\in\mathfrak{M}_{\Omega}(Z)\backslash\{1\}\), we have_
1. \([u]_{P}[1]_{P}>_{\mathrm{uPD}}[u[1]_{P}]_{P}>_{\mathrm{uPD}}[\lfloor u\rfloor_{P}]_{P}\)_,_
2. \([1]_{P}[v]_{P}>_{\mathrm{uPD}}[\lfloor 1\rfloor_{P}v]_{P}>_{\mathrm{uPD}}[\lfloor v\rfloor_{P}]_{P}\)_,_
3. \([1]_{P}[1]_{P}>_{\mathrm{uPD}}[\lfloor 1\rfloor_{P}]_{P}\)_,_
4. \([1]_{P}[v]_{D}>_{\mathrm{uPD}}[\lfloor 1\rfloor_{P}v]_{D}\)_,_
5. \([u]_{D}[1]_{P}>_{\mathrm{uPD}}[u[1]_{P}]_{D}\)_._
Proof.: Let \(u,v\in\mathfrak{M}_{\Omega}(Z)\backslash\{1\}=\mathfrak{S}_{\Omega}(Z^{\prime})\).
1. It is easy to see that \([\lfloor u\rfloor_{P}]_{P}\) has the lowest \(\deg_{Z^{\prime}}\) among \([u]_{P}[1]_{P},[u[1]_{P}]_{P},[\lfloor u\rfloor_{P}]_{P}\), and we also have \(\deg_{GP}([u]_{P}[1]_{P})>\deg_{GP}([u[1]_{P}]_{P})\).
2. It is similar to \((a)\).
3. It follows from \(\deg_{Z^{\prime}}([1]_{P}[1]_{P})>\deg_{Z^{\prime}}([\lfloor 1\rfloor_{P}]_{P})\).
4. It can be deduced from \([1]_{P}[v]_{D}>_{\mathrm{Dlex}}[\lfloor 1\rfloor_{P}v]_{D}\) by Definition 1.7.
5. It holds because \(\deg_{GD}([u]_{D}[1]_{P})>\deg_{GD}([u[1]_{P}]_{D})\).
## 2. Operator polynomial identities and multi-operated GS bases
In this section, we extend the theory of operated GS bases due to [2, 15, 23, 6] from the case of a single operator to that of multiple operators. The presentation is essentially contained in [7].
### Operator polynomial identities
In this subsection, we give some basic notions and facts related to operator polynomial identities. Throughout this section, \(X\) denotes a set.
**Definition 2.1**.: We call an element \(\phi(x_{1},\ldots,x_{n})\in\mathbf{k}\mathfrak{S}_{\Omega}(X)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(X)\)) with \(n\geq 1,x_{1},\ldots,x_{n}\in X\) an operated polynomial identity (aka OPI).
_From now on, we always assume that OPIs are multilinear, that is, they are linear in each \(x_{i}\)._
**Definition 2.2**.: Let \(\phi(x_{1},\ldots,x_{n})\) be an OPI. A (unital) \(\Omega\)-algebra \(A=(A,\{P_{\omega}\}_{\omega\in\Omega})\) is said to satisfy the OPI \(\phi(x_{1},\ldots,x_{n})\) if \(\phi(r_{1},\ldots,r_{n})=0\), for all \(r_{1},\ldots,r_{n}\in A.\) In this case, \((A,\{P_{\omega}\}_{\omega\in\Omega})\) is called a (unital) \(\phi\)-algebra.
Generally, for a family \(\Phi\) of OPIs, we call a (unital) \(\Omega\)-algebra \((A,\{P_{\omega}\}_{\omega\in\Omega})\) a (unital) \(\Phi\)-algebra if it is a (unital) \(\phi\)-algebra for any \(\phi\in\Phi\). Denote the category of \(\Phi\)-algebras (resp. unital \(\Phi\)-algebras) by \(\Phi\text{-}\mathfrak{Alg}_{\Omega}\) (resp. \(\Phi\text{-}\mathfrak{uAlg}_{\Omega}\)).
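For instance, when \(\Omega=\{P\}\) is a singleton and \(\phi(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\), a \(\phi\)-algebra is exactly a Rota-Baxter algebra of weight \(\lambda\) in the sense of Definition 3.1 below.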
**Definition 2.3**.: An \(\Omega\)-ideal of an \(\Omega\)-algebra is an ideal of the associative algebra closed under the action of the operators. The \(\Omega\)-ideal generated by a subset \(S\subseteq A\) is denoted by \(\langle S\rangle_{\Omega\text{-}\mathfrak{Alg}}\) (resp. \(\langle S\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in the unital case).
Obviously the quotient of an \(\Omega\)-algebra (resp. unital \(\Omega\)-algebra) by an \(\Omega\)-ideal is naturally an \(\Omega\)-algebra (resp. unital \(\Omega\)-algebra).
From now on, \(\Phi\) denotes a family of OPIs in \(\mathbf{k}\mathfrak{S}_{\Omega}(X)\) or \(\mathbf{k}\mathfrak{M}_{\Omega}(X)\). For a set \(Z\) and a subset \(Y\) of \(\mathfrak{M}_{\Omega}(Z)\), introduce the subset \(S_{\Phi}(Y)\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) to be
\[S_{\Phi}(Y):=\{\phi(u_{1},\ldots,u_{k})\mid u_{1},\ldots,u_{k}\in Y,\ \phi(x_{1},\ldots,x_{k})\in\Phi\}.\]
### Multi-operated GS bases for \(\Phi\)-algebras
In this subsection, operated GS basis theory is extended to algebras with multiple operators following closely [2].
**Definition 2.4**.: Let \(Z\) be a set, \(\leq\) a linear order on \(\mathfrak{M}_{\Omega}(Z)\) and \(f\in\mathbf{k}\mathfrak{M}_{\Omega}(Z)\).
* Let \(f\notin\mathbf{k}\). The leading monomial of \(f\), denoted by \(\bar{f}\), is the largest monomial appearing in \(f\). The leading coefficient of \(f\), denoted by \(c_{f}\), is the coefficient of \(\bar{f}\) in \(f\). We call \(f\) monic with respect to \(\leq\) if \(c_{f}=1\).
* Let \(f\in\mathbf{k}\) (including the case \(f=0\)). We define the leading monomial of \(f\) to be \(1\) and the leading coefficient of \(f\) to be \(c_{f}=f\).
* A subset \(S\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) is called monicized with respect to \(\leq\), if each nonzero element of \(S\) has leading coefficient \(1\).
Obviously, each subset \(S\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) can be monicized by dividing each nonzero element by its leading coefficient.
We need another notation. Let \(Z\) be a set. For \(u\in\mathfrak{M}_{\Omega}(Z)\) with \(u\neq 1\), as \(u\) can be uniquely written as a product \(u_{1}\cdots u_{n}\) with \(u_{i}\in Z\cup\lfloor\mathfrak{M}_{\Omega}(Z)\rfloor_{\Omega}\) for \(1\leq i\leq n\), call \(n\) the breadth of \(u\), denoted by \(|u|\); for \(u=1\), we define \(|u|=0\).
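For instance, \(|x\lfloor y\rfloor_{D}z|=3\) and \(|\lfloor x\lfloor y\rfloor_{D}z\rfloor_{P}|=1\) for \(x,y,z\in Z\).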
**Definition 2.5**.: Let \(\leq\) be a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) and \(f,g\in\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) be monic.
* If there are \(p,u,v\in\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) such that \(p=\bar{f}u=v\bar{g}\) with \(\max\{|\bar{f}|,|\bar{g}|\}<|p|<|\bar{f}|+|\bar{g}|\), we call \[(f,g)_{p}^{u,v}:=fu-vg\] the intersection composition of \(f\) and \(g\) with respect to \(p\).
* If there are \(p\in\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) and \(q\in\mathfrak{S}_{\Omega}^{\star}(Z)\) (resp. \(\mathfrak{M}_{\Omega}^{\star}(Z)\)) such that \(p=\bar{f}=q|_{\bar{g}}\), we call \[(f,g)_{p}^{q}:=f-q|_{g}\] the inclusion composition of \(f\) and \(g\) with respect to \(p\).
**Definition 2.6**.: Let \(Z\) be a set and \(\leq\) a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)). Let \(\mathcal{G}\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)).
* An element \(f\in\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) is called trivial modulo \((\mathcal{G},p)\) for \(p\in\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) if \[f=\sum_{i}c_{i}q_{i}|_{s_{i}}\ \text{with}\ q_{i}|_{\overline{s_{i}}}<p\text{, where }c_{i}\in\mathbf{k},\ q_{i}\in\mathfrak{S}_{\Omega}^{\star}(Z)\ \text{(resp. }\mathfrak{M}_{\Omega}^{\star}(Z)\text{) and }s_{i}\in\mathcal{G}.\] If this is the case, we write \[f\equiv 0\ \text{mod}\ (\mathcal{G},p).\] In general, for any \(u,v\in\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)), \(u\equiv v\ \text{mod}\ (\mathcal{G},p)\) means that \(u-v=\sum c_{i}q_{i}|_{s_{i}}\) with \(q_{i}|_{\overline{s_{i}}}<p\), where \(c_{i}\in\mathbf{k},\ q_{i}\in\mathfrak{S}_{\Omega}^{\star}(Z)\) (resp. \(\mathfrak{M}_{\Omega}^{\star}(Z)\)) and \(s_{i}\in\mathcal{G}\).
2. The subset \(\mathcal{G}\) is called a GS basis in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) with respect to \(\leq\) if, for all pairs \(f,g\in\mathcal{G}\) monicized with respect to \(\leq\), every intersection composition of the form \((f,g)_{p}^{u,v}\) is trivial modulo \((\mathcal{G},p)\), and every inclusion composition of the form \((f,g)_{p}^{q}\) is trivial modulo \((\mathcal{G},p)\).
_To distinguish from usual GS bases for associative algebras, from now on, we shall rename GS bases in multi-operated contexts by \(\Omega\)-GS bases._
**Theorem 2.7**.: _(Composition-Diamond Lemma) Let \(Z\) be a set, \(\leq\) a monomial order on \(\mathfrak{M}_{\Omega}(Z)\) and \(\mathcal{G}\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\). Then the following conditions are equivalent:_
1. \(\mathcal{G}\) _is an_ \(\Omega\)_-GS basis in_ \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)_._
2. _Denote_ \[\mathrm{Irr}(\mathcal{G}):=\mathfrak{M}_{\Omega}(Z)\setminus\left\{q|_{\overline{s}}\mid s\in\mathcal{G},\ \ q\in\mathfrak{M}_{\Omega}^{\star}(Z)\right\}.\] _As a_ \(\mathbf{k}\)_-space,_ \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)=\mathbf{k}\mathrm{Irr}(\mathcal{G})\oplus\langle\mathcal{G}\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) _and_ \(\mathrm{Irr}(\mathcal{G})\) _is a_ \(\mathbf{k}\)_-basis of_ \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle\mathcal{G}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\)_._
**Theorem 2.8**.: _(Composition-Diamond Lemma) Let \(Z\) be a set, \(\leq\) a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) and \(\mathcal{G}\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(Z)\). Then the following conditions are equivalent:_
1. \(\mathcal{G}\) _is an_ \(\Omega\)_-GS basis in_ \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\)_._
2. _Denote_ \[\mathrm{Irr}(\mathcal{G}):=\mathfrak{S}_{\Omega}(Z)\setminus\left\{q|_{\overline{s}}\mid s\in\mathcal{G},\ \ q\in\mathfrak{S}_{\Omega}^{\star}(Z)\right\}.\] _As a_ \(\mathbf{k}\)_-space,_ \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)=\mathbf{k}\mathrm{Irr}(\mathcal{G})\oplus\langle\mathcal{G}\rangle_{\Omega\text{-}\mathfrak{Alg}}\) _and_ \(\mathrm{Irr}(\mathcal{G})\) _is a_ \(\mathbf{k}\)_-basis of_ \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle\mathcal{G}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\)_._
**Definition 2.9** ([6, Definiton 2.21(a)]).:
1. Let \(\Phi\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(X)\) be a family of OPIs. Let \(Z\) be a set and \(\leq\) a monomial order on \(\mathfrak{S}_{\Omega}(Z)\). We call \(\Phi\)\(\Omega\)-GS on \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq\) if \(S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\) is an \(\Omega\)-GS basis in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq\).
2. Let \(\Phi\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(X)\) be a family of OPIs. Let \(Z\) be a set and \(\leq\) a monomial order on \(\mathfrak{M}_{\Omega}(Z)\). We call \(\Phi\)\(\Omega\)-GS on \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq\) if \(S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\) is an \(\Omega\)-GS basis in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq\).
### Multi-operated GS basis for free \(\Phi\)-algebras over algebras
In this subsection, we consider multi-operated GS basis for free \(\Phi\)-algebras over algebras and generalise the main result of [17] to multi-operated cases.
We will use the following results without proof, as they are the counterparts in the multi-operated setup of [17, Proposition 4.8].
**Proposition 2.10**.:
1. _Let_ \(\Phi\subset\mathbf{k}\mathfrak{S}_{\Omega}(X)\) _and_ \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) _an algebra. Then_ \[\mathcal{F}_{\mathfrak{Alg}}^{\Phi\text{-}\mathfrak{Alg}_{\Omega}}(A):=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\] _is the free_ \(\Phi\)_-algebra generated by_ \(A\)_._
2. _Let_ \(\Phi\subset\mathbf{k}\mathfrak{M}_{\Omega}(X)\) _and_ \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) _a unital algebra. Then_ \[\mathcal{F}_{\mathfrak{uAlg}}^{\Phi\text{-}\mathfrak{uAlg}_{\Omega}}(A):=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\] _is the free unital_ \(\Phi\)_-algebra over_ \(A\)_._
As in [17], we consider the following question:
**Question 2.11**.: Let \(A\) be a (unital) algebra together with a Gröbner-Shirshov basis \(G\). Assume that a set \(\Phi\) of operated polynomial identities is \(\Omega\)-GS in the sense of Definition 2.9. Considering the free (unital) \(\Phi\)-algebra \(B\) over \(A\), when will the union "\(\Phi\cup G\)" be an \(\Omega\)-GS basis for \(B\)?
It is surprising that the answer to the corresponding question given in [17] can be generalised to the multi-operated case without much modification.
**Theorem 2.12**.: _Let \(X\) be a set and \(\Phi\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(X)\) a system of OPIs. Let \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) be a unital algebra with generating set \(Z\). Assume that \(\Phi\) is \(\Omega\)-GS on \(Z\) with respect to a monomial order \(\leq\) in \(\mathfrak{M}_{\Omega}(Z)\) and that \(G\) is a GS basis of \(I_{A}\) in \(\mathbf{k}\mathcal{M}(Z)\) with respect to the restriction of \(\leq\) to \(\mathcal{M}(Z)\)._
_Suppose that the leading monomial of any OPI \(\phi(x_{1},\ldots,x_{n})\in\Phi\) has no subword in \(\mathcal{M}(X)\backslash X\), and that \(\phi(u_{1},\ldots,u_{n})\) vanishes or its leading monomial is \(\overline{\phi}(u_{1},\ldots,u_{n})\) for all \(u_{1},\ldots,u_{n}\in\mathfrak{M}_{\Omega}(Z)\). Then \(S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\cup G\) is an \(\Omega\)-GS basis of \(\langle S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\cup I_{A}\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq\)._
Proof.: The proof of [17, Theorem 5.9] carries over verbatim to the multi-operated case, because it reveals that the key point is that the leading monomial of any OPI \(\phi(x_{1},\ldots,x_{n})\in\Phi\) has no subword in \(\mathcal{M}(X)\backslash X\).
For details, see the proof of [17, Theorem 5.9].
There exists a nonunital version of the above result, which is also a multi-operated version of [18, Theorem 2.15].
**Theorem 2.13**.: _Let \(X\) be a set and \(\Phi\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(X)\) a system of OPIs. Let \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) be an algebra with generating set \(Z\). Assume that \(\Phi\) is \(\Omega\)-GS on \(Z\) with respect to a monomial order \(\leq\) in \(\mathfrak{S}_{\Omega}(Z)\) and that \(G\) is a GS basis of \(I_{A}\) in \(\mathbf{k}\mathcal{S}(Z)\) with respect to the restriction of \(\leq\) to \(\mathcal{S}(Z)\)._
_Suppose that the leading monomial of any OPI \(\phi(x_{1},\ldots,x_{n})\in\Phi\) has no subword in \(\mathcal{S}(X)\backslash X\), and that for all \(u_{1},\ldots,u_{n}\in\mathfrak{S}_{\Omega}(Z)\), \(\phi(u_{1},\ldots,u_{n})\) vanishes or its leading monomial is \(\overline{\phi}(u_{1},\ldots,u_{n})\). Then \(S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\cup G\) is an \(\Omega\)-GS basis of \(\langle S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\cup I_{A}\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq\)._
## 3. Free differential Rota-Baxter algebras over algebras
In this section, we apply Theorems 2.12 and 2.13 to differential Rota-Baxter algebras.
From now on, let \(\Omega=\{D,P\}\), fix a set \(X=\{x,y\}\) with two elements such that variables in OPIs will take values in \(X\). When talking about algebras or reductions of OPIs, fix a set \(Z\) and we understand that variables in OPIs will be replaced by elements of \(\mathfrak{S}_{\Omega}(Z)\) or \(\mathfrak{M}_{\Omega}(Z)\).
We first recall the definition of differential Rota-Baxter algebras. We use \(D(\ )\) and \(P(\ )\) instead of the linear operators \(\lfloor\ \rfloor_{D}\) and \(\lfloor\ \rfloor_{P}\).
**Definition 3.1** ([7, Definition 2.1]).: Let \(\lambda\in\mathbf{k}\) be fixed.
* A (unital) differential \(\mathbf{k}\)-algebra of weight \(\lambda\) (also called a (unital) \(\lambda\)-differential \(\mathbf{k}\)-algebra) is a (unital) associative \(\mathbf{k}\)-algebra \(R\) together with a linear operator \(D:R\to R\) such that \[D(uv)=D(u)v+uD(v)+\lambda D(u)D(v)\quad\text{ for all }u,v\in R;\] when \(R\) has a unity \(1\), one requires moreover that \(D(1)=0\).
* A Rota-Baxter \(\mathbf{k}\)-algebra of weight \(\lambda\) is an associative \(\mathbf{k}\)-algebra \(R\) together with a linear operator \(P:R\to R\) such that \[P(u)P(v)=P(uP(v))+P(P(u)v)+\lambda P(uv)\quad\text{ for all }u,v\in R.\]
* A (unital) differential Rota-Baxter \(\mathbf{k}\)-algebra of weight \(\lambda\) (also called a (unital) \(\lambda\)-differential Rota-Baxter \(\mathbf{k}\)-algebra) is a (unital) differential k-algebra \((R,D)\) of weight \(\lambda\) and a Rota-Baxter operator \(P\) of weight \(\lambda\) such that \[D\circ P=\ \mathrm{id}\.\]
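A classical example, for weight \(\lambda=0\), is the algebra \(R=C^{\infty}(\mathbb{R})\) with \(D(f)=f^{\prime}\) and \(P(f)(x)=\int_{0}^{x}f(t)\,dt\): the Leibniz rule makes \((R,D)\) a differential algebra of weight \(0\), the integration-by-parts formula \[P(f)P(g)=P\big{(}fP(g)\big{)}+P\big{(}P(f)g\big{)}\] makes \(P\) a Rota-Baxter operator of weight \(0\), and the fundamental theorem of calculus gives \(D\circ P=\mathrm{id}\) (note that \(P\circ D\neq\mathrm{id}\), since \(D\) kills constants).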
When we consider free differential Rota-Baxter algebras over algebras, it turns out that the traditional order (see [2]) does not meet the conditions of Theorems 2.12 and 2.13. This is the motivation for the new monomial orders \(\leq_{\mathrm{PD}}\) and \(\leq_{\mathrm{uPD}}\) introduced in Section 1.3.
### Case of nonunital algebras with \(\lambda\neq 0\)
Assume in this subsection that \(\lambda\neq 0\). Denote
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\),
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy)\),
3. \(\phi_{3}(x)=D(P(x))-x\).
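Note that \[\phi_{2}(x,y)=-\lambda^{-1}\big{(}D(xy)-D(x)y-xD(y)-\lambda D(x)D(y)\big{)},\] so a \(\phi_{2}\)-algebra is exactly a \(\lambda\)-differential algebra; the rescaling by \(-\lambda^{-1}\) merely rewrites the defining identity so that, as shown in Proposition 3.2 below, its leading monomial becomes \(D(u)D(v)\) instead of \(D(uv)\).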
We first consider nonunital free differential Rota-Baxter algebras on algebras.
**Proposition 3.2**.: _For any \(u,v\in\mathfrak{S}_{\Omega}(Z)\), the leading monomials of \(\phi_{1}(u,v)\), \(\phi_{2}(u,v)\) and \(\phi_{3}(u)\) under \(\leq_{\mathrm{PD}}\) are respectively \(P(u)P(v),D(u)D(v)\) and \(D(P(u))\)._
Proof.: Let \(u_{1},\cdots,u_{n}\) and \(v_{1},\cdots,v_{m}\) be all the elements of \(Z\) occurring in \(u\) and \(v\) from left to right.
For \(\phi_{1}(u,v)=P(u)P(v)-P(uP(v))-P(P(u)v)-\lambda P(uv)\), the \(\deg_{P}\) of \(P(uv)\) is smaller than that of the other three terms, while the \(\deg_{D},\deg_{Z}\) and \(\deg_{GD}\) of these three terms coincide. And one can see
\[\deg_{GP}(P(u)P(v))-\deg_{GP}(P(uP(v)))=m>0,\] \[\deg_{GP}(P(u)P(v))-\deg_{GP}(P(P(u)v))=n>0,\]
so the leading monomial of \(\phi_{1}(u,v)\) is \(P(u)P(v)\).
The statements about \(\phi_{2}(u,v)\) and \(\phi_{3}(u)\) are obvious by comparing \(\deg_{D}\).
Now let
\[\Phi_{\mathsf{DRB}}{}^{\prime}:=\{\phi_{1}(x,y),\phi_{2}(x,y),\phi_{3}(x)\}\,.\]
However, \(\Phi_{\mathsf{DRB}}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
**Example 3.3**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[f = \phi_{2}(P(u),v)=D(P(u))D(v)+\lambda^{-1}D(P(u))v+\lambda^{-1}P( u)D(v)-\lambda^{-1}D(P(u)v),\] \[g = \phi_{3}(u)=D(P(u))-u,\] \[q = \star D(v),\] \[p = D(P(u))D(v)=\bar{f}=\,q|_{\bar{g}}\,.\]
Then
\[(f,g)_{p}^{q}=f-\,q|_{g}\equiv\lambda^{-1}(P(u)D(v)-D(P(u)v)+uv+\lambda uD(v)).\]
Let
\[\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y).\]
It is clear that the leading monomial of \(\phi_{4}(u,v)\) is \(P(u)D(v)\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
**Example 3.4**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[f = \phi_{2}(u,P(v))=D(u)D(P(v))+\lambda^{-1}D(u)P(v)+\lambda^{-1}uD (P(v))-\lambda^{-1}D(uP(v)),\] \[g = \phi_{3}(v)=D(P(v))-v,\] \[q = D(u)\star,\] \[p = D(u)D(P(v))=\bar{f}=\,q|_{\bar{g}}\,.\]
Then
\[(f,g)_{p}^{q}=f-\,q|_{g}\equiv\lambda^{-1}(D(u)P(v)-D(uP(v))+uv+\lambda D(u)v).\]
Let
\[\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y.\]
It is clear that the leading monomial of \(\phi_{5}(u,v)\) is \(D(u)P(v)\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
Now denote \(\Phi_{\mathrm{DRB}}\) to be the set of the following OPIs:
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\),
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy)\),
3. \(\phi_{3}(x)=D(P(x))-x\),
4. \(\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)\),
5. \(\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y\).
It is obvious that \(\left\langle S_{\Phi_{\mathsf{DRB}}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}=\left\langle S_{\Phi_{\mathsf{DRB}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) for each set \(Z\).
Next we will show that \(\Phi_{\mathrm{DRB}}\) is \(\Omega\)-GS with respect to \(\leq_{\mathrm{PD}}\). Before that, we need the following lemma to simplify our proof.
**Lemma 3.5**.: _Let \(\phi(x_{1},\ldots,x_{n})\) and \(\psi(y_{1},\ldots,y_{m})\) be two OPIs. Let \(Z\) be a set. Suppose that, for any \(u_{1},\ldots,u_{n},v_{1},\ldots,v_{m}\in\mathfrak{S}_{\Omega}(Z)\), the leading monomial of \(\phi(u_{1},\ldots,u_{n})\) is \(\bar{\phi}(u_{1},\ldots,u_{n})\) and leading monomial of \(\psi(v_{1},\ldots,v_{m})\) is \(\bar{\psi}(v_{1},\ldots,v_{m})\)._
_Now write \(f=\phi(u_{1},\ldots,u_{n})\) and \(g=\psi(v_{1},\ldots,v_{m})\) for fixed \(u_{1},\ldots,u_{n}\), \(v_{1},\ldots,v_{m}\in\mathfrak{S}_{\Omega}(Z)\). If there exist \(i\)\((1\leq i\leq n)\) and \(r\in\mathfrak{S}_{\Omega}{}^{\star}(Z)\) such that \(u_{i}=r|_{\bar{g}}\), then the inclusion composition \((f,g)_{p}^{q}=f-q|_{g}\) with \(p=\bar{f}\) and \(q=\bar{\phi}(u_{1},\ldots,u_{i-1},r,u_{i+1},\ldots,u_{n})\) is trivial modulo \((S_{\{\phi,\psi\}}(Z),p)\). We call this type of inclusion composition a complete inclusion composition._
Proof.: The assertion follows from
\[(f,g)_{p}^{q} =f-q|_{g}\] \[=(\phi-\bar{\phi})(u_{1},\ldots,u_{i-1},r|_{\bar{g}},u_{i+1}, \ldots,u_{n})-\bar{\phi}(u_{1},\ldots,u_{i-1},r|_{g-\bar{g}},u_{i+1},\ldots,u_{ n})\] \[=(\phi-\bar{\phi})(u_{1},\ldots,u_{i-1},r|_{g},u_{i+1},\ldots,u_{ n})-\phi(u_{1},\ldots,u_{i-1},r|_{g-\bar{g}},u_{i+1},\ldots,u_{n}).\]
**Remark 3.6**.: Lemma 3.5 extends [6, Theorem 4.1(b)] to the case of multiple operators.
Now we can prove \(\Phi_{\mathrm{DRB}}\) is \(\Omega\)-GS.
**Theorem 3.7**.: \(\Phi_{\mathrm{DRB}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: We write \(i\wedge j\) for the composition of the OPIs \(\phi_{i}\) and \(\phi_{j}\), which means that \(\phi_{i}\) lies on the left and \(\phi_{j}\) on the right for an intersection composition, or that \(\phi_{j}\) is included in \(\phi_{i}\) for an inclusion composition. The ambiguities of all possible compositions in \(\Phi_{\mathsf{DRB}}\) are listed below: for arbitrary \(u,v,w\in\mathfrak{S}_{\Omega}(Z)\) and \(q\in\mathfrak{S}_{\Omega}{}^{\star}(Z)\),
\[\begin{array}{ll}
1\wedge 1&\underline{P(u)P(v)P(w)},\quad P\left(q|_{P(u)P(v)}\right)P(w),\quad P(u)P\left(q|_{P(v)P(w)}\right),\\
1\wedge 2&P\left(q|_{D(u)D(v)}\right)P(w),\quad P(u)P\left(q|_{D(v)D(w)}\right),\\
1\wedge 3&P\left(q|_{D(P(u))}\right)P(v),\quad P(u)P\left(q|_{D(P(v))}\right),\\
1\wedge 4&\underline{P(u)P(v)D(w)},\quad P\left(q|_{P(u)D(v)}\right)P(w),\quad P(u)P\left(q|_{P(v)D(w)}\right),\\
1\wedge 5&P\left(q|_{D(u)P(v)}\right)P(w),\quad P(u)P\left(q|_{D(v)P(w)}\right),\\
2\wedge 1&D\left(q|_{P(u)P(v)}\right)D(w),\quad D(u)D\left(q|_{P(v)P(w)}\right),\\
2\wedge 2&\underline{D(u)D(v)D(w)},\quad D\left(q|_{D(u)D(v)}\right)D(w),\quad D(u)D\left(q|_{D(v)D(w)}\right),\\
2\wedge 3&\underline{D(P(u))D(v)},\quad\underline{D(u)D(P(v))},\quad D\left(q|_{D(P(u))}\right)D(v),\quad D(u)D\left(q|_{D(P(v))}\right),\\
2\wedge 4&D\left(q|_{P(u)D(v)}\right)D(w),\quad D(u)D\left(q|_{P(v)D(w)}\right),\\
2\wedge 5&\underline{D(u)D(v)P(w)},\quad D\left(q|_{D(u)P(v)}\right)D(w),\quad D(u)D\left(q|_{D(v)P(w)}\right),\\
3\wedge 1&D\left(P\left(q|_{P(u)P(v)}\right)\right),\\
3\wedge 2&D\left(P\left(q|_{D(u)D(v)}\right)\right),\\
3\wedge 3&D\left(P\left(q|_{D(P(u))}\right)\right),\\
3\wedge 4&D\left(P\left(q|_{P(u)D(v)}\right)\right),\\
3\wedge 5&D\left(P\left(q|_{D(u)P(v)}\right)\right),\\
4\wedge 1&P\left(q|_{P(u)P(v)}\right)D(w),\quad P(u)D\left(q|_{P(v)P(w)}\right),\\
4\wedge 2&\underline{P(u)D(v)D(w)},\quad P\left(q|_{D(u)D(v)}\right)D(w),\quad P(u)D\left(q|_{D(v)D(w)}\right),\\
4\wedge 3&\underline{P(u)D(P(v))},\quad P\left(q|_{D(P(u))}\right)D(v),\quad P(u)D\left(q|_{D(P(v))}\right),\\
4\wedge 4&P\left(q|_{P(u)D(v)}\right)D(w),\quad P(u)D\left(q|_{P(v)D(w)}\right),\\
4\wedge 5&\underline{P(u)D(v)P(w)},\quad P\left(q|_{D(u)P(v)}\right)D(w),\quad P(u)D\left(q|_{D(v)P(w)}\right),\\
5\wedge 1&\underline{D(u)P(v)P(w)},\quad D\left(q|_{P(u)P(v)}\right)P(w),\quad D(u)P\left(q|_{P(v)P(w)}\right),\\
5\wedge 2&D\left(q|_{D(u)D(v)}\right)P(w),\quad D(u)P\left(q|_{D(v)D(w)}\right),\\
5\wedge 3&\underline{D(P(u))P(v)},\quad D\left(q|_{D(P(u))}\right)P(v),\quad D(u)P\left(q|_{D(P(v))}\right),\\
5\wedge 4&\underline{D(u)P(v)D(w)},\quad D\left(q|_{P(u)D(v)}\right)P(w),\quad D(u)P\left(q|_{P(v)D(w)}\right),\\
5\wedge 5&D\left(q|_{D(u)P(v)}\right)P(w),\quad D(u)P\left(q|_{D(v)P(w)}\right).
\end{array}\]
Notice that all the compositions above except the underlined ones can be dealt with by Lemma 3.5. It remains to consider the underlined compositions. We only give the complete proof for the case \(5\wedge 1\), the other cases being similar. For the case \(5\wedge 1\), write \(f=\phi_{5}(u,v)\), \(g=\phi_{1}(v,w)\) and \(p=D(u)P(v)P(w)\). So we have
\[(f,g)_{p}^{P(w),D(u)} =-D(uP(v))P(w)+uvP(w)+\lambda D(u)vP(w)\] \[\quad+D(u)P(vP(w))+D(u)P(P(v)w)+\lambda D(u)P(vw)\] \[\equiv-D(uP(v)P(w))+uP(v)w+\lambda D(uP(v))w+uvP(w)+\lambda D(u)vP (w)\] \[\quad+D(uP(P(v)w))-uP(v)w-\lambda D(u)P(v)w\] \[\quad+D(uP(vP(w)))-uvP(w)-\lambda D(u)vP(w)\] \[\quad+\lambda D(uP(vw))-\lambda uvw-\lambda^{2}D(u)vw\] \[\equiv-D(uP(v)P(w))+D(uP(vP(w)))+D(uP(P(v)w))+\lambda D(uP(vw))\] \[\quad-\lambda D(u)P(v)w+\lambda D(uP(v))w-\lambda uvw-\lambda^{2 }D(u)vw\] \[=-D(u\phi_{1}(v,w))-\lambda\phi_{5}(u,v)w\] \[\equiv 0\ \mathrm{mod}\ \left(S_{\phi_{\mathrm{DRB}}}(Z),p\right).\]
We are done.
**Theorem 3.8**.: _Let \(Z\) be a set, \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}\text{-}\mathfrak{Alg}_{\Omega}}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{DRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{DRB}}}(Z)\cup G\) is an operated GS basis of \(\langle S_{\Phi_{\mathsf{DRB}}}(Z)\cup I_{A}\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: Since the leading monomials of the OPIs in \(\Phi_{\mathsf{DRB}}\) have no subword in \(\mathcal{S}(X)\backslash X\), the result follows immediately from Theorem 3.7 and Theorem 2.13.
As a consequence, we obtain a linear basis.
**Theorem 3.9**.: _Let \(Z\) be a set, \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\rm{dlex}}\). Then the set \({\rm{Irr}}(S_{\Phi_{\sf{DRB}}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)D(v)},q|_{D(P(u))},q|_{P(u)D(v)},q|_{D(u) P(v)},\,s\in G,q\in\mathfrak{S}_{\Omega}^{\star}(Z),u,v\in\mathfrak{S}_{ \Omega}(Z)\}\]
_in \(\mathfrak{S}_{\Omega}(Z)\) is a linear basis of the free nonunital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}\text{-}\mathfrak{Alg}_{\Omega}}(A)\) over \(A\)._
Proof.: It follows directly from Theorem 2.8.
**Remark 3.10**.: Since the monomial order used in [2] does not satisfy the conditions of Theorem 2.13, we have to make use of a new monomial order when treating free differential Rota-Baxter algebras over an algebra. In fact, since the leading monomials are different, even for free differential Rota-Baxter algebras over a field, our monomial order provides a new operated GS basis and a new linear basis.
### Case of nonunital algebras with \(\lambda=0\)
Now we consider nonunital free differential Rota-Baxter algebras on algebras with \(\lambda=0\). This case can be studied similarly to the case \(\lambda\neq 0\), so we omit the details in this subsection.
Denote \(\phi_{1}(x,y)\) with \(\lambda=0\) by \(\phi_{1}^{0}(x,y)\). Let
\[\phi_{2}^{0}(x,y)=D(x)y+xD(y)-D(xy).\]
We also write \(\phi_{3}^{0}(x)=\phi_{3}(x)\) for convenience.
**Proposition 3.11**.: _For any \(u,v\in\mathfrak{S}_{\Omega}(Z)\), the leading monomials of \(\phi_{1}^{0}(u,v)\), \(\phi_{2}^{0}(u,v)\) and \(\phi_{3}^{0}(u)\) with respect to \(\leq_{\rm{PD}}\) are \(P(u)P(v),D(u)v\) and \(D(P(u))\) respectively._
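Indeed, the three terms of \(\phi_{2}^{0}(u,v)\) share the same \(\deg_{D}\), \(\deg_{P}\) and \(\deg_{Z}\), while \(\deg_{GD}(D(u)v)\) exceeds \(\deg_{GD}(uD(v))\) and \(\deg_{GD}(D(uv))\) by the number of elements of \(Z\) occurring in \(v\); the assertions on \(\phi_{1}^{0}(u,v)\) and \(\phi_{3}^{0}(u)\) are proved as in Proposition 3.2.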
Let
\[\Phi_{\sf{DRB}}^{0}{}^{\prime}:=\left\{\,\phi_{1}^{0}(x,y),\phi_{2}^{0}(x,y), \phi_{3}^{0}(x)\right\}.\]
By the following example, one can see that \(\Phi_{\sf{DRB}}^{0}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\rm{PD}}\).
**Example 3.12**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{2}^{0}(P(u),v)=D(P(u))v+P(u)D(v)-D(P(u)v),\\ g&=&\phi_{3}^{0}(u)=D(P(u))-u,\\ q&=&\star v,\\ p&=&D(P(u))v=\bar{f}=\,q|_{\bar{g}}\,.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\,q|_{g}\equiv P(u)D(v)-D(P(u)v)+uv.\]
Let
\[\phi_{4}^{0}(x,y)=P(x)D(y)-D(P(x)y)+xy.\]
It is clear that the leading monomial of \(\phi_{4}^{0}(u,v)\) with \(u,v\in\mathfrak{S}_{\Omega}(Z)\) is \(P(u)D(v)\) with respect to \(\leq_{\rm{PD}}\) which cannot be reduced further.
Now denote \(\Phi_{\sf{DRB}}^{0}\) to be the set of the following OPIs:
1. \(\phi_{1}^{0}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)\),
2. \(\phi_{2}^{0}(x,y)=D(x)y+xD(y)-D(xy)\),
3. \(\phi_{3}^{0}(x)=D(P(x))-x\),
4. \(\phi_{4}^{0}(x,y)=P(x)D(y)-D(P(x)y)+xy\).
It is obvious that \(\left\langle S_{\Phi_{\mathsf{DRB}}^{0}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}=\left\langle S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) for an arbitrary set \(Z\).
Similar to the case \(\lambda\neq 0\), it can be proved that \(\Phi_{\mathsf{DRB}}^{0}\) is \(\Omega\)-GS with respect to \(\leq_{\mathrm{PD}}\).
**Remark 3.13**.: Note that \(\phi_{4}^{0}(x,y)\) is just \(\phi_{4}(x,y)\) with \(\lambda=0\), and for \(u,v\in\mathfrak{S}_{\Omega}(Z)\)
\[\phi_{2}^{0}(u,P(v))=D(u)P(v)+uD(P(v))-D(uP(v))\equiv D(u)P(v)+uv-D(uP(v)),\]
which is exactly \(\phi_{5}(u,v)\) with \(\lambda=0\). So \(\phi_{5}(x,y)\) (\(\lambda=0\)) does not appear in \(\Phi_{\mathsf{DRB}}^{0}\).
**Theorem 3.14**.: \(\Phi_{\mathsf{DRB}}^{0}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: As in the proof of Theorem 3.7, we write \(i\wedge j\) for the composition of the OPIs \(\phi_{i}\) and \(\phi_{j}\). There are two kinds of ambiguities of possible compositions in \(\Phi_{\mathsf{DRB}}^{0}\). Since \(\phi_{1}^{0}(x,y)\), \(\phi_{3}^{0}(x)\), and \(\phi_{4}^{0}(x,y)\) have the same leading monomials as in the case \(\lambda\neq 0\), the corresponding ambiguities \(i\wedge j\) with \(i,j\in\{1,3,4\}\) are the same as in the proof of Theorem 3.7. Since \(\phi_{2}^{0}(x,y)\) has a different leading monomial, the ambiguities of the case \(i\wedge j\) with \(i=2\) or \(j=2\) are the following: for arbitrary \(u,v,w\in\mathfrak{S}_{\Omega}(Z)\), \(q\in\mathfrak{S}_{\Omega}{}^{\star}(Z)\) and \(s\in\mathfrak{S}_{\Omega}(Z)\) or \(s\) empty,
\[\begin{array}{ll}
1\wedge 2&P\left(q|_{D(u)v}\right)P(w),\quad P(u)P\left(q|_{D(v)w}\right);\\
2\wedge 1&\underline{D(u)sP(v)P(w)},\quad D\left(q|_{P(u)P(v)}\right)w,\quad D(u)\,q|_{P(v)P(w)};\\
2\wedge 2&\underline{D(u)sD(v)w},\quad D\left(q|_{D(u)v}\right)w,\quad D(u)\,q|_{D(v)w};\\
2\wedge 3&\underline{D(P(u))v},\quad D\left(q|_{D(P(u))}\right)v,\quad D(u)\,q|_{D(P(v))};\\
2\wedge 4&\underline{D(u)sP(v)D(w)},\quad D\left(q|_{P(u)D(v)}\right)w,\quad D(u)\,q|_{P(v)D(w)};\\
3\wedge 2&D\left(P\left(q|_{D(u)v}\right)\right);\\
4\wedge 2&\underline{P(u)D(v)w},\quad P\left(q|_{D(u)v}\right)D(w),\quad P(u)D\left(q|_{D(v)w}\right).
\end{array}\]
Almost all the cases can be treated similarly as in the proof of Theorem 3.7, except a slight difference in the case \(2\wedge 2\). In fact, let \(f=\phi_{2}^{0}(u,sD(v))\), \(g=\phi_{2}^{0}(v,w)\) and \(p=D(u)sD(v)w\). So we have
\[(f,g)_{p}^{D(u)s,D(w)} =uD(sD(v))w-D(usD(v))w-D(u)svD(w)+D(u)sD(vw)\] \[\equiv-usD(v)D(w)+uD(sD(v)w)+usD(v)D(w)-D(usD(v)w)\] \[\quad+uD(svD(w))-D(usvD(w))-uD(sD(vw))+D(usD(vw))\] \[=uD(s\phi_{2}^{0}(v,w))-D(us\phi_{2}^{0}(v,w))\] \[\equiv 0\ \mathrm{mod}\left(S_{\phi_{\mathsf{DRB}}^{0}}(Z),p \right).\]
We are done.
**Theorem 3.15**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}^{0}\text{-}\mathfrak{Alg}_{\Omega}}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
As a consequence, we obtain a linear basis.
**Theorem 3.16**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},\,s\in G,q\in\mathfrak{S}_{\Omega}^{\star}(Z),u,v\in\mathfrak{S}_{\Omega}(Z)\}\]
_in \(\mathfrak{S}_{\Omega}(Z)\) is a linear basis of the free nonunital \(0\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}^{0}\text{-}\mathfrak{Alg}_{\Omega}}(A)\) over \(A\)._
### Case of unital algebras
Now we consider unital differential Rota-Baxter algebras. Since the proofs are similar to those in the previous subsections, we omit most of them. The study is still divided into the cases \(\lambda\neq 0\) and \(\lambda=0\).
When \(\lambda\neq 0\), since unital differential Rota-Baxter algebras satisfy the additional condition \(D(1)=0\), put \(\Phi_{\mathsf{UDRB}}\) to be the union of \(\Phi_{\mathsf{DRB}}\) with \(\{D(1)\}\), where, by abuse of notation, the variables \(x,y\) in \(\Phi_{\mathsf{DRB}}\) now take their values in \(\mathfrak{M}_{\Omega}(Z)\) instead of \(\mathfrak{S}_{\Omega}(Z)\).
**Remark 3.17**.: We have:
\[\left\{\begin{array}{ll}\phi_{2}(u,v)\equiv 0&\text{when $u=1$ or $v=1$;}\\ \phi_{4}(u,v)\equiv-D(P(u))+u=-\phi_{3}(u)&\text{when $v=1$;}\\ \phi_{5}(u,v)\equiv-D(P(v))+v=-\phi_{3}(v)&\text{when $u=1$.}\end{array}\right.\]
So adding the unity \(1\) produces no new OPIs. Moreover, it is clear that, except in the above cases, the leading monomials of the OPIs in \(\Phi_{\mathsf{DRB}}\) are the same with respect to \(\leq_{\mathrm{PD}}\) and \(\leq_{\mathrm{uPD}}\) by Proposition 1.20.
With similar proofs as in Subsection 3.1, we can prove the following results.
**Theorem 3.18**.: \(\Phi_{\mathsf{UDRB}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathsf{uPD}}\)._
**Theorem 3.19**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{uAlg}}^{\Phi_{\mathsf{UDRB}}\text{-}\mathfrak{uAlg}_{\Omega}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{UDRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{UDRB}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathsf{UDRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 3.20**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi_{\mathsf{UDRB}}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)D(v)},q|_{D(P(u))},q|_{P(u)D(v)},q|_{D(u)P(v)},q|_{D(1)},\,s\in G,q\in\mathfrak{M}_{\Omega}^{\star}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\}\]
_in \(\mathfrak{M}_{\Omega}(Z)\) is a linear basis of the free unital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{uAlg}}^{\Phi_{\mathsf{UDRB}}\text{-}\mathfrak{uAlg}_{\Omega}}(A)\) over \(A\)._
When \(\lambda=0\), denote \(\Phi_{\mathsf{UDRB}}^{0}:=\Phi_{\mathsf{DRB}}^{0}\) (again by abuse of notation, \(\Phi_{\mathsf{DRB}}^{0}\) is understood so that the variables \(x,y\) take their values in \(\mathfrak{M}_{\Omega}(Z)\) instead of \(\mathfrak{S}_{\Omega}(Z)\)).
**Remark 3.21**.: In \(\Phi_{\mathsf{UDRB}}^{0}\), we have
\[\phi_{2}^{0}(1,1)=D(1)+D(1)-D(1)=D(1),\]
so it is not necessary to add \(D(1)\) into \(\Phi_{\mathsf{UDRB}}^{0}\).
Note that \(\phi_{4}^{0}(u,1)\equiv-D(P(u))+u=-\phi_{3}^{0}(u)\), so adding the unity \(1\) will not induce any new OPI.
By using similar proofs in Subsection 3.2, one can show the following results.
**Theorem 3.22**.: \(\Phi_{\mathsf{UDRB}}^{0}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 3.23**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{uAlg}}^{\Phi_{\mathsf{UDRB}}^{0}\text{-}\mathfrak{uAlg}_{\Omega}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{UDRB}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{UDRB}}^{0}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathsf{UDRB}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 3.24**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi_{\mathsf{UDRB}}^{0}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},s\in G,q\in \mathfrak{M}^{\star}_{\Omega}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\}\]
_in \(\mathfrak{M}_{\Omega}(Z)\) is a linear basis of the free unital \(0\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{uAlg}}^{\Phi_{\mathsf{UDRB}}^{0}\text{-}\mathfrak{uAlg}_{\Omega}}(A)\) over \(A\)._
So far, we have completed the study of differential Rota-Baxter algebras.
## 4. Free integro-differential algebras over algebras
In this section, we carry out the study of GS bases of free integro-differential algebras over algebras. It turns out that integro-differential algebras can be investigated by a method similar to that for differential Rota-Baxter algebras, although the details are more involved.
We first recall the definition of integro-differential algebras.
**Definition 4.1**.: Let \(\lambda\in\mathbf{k}\). An integro-differential \(\mathbf{k}\)-algebra of weight \(\lambda\) (also called a \(\lambda\)-integro-differential \(\mathbf{k}\)-algebra) is a differential \(\mathbf{k}\)-algebra \((R,D)\) of weight \(\lambda\) with a linear operator \(P:R\to R\) which satisfies (c) in Definition 3.1:
\[D\circ P=\ \mathrm{id}\,\]
and such that
\[\begin{array}{ll}P(D(u)P(v))=uP(v)-P(uv)-\lambda P(D(u)v)&\text{for all $u,v\in R$},\\ P(P(u)D(v))=P(u)v-P(uv)-\lambda P(uD(v))&\text{for all $u,v\in R$}.\end{array}\]
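For weight \(\lambda=0\), the pair \(D=d/dx\) and \(P=\int_{0}^{x}\) on \(C^{\infty}(\mathbb{R})\) again provides the prototypical example: the two identities above are precisely the two forms of integration by parts, for instance
\[\int_{0}^{x}u^{\prime}(t)\Big{(}\int_{0}^{t}v\Big{)}dt=u(x)\int_{0}^{x}v-\int_{0}^{x}u(t)v(t)\,dt,\]
together with its companion in which the roles of the two factors are exchanged.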
Recall that
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\),
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy)\),
3. \(\phi_{3}(x)=D(P(x))-x\),
4. \(\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)\),
5. \(\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y\),
and denote
1. \(\phi_{6}(x,y)=P(D(x)P(y))-xP(y)+P(xy)+\lambda P(D(x)y)\),
2. \(\phi_{7}(x,y)=P(P(x)D(y))-P(x)y+P(xy)+\lambda P(xD(y))\).
Notice that for \(u,v\in\mathfrak{S}_{\Omega}(Z)\), since \(P(D(u)P(v))\) (resp. \(P(P(u)D(v))\)) has the largest \(\deg_{D}\) and, among the terms of maximal \(\deg_{D}\), the largest \(\deg_{P}\) in \(\phi_{6}(u,v)\) (resp. \(\phi_{7}(u,v)\)), the leading monomial of \(\phi_{6}(u,v)\) (resp. \(\phi_{7}(u,v)\)) with respect to \(\leq_{\mathrm{PD}}\) is \(P(D(u)P(v))\) (resp. \(P(P(u)D(v))\)).
### Case of nonunital algebras with \(\lambda\neq 0\)
Assume in this subsection that \(\lambda\neq 0\). We first consider nonunital free integro-differential \(\mathbf{k}\)-algebras over algebras.
According to the definition of integro-differential algebras, define
\[\Phi_{\mathsf{ID}}{}^{\prime}:=\left\{\ \phi_{2}(x,y),\phi_{3}(x),\phi_{6}(x,y), \phi_{7}(x,y)\ \right\}.\]
By Example 3.3, Example 3.4, Example 4.2 and Example 4.3, \(\Phi_{\mathsf{ID}}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
**Example 4.2**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{7}(u,v)=P(P(u)D(v))-P(u)v+P(uv)+\lambda P(uD(v)), \\ g&=&\phi_{4}(u,v)=P(u)D(v)-D(P(u)v)+uv+\lambda uD(v),\\ q&=&P(\star),\\ p&=&P(P(u)D(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\left.q\right|_{g}\equiv-P(D(P(u)v))+P(u)v.\]
Let
\[\phi_{8}(x,y)=P(D(P(x)y))-P(x)y.\]
It is clear that the leading monomial of \(\phi_{8}(u,v)\) is \(P(D(P(u)v))\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
**Example 4.3**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{6}(u,v)=P(D(u)P(v))-uP(v)+P(uv)+\lambda P(D(u)v), \\ g&=&\phi_{5}(u,v)=D(u)P(v)-D(uP(v))+uv+\lambda D(u)v,\\ q&=&P(\star),\\ p&=&P(D(u)P(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\left.q\right|_{g}\equiv-P(D(uP(v)))+uP(v).\]
Let
\[\phi_{9}(x,y)=P(D(xP(y)))-xP(y).\]
It is clear that the leading monomial of \(\phi_{9}(u,v)\) is \(P(D(uP(v)))\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
**Remark 4.4**.: Note that the OPI \(\phi_{1}(x,y)\) can be induced by \(\phi_{3}(x)\) and \(\phi_{6}(x,y)\). So an integro-differential algebra can be seen as a differential Rota-Baxter algebra. Explicitly, for \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{6}(P(u),v)=P(D(P(u))P(v))-P(u)P(v)+P(P(u)v)+ \lambda P(D(P(u))v),\\ g&=&\phi_{3}(u)=D(P(u))-u,\\ q&=&P(\star P(v)),\\ p&=&P(D(P(u))P(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\left.q\right|_{g}\equiv P(u)P(v)-P(uP(v))-P(P(u)v)-\lambda P (uv)=\phi_{1}(u,v).\]
Now denote \(\Phi_{\mathsf{ID}}\) to be the set of the following OPIs:
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy),\)
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy),\)
3. \(\phi_{3}(x)=D(P(x))-x\),
4. \(\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)\),
5. \(\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y\),
6. \(\phi_{8}(x,y)=P(D(P(x)y))-P(x)y\),
7. \(\phi_{9}(x,y)=P(D(xP(y)))-xP(y)\).
Notice that \(\Phi_{\text{ID}}=\Phi_{\text{DRB}}\cup\{\phi_{8}(x,y),\phi_{9}(x,y)\}\).
**Proposition 4.5**.: \(\left\langle S_{\Phi_{\mathsf{ID}}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}=\left\langle S_{\Phi_{\mathsf{ID}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) _for each set \(Z\)._
Proof.: We first prove \(\left\langle S_{\Phi_{\mathsf{ID}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\subseteq\left\langle S_{\Phi_{\mathsf{ID}}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\), which follows from
\[\left\{\begin{array}{l}\phi_{1}(u,v)\in\left\langle\phi_{6}(P(u),v),\phi_{3}(u)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\text{ by Remark 4.4,}\\ \phi_{4}(u,v)\in\left\langle\phi_{2}(P(u),v),\phi_{3}(u)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\text{ by Example 3.3,}\\ \phi_{5}(u,v)\in\left\langle\phi_{2}(u,P(v)),\phi_{3}(v)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\text{ by Example 3.4,}\\ \phi_{8}(u,v)\in\left\langle\phi_{7}(u,v),\phi_{4}(u,v)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\text{ by Example 4.2,}\\ \phi_{9}(u,v)\in\left\langle\phi_{6}(u,v),\phi_{5}(u,v)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\text{ by Example 4.3,}\end{array}\right.\]
where \(u,v\in\mathfrak{S}_{\Omega}(Z)\).
Next we show \(\left\langle S_{\Phi_{\mathsf{ID}}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\subseteq\left\langle S_{\Phi_{\mathsf{ID}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\). Note that
\[P(\phi_{4}(u,v)) =P(P(u)D(v))-P(D(P(u)v))+P(uv)+\lambda P(uD(v))\] \[=\big{(}P(P(u)D(v))-P(u)v+P(uv)+\lambda P(uD(v))\big{)}-\big{(}P(D(P(u)v))-P(u)v\big{)}\] \[=\phi_{7}(u,v)-\phi_{8}(u,v),\]
and
\[P(\phi_{5}(u,v)) =P(D(u)P(v))-P(D(uP(v)))+P(uv)+\lambda P(D(u)v)\] \[=\big{(}P(D(u)P(v))-uP(v)+P(uv)+\lambda P(D(u)v)\big{)}-\big{(}P(D(uP(v)))-uP(v)\big{)}\] \[=\phi_{6}(u,v)-\phi_{9}(u,v).\]
So we have
\[\left\{\begin{array}{l}\phi_{6}(u,v)\in\left\langle\phi_{5}(u,v),\phi_{9}(u,v)\right\rangle_{\Omega\text{-}\mathfrak{Alg}},\\ \phi_{7}(u,v)\in\left\langle\phi_{4}(u,v),\phi_{8}(u,v)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\end{array}\right.\]
This proves \(\left\langle S_{\Phi_{\mathsf{ID}}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\subseteq\left\langle S_{\Phi_{\mathsf{ID}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\).
We are done.
Now we can prove \(\Phi_{\text{ID}}\) is \(\Omega\)-GS.
**Theorem 4.6**.: \(\Phi_{\text{ID}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\text{PD}}\)._
Proof.: Since the ambiguities \(i\wedge j\) with \(i,j=1,2,3,4,5\) in \(\Phi_{\mathrm{ID}}\) are the same as in Theorem 3.7, we only need to consider the ambiguities involving \(\phi_{8}\) and \(\phi_{9}\). The cases that cannot be dealt with directly by Lemma 3.5 are listed below: for arbitrary \(u,v,w\in\mathfrak{S}_{\Omega}(Z)\), \(q\in\mathfrak{S}_{\Omega}^{\star}(Z)\) and \(s\in\mathfrak{S}_{\Omega}(Z)\) or \(\emptyset\),
\[\begin{array}{ll}1\wedge 8&P(D(P(u)v))P(w),\quad P(u)P(D(P(v)w)),\\ 3\wedge 8&D(P(D(P(u)v))),\\ 4\wedge 8&P(D(P(u)v))D(w),\\ 5\wedge 8&D(u)P(D(P(v)w)),\\ 1\wedge 9&P(D(uP(v)))P(w),\quad P(u)P(D(vP(w))),\\ 3\wedge 9&D(P(D(uP(v)))),\\ 4\wedge 9&P(D(uP(v)))D(w),\\ 5\wedge 9&D(u)P(D(vP(w))),\end{array}\]
\[\begin{array}{ll}8\wedge 1&P(D(P(u)P(v)s)),\\ 8\wedge 4&P(D(P(u)D(v)s)),\\ 9\wedge 1&P(D(sP(u)P(v))),\\ 9\wedge 5&P(D(sD(u)P(v))),\\ 8\wedge 8&P(D(P(D(P(u)v))w)),\\ 8\wedge 9&P(D(P(D(uP(v)))w)),\\ 9\wedge 8&P(D(uP(D(P(v)w)))).\end{array}\]
All these compositions can be treated similarly as in the proof of Theorem 3.7. We only give the complete proof for the case \(8\wedge 1\). Take \(f=\phi_{8}(u,P(v)s)\), \(g=\phi_{1}(u,v)\), \(p=P(D(P(u)P(v)s))\) and \(q=P(D(\star s))\). Then we have
\[\begin{array}{rl}(f,g)_{p}^{q}&=-P(u)P(v)s+P(D(P(uP(v))s))+P(D(P(P(u)v)s))+\lambda P(D(P(uv)s))\\ &\equiv-P(uP(v))s-P(P(u)v)s-\lambda P(uv)s+P(uP(v))s+P(P(u)v)s+\lambda P(uv)s\\ &\equiv 0\ \mathrm{mod}\ (S_{\Phi_{\mathrm{ID}}}(Z),p).\end{array}\]
We are done.
**Theorem 4.7**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}^{\Phi_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathrm{ID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathrm{ID}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathrm{ID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: Since the leading monomials of the OPIs in \(\Phi_{\mathrm{ID}}\) have no subword in \(\mathcal{S}(Z)\backslash Z\), the result follows immediately from Theorem 4.6 and Theorem 2.13.
As a consequence, we obtain a linear basis.
**Theorem 4.8**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then a linear basis of the free nonunital \(\lambda\)-integro-differential algebra \(\mathcal{F}^{\Phi_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) over \(A\) is given by the set \(\mathrm{Irr}(S_{\Phi_{\mathrm{ID}}}(Z)\cup G)\), which is by definition the complement in \(\mathfrak{S}_{\Omega}(Z)\) of the subset consisting of \(q|_{w}\) where \(w\) runs through_
\[\bar{s},P(u)P(v),D(u)D(v),D(P(u)),P(u)D(v),D(u)P(v),P(D(P(u)v)),P(D(uP(v)))\]
_for arbitrary \(s\in G\), \(q\in\mathfrak{S}_{\Omega}^{\star}(Z),u,v\in\mathfrak{S}_{\Omega}(Z)\)._
Proof.: It can be induced directly from Theorem 2.8.
**Remark 4.9**.: Since the monomial order \(\leq_{\mathrm{PD}}\) is different from that used in [7], our operated GS basis and linear basis are different from theirs. The reason is that the monomial order in [7] does not satisfy the condition of Theorem 2.13, thus cannot enable us to discuss free integro-differential algebras over algebras.
**Remark 4.10**.: Define a new OPI \(\phi_{10}(x)=P(D(x))-x\), and let

\[\Phi_{\mathrm{IID}}=\{\,\phi_{1}(x,y),\phi_{2}(x,y),\phi_{3}(x),\phi_{10}(x)\,\}.\]

A \(\Phi_{\mathrm{IID}}\)-algebra is just a nonunital \(\lambda\)-integro-differential algebra in which the operators \(P\) and \(D\) are inverse to each other, so we call such an operated algebra an invertible integro-differential algebra. One can show that \(\Phi_{\mathrm{IID}}\cup\{\phi_{4}(x,y),\phi_{5}(x,y)\}\) is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
### Case of nonunital algebras with \(\lambda=0\)
Now we consider nonunital free integro-differential algebras over algebras with \(\lambda=0\). This case can be studied similarly to the case \(\lambda\neq 0\), so we omit the details in this subsection.

As in Subsection 3.2, for an OPI \(\phi\), we write \(\phi^{0}\) for \(\phi\) with \(\lambda=0\), and also write \(\phi^{0}=\phi\) when \(\lambda\) does not appear in \(\phi\), for convenience. Let
\[\Phi^{0}_{\mathsf{ID}}{}^{\prime}:=\left\{\,\phi^{0}_{2}(x,y),\phi^{0}_{3}(x), \phi^{0}_{6}(x,y),\phi^{0}_{7}(x,y)\right\}.\]
Again, \(\Phi^{0}_{\mathsf{ID}}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
**Remark 4.11**.: By Example 4.2, we can get \(\phi^{0}_{8}(u,v)\) from \(\phi^{0}_{4}(u,v)\) and \(\phi^{0}_{7}(u,v)\).
One can not obtain \(\phi^{0}_{9}(u,v)\) from \(S_{\Phi^{0}_{\mathsf{ID}}}{}^{\prime}(Z)\) as in Example 4.3, since \(\phi_{5}\) does not belong to \(\Phi^{0}_{\mathsf{ID}}{}^{\prime}\). However, we can still generate \(\phi^{0}_{9}(u,v)\) as follows: for \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi^{0}_{6}(u,v)=P(D(u)P(v))-uP(v)+P(uv),\\ g&=&\phi^{0}_{2}(u,P(v))=D(u)P(v)+uD(P(v))-D(uP(v)),\\ q&=&P(\star),\\ w&=&P(D(u)P(v))=\bar{f}=\,q|_{\bar{g}}\,.\end{array}\]
Then
\[(f,g)_{w}=f-\,q|_{g}\equiv P(D(uP(v)))-uP(v)=\phi^{0}_{9}(u,v).\]
Now denote \(\Phi^{0}_{\mathsf{ID}}\) to be the set of the following OPIs:
1. \(\phi^{0}_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)\),
2. \(\phi^{0}_{2}(x,y)=D(x)y+xD(y)-D(xy)\),
3. \(\phi^{0}_{3}(x)=D(P(x))-x\),
4. \(\phi^{0}_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy\),
5. \(\phi^{0}_{8}(x,y)=P(D(P(x)y))-P(x)y\),
6. \(\phi^{0}_{9}(x,y)=P(D(xP(y)))-xP(y)\).
As in the previous subsection, one can prove the following results.
**Proposition 4.12**.: \(\left\langle S_{\Phi^{0}_{\mathrm{ID}}{}'}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}=\left\langle S_{\Phi^{0}_{\mathrm{ID}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) _for any set \(Z\)._
**Theorem 4.13**.: \(\Phi^{0}_{\mathsf{ID}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
**Theorem 4.14**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_

\[\mathcal{F}^{\Phi^{0}_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi^{0}_{\mathrm{ID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\]

_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi^{0}_{\mathrm{ID}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi^{0}_{\mathrm{ID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
**Theorem 4.15**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi^{0}_{\mathrm{ID}}}(Z)\cup G)\), which is by definition the complement of_

\[\{\,q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},q|_{P(D(P(u)v))},q|_{P(D(uP(v)))},\ s\in G,q\in\mathfrak{S}^{\star}_{\Omega}(Z),u,v\in\mathfrak{S}_{\Omega}(Z)\,\}\]

_in \(\mathfrak{S}_{\Omega}(Z)\), is a linear basis of the free nonunital \(0\)-integro-differential algebra \(\mathcal{F}^{\Phi^{0}_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) over \(A\)._
### Case of unital algebras
Now we consider unital integro-differential algebras. Since the proofs are similar to those in the previous subsections, we omit most of them. The study still is divided into cases of \(\lambda\neq 0\) and \(\lambda=0\).
When \(\lambda\neq 0\), since unital integro-differential algebras have the condition \(D(1)=0\), we put \(\Phi_{\mathrm{uID}}:=\Phi_{\mathrm{ID}}\cup\{D(1)\}\).
**Theorem 4.16**.: \(\Phi_{\mathrm{uID}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.17**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}^{\Phi_{\mathrm{uID}}\text{-}\mathfrak{uAlg}}_{\mathfrak{uAlg}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathrm{uID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathrm{uID}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathrm{uID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.18**.: _Let \(Z\) be a set, \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then a linear basis of the free unital \(\lambda\)-integro-differential algebra \(\mathcal{F}^{\Phi_{\mathrm{uID}}\text{-}\mathfrak{uAlg}}_{\mathfrak{uAlg}}(A)\) over \(A\) is given by the set \(\mathrm{Irr}(S_{\Phi_{\mathrm{uID}}}(Z)\cup G)\), which is by definition the complement in \(\mathfrak{M}_{\Omega}(Z)\) of the subset consisting of \(q|_{w}\) where \(w\) runs through_
\[\bar{s},P(u)P(v),D(u)D(v),D(P(u)),P(u)D(v),D(u)P(v),P(D(P(u)v)),P(D(uP(v))),D(1)\]
_for arbitrary \(s\in G,q\in\mathfrak{M}_{\Omega}^{\star}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\)._
When \(\lambda=0\), denote \(\Phi^{0}_{\mathrm{uID}}:=\Phi^{0}_{\mathrm{ID}}\).
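Note that no analogue of the relation \(D(1)\) needs to be added in this case: when \(\lambda=0\), the identity \(D(1)=0\) is already forced by \(\phi^{0}_{2}\), since taking \(x=y=1\) gives

\[D(1)=D(1\cdot 1)=D(1)\cdot 1+1\cdot D(1)=2D(1),\]

whence \(D(1)=0\).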
**Theorem 4.19**.: \(\Phi^{0}_{\mathrm{uID}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.20**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}^{\Phi^{0}_{\mathrm{uID}}\text{-}\mathfrak{uAlg}}_{\mathfrak{uAlg}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi^{0}_{\mathrm{uID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi^{0}_{\mathrm{uID}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi^{0}_{\mathrm{uID}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.21**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi^{0}_{\mathrm{uID}}}(Z)\cup G)\), which is by definition the complement of_
\[\{\,q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},q|_{P(D(P(u)v))},q|_{P(D(uP(v)))},\ s\in G,q\in\mathfrak{M}_{\Omega}^{\star}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\,\}\]
_in \(\mathfrak{M}_{\Omega}(Z)\), is a linear basis of the free unital \(0\)-integro-differential algebra \(\mathcal{F}^{\Phi^{0}_{\mathrm{uID}}\text{-}\mathfrak{uAlg}}_{\mathfrak{uAlg}}(A)\) over \(A\)._
### Differential Rota-Baxter algebras vs integro-differential algebras
Since integro-differential algebras have one more defining relation than differential Rota-Baxter algebras, by Proposition 2.10 the free integro-differential algebra over an algebra \(A\) is, in general, a quotient of the free differential Rota-Baxter algebra over \(A\). However, by using the descriptions of \(\Phi_{\mathrm{DRB}}\) and \(\Phi_{\mathrm{ID}}\) together with Theorems 3.7 and 4.6, we can also show that the former is a differential Rota-Baxter subalgebra of the latter.
**Theorem 4.22**.: _The free nonunital \(\lambda\)-integro-differential algebra \(\mathcal{F}^{\Phi_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) over an algebra \(A\) is a differential Rota-Baxter subalgebra of the free nonunital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}^{\Phi_{\mathrm{DRB}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) over \(A\)._
Proof.: We have the observation mentioned before
\[\Phi_{\text{ID}}=\Phi_{\text{DRB}}\cup\{\phi_{8}(x,y),\phi_{9}(x,y)\}.\]
That is to say, the operated Gröbner-Shirshov basis of the free nonunital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}^{\Phi_{\mathrm{DRB}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) over an algebra \(A\) is a subset of that of the free nonunital \(\lambda\)-integro-differential algebra \(\mathcal{F}^{\Phi_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) over \(A\). So by the Diamond Lemma, \(\mathcal{F}^{\Phi_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) is a subspace of \(\mathcal{F}^{\Phi_{\mathrm{DRB}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\). It is clear that \(\mathcal{F}^{\Phi_{\mathrm{ID}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\) is also a differential Rota-Baxter subalgebra of \(\mathcal{F}^{\Phi_{\mathrm{DRB}}\text{-}\mathfrak{Alg}}_{\mathfrak{Alg}}(A)\).
**Remark 4.23**.: Gao and Guo [7] also studied GS bases of the free integro-differential algebras and free differential Rota-Baxter algebra both generated by sets, and they deduced that the free integro-differential algebra generated by a set is a subalgebra of the free differential Rota-Baxter algebra generated by the same set. Theorem 4.22 proves an analogous fact for these free algebras generated by algebras. However, our method is completely different from theirs.
**Remark 4.24**.: By using the descriptions of \(\Phi_{\text{DRB}}^{0}\) and \(\Phi_{\text{ID}}^{0}\) (resp. \(\Phi_{\text{uDRB}}\) and \(\Phi_{\text{uID}}\), \(\Phi_{\text{uDRB}}^{0}\) and \(\Phi_{\text{uID}}^{0}\)) and Theorems 3.14 and 4.13 (resp. Theorems 3.18 and 4.16, Theorems 3.22 and 4.19), we always have the same result in both unital and nonunital cases with any \(\lambda\) (zero or nonzero).
**Acknowledgements:** The authors were supported by NSFC (No. 11771085, 12071137) and by STCSM (22DZ2229014).
Recently, operated algebras have attracted much attention for their ability to unify various notions such as differential algebras and Rota-Baxter algebras. An \(\Omega\)-operated algebra is an (associative) algebra equipped with linear operators indexed by \(\Omega\), which may satisfy certain operator identities such as the Leibniz rule. In analogy with free algebras, one can construct the free \(\Omega\)-operated algebra \(B\) generated over an algebra \(A\); when \(A\) has a Gr\"{o}bner-Shirshov basis \(G\) and the operators \(\Omega\) satisfy the operator identities in \(\Phi\), it is natural to ask whether \(G\cup\Phi\) is a Gr\"{o}bner-Shirshov basis of \(B\).
2309.06557 | Unsupervised Bias Detection in College Student Newspapers | This paper presents a pipeline with minimal human influence for scraping and
detecting bias on college newspaper archives. This paper introduces a framework
for scraping complex archive sites that automated tools fail to grab data from,
and subsequently generates a dataset of 14 student papers with 23,154 entries.
This data can also then be queried by keyword to calculate bias by comparing
the sentiment of a large language model summary to the original article. The
advantages of this approach are that it is less comparative than reconstruction
bias and requires less labelled data than generating keyword sentiment. Results
are calculated on politically charged words as well as control words to show
how conclusions can be drawn. The complete method facilitates the extraction of
nuanced insights with minimal assumptions and categorizations, paving the way
for a more objective understanding of bias within student newspaper sources. | Adam M. Lehavi, William McCormack, Noah Kornfeld, Solomon Glazer | 2023-09-11T06:51:09 | http://arxiv.org/abs/2309.06557v1 | # Unsupervised Bias Detection in College Student Newspapers
###### Abstract
This paper presents a pipeline with minimal human influence for scraping and detecting bias on college newspaper archives. This paper introduces a framework for scraping complex archive sites that automated tools fail to grab data from, and subsequently generates a dataset of 14 student papers with 23,154 entries. This data can also then be queried by keyword to calculate bias by comparing the sentiment of a large language model summary to the original article. The advantages of this approach are that it is less comparative than reconstruction bias and requires less labelled data than generating keyword sentiment. Results are calculated on politically charged words as well as control words to show how conclusions can be drawn. The complete method facilitates the extraction of nuanced insights with minimal assumptions and categorizations, paving the way for a more objective understanding of bias within student newspaper sources.
## 1 Introduction
In a world filled with so much information, being able to automatically get and understand data can save countless hours. Across various fields of study, data scraping and sentiment analysis can enable researchers to reveal nuanced patterns for meaningful insights [1, 13]. Sentiment analysis, or using machine learning to discern text's tone and emotion, can be used for financial and social benefit. In the public health sector, sentiment analysis of real-time tweets was shown to detect and localize Covid-19 outbreaks [1].
In the field of media study, detecting media bias has broad implications in understanding what source to trust [1]. Being able to do so in an automated and streamlined manner could allow for both the monitoring and ranking of existing media as well as scoring for future content to improve its credibility.
Despite the noteworthy progress made in these disciplines, several obstacles remain. A considerable portion of automated extraction methods grapple with cases of multi-tiered content extraction, particularly when from voluminous media archives [1]. Simultaneously, past bias identification strategies frequently depend on labeled and grouped data to either highlight how media outlets differ from one another, or what outlets' stances are on subjects. This inherently pushes the data's skews and assumptions onto the results, which can be avoided by looking at media outlets as a population to draw conclusions from [1].
To tackle these challenges, we propose an innovative, largely automated pipeline for unsupervised sentiment analysis. This new approach aims to detect media bias with fewer assumptions and less classification, thus promoting a more nuanced and precise understanding of media bias. By presenting a unique approach that combines data extraction with unsupervised sentiment analysis, this research seeks to contribute to ongoing scholarly conversations about media bias and its detection [15]. In the sections that follow, we delve deeper into the methodology and findings of this research, underscoring its potential for enhancing current approaches to media bias detection and analysis.
In this research, we introduce a novel methodology that merges state-of-the-art data extraction and unsupervised sentiment analysis, contributing significantly to the ongoing academic discourse on media bias and its detection. We start by positioning our work within the swiftly evolving realms of sentiment detection and bias identification, and underscore how our approach extends the existing body of knowledge. We proceed to detail our workflow which includes data extraction from college newspaper archives, followed by an overview of our sentiment analysis pipeline. The subsequent sections focus on the examination of our methods, a comparative study of statistical outcomes across various text granularities, an acknowledgement of potential constraints of our approach, and a forecast of future research directions building upon this foundation.
## 2 Literature Review
### Data Scraping
Data scraping has emerged as a popular and efficient method for mass data collection, offering numerous benefits such as ease of use. Researchers can now employ unsupervised models to rapidly extract, synthesize, and organize vast amounts of data without human oversight. This technological advancement has revolutionized the research landscape,
enabling studies involving thousands of articles across various websites in a time-efficient manner, a feat that would otherwise be nearly impossible.
Numerous approaches to data scraping have been developed over the years. One of the earliest techniques, pioneered in 1997 [16], involved the manual coding of multiple extraction programs, each tailored to extract data from individual websites based on their observed format and patterns. Another approach, introduced in 2001 [14], focused on uncovering patterns within hidden databases that underpin visible web pages. However, both of these methods were unsuitable for our study, as final content pages had repetitive HTML but each archive's layout presented a unique challenge, with sites often having flaws and inconsistencies.
A third approach, introduced by Liu, Grossman, and Zhai in 2003 and known as MDR [15], was more relevant to our final methodology. This technique leveraged the similarities found in the HTML structures of article pages to design a program capable of extracting data based on these HTML signatures. While the differences in HTML signatures across different student publications prevented us from entirely generalizing one program to extract data, our pipeline partially implements this idea by grouping similar types of websites.
A modern approach demonstrated by Kusumasari and Prabow in 2020 [17] utilized parameters such as specific keywords and time periods to scrape data from Twitter for trends. Although many similar approaches exist for social media scraping, the college media we studied lacked a consistent format, easy searching, and the many other utilities of organized, standardized media content.
### College Media Data Collection
Existing research in college media bias has primarily focused on political party views regarding election influence. Aimee Burch and Raluca Cozma's 2016 study [1] investigated how student publications in swing states covered the 2012 presidential election, concluding that these newspapers displayed a generally more neutral tone compared to their professional counterparts. Similarly, Hans Schmidt's 2015 study [1] aimed to assess whether student journalists' personal preferences and biases influenced their article content during the 2008 and 2012 presidential elections.
Both of these methods suffered from the pitfalls of manual data collection and organization, and drew conclusions from limited data with hand-chosen papers. Leveraging a machine-based collection method significantly reduces human resource requirements in scraping, organizing, and summarizing. Using unsupervised bias as well, one can focus on distance from a ground truth, without an associated need to draw party lines.
### Bias Detection
Sentiment analysis plays a crucial role in bias detection, as machine learning models can effectively classify text into positive, negative, or neutral attitudes. This attribute has been extensively researched, particularly in the context of identifying biases in political news articles. A study by Minh Vu in 2017 [23] investigated whether politically charged information consumed by individuals is influenced by their pre-existing political opinions. The study classified articles based on liberal, conservative, or neutral attitudes, comparing these classifications to the sentiment model's analysis of each article's sentences. While the results were promising, the study employed a relatively simplistic approach to sentiment analysis, solely categorizing each article with one political position and exclusively based on analysis of each individual sentence.
Furthermore, a study published in 2020 [18] explored the use of sentiment analysis, particularly with the BERT natural language processing model, to detect hate speech in tweets and categorize them into different types of hate speech, such as racism and sexism. Similar to the previous study, this research employed a classification strategy and labeled tweets as either containing hate speech or not.
In our approach, each article is classified on a spectrum of positive, negative, and neutral biases concerning multiple topics. Moreover, our methodology allowed for the separation of article biases based on overall sentiment, paragraph-based sentiment, and sentence-based sentiment, providing a more comprehensive analysis compared to the previous studies' binary classifications.
### Text Summarization Applications
Text summarization, a powerful tool for streamlining the data collection process, involves programming AI models to coherently and concisely summarize large volumes of text. In systematic reviews, machine-based text summarization proves exceptionally useful during the initial stages. For instance, text mining and summarizing can be used to find relevant studies based on flagged words [19]. Our study builds upon the utility of text summarization, using summarization to establish a ground truth of what the basis of a newspaper article is.
## 3 Methodology
Our complete workflow is shown in figure 1, with all code and intricacies described relating back to this general layout.
### Inspecting for Validity
In our research, we focused on student newspaper archives from 16 public and private universities across the United States and Canada. Schools were chosen from Hillel's list of the 60 largest schools by Jewish student population [10]. This is because schools with larger Jewish student populations are more likely to have more articles relating to Israel and Palestine, both in a worldwide sense and referring to campus events. Without this precaution, campus events would be grouped more with queries of India and China, introducing a flaw in the results.
Of the schools on this list, schools were not explored sequentially by ranking. Rather, schools were chosen to generate a mix of types of schools in the results - by location, size, and private or public status. School media were chosen by searching "[School] Student Newspaper" followed by "[Student Newspaper] Archive", discarding cases where schools did not have any large media outlet affiliated or unaffiliated with campus. Archives were not pursued further if the organizations had messages prohibiting scraping, or if no archive could be found. Archives going back to 2009 were preferable for the sake of consistency. This process, together with a time-imposed constraint of checking 30 schools, led us to the schools we have.
### Extracting Archive Subpages and Article Pages
The biggest block to complete automation is taking the first page of an archive to extrapolate all archive subpages and all article pages resulting from them. The archive subpages can be accessed using one of the following methods: by examining the page content to identify the maximum page number in a bottom navigation bar, by inspecting the original subpage link to manually navigate to future pages, by going to the second subpage to obtain this link information, or by analyzing network data to directly access the backend API. For professional sites, a sitemap could generate all this information in a clean manner. However, many of these sites were built with old technologies.
Once on these subpages, something similar to MDR could be considered for getting all links for given dates. However, the content tags were volatile enough such that it was faster to manually grab the needed tags, and then use a script to iterate over information for day-by-day dictionary creation.
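As an illustration of this per-site step, a minimal sketch of subpage link collection is given below. The `link_selector` argument stands in for the manually identified, site-specific CSS selector; it and the function name are hypothetical, and everything downstream of this step is automated:

```python
import requests
from bs4 import BeautifulSoup

def collect_article_links(subpage_url, link_selector):
    """Return the article URLs found on one archive subpage.

    link_selector is the manually grabbed, site-specific CSS selector
    for article anchors; the iteration over subpages and the day-by-day
    dictionary creation are fully scripted around it.
    """
    html = requests.get(subpage_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.select(link_selector) if a.get("href")]
```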
### Scraping Content
All archives can be grouped as having recurring PDFs, having text data directly on the site, or loading in text data with JavaScript. PDFs generated particularly noisy content, even after processing. As such, the gathered data was too fragmented to safely deduce bias, and so these schools are not included in final results. These schools are the motivating factor for why data is grouped by date, and the hope is that future improvements in processing can lead to their inclusion. Text data from the other sites was sufficient, with the only notable flaw being watermarks. Certain schools always generated headers or footers, either related to the web content itself, or copyright notices. These can be trimmed off in processing steps, but are maintained in data gathering so that the data itself is as reflective of the original source as possible.
### Querying and Processing
Querying is done by a simple keyword search. We experimented with modifying the regular expression to match instances of a word with a portion cut off, but this showed little additional yield for the added runtime. The main result of this simple method is the inclusion of articles where the keyword is not the topic.
As has been done in past literature, the main topic could be extracted through a large language model and used for filtering. This method was avoided as the conclusions it would generate would be heavily reliant on the nature of the language model used and the particular hyperparameters applied.
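For illustration, a minimal sketch of this keyword query is below; the `"text"` field name is hypothetical:

```python
import re

def query_articles(entries, keyword):
    """Return entries whose text contains the keyword as a whole word."""
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    return [e for e in entries if pattern.search(e["text"])]
```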
### Sentiment and Summarization
Sentiment is computed through the use of either NLTK Bird, Klein, and Loper (2009), a leading python natural language library, or through the BART model Lewis et al. (2019) from the HuggingFace library.
Results shown use NLTK's VADER model for sentiment intensity.
Figure 1: Workflow Layout
While a larger, more sophisticated model better captures the nuance of the text, the runtime for doing so on a large corpus of text makes it generally impractical. Beyond this, a simple sentiment model suffices.
Sentiment is calculated for a summary of the entire article, for paragraph-by-paragraph summaries, and for each sentence of the original article. In all of these groupings, the sentiment model needs only calculate sentiment a single sentence at a time, to be averaged and grouped. This focuses the analysis on the language used, and so allows basic models to still generate meaningful conclusions.
Summarization is done using Google's T5 model (Raffel et al., 2020). The particular summarizer, similar to the choice for sentiment, is not the focus of the work and should not heavily hinder output. This model, in particular, was chosen because it shows state-of-the-art performance on many tasks and was trained on broad datasets, making it less likely to lose information when important.
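A minimal sketch of the granularity-specific computation is below, assuming NLTK's VADER and a HuggingFace summarization pipeline; the model choice (`t5-small`), generation lengths, and helper names are illustrative rather than the exact configuration used:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from transformers import pipeline

nltk.download("punkt", quiet=True)
nltk.download("vader_lexicon", quiet=True)

sia = SentimentIntensityAnalyzer()
summarize = pipeline("summarization", model="t5-small")

def sentence_sentiment(text):
    """Average VADER neg/neu/pos scores over the sentences of a text."""
    scores = [sia.polarity_scores(s) for s in nltk.sent_tokenize(text)]
    n = max(len(scores), 1)
    return {k: sum(s[k] for s in scores) / n for k in ("neg", "neu", "pos")}

def article_sentiment(text):
    """Summarize the whole article, then score the single summary."""
    summary = summarize(text, max_length=80, min_length=10, truncation=True)
    return sentence_sentiment(summary[0]["summary_text"])
```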
### Bias Calculation
**Principle 1**: _Media bias can be measured as the difference between what a media outlet believes the truth to be to what they report._
We calculate bias under Principle 1. The most central difference between this and what other works have done is that there is no single truth of an event. The concept behind this is to encapsulate that media should aim to report facts, even if the reporting of certain facts with certain nuances leads to a change in belief. Imbuing emotion and drawing conclusions in an article is a central and preventable source of bias.
As such, sentiment is found on article, paragraph, and sentence level granularities. Article-wide sentiment is found by summarizing the entire article and calculating sentiment on the summary. Paragraph sentiment is done by summarizing each paragraph and averaging sentiment. Sentence sentiment is the average from each sentence. When we average, we keep negative, neutral, and positive separate from one another.
Bias is calculated as the difference between the article summary sentiment, considered the media's truth, and the sentence summary, considered what an outlet reports.
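With the article-level and sentence-level scores computed as above, bias is then just their per-channel difference; a minimal sketch:

```python
def bias(article_scores, sentence_scores):
    """Per-channel bias: the outlet's 'truth' (article-summary sentiment)
    minus what it reports (average sentence sentiment)."""
    return {k: article_scores[k] - sentence_scores[k]
            for k in ("neg", "neu", "pos")}
```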
### Conclusions to be Drawn
Based on the nuances of the process, it is worth explicitly noting that a conclusion must contextualize the methodology.
* **Valid Conclusion:** A given school's articles show more bias when including a certain topic.
* **Valid Conclusion:** When looking at the population of schools or articles of a given school, one keyword seems to have more biased reporting in articles with it compared to other keywords.
* **Invalid Conclusion:** A given school is biased in favor or against a certain topic.
Conclusions that certain schools are more biased than others in reporting are plausible, and compared to reconstruction bias carry more weight, as they are more independent of the political ideology of the population. However, without isolating variables like the topics of articles, weekly or recurring reporting segments, or which authors contribute, broad conclusions are ill-founded.
## 4 Results
All code was run on Google Colab notebooks with the default Intel CPU and 13GB of RAM, or on local desktop computers with equal or lesser RAM and no GPUs. Running sentiment and summarization for all shown keywords took roughly 40 hours, with the majority of time spent on summarization.
### Scraping
In total, 14 student papers with 23,154 entries were collected and aggregated, as shown in table 1. This was done using the pipeline methodology. This means that schools listed were manually guided towards constructing a dictionary of dates and associated article links. Past that point, all further steps were automated. Certain schools, UF for example, had issues with encoded text, and so could not be used for sentiment.
### Sentiment Distribution
Sentiment distribution both for article and sentence granularity for most schools appeared similar to figure 2. In this sense, positive and negative sentiment defaulted to 0, and neutral sentiment to 1. These graphs inspired a much heavier focus on negative sentiment for comparison, as there were fewer false associations than with positive sentiment.
When graphing the different sentiment scores relative to each other, such as in figure 3, there were a few general trends observed. Sentence sentiment was much less likely to be zero than article sentiment, which is expected of an averaged value. Matching sentiments over different granularities appeared to trend together, and differing sentiments generally trended against one another, with articles having to be primarily one of the three.
### Bias
Bias for schools ran is shown in table 2. Not all schools were used because of runtime and model constraints. T5 has a context limit of 500 tokens, and so schools that continued to show articles of over 500 tokens of length were avoided for being problematic. In the table, the values shown are all calculated as the percentage point difference from the article to
Figure 2: Distribution of Article Sentiment Scores for CMU’s Articles for Keywords [India, China, Israel, Palestine]
sentence, meaning a positive value of 2.86 indicates the article is 2.86 percentage points higher in whichever category. Certain schools seem to generate more bias than others over almost all cases. York, which gets the largest bias values, also has only 73 articles within the 4 keywords. Georgetown and CMU, both having over 125 articles, are better for comparing high and low bias. Certain keywords bring more associated bias with them, such as Palestine when compared to Israel.
For American University, additional keywords were calculated, as displayed in table 3. Within similar types of keywords, such as countries or political descriptions, conclusions can be drawn on the basis of absolute scale. However, there are many words or cases where a keyword has more bias in a certain category but not others. These examples are difficult to decipher as of now.
The bias of given keywords can also be compared with violin plots or box and whisker plots of the sentiment distribution to get a fuller picture of what the raw values were and how they may have been skewed. An example is shown in figure 4 for Georgetown's negative bias, which can be further contextualized with the keyword distribution from figure 5.
## 5 Conclusion
In this study, we introduce and use a complete pipeline for detecting and comparing bias in student newspaper archives. Conclusions about the general group of schools or any one in particular are difficult to make. Rather, we generate exploratory results. With these results, a school can focus on balancing reporting between keywords, or curating future articles to match the balance. Compared to prior research, we succeed in generating our measure of bias unsupervised. We scrape college newspapers while overcoming many of the challenges in site variance and noise.
Future work can entail more in-depth comparison of summarization and sentiment models. More advanced processing and computation could allow for a more complete bias comparison over all scraped schools. The workflow can also hopefully be further automated to expand the dataset. Comparing more schools or even comparing school media to professional media could base scores of bias more soundly.
All code and data used to generate results is present at [https://anonymous.4open.science/r/UnsupervisedBias-C0F0](https://anonymous.4open.science/r/UnsupervisedBias-C0F0). We hope that the field of college media bias detection can grow to acknowledge and explore more of the nuances in successfully generated automated results.
This paper presents a pipeline with minimal human intervention for scraping student newspaper archives and detecting bias. It introduces a framework for scraping complex archive sites from which automated tools fail to extract data, and thereby generates a dataset of 23,154 entries from 14 student papers. This data can then be queried by keyword to compute bias by comparing the sentiment of a large language model summary with that of the original article. The advantages of this approach are that it is less comparative than reconstruction bias and requires less labelled data than generating keyword sentiment. Results are computed on politically charged words as well as control words to show how conclusions can be drawn. The complete method enables the extraction of nuanced insights with minimal assumptions and categorization, promoting a more objective understanding of bias within student newspaper sources.
2309.09102 | CppFlow: Generative Inverse Kinematics for Efficient and Robust
Cartesian Path Planning | In this work we present CppFlow - a novel and performant planner for the
Cartesian Path Planning problem, which finds valid trajectories up to 129x
faster than current methods, while also succeeding on more difficult problems
where others fail. At the core of the proposed algorithm is the use of a
learned, generative Inverse Kinematics solver, which is able to efficiently
produce promising entire candidate solution trajectories on the GPU. Precise,
valid solutions are then found through classical approaches such as
differentiable programming, global search, and optimization. In combining
approaches from these two paradigms we get the best of both worlds - efficient
approximate solutions from generative AI which are made exact using the
guarantees of traditional planning and optimization. We evaluate our system
against other state of the art methods on a set of established baselines as
well as new ones introduced in this work and find that our method significantly
outperforms others in terms of the time to find a valid solution and planning
success rate, and performs comparably in terms of trajectory length over time.
The work is made open source and available for use upon acceptance. | Jeremy Morgan, David Millard, Gaurav S. Sukhatme | 2023-09-16T21:55:45 | http://arxiv.org/abs/2309.09102v2 | # CppFlow: Generative Inverse Kinematics for Efficient and Robust Cartesian Path Planning
###### Abstract
In this work we present CppFlow - a novel and performant planner for the Cartesian Path Planning problem, which finds valid trajectories up to 129x faster than current methods, while also succeeding on more difficult problems where others fail. At the core of the proposed algorithm is the use of a learned, generative Inverse Kinematics solver, which is able to efficiently produce promising entire candidate solution trajectories on the GPU. Precise, valid solutions are then found through classical approaches such as differentiable programming, global search, and optimization. In combining approaches from these two paradigms we get the best of both worlds - efficient approximate solutions from generative AI which are made exact using the guarantees of traditional planning and optimization. We evaluate our system against other state of the art methods on a set of established baselines as well as new ones introduced in this work and find that our method significantly outperforms others in terms of the time to find a valid solution and planning success rate, and performs comparably in terms of trajectory length over time. The work is made open source and available for use upon acceptance.
## I Introduction
Moving a robot's manipulator along a specified cartesian space path is a fundamental operation in robotics, with applications across nearly all domains. Tasks such as performing a weld, painting a surface, or turning a door handle are all naturally expressed through the declaration of a reference path for the end effector to follow. Further, in settings such as a kitchen, hospital, or assembly line, it is of great importance that a robot be able to _quickly_ generate motion plans so as to avoid down time and lost productivity. Thus, the ability to quickly generate smooth, collision free paths for these tasks and in general along provided paths is of great utility, and while a core problem, there are further gains to be made.
More concretely, the Cartesian Path Planning (CPP) problem, otherwise known as Pathwise-Inverse Kinematics (Pathwise-IK) problem is defined whereby the robot must generate smooth, collision free trajectories (including robot-robot and robot-environment collisions) that result in the end effector tracking a specified cartesian space path. In this paper, we consider this task for redundant robots - those with 7 or greater degrees of freedom (DoFs) - which may have an infinite number of IK solutions for a given pose. It is this redundancy which makes the problem difficult, as IK solutions no longer have a discrete family they can be checked against - such as elbow up or down.
Current state of the art (SOTA) approaches which run in realtime generate motion by formulating and solving a weighted optimization problem [1, 2]. Those that plan trajectories ahead of time either build a graph or perform gradient based optimization [3, 4, 5]. However, while performant for problems where there is a clear basin of good solutions, these methods may get stuck in local optima and fail to find a solution in a reasonable amount of time on more difficult problems.
In this work we present CppFlow, a novel Cartesian Path Planning planner that utilizes recent advances in learned, generative IK to generate smooth, collision free paths faster than existing SOTA methods while also succeeding on more difficult problems. Using a generative IK model addresses a key issue with trajectory optimization methods - that they work well but require a good initial solution. In this work we use the Levenberg-Marquardt algorithm for trajectory optimization, a powerful quasi-Newton optimization procedure that quickly converges to precise and constraint satisfying solutions. Additionally, a search module finds the optimally smooth config-space path by interweaving the trajectories returned by the generative IK model, which dramatically improves the quality of the optimization seed.
To evaluate CppFlow, we benchmark our method on 5 standard test problems present in the literature. To demonstrate our method's capability on more difficult problems, we introduce a suite of new robots and target paths. The additional robots include the Franka Panda arm and a modified kinematic chain from the Fetch robot that includes a prismatic joint.
Fig. 1: A CppFlow generated trajectory for the ‘Panda - flappy-bird’ problem. In this problem the robot must navigate through a tight corridor imposed by two vertical obstacles.
As opposed to many of the existing tests which contain a basin of valid solutions, the newly introduced tests contain problems that require choosing between disjoint paths in configuration space to reveal the capability of a planner to avoid local minima. Three key metrics of planner performance are reported on: success rate, time to get an initial valid solution, and trajectory length over time. We find that CppFlow outperforms all other planners on two axes and performs comparably on the third.
## II Problem Specification
The goal of the Cartesian Path Planning problem is to find a trajectory \(\xi\) such that the robots end effector stays along a target path \(\mathbf{Y}\) while satisfying additional constraints.
It is assumed that the robot has \(d\geq 7\) degrees of freedom and is therefore a redundant manipulator. Owing to this additional freedom, there may be a potentially infinite set of IK solutions for a given end-effector pose. We denote the Forward Kinematics mapping as \(\text{FK}(q):\mathbb{R}^{d}\rightarrow\text{SE}(3)\). The target path and trajectory are discretized into \(n\) timesteps, such that \(\xi=[q_{1},...,q_{n}]\) and \(\mathbf{Y}=[y_{1},...,y_{n}]\), where \(q_{i}\in\mathbb{R}^{d}\) is the robot's joint configuration at timestep \(i\) and \(y_{i}\in\text{SE}(3)\) is the target pose at timestep \(i\).
There are two paradigms to addressing time in the Cartesian Path Planning problem. Either a time parameterization is provided with the target path and the goal is to find a satisfying path that minimizes jerk or some other smoothness measure, or no time parameterization is provided, and it is assumed one is found _after_ the joint configurations are calculated by some external module. This system uses the latter paradigm - it is assumed that an external module will calculate the temporal component of each \(q_{i}\) such that the robot's velocity, acceleration, and jerk limits are respected while execution time is minimized. As such, time parameterization is omitted from this work. Given this setup, the constraints on the problem are as follows:
**Pose error**. At every timestep \(i\), the deviation of the positional and rotational components of the robot's end effector from the target pose, found by calculating \(\text{FK}(q_{i})-y_{i}\), must be within the mechanical repeatability of the robot. Here and in the rest of the paper, the subtraction between poses in \(\text{SE}(3)\) indicates elementwise subtraction between the x, y, z, and roll, pitch, and yaw components of each pose. Specifically, positional error is the euclidean norm of the positional difference, and rotational error is the geodesic distance between the rotational components of \(\text{FK}(q_{i})\) and \(y_{i}\). It is the authors' view that refining a joint configuration that results in end effector displacement less than the mechanical repeatability of the robot is wasted computation, as displacements below the repeatability limit will have no impact on real hardware. Thus, a positional error threshold of 0.1 mm is assigned, as this is the reported positional mechanical repeatability of both the Franka Panda and the Fetch Manipulator. The rotational error threshold is set to 0.1 deg, which is found through a procedure that finds the expected maximum rotational error for a configuration within the region of positional repeatability.
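For reference, a minimal sketch of these two error measures, assuming each pose is given as a position vector together with a 3x3 rotation matrix:

```python
import numpy as np

def pose_error(fk_pose, target_pose):
    """Positional error (m) and geodesic rotational error (rad)."""
    (p1, R1), (p2, R2) = fk_pose, target_pose
    pos_err = np.linalg.norm(p1 - p2)
    # Geodesic distance on SO(3): the angle of the relative rotation R1^T R2.
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    rot_err = np.arccos(np.clip(cos, -1.0, 1.0))
    return pos_err, rot_err
```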
**Collision avoidance**. At no timestep in the trajectory can the robot collide with itself or the environment. A virtual capsule is affixed to each link of the given robot _a priori_. Each capsule is the minimum enclosing capsule for a given link's visual mesh and is found through a Quadratic Programming optimization procedure. Given capsules for each link, self collision checking is performed by evaluating whether any two capsules are intersecting. Checks between certain links are removed when it is impossible for these links to collide if joint limits are respected. The exact position and shape of every obstacle in the environment is assumed to be provided. In a similar manner, robot-environment collisions are found by checking for intersections between any link's capsule and the obstacles in the environment. Obstacles are all cuboids in this work for simplicity. It is assumed that the time parameterization between configurations is small enough such that collision checks between timesteps are not required.
**Joint limits**. The configuration \(q\) at each timestep \(i\) must be within the robots joint limits.
**Discontinuities**. The configurations across two different timesteps must stay close to one another as large changes in configuration space may not be achievable by the robot. A value of 7 degrees and 2 cm is chosen as the maximum absolute distance an individual revolute or prismatic joint may change across a timestep, respectively.
Lastly, while not a constraint, _trajectory length_ is an important metric for evaluating the quality of a trajectory. Trajectory length is defined as the cumulative change in configuration-space across a trajectory (measured in radians and meters) and gives an estimate for how long the trajectory will take to execute on an actual robot. The _time to find a valid solution_ is an important metric as well. As the name suggests this is the total elapsed time before a constraint satisfying trajectory is found.
Fig. 2: The CppFlow system
## III CppFlow
We now present the CppFlow method. It is composed of three main components: a candidate motion plan generator, a global discrete search procedure, and a trajectory optimizer.
### _Candidate Motion Plan Generation_
CppFlow begins by generating \(K\) approximate motion plans using IKFlow, a learned generative IK solver (\(K\) is a hyperparameter set here to 175) [6]. IKFlow generates IK solutions as a function of a latent vector \(z\) and a target cartesian pose \(y\), in batch on the GPU. Additionally, IKFlow models exhibit a desirable property: small changes to either the latent vector or target pose result in small changes to returned IK solutions. We use IKFlow as the generative-IK model because of this smoothness property and its low inference time and high accuracy.
To generate the \(K\) motion plans, \(K\) latent vectors are drawn uniformly at random from a unit hypercube, and repeated \(n\) times. Randomizing the latent vectors ensures a diverse set of IK solutions and, in turn, diverse motion plans. Each set of \(n\) repeated latent codes is paired with a copy of the target path \(\mathbf{Y}\). A batch of inputs is formed for IKFlow by concatenating each latent vector-target path pair to create a batch of \(nK\) latent vector-target pose pairs. This batch is passed to IKFlow, which transforms it into \(nK\) IK solutions. Candidate motion plans are generated by segmenting these solutions according to their original latent vector. Advantageously, the returned IK solutions for each of the \(K\) approximate motion plans are found to change slowly. This is because the latent vector is held fixed and the target path is slowly changing, satisfying the conditions for the smoothness property stated above. The runtime of this procedure scales linearly with the hyperparameter \(K\) and the target path length \(n\), and depends on the performance characteristics of the GPU.
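A sketch of this batch construction in PyTorch is below; `ikflow_solve` stands in for the actual IKFlow inference call and, like the dimension names, is a hypothetical placeholder:

```python
import torch

def candidate_plans(target_path, ikflow_solve, K=175, latent_dim=8, d=7):
    """target_path: (n, 7) tensor of target poses (xyz + quaternion)."""
    n = target_path.shape[0]
    latents = torch.rand(K, latent_dim)            # one random latent per plan
    latents = latents.repeat_interleave(n, dim=0)  # (n*K, latent_dim)
    poses = target_path.repeat(K, 1)               # (n*K, 7), path tiled K times
    solutions = ikflow_solve(latents, poses)       # (n*K, d) IK solutions
    return solutions.reshape(K, n, d)              # K candidate motion plans
```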
### _Global discrete search_
The objective of the global discrete search module is to find the optimal \(\xi\) as measured by collision avoidance and the _Maximum Joint Angle Change_ (mjac) - the maximum absolute change in joint angle between any two consecutive timesteps along a trajectory. Formally, \(\texttt{mjac}=\max_{i\in[1,\dots,n-1]}\,\max_{j\in[1,\dots,d]}\,\lvert q_{i+1}[j]-q_{i}[j]\rvert\).
The module first creates a directed graph from the \(K\) motion plans. Each IK solution is considered a node. Edges are connected from each IK solution to the \(K\) IK solutions at the following time step. A dynamic-programming-based search procedure is run on the graph. The cost at each node is set to the minimum mjac achievable if the node is included in the returned path. To avoid trajectories that are close to the joint limits, an additional cost of 10 (which is greater than the possible mjac cost of \(\pi\) for revolute joints, and the range of the prismatic joint for Fetch.Full) is added to configurations that are within 1.5 degrees/3 cm of their joint limits. An additional cost of 100 is added to configurations that would lead to a collision. Upon completion, a backtracking procedure is executed, which returns \(\xi_{\text{search}}\).
Trajectories with large mjac values are likely to fail in the next step, as this indicates the existence of large joint space discontinuities, which are difficult to optimize. To remedy this, after the search finishes, the mjac of \(\xi_{\text{search}}\) is calculated. If it is above a threshold (12 deg/3 cm), an additional set of motion plans is generated and added to the existing set before repeating the search. Collision and joint limit checks performed on the initial plans are retained to conserve computation. This is an effective approach to prevent bad optimization seeds, based on the intuition that a denser covering of the relevant portions of configuration space likely contains a better configuration space path.
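One way to realize the described search and backtracking is sketched below in NumPy; the additive joint-limit and collision penalties (10 and 100) are assumed precomputed into `penalties`, and the names are illustrative:

```python
import numpy as np

def dp_search(plans, penalties):
    """plans: (n, K, d) IK solutions; penalties: (n, K) additive node costs."""
    n, K, d = plans.shape
    cost = penalties[0].copy()             # best cost ending at each node, t=0
    parent = np.zeros((n, K), dtype=int)
    for t in range(1, n):
        # step[j, k]: max absolute joint change from node j at t-1 to node k at t
        step = np.abs(plans[t][None, :, :] - plans[t - 1][:, None, :]).max(-1)
        total = np.maximum(cost[:, None], step)    # running mjac along each edge
        parent[t] = total.argmin(0)
        cost = total.min(0) + penalties[t]
    path = [int(cost.argmin())]            # backtrack from the cheapest end node
    for t in range(n - 1, 0, -1):
        path.append(int(parent[t][path[-1]]))
    return plans[np.arange(n), path[::-1]]  # the selected trajectory, shape (n, d)
```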
### _Levenberg-Marquardt for Trajectory Optimization_
The optimization problem is framed as a nonlinear least squares problem and solved using the Levenberg-Marquardt algorithm.
Fig. 4: The ‘Fetch.Arm - hello’ problem visualized. The sweeping motion of the robots arm is generated by overlaying the first 20 configurations from a trajectory \(\xi\) returned by CppFlow. The red cursive ‘hello’ is the target path, denoted \(\mathbf{Y}\). The target path \(\mathbf{Y}\) is composed of \(n\) target poses \(y_{1},...,y_{n}\)
Fig. 3: Visualizations of a subset of the problems evaluated on.
As opposed to stochastic gradient descent and other first-order optimization methods previously used to solve this problem [5], Levenberg-Marquardt approximates the objective Hessian matrix with first-derivative information in order to make significantly larger and more accurate steps on least-squares objectives. As a result, when initial seeds are in close vicinity to a valid solution, only 1 to 3 steps are required to find a valid solution.
The optimizer keeps track of the latest valid solution; this can be returned if the user wants to exit, making CppFlow an _anytime_ planner, i.e., it continuously improves its plan until stopped. If a valid solution is not found, the dynamic programming search is repeated using additional motion plans from IKFlow. This lowers the mjac of the \(\xi\) used as a seed, increasing the chance it will be successfully optimized.
Similar to Torm, a non-stationary objective function is used, leading to improved convergence performance [5]. The optimizer switches between optimizing only for pose error and for trajectory length and obstacle avoidance. To balance the frequency of the two, the optimizer optimizes for pose error exclusively until the positional and rotational error of the end effector is below the respective specified threshold at every timestep. A trajectory length and obstacle avoidance optimization step is then taken, after which the optimizer switches back to pose only, and the cycle continues. This ordering ensures that the trajectory stays close to a valid solution throughout the optimization process. The residual terms are \(r_{\text{pose}}\) for pose error, and \(r_{\text{diffr}}\) for trajectory length and obstacle avoidance error. Figure 6 shows the composition of each residual. The residual components are listed below.
**Pose error**. The pose of the end effector at each timestep is calculated by a Forward Kinematics function using an efficient custom PyTorch implementation which performs calculations in parallel on the GPU. Residual terms for the error in the x, y, z, and roll, pitch, and yaw dimensions (\(\text{FK}(q_{i}).x-y_{i}.x\),..., \(\text{FK}(q_{i}).\Psi-y_{i}.\Psi\)) are added to \(r_{\text{pose}}\) for every \(q_{i}\) in \(\xi\). The Jacobian of these residuals wrt \(\xi\) is found by observing that in the derivative of \(\text{FK}(q_{i}).x-y_{i}.x\) wrt \(q_{i}\), the constant term \(y_{i}.x\) goes to 0, so the derivative is \(J_{\text{R}}(q_{i}).x\). A second custom PyTorch implementation quickly calculates this kinematic Jacobian term in parallel on the GPU. Once the residual and Jacobian are calculated, the Levenberg-Marquardt update \((J^{T}J+\lambda I)\Delta\xi=J^{T}r_{\text{pose}}\) is computed using PyTorch's batched LU decomposition functionality.
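As a rough, self-contained illustration of this update (not the paper's batched GPU implementation, and using autograd in place of the analytic kinematic Jacobian described above), one Levenberg-Marquardt step might look as follows; `residual_fn` is a hypothetical stand-in for the stacked pose residual.

```python
import torch

def lm_step(xi, residual_fn, lam=1e-4):
    """One damped least-squares (Levenberg-Marquardt) update:
    solve (J^T J + lam * I) dxi = J^T r, then xi <- xi - dxi."""
    r = residual_fn(xi)                                      # (m,)
    J = torch.autograd.functional.jacobian(residual_fn, xi)  # (m, n)
    A = J.T @ J + lam * torch.eye(xi.numel())
    dxi = torch.linalg.solve(A, J.T @ r)
    return xi - dxi

# toy usage: fit xi so that xi**2 matches a target vector
target = torch.tensor([1.0, 4.0, 9.0])
xi = torch.ones(3)
for _ in range(5):
    xi = lm_step(xi, lambda x: x ** 2 - target)
```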
**Trajectory length**. A differencing loss is used as a surrogate error for trajectory length. It penalizes changes in joint angle across consecutive timesteps: \(q_{i+1}-q_{i}\forall i\in[1,...,n-1]\), which encourages configurations to stay close to one another. The Jacobian of each differencing error \((q_{i+1}-q_{i})\) wrt \(\xi\) is the identity matrix for configuration \(q_{i+1}\), and the negated identity matrix for \(q_{i}\).
**Robot-robot collision avoidance**. For self-collision checking, each robot is represented as a set of capsules with endpoints \((a,b)\) and radius \(c\). As in [7], we formulate the minimum distance between a capsule pair \((a_{1},b_{1},c_{1})\) and \((a_{2},b_{2},c_{2})\) as the minimum cost, less \(c_{1}+c_{2}\), of a convex quadratic program
\[\min_{t_{1},t_{2}}\;\left\|\left(a_{1}+(b_{1}-a_{1})t_{1}\right)-\left(a_{2}+(b_{2}-a_{2})t_{2}\right)\right\|\quad\text{subject to}\quad 0\leq t_{1},t_{2}\leq 1\]
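For reference, the same minimization can be reproduced with an off-the-shelf bound-constrained solver; this sketch is illustrative only, as the paper instead uses a custom batched active-set QP solver.

```python
import numpy as np
from scipy.optimize import minimize

def capsule_capsule_distance(a1, b1, c1, a2, b2, c2):
    """Signed distance between two capsules; negative means overlap."""
    def gap(t):
        p1 = a1 + (b1 - a1) * t[0]     # point on the first segment
        p2 = a2 + (b2 - a2) * t[1]     # point on the second segment
        return np.linalg.norm(p1 - p2)
    res = minimize(gap, x0=[0.5, 0.5], bounds=[(0.0, 1.0), (0.0, 1.0)])
    return res.fun - (c1 + c2)         # subtract the two radii

# two parallel unit segments 1.0 apart, radii 0.1 each -> distance 0.8
d = capsule_capsule_distance(np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.1,
                             np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]), 0.1)
```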
Fig. 5: Trajectory length (rad) convergence results for CppFlow and Torm. Stampede results are omitted as the planner is not anytime - it returns only a single trajectory. A lower value is better, as this indicates a trajectory that can be executed in a shorter time. Plots are generated by averaging the trajectory length convergence curves of the designated planner from 10 planning runs. The mean time to a valid solution is the first time instant plotted. The convergence behavior for CppFlow is strictly better for ‘Fetch.Full - square’ and ‘Panda - 1cube’, as measured by initial valid solution time and asymptotic limit. CppFlow has a 1.169x larger final trajectory length on the ‘Fetch.Arm - circle’ problem compared to Torm; however, Torm takes 1.83x as long to find an initial valid trajectory for this problem.
Fig. 6: The residual term used by the optimizer. The symbols \(\Phi\), \(\Theta\), and \(\Psi\) represent the roll, pitch, and yaw of the pose.
which we solve with a custom batched active-set method implementation. While the solutions of convex programs are differentiable [8], since we evaluate many collision pairs for the same joint configuration, it is efficient to compute Jacobians by batched forward-mode [9] automatic differentiation.
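A minimal sketch of this pattern with `torch.func.jvp`, where `pair_distances` is a hypothetical stand-in for the batched collision-distance evaluation:

```python
import torch
from torch.func import jvp

def pair_distances(q):
    # stand-in for the many capsule-pair distances that all depend on
    # the same joint configuration q (here: two toy smooth functions)
    return torch.stack([(q ** 2).sum(), q.sin().sum()])

q = torch.randn(7)
columns = []
for i in range(q.numel()):              # one forward pass per joint
    e = torch.zeros_like(q)
    e[i] = 1.0
    _, col = jvp(pair_distances, (q,), (e,))
    columns.append(col)                 # i-th column of the Jacobian
jacobian = torch.stack(columns, dim=1)  # (num_pairs, num_joints)
```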
**Joint limits**. During optimization, joint limit constraints are enforced by clamping \(\xi\) to the robot joint limits.
**Robot-environment collision avoidance**. We represent obstacles as cuboids with arbitrary spatial positioning. We formulate the minimum distance between a capsule and a cuboid in a frame axis-aligned to the cuboid, in which the capsule has endpoints \((a,b)\) and radius \(r\), and the cuboid has extents \(p_{min}=(x_{min},y_{min},z_{min})\) and \(p_{max}=(x_{max},y_{max},z_{max})\), as the minimum cost, less \(r\), of the convex quadratic program

\[\min_{t,p}\;\left\|p-\left(a+(b-a)t\right)\right\|\quad\text{subject to}\quad 0\leq t\leq 1,\quad p_{min}\leq p\leq p_{max}\]
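The capsule-cuboid program admits the same illustrative treatment, jointly optimizing the segment parameter \(t\) and the box point \(p\) (again an off-the-shelf solver rather than the paper's QP implementation; names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def capsule_cuboid_distance(a, b, r, p_min, p_max):
    """Signed capsule-cuboid distance in the cuboid-aligned frame."""
    def gap(x):
        t, p = x[0], x[1:]
        return np.linalg.norm(p - (a + (b - a) * t))
    x0 = np.concatenate([[0.5], 0.5 * (p_min + p_max)])
    bounds = [(0.0, 1.0)] + list(zip(p_min, p_max))
    res = minimize(gap, x0, bounds=bounds)
    return res.fun - r
```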
## IV Experiments & Analysis
CppFlow is evaluated on 13 different planning problems and 3 different robots. Other SOTA methods, including Torm and Stampede [5, 3], are also evaluated for comparison. The Fetch.Arm problems are the same as reported in [5]. The Fetch.Full problems are the same except for the addition of the Fetch prismatic joint to the kinematic chain. The Panda problems were created in this work.
We grade the planner on three axes: success rate, time to get an initial valid solution, and trajectory length over time.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**Problem** & **Planner** & **Time to valid solution (s)** & **Initial solution time (s)** & **Planning success rate (\%) - 2.5s max** & **Trajectory length (rad/m) - 2.5s max** & **Planning success rate (\%) - 50s max** & **Trajectory length (rad/m) - 50s max** \\ \hline
Fetch.Arm - circle & cppflow - ours & **1.211** & **0.014** & **90.0** & 17.39 & **100.0** & 14.98 \\
 & stampede\({}^{*}\) & 91.566 & 91.566 & 0.0 & — & **100.0** & **11.105** \\
 & torm & 2.916 & 0.715 & 50.0 & **14.82** & **100.0** & 12.81 \\ \hline
Fetch.Arm - hello & cppflow - ours & **1.126** & **0.016** & **100.0** & 64.46 & **100.0** & 62.52 \\
 & stampede\({}^{*}\) & inf & inf & 0.0 & — & 0.0 & — \\
 & torm & 3.598 & 1.414 & 30.0 & **62.426** & **100.0** & **58.653** \\ \hline
Fetch.Arm - rotation & cppflow - ours & **0.535** & **0.014** & **100.0** & **30.423** & **100.0** & **26.758** \\
 & stampede\({}^{*}\) & inf & inf & 0.0 & — & 0.0 & — \\
 & torm & 1.27 & 1.066 & 70.0 & 30.5 & **100.0** & 27.13 \\ \hline
Fetch.Arm - s & cppflow - ours & **0.788** & **0.014** & **100.0** & 17.18 & **100.0** & 15.47 \\
 & stampede\({}^{*}\) & 137.569 & 137.569 & 0.0 & — & **100.0** & **10.856** \\
 & torm & 1.024 & 0.844 & 80.0 & **13.549** & **100.0** & 12.77 \\ \hline
Fetch.Arm - square & cppflow - ours & **0.747** & **0.014** & **100.0** & 21.71 & **100.0** & 18.48 \\
 & stampede\({}^{*}\) & 110.774 & 110.774 & 0.0 & — & **100.0** & **14.841** \\
 & torm & 1.045 & 0.79 & **100.0** & **17.287** & **100.0** & 16.56 \\ \hline
Fetch.Full - circle & cppflow - ours & **1.02** & **0.011** & **100.0** & **20.13 / 0.37** & **100.0** & **13.28 / 0.46** \\
 & torm & 14.595 & 2.363 & 10.0 & 21.99 / 0.46 & 90.0 & 20.67 / 0.45 \\ \hline
Fetch.Full - hello & cppflow - ours & **1.243** & **0.011** & **70.0** & **49.74 / 2.9** & **100.0** & **39.97 / 3.49** \\
 & torm & inf & 4.12 & 0.0 & — & 20.0 & 76.07 / 2.39 \\ \hline
Fetch.Full - rotation & cppflow - ours & **0.605** & **0.011** & **100.0** & **28.47 / 1.01** & **100.0** & **21.8 / 1.24** \\
 & torm & 15.31 & 1.784 & 0.0 & — & **100.0** & 29.15 / 0.77 \\ \hline
Fetch.Full - s & cppflow - ours & **0.785** & **0.011** & **100.0** & **22.08 / 0.55** & **100.0** & **14.14 / 0.77** \\
 & torm & inf & inf & 0.0 & — & 0.0 & — \\ \hline
Fetch.Full - square & cppflow - ours & **0.751** & **0.011** & **100.0** & **19.65 / 0.35** & **100.0** & **13.97 / 0.39** \\
 & torm & 9.255 & 2.997 & 0.0 & — & 70.0 & 18.9 / 0.6 \\ \hline
Panda - 1cube & cppflow - ours & **0.434** & **0.011** & **100.0** & **8.075** & **100.0** & **7.585** \\
 & torm & 2.912 & 2.585 & 0.0 & — & **100.0** & 8.13 \\ \hline
Panda - 2cubes & cppflow - ours & **0.484** & **0.011** & **100.0** & **9.627** & **100.0** & **9.031** \\
 & torm & 62.478 & 39.346 & 0.0 & — & 40.0 & 13.21 \\ \hline
Panda - flappy-bird & cppflow - ours & **0.63** & **0.011** & **100.0** & **20.323** & **100.0** & **20.323** \\
 & torm & inf & inf & 0.0 & — & 0.0 & — \\ \hline \hline
\end{tabular}
\end{table}
TABLE I: Results from CppFlow, Torm, and Stampede on the 13 test problems. ‘Time to Valid Solution’ - median time to find a trajectory with less than 1mm/0.1deg pose error and all other constraints satisfied. ‘Initial Solution Time’ - median time to find a trajectory, regardless of its validity. Failed runs count as inf planning time for both ‘Time to Valid Solution’ and ‘Initial Solution Time’. ‘Planning Success Rate, \(x\) seconds’ - the percentage of runs which have found a valid trajectory before \(x\) seconds have elapsed. ‘Trajectory length (rad/m), \(x\) seconds’ - the mean cumulative change in configuration space of the revolute and prismatic joints of the latest valid trajectory found before \(x\) seconds have elapsed. A lower trajectory length is better, as generally this means the robot can execute the trajectory faster. The asterisks on the stampede\({}^{*}\) rows indicate that the planning success rate and trajectory length are for the final returned trajectory, which may be returned after 50s. Stampede failed to run at all with the Fetch.Full kinematic chain so was excluded from those results. CppFlow returns an initial solution faster than either of the other methods on all problems. It would take Torm and Stampede 19.84x and 132.8x as long on average, respectively, to find a valid solution.
Metrics include the time to find a valid solution (a maximum of 0.1 mm position and 0.1 degree rotation error without violating any other constraints), initial solution time (regardless of validity), planning success rate, and trajectory length. For CppFlow and Torm, which are anytime, these are reported throughout the optimization, whereas for Stampede only the final returned trajectory is analyzed.
### _Experiments_
There are four obstacle-free problems: 'hello' and 'rotation' for both Fetch.Full and Fetch.Arm. The problems with obstacles include 'circle', 's', and 'square' for Fetch.Full and Fetch.Arm, and the three Panda problems. These are visualized in Figure 3. While Torm allows for an initial joint configuration to be set, this feature is disabled for a fair comparison. Further, the target path for the 'rotation' problems provided in the Torm GitHub repository had 40 additional waypoints added to reduce the distance between the poses of the target path. The poses still parameterize the same motion, however. We run Stampede on the obstacle problems even though it does not account for obstacles as a further point of comparison; the obstacles are ignored in these cases. A software bug in the provided code prevented Stampede from running on the Fetch.Full robot.
The three test robots are 'Fetch.Arm', 'Fetch.Full', and 'Panda' (7, 8, and 7 DOF, respectively). The difference between 'Fetch.Arm' and 'Fetch.Full' is that 'Fetch.Full' includes a prismatic joint (the 'torso_lift_joint'), which lifts the entire arm. Each planner is run 10 times on each problem. CppFlow and Torm are allotted 60s per run, whereas Stampede has no time limit (it stops on its own after returning a solution). Tests are run on an Intel i9 with 20 CPUs, 124 GB of RAM, and an Nvidia RTX 4090 graphics card.
### _Results_
The numerical results are presented in Table I; selected trajectory length convergence plots are shown in Figure 5.
CppFlow dominates on the first axis, planning success rate. The success rate is 100% for all problems after 50s and reaches 96.7% on average after only 2.5 seconds. In comparison, Torm fails to generate any plans more than 50% of the time on three of the problems, and Stampede fails to generate any plans for 2/5 of its problems. This indicates CppFlow is a robust planner that is likely to succeed on difficult problems. CppFlow also dominates on the second axis, the time to get a valid solution. Valid solutions are generally found within 1 second and often in under 600ms. Compared to CppFlow, Torm takes between 1.29x and 129.15x as long to find its first valid solution; on average, it takes Torm 19.84x as long. On the final axis, the trajectory length convergence behavior for CppFlow is strictly better for the Fetch.Full and Panda problems compared to Torm. The convergence results are roughly tied for 2/5 Fetch.Arm problems, while Torm has a lower asymptotic limit for the other 3 (including 'Fetch.Arm - circle', which is shown in Figure 5). The results indicate that CppFlow generates valid solutions faster than all other SOTA methods while producing trajectories of overall similar quality when run as an anytime planner. Crucially, CppFlow succeeds on the hardest problems, which indicates it is a highly capable planner. Given this, CppFlow would be a good all-around choice for a CPP planner, especially in settings that require quick planning times for unseen and potentially difficult planning problems, such as in a home or hospital.
## V Related work
Torm solves CPP using gradient descent-based optimization to reach an acceptable solution [5]. It contains a candidate trajectory generator module that performs traditional IK along the reference trajectory to find a good approximate trajectory to optimize. RelaxedIK solves the problem of generating motion in real-time which satisfies end-effector pose goal matching and motion feasibility [1]. Stampede [3] solves time-constrained CPP by constructing and searching through a solution graph of viable configurations to find a satisfying trajectory. Luo and Hauser solve time-constrained CPP [10] through a decoupled optimization procedure in which pose error is first reduced before temporal matching is performed; however, only the position of the end effector is considered - orientation is ignored. While IK for redundant kinematic chains has traditionally been solved by using the Jacobian to iteratively optimize the joint configuration, recent work has shown that generative modeling techniques can learn this mapping instead. These methods generate solutions in parallel on the GPU, enabling significantly better runtime scaling characteristics, albeit at the cost of accuracy. IKFlow [6], the solver used here, uses conditional Normalizing Flows to represent this mapping. The method in [11] learns the density of IK solutions at a given pose using a Block Neural Autoregressive Flow. NodeIK learns this mapping using a Neural Ordinary Differential Equation model [12]. In [13], Generative Adversarial Networks for IK are trained for the Fetch manipulator and used for generating trajectories along a path; however, statistics on the pose error of returned solutions are not included. In [14], the IK problem is reformulated as a distance geometry problem whose solutions are learned via Graph Neural Networks. A single GNN can produce IK solutions for multiple kinematic chains; however, the resulting solution accuracy for individual robots is lower than that of IKFlow and NodeIK.
## VI Conclusion
We propose an efficient and capable anytime Cartesian Path Planner that combines techniques from search, optimization, and learning to achieve SOTA results. Our planner achieves a 100% success rate on a representative problem set and finds valid trajectories within 1.3 seconds in the worst case. Additionally, it continually decreases trajectory length when given more time. Our results demonstrate the usefulness of generative IK models for kinematic planning and raise the question of where else they can be applied. We are exploring how CppFlow may be adapted for kinodynamic planning, which requires precise temporal specifications. | In this work, we propose CppFlow, a novel and efficient planner for the Cartesian Path Planning problem that is up to 129x faster than existing methods and also succeeds on harder problems. At the core of the proposed algorithm is the use of a learned generative inverse-kinematics solver, which can efficiently generate, on the GPU, trajectories over the full set of candidate solutions. Classical techniques of differentiable programming, global search, and optimization are then used to derive accurate, valid solutions. Combined, these pair efficient approximate solutions from generative AI with the guarantees of traditional planning and optimization. Evaluating this method against other state-of-the-art methods on established benchmarks as well as new ones, our method, in terms of the time spent finding valid solutions and in planning success rate, |
2305.00595 | Impact of Deep Learning Libraries on Online Adaptive Lightweight Time
Series Anomaly Detection | Providing online adaptive lightweight time series anomaly detection without
human intervention and domain knowledge is highly valuable. Several such
anomaly detection approaches have been introduced in the past years, but all of
them were only implemented in one deep learning library. With the development
of deep learning libraries, it is unclear how different deep learning libraries
impact these anomaly detection approaches since there is no such evaluation
available. Randomly choosing a deep learning library to implement an anomaly
detection approach might not be able to show the true performance of the
approach. It might also mislead users in believing one approach is better than
another. Therefore, in this paper, we investigate the impact of deep learning
libraries on online adaptive lightweight time series anomaly detection by
implementing two state-of-the-art anomaly detection approaches in three
well-known deep learning libraries and evaluating how these two approaches are
individually affected by the three deep learning libraries. A series of
experiments based on four real-world open-source time series datasets were
conducted. The results provide a good reference to select an appropriate deep
learning library for online adaptive lightweight anomaly detection. | Ming-Chang Lee, Jia-Chun Lin | 2023-04-30T22:38:06 | http://arxiv.org/abs/2305.00595v2 | # Impact of Deep Learning Libraries on Online Adaptive Lightweight Time Series Anomaly Detection
Ming-Chang Lee\({}^{1}\) and Jia-Chun Lin\({}^{2}\)

\({}^{1}\)Department of Computer Science, Electrical Engineering and Mathematical Sciences, Høgskulen på Vestlandet (HVL), Bergen, Norway

\({}^{2}\)Department of Information Security and Communication Technology, Norwegian University of Science and Technology (NTNU), Gjøvik, Norway

###### Abstract

Providing online adaptive lightweight time series anomaly detection without human intervention and domain knowledge is highly valuable. Several such anomaly detection approaches have been introduced in the past years, but all of them were only implemented in one deep learning library. With the development of deep learning libraries, it is unclear how different deep learning libraries impact these anomaly detection approaches since there is no such evaluation available. Randomly choosing a deep learning library to implement an anomaly detection approach might not be able to show the true performance of the approach. It might also mislead users in believing one approach is better than another. Therefore, in this paper, we investigate the impact of deep learning libraries on online adaptive lightweight time series anomaly detection by implementing two state-of-the-art anomaly detection approaches in three well-known deep learning libraries and evaluating how these two approaches are individually affected by the three deep learning libraries. A series of experiments based on four real-world open-source time series datasets were conducted. The results provide a good reference to select an appropriate deep learning library for online adaptive lightweight anomaly detection.

[MISSING_PAGE_POST]
of abnormal data without any label. Since most of real-world time series data do not have any label, it is desirable to have an unsupervised anomaly detection approach. Conventional machine learning models are usually trained with a pre-collected dataset in an offline manner. Once the models are trained, they are used for inference without any change. Hence, they cannot reflect unseen situations or adapt to changes on time series (Eom et al., 2015). Unlike offline model training, online model training enables a machine learning model to be trained on the fly, implying that the model can adapt to changes in the pattern of the time series (i.e., adaptability). This feature is getting more and more popular, and it has been provided by some systems or approaches such as (Lee et al., 2020; Eom et al., 2015; Chi et al., 2021). Finally, lightweight means that an anomaly detection approach neither has a complex network structure/design nor requires excessive computation resources such as General-Purpose Graphics processing units (GPGPUs) or high-performance computers.
According to our survey, only a few state-of-the-art approaches satisfy all the above-mentioned characteristics, such as RePAD (Lee et al., 2020), ReRe (Lee et al., 2020), SALAD (Lee et al., 2021), and RePAD2 (Lee and Lin, 2023). However, all of them were implemented in only one specific deep learning library. In fact, a number of deep learning (DL) libraries have been introduced and are widely used, such as TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and Deeplearning4j (Deeplearning4j, 2023). They share the common goal of facilitating the complicated data analysis process and offering integrated environments on top of standard programming languages (Nguyen et al., 2019). However, the impact of these DL libraries on online adaptive lightweight anomaly detection remains unclear.
Therefore, this paper focuses on investigating how different DL libraries affect online adaptive lightweight time series anomaly detection by implementing two state-of-the-art anomaly detection approaches in three widely-used deep learning libraries. It is worth noting that our focus is not to compare different time series anomaly detection approaches regarding their detection accuracy or response time. Instead, we focus on investigating how these approaches are individually affected by different DL libraries.
A series of experiments based on open-source time series datasets were performed. The results show that DL libraries have a great impact on not only anomaly detection accuracy but also response time. Therefore, it is important to take the selection of DL libraries into consideration when one would like to design and implement an online adaptive lightweight time series anomaly detection approach.
The rest of the paper is organized as follows: Section 2 describes time series anomaly detection approaches and DL libraries. Section 3 gives an overview of the related work. Section 4 introduces evaluation setup. Section 5 presents the evaluation results. Section 6 concludes this paper and outlines future work.
## 2 Background
In this section, we introduce state-of-the-art anomaly detection approaches for univariate time series and some well-known DL libraries.
### Anomaly Detection Approaches for Univariate Time Series
Existing anomaly detection approaches for univariate time series can be roughly classified into two categories: statistical based and machine learning based. Statistical-based anomaly detection approaches attempt to create a statistical model for normal time series data and use this model to determine if a data point is anomalous or not. Example approaches include AnomalyDetectionTs and AnomalyDetectionVec proposed by Twitter (Twitter, 2015), and Luminol introduced by LinkedIn (LinkedIn, 2018). However, statistical-based approaches might not perform well if the data does not follow a known distribution (Alimohammadi and Chen, 2022).
On the other hand, machine learning based approaches attempt to detect anomalies without assuming a specific generative model, based on the fact that it is unnecessary to know the underlying process of the data (Braei and Wagner, 2020). Greenhouse (Lee et al., 2018) is a time series anomaly detection algorithm based on Long Short-Term Memory (LSTM), which is a special recurrent neural network suitable for long-term dependent tasks (Hochreiter and Schmidhuber, 1997). Greenhouse adopts a Look-Back and Predict-Forward strategy to learn the distribution of the training data. For a given time point, a window of the most recently observed data point values is used to predict future data point values. However, Greenhouse is not an online approach since its LSTM model is trained with pre-collected training data. Besides, it requires users to determine a proper detection threshold.
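The Look-Back and Predict-Forward strategy amounts to building supervised (window, target) pairs from the stream; a minimal sketch with illustrative names:

```python
import numpy as np

def lookback_windows(series, lookback=3, horizon=1):
    """Build (input window, target) pairs, e.g. lookback=3, horizon=1
    as used by RePAD: three past points predict the next point."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.asarray(X), np.asarray(y)
```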
RePAD (Lee et al., 2020) is an online real-time lightweight unsupervised time series anomaly detection approach based on LSTM and the Look-Back
and Predict-Forward strategy. RePAD utilizes a simple LSTM network (with only one hidden layer and ten hidden units) to train a LSTM model with short-term historical data points, predict each upcoming data point, and then decide if each data point is anomalous based on a dynamically calculated detection threshold. Different from Greenhouse, RePAD does not need to go through any offline training. Instead, RePAD trains its LSTM model on the fly. RePAD will keep using the same LSTM model if the model predicts well. When the prediction error of the model is higher than or equal to a dynamically calculated detection threshold, RePAD will retrain another new model with recent data points.
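A minimal sketch of this decision rule, assuming the dynamic threshold is the mean of the historical AARE values plus three standard deviations (the form described in the RePAD paper); all names are illustrative:

```python
import numpy as np

def repad_threshold(aare_history):
    """Self-adaptive threshold: mean + 3 * std of all AAREs seen so far."""
    return float(np.mean(aare_history) + 3.0 * np.std(aare_history))

def step(aare_t, aare_history):
    """Keep the current LSTM model while it predicts well; retrain
    (and treat the point as potentially anomalous) once the prediction
    error reaches the current threshold."""
    if len(aare_history) > 1 and aare_t >= repad_threshold(aare_history):
        return "retrain"
    return "keep"
```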
ReRe (Lee et al., 2020) is an enhanced time series anomaly detection based on RePAD, and it was designed to further reduce false positive rates. ReRe utilizes two LSTM models to jointly detect anomalous data points. One model works exactly like RePAD, whereas the other model works similar to RePAD but with a stricter detection threshold. Compared with RePAD, ReRe requires more compute resources due to the use of two LSTM models.
SALAD (Lee et al., 2021) is another online self-adaptive unsupervised time series anomaly detection approach designed for time series with a recurrent data pattern, and it is also based on RePAD. Different from RePAD, SALAD consists of two phases. The first phase converts the target time series into a series of average absolute relative error (AARE) values on the fly. The second phase predicts an AARE value for every upcoming data point based on short-term historical AARE values. If the difference between a calculated AARE value and the corresponding forecast AARE value is higher than a self-adaptive detection threshold, the corresponding data point is considered anomalous.
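The AARE statistic shared by these approaches can be stated directly; the `eps` guard against division by zero is our implementation detail, not part of the papers:

```python
import numpy as np

def aare(actual, predicted, eps=1e-8):
    """Average Absolute Relative Error over a window of data points."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) /
                         np.maximum(np.abs(actual), eps)))
```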
Niu et al. (Niu et al., 2020) introduced LSTM-based VAE-GAN, which stands for a Long Short-Term Memory-based variational autoencoder generative adversarial network. This method consists of one offline training stage to learn the distribution of normal time series, and one anomaly detection stage to calculate an anomaly score for each data point in the target time series. This method jointly trains the encoder, the generator, and the discriminator to take advantage of the mapping ability of the encoder and the discriminatory ability of the discriminator. However, the method requires that the training data contains no anomalies. Besides, the method is not an online approach since its detection model will not be retrained or updated after the training stage, meaning that it is not adaptive.
Ibrahim et al. (Ibrahim et al., 2022) proposed a hybrid deep learning approach that combines a one-dimensional convolutional neural network with bidirectional long short-term memory (BiLSTM) for anomaly detection in univariate time series. However, the approach requires offline training and considerable training time due to the parameter tuning required by the hybrid model.
### Deep Learning Libraries
Over the last few years, machine learning has seen significant advances. Many different machine learning algorithms have been introduced to address different problems. In the meantime, many DL libraries have been developed by academia, industry, and open-source communities, attempting to provide a clean abstraction over the underlying complex tasks with simple functions that can be used as tools for solving larger problems (Ketkar and Santana, 2017).
TensorFlow (Abadi et al., 2016) is a popular open-source Python-based DL library created and maintained by Google. It uses dataflow graphs to represent both the computation in an algorithm and the state on which the algorithm operates. TensorFlow is designed for large-scale distributed training and inference. It can run on a single CPU system, GPUs, mobile devices, and large-scale distributed systems. However, its low-level application programming interface (API) makes it difficult to use (Nguyen et al., 2019). Because of this, TensorFlow is usually used in combination with Keras (Keras, 2023), which is a Python wrapper library providing high-level, highly modular, and user-friendly API.
CNTK (CNTK, 2023) stands for Cognitive Toolkit, and it was introduced by Microsoft and written in C++ programming language. It supports the Open Neural Network Exchange (ONNX) format, allowing easy model transformation from one DL library to another one. As compared with TensorFlow, CNTK is less popular (Nguyen et al., 2019). Moreover, the official website of CNTK shows that CNTK is no longer actively developed.
PyTorch (Paszke et al., 2019) is an open-source DL framework based on the Torch library. It aims to provide an easy to use, extend, develop, and debug framework. It is equipped with a high-performance C++ runtime that developers can leverage for production environments while avoiding inference via Python (Ketkar and Santana, 2017). PyTorch supports tensor computation with strong GPU acceleration and allows a network to change the way it behaves with small effort using dynamic computational graphs. Similar to CNTK, it also supports the ONNX format.
Deeplearning4j is an open-source distributed deep learning library released by a startup company called Skymind in 2014 (Deeplearning4j, 2023; Wang et al., 2019). Deeplearning4j is written for the Java programming language and the Java Virtual Machine (JVM). It is powered by its own open-source numerical computing library called ND4J, and it supports both CPUs and GPUs. Deeplearning4j provides implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, recurrent neural network, word2vec, doc2vec, etc.
## 3 Related Work
Nguyen et al. (Nguyen et al., 2019) conducted a survey on several DL libraries. They also analyzed strong points and weak points for each library. However, they did not conduct any experiments to compare these DL libraries. Wang et al. (Wang et al., 2019) compared several DL libraries in terms of model design ability, interface property, deployment ability, performance, framework design, and development prospects by using some benchmarks. The authors also made suggestions about how to choose DL frameworks in different scenarios. Nevertheless, their general evaluation and analysis are unable to answer the specific question that this paper attempts to answer, i.e., how DL libraries affect online adaptive lightweight time series anomaly detection approaches.
Kovalev et al. (Kovalev et al., 2016) evaluated the training time, prediction time, and classification accuracy of a fully connected neural network (FCNN) under five different DL libraries: Theano with Keras, Torch, Caffe, Tensorflow, and Deeplearning4j. Apparently, their results are not applicable to lightweight anomaly detection approaches.
Zhang et al. (Zhang et al., 2018) evaluated the performance of several state-of-the-art DL libraries, including TensorFlow, Caffe2, MXNet, PyTorch, and TensorFlow Lite, on different kinds of hardware, including MacBook, FogNode, Jetson TX2, Raspberry Pi, and Nexus 6P. The authors chose a large-scale convolutional neural network (CNN) model called AlexNet (Krizhevsky et al., 2017) and a small-scale CNN model called SqueezeNet (Iandola et al., 2016), and evaluated how each of them performs under different combinations of hardware and DL libraries in terms of latency, memory footprint, and energy consumption. According to the evaluation results, there is no single winner on every metric since each library has its own strengths. Because the two CNN models are much more complex than lightweight anomaly detection approaches, their evaluation results and suggestions may not be applicable.
Zahidi et al. (Zahidi et al., 2021) conducted an analysis to compare different Python-based and Java-based DL libraries and to see how they support different natural language processing (NLP) tasks. Due to the difference between NLP tasks and time series analysis, their results still cannot be applied to the work of this paper.
Zhang et al. (Zhang et al., 2022) built a benchmark that includes six representative DL libraries on mobile devices (TFLite, PyTorchMobile, ncnn, MNN, Mace, and SNPE) and 15 DL models (10 of them for image classification, 3 for object detection, 1 for semantic segmentation, and 1 for text classification). The authors then performed a series of experiments to evaluate the performance of these DL libraries on the 15 DL models and different mobile devices. According to their analysis and observations, no DL library performs best in all tested scenarios, and the impact of DL libraries may overwhelm DL algorithm design and hardware capacity. The target of our paper is quite different from that of Zhang et al. Even though their results point out some useful conclusions, they cannot help us get a clear answer about how different DL libraries affect online adaptive lightweight anomaly detection.
## 4 Evaluation Setup
Based on the description in the Background section, we chose RePAD and SALAD as our target anomaly detection approaches because both of them possess all the previously mentioned desirable features (i.e., unsupervised learning, online model training, adaptability, and lightweight design). As for DL libraries, we chose TensorFlow-Keras, PyTorch, and Deeplearning4j because they are popular and widely used. Recall that both TensorFlow-Keras and PyTorch are based on Python, so it is interesting to see how Deeplearning4j compares with them. Here, the versions of TensorFlow-Keras, PyTorch, and Deeplearning4j are 2.9.1, 1.13.1, and 0.7-SNAPSHOT, respectively.
We implemented RePAD and SALAD in the three DL libraries, yielding the six combinations shown in Table 1. RePAD-TFK refers to RePAD implemented in TensorFlow-Keras, SALAD-PT refers to SALAD implemented in PyTorch, and so forth.
### Real-world datasets
To evaluate the three RePAD combinations, two real-world time series were used. One is called ec2-cpu-utilization-825cc2 (CC2 for short), and the other is called rds-cpu-utilization-e47b3b (B3B for short). Both time series are provided by the Numenta Anomaly Benchmark (NAB) [15]. CC2 contains two point anomalies and one collective anomaly, whereas B3B contains one point anomaly and one collective anomaly. Note that a point anomaly is a single data point which is identified as anomalous with respect to the rest of the time series, whereas a collective anomaly is defined as a sequence of data points which together form an anomalous pattern [14].
Since CC2 and B3B consist of only 4032 data points, they are unable to show the long-term performance of the three RePAD combinations. Hence, we created two long time series called CC2-10 and B3B-10 by individually duplicating CC2 and B3B ten times. Table 2 lists their details. Figures 1 and 2 illustrate all data points in CC2-10 and B3B-10, respectively. Each point anomaly is marked as a red circle, whereas each collective anomaly is marked as a red curve line.
On the other hand, to evaluate the three SALAD combinations, we selected another two real-world recurrent time series. One is Taipei Mass Rapid Transit (TMRT for short) [20], and the other is New York City Taxi demand (NYC for short) from the Numenta Anomaly Benchmark [15]. The former consists of 1260 data points, whereas the latter consists of 10320 data points. Table 3 summarizes the details of TMRT and NYC. They contain only collective anomalies.
### Hyperparameters, parameters, and environment
To ensure a fair evaluation, the three RePAD combinations were configured with the same hyperparameters and parameters, as listed in Table 4, following the setting used by RePAD [16]. Recall that RePAD utilizes the Look-Back and Predict-Forward strategy to determine data size for online model training and data size for prediction. In this paper, we respectively set the Look-Back parameter and the Predict-Forward parameter to 3 and 1 based on the setting suggested by [16]. In other words, the LSTM models used by RePAD-TFK, RePAD-PT, and RePAD-DL4J will be always trained with three historical data points, and the trained models will be used to predict the next upcoming data point in the target time series.
In addition, RePAD-TFK, RePAD-PT, and RePAD-DL4J inherited the simple LSTM structure used by RePAD [16], i.e., only one hidden layer and ten hidden units. Note that Early stopping [16] was not used to automatically determine the number of epochs since this technique is not officially supported by PyTorch.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Name & Number of data points & Time interval (min) & Duration \\ \hline CC2-10 & 40,320 & 5 & 140 days \\ \hline B3B-10 & 40,320 & 5 & 140 days \\ \hline \hline \end{tabular}
\end{table}
Table 2: Two extended real-world time series used to evaluate RePAD-TFK, RePAD-PT, and RePAD-DL4J.
Figure 1: All data points on the CC2-10 time series. Each anomaly is marked in red.
Figure 2: All data points on the B3B-10 time series. Each anomaly is marked in red.
For fairness, the number of epochs was set to 50 for the three RePAD combinations.
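For concreteness, the shared network is small enough to state in full; below is a PyTorch sketch matching the Table 4 setting (one hidden layer, ten hidden units, learning rate 0.005). The class name and the choice of Adam as the optimizer are our assumptions.

```python
import torch
import torch.nn as nn

class TinyLSTM(nn.Module):
    """One LSTM layer with 10 hidden units; tanh is the LSTM default."""
    def __init__(self, hidden=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, look_back=3, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next data point

model = TinyLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
```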
Similarly, to ensure a fair evaluation, the three SALAD combinations were all configured with the same hyperparameter and parameter setting, as listed in Table 5. However, the setting differs slightly between the two used time series, TMRT and NYC. Recall that SALAD consists of one conversion phase and one detection phase. The conversion phase requires more data points for model training than the detection phase does. Hence, the Look-Back parameter for the conversion phase of SALAD-TFK, SALAD-PT, and SALAD-DL4J was set to 288 on NYC and 63 on TMRT. For the same reason, we configured 100 and 50 epochs for the conversion phase and the detection phase of the three SALAD combinations, respectively. On the other hand, the Look-Back parameter for the detection phase of the three SALAD combinations was set to 3 for both TMRT and NYC. This is because the detection phase works exactly like RePAD, and three is the recommended value suggested by Lee et al. (2021) for the Look-Back parameter of RePAD.
The evaluations for all the six combinations were individually performed on the same laptop running MacOS 10.15.1 with 2.6 GHz 6-Core Intel Core i7 and 16GB DDR4 SDRAM. Note that we did not choose GPUs or high-performance computers to conduct the evaluation since it is interesting to know how TensorFlow-Keras, PyTorch, and Deeplearning4j impact RePAD and SALAD on a commodity computer.
## 5 Evaluation Results
In this section, we detail the evaluation results of the three RePAD combinations and the three SALAD combinations.
### Three RePAD combinations
To measure the detection accuracy of each RePAD combination, we chose precision, recall, and F-score. Precision is the ratio between the true positives (TP) and all the positives, i.e., precision = TP/(TP+FP), where FP represents false positives. Recall measures the correctly identified anomalies among all the actual anomalies, i.e., recall = TP/(TP+FN), where FN represents false negatives. F-score is a well-known composite measure to evaluate the accuracy of a model, defined as \(2\cdot(\text{precision}\cdot\text{recall})/(\text{precision}+\text{recall})\). A higher F-score indicates better detection accuracy.
It is worth noting that we did not utilize the traditional pointwise approach to measure precision, recall, and F-score. Instead, we refer to the evaluation method used by Lee et al. (2020). More specifically, if a point anomaly occurring at time point \(Z\) can be detected within a time period ranging from time point \(Z-K\) to time point \(Z+K\), this anomaly is considered correctly detected. On the other hand, for any collective anomaly, if it starts at time point \(A\) and ends at time point \(B\) (\(B>A\)), and it can be detected within the period between \(A-K\) and \(B\), we consider this anomaly correctly detected. In this paper, we set \(K\) to 7 following the setting suggested by Ren et al. (2019), i.e., \(K\) is 7 if the measurement interval of a time series is a minute, and \(K\) is 3 for an hourly time series.
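In code, this window-based matching reads as follows (helper names are ours):

```python
def point_anomaly_detected(z, detections, k=7):
    """A point anomaly at time z counts as correctly detected if any
    detection falls within [z - k, z + k] (k = 7 for minutely series)."""
    return any(z - k <= d <= z + k for d in detections)

def collective_anomaly_detected(a, b, detections, k=7):
    """A collective anomaly spanning [a, b] counts as correctly
    detected if any detection falls within [a - k, b]."""
    return any(a - k <= d <= b for d in detections)
```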
In addition, we used three performance metrics to evaluate the efficiency of each RePAD combination. The first one is LSTM training ratio, which is the ratio between the number of data points that require a new LSTM model training and the total number of data points in the target time series. A lower ratio indicates less computation resources and quicker response time because LSTM model training takes some time. The second one is average detection time for each data point when LSTM model training is not required (ADT-NT for short). According to the design of RePAD, the LSTM model will not be replaced if it can accurately predict the next data point, which also means that the detection can be performed immediately without any delay. The last performance metric is average detection time when LSTM model training is required (ADT-T for short). When LSTM model training is required, the time to detect if a data point is anomalous consists of the time to train a new LSTM
\begin{table}
\begin{tabular}{l l} \hline \hline Hyperparameters/parameters & Value \\ \hline \hline The Look-Back parameter & 3 \\ \hline The Predict-Forward parameter & 1 \\ \hline The number of hidden layers & 1 \\ \hline The number of hidden units & 10 \\ \hline The number of epochs & 50 \\ \hline Learning rate & 0.005 \\ \hline Activation function & tanh \\ \hline Random seed & 140 \\ \hline \end{tabular}
\end{table}
Table 4: The hyperparameter and parameter setting used by RePAD-TFK, RePAD-PT, and RePAD-DL4J.
\begin{table}
\begin{tabular}{l l l} \hline Hyperparameters/parameters & The conversion phase & The detection phase \\ \hline \hline The Look-Back parameter & 288 for NYC, 63 for TMRT & 3 \\ \hline The Predict-Forward parameter & 1 & 1 \\ \hline The number of hidden layers & 1 & 1 \\ \hline The number of hidden units & 10 & 10 \\ \hline The number of epochs & 100 & 50 \\ \hline Learning rate & 0.001 & 0.001 \\ \hline Activation function & tanh & tanh \\ \hline Random seed & 140 & 140 \\ \hline \end{tabular}
\end{table}
Table 5: The hyperparameter and parameter setting used by SALAD-TFK, SALAD-PT, and SALAD-DL4J.
model, the time for this new model to re-predict the value of the data point, and the time to determine if the data point is anomalous. Apparently, ADT-T would be longer than ADT-NT due to LSTM model training.
Tables 6 to 9 show the performance of the three RePAD combinations on the CC2-10 time series. It is clear that RePAD-PT performs the best since it provides the highest detection accuracy, the least number of LSTM training, and the shortest ADT-T. The result shows that PyTorch seems to be a good choice for RePAD.
Although RePAD-TFK provides the second best detection accuracy, its ADT-NT and ADT-T were obviously the longest. It seems like TensorFlow-Keras is less efficient than PyTorch and Deeplearning4j.
On the other hand, we can see from Table 6 that RePAD-DL4J provides the lowest detection accuracy due to the lowest recall. Nevertheless, its ADT-NT is the shortest and its ADT-T is the second shortest with the smallest standard deviation. It seems that Deeplearning4j offers more stable execution performance than the other two libraries.
Tables 10 to 13 show the detection results of the three RePAD combinations on another time series B3B-10. Apparently, RePAD-TFK has the highest detection accuracy and the lowest LSTM training ratio. However, its ADT-NT and ADT-T are the longest. This result confirms that TensorFlow-Keras introduces more overhead to RePAD than the other two libraries do.
When RePAD was implemented in PyTorch, it has the second best detection accuracy, the second shortest ADT-NT, and the shortest ADT-T. In other words, PyTorch provides a very good balance between detection accuracy and response time. On the other hand, when RePAD-DL4J worked on B3B-10, its performance is similar to its performance on CC2-10 (i.e., the lowest detection accuracy but satisfactory execution performance).
### Three SALAD combinations
To evaluate the detection accuracy of the three SALAD combinations, we also used precision, recall, and F-Score. Furthermore, we measured the average time for each SALAD combination to process each data point in their conversion phases and detection phases.
Figure 3 shows the detection results of the three SALAD combinations on the TMRT time series. Apparently, all of them can detect the collective anomaly without any false positive or false negative. Hence, the precision, recall, and F-score of the three combinations are all one as shown in Table 14.
Table 15 lists the time consumption of the three SALAD combinations on TMRT. It is clear that SALAD-PT has the shortest average conversion time and average detection time, whereas SALAD-TFK has the longest average conversion time and average detection time. It seems like PyTorch is also the best choice for SALAD so far.
Table 16 lists the detection results of the three SALAD combinations on the NYC time series. We can see that SALAD-DL4J has the best detection accuracy. Recall that the conversion phase of SALAD
\begin{table}
\begin{tabular}{l c c c} \hline Combination & Precision & Recall & F-score \\ \hline \hline RePAD-TFK & 0.957 & 0.9 & 0.928 \\ \hline RePAD-PT & 0.954 & 0.934 & 0.944 \\ \hline RePAD-DL4J & 0.964 & 0.7 & 0.811 \\ \hline \end{tabular}
\end{table}
Table 6: The detection accuracy of the three RePAD combinations on the CC2-10 time series.
\begin{table}
\begin{tabular}{l c} \hline Combination & LSTM training ratio \\ \hline \hline RePAD-TFK & 0.0094 (379/40320) \\ \hline RePAD-PT & 0.0089 (357/40320) \\ \hline RePAD-DL4J & 0.0131 (528/40320) \\ \hline \end{tabular}
\end{table}
Table 7: The LSTM training ratio of the three RePAD combinations on the CC2-10 time series.
\begin{table}
\begin{tabular}{l c c} \hline Combination & ADT-NT (sec) & Std. Dev. (sec) \\ \hline \hline RePAD-TFK & 0.518 & 0.726 \\ \hline RePAD-PT & 0.069 & 0.263 \\ \hline RePAD-DL4J & 0.028 & 0.022 \\ \hline \end{tabular}
\end{table}
Table 8: The ADT-NT of the three RePAD combinations on the CC2-10 time series.
\begin{table}
\begin{tabular}{l c c} \hline Combination & ADT-T (sec) & Std. Dev. (sec) \\ \hline \hline RePAD-TFK & 1.913 & 1.409 \\ \hline RePAD-PT & 0.100 & 0.318 \\ \hline RePAD-DL4J & 0.375 & 0.030 \\ \hline \end{tabular}
\end{table}
Table 9: The ADT-T of the three RePAD combinations on the CC2-10 time series.
\begin{table}
\begin{tabular}{l c c} \hline Combination & Precision & Recall & F-score \\ \hline \hline RePAD-TFK & 0.892 & 1 & 0.943 \\ \hline RePAD-PT & 0.872 & 1 & 0.932 \\ \hline RePAD-DL4J & 0.828 & 1 & 0.906 \\ \hline \end{tabular}
\end{table}
Table 10: The detection accuracy of the three RePAD combinations on the B3B-10 time series.
\begin{table}
\begin{tabular}{l c c} \hline Combination & LSTM training ratio \\ \hline \hline RePAD-TFK & 0.0026 (105/40320) \\ \hline RePAD-PT & 0.0028 (112/40320) \\ \hline RePAD-DL4J & 0.0042 (168/40320) \\ \hline \end{tabular}
\end{table}
Table 11: The LSTM training ratio of the three RePAD combinations on B3B-10.
(Lee et al., 2021b) aims to convert a complex time series into a less complex AARE series by predicting the value of each future data point, measuring the difference between every pair of predicted and actual data points, and deriving the corresponding AARE values. As we can see from Figure 4, most of the data points predicted by the conversion phase of SALAD-DL4J matched the real data points. Consequently, as shown in Figure 5, the detection phase of SALAD-DL4J was able to detect all the collective anomalies even though there are some false positives. However, the good performance of the conversion phase of SALAD-DL4J comes at the price of a long conversion time (see Table 17) due to the LSTM model training required for many data points.
On the other hand, when SALAD-TFK and SALAD-PT worked on NYC, they both had very poor detection accuracy (see Table 16). SALAD-TFK could detect only one collective anomaly, i.e., the snow storm. This is because the conversion phase of SALAD-TFK was unable to correctly predict data points (as shown in Figure 6). This poor performance consequently affected the detection phase of SALAD-TFK and prevented it from detecting anomalies. We can see from Figure 7 that almost all AARE values are lower than the detection threshold.
If we look at Figure 7 more closely, we can see that the detection threshold was very high in the beginning.
\begin{table}
\begin{tabular}{l l l l} \hline Combination & Precision & Recall & F-score \\ \hline \hline SALAD-TFK & 1 & 1 & 1 \\ \hline SALAD-PT & 1 & 1 & 1 \\ \hline SALAD-DL4J & 1 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 14: The detection accuracy of the three SALAD combinations on the TMRT time series.
\begin{table}
\begin{tabular}{l l l} \hline Combination & ADT-T (sec) & Std. Dev. (sec) \\ \hline \hline RePAD-TFK & 1.989 & 1.436 \\ \hline RePAD-PT & 0.105 & 0.325 \\ \hline RePAD-DL4J & 0.388 & 0.039 \\ \hline \end{tabular}
\end{table}
Table 13: The ADT-T of the three RePAD combinations on the B3B-10 time series.
\begin{table}
\begin{tabular}{l l l} \hline Combination & Precision & Recall & F-score \\ \hline \hline SALAD-TFK & 0.447 & 0.2857 & 0.349 \\ \hline SALAD-PT & 0.338 & 0.2857 & 0.310 \\ \hline SALAD-DL4J & 0.709 & 1 & 0.830 \\ \hline \end{tabular}
\end{table}
Table 16: The detection accuracy of the three SALAD combinations on the NYC time series.
Figure 4: The original data points in the NYC time series versus the data points predicted by the conversion phase of SALAD-DL4J.
Figure 5: The AARE values generated by the detection phase of SALAD-DL4J versus the self-adaptive detection threshold of SALAD-DL4J on the NYC time series.
Figure 3: The detection results of the three SALAD combinations on the TMRT time series.
This was due to the high AARE values, which made SALAD believe that its current LSTM model did not need to be replaced. Even though the threshold dropped afterwards, it was still much higher than many subsequent AARE values. This is why most of the anomalies could not be detected. Since SALAD-TFK requires only a few model trainings, its average conversion time is much shorter than that of SALAD-DL4J (see Table 17).
The same situation happened to SALAD-PT when it worked on the NYC series. SALAD-PT had very poor detection accuracy even though its average conversion time and average detection time were the shortest.
## 6 Conclusions and Future Work
In this paper, we investigated how DL libraries impact online adaptive lightweight time series anomaly detection by implementing two state-of-the-art anomaly detection approaches (RePAD and SALAD) in three well-known DL libraries (TensorFlow-Keras, PyTorch, and Deeplearning4j) and conducting a series of experiments to evaluate their detection performance and time consumption based on four open-source time series. The results indicate that DL libraries have a significant impact on RePAD and SALAD in terms of not only their detection accuracy but also their time consumption and response time.
According to the results, TensorFlow-Keras is not recommended for online adaptive lightweight time series anomaly detection because it might lead to unstable detection accuracy and higher time consumption. When it was used to implement RePAD, RePAD had satisfactory detection accuracy. However, when it was used to implement SALAD, SALAD had unstable detection accuracy on one of the used time series. Besides, TensorFlow-Keras is less efficient than PyTorch and Deeplearning4j because it causes the longest response time for both RePAD and SALAD.
On the other hand, PyTorch is the most efficient library among the three DL libraries since it enables RePAD and SALAD to provide real-time processing and instant responses. It also enables RePAD to provide high detection accuracy. However, similar to TensorFlow-Keras, it causes unstable detection accuracy when it was used to implement SALAD and worked on the NYC time series.
Deeplearning4j is considered the most stable library among the three DL libraries because it not only enables RePAD and SALAD to provide satisfactory detection accuracy, but also enables RePAD and SALAD to have reasonable time consumption and response time.
We found that it is very important to carefully choose a DL library for online adaptive lightweight time series anomaly detection because DL libraries might not show the true performance of an anomaly detection approach. What makes it even worse is that they might mislead developers or users into believing that a bad anomaly detection approach implemented in a good DL library is better than a good anomaly detection approach implemented in a bad DL library.
In our future work, we would like to release all the source code (i.e., RePAD and SALAD implemented in the three DL libraries) on a public software repository such as GitHub, GitLab, or Bitbucket.
## Acknowledgement
The authors want to thank the anonymous reviewers for their reviews and suggestions for this paper.
Figure 6: The original data points in the NYC time series versus the data points predicted by the conversion phase of SALAD-TFK.
Figure 7: The AARE values generated by the detection phase of SALAD-TFK versus the self-adaptive detection threshold of SALAD-TFK on the NYC time series. | Providing online adaptive lightweight time series anomaly detection without human intervention or domain knowledge is highly valuable. Several such anomaly detection approaches have been introduced in the past years, but all of them were implemented in only one deep learning library. With the development of deep learning libraries, no evaluation exists of how they affect these anomaly detection approaches. Choosing a deep learning library to implement an anomaly detection approach may fail to show the approach's true performance, and may mislead users into believing that one approach is better than another. Therefore, in this paper, in order to investigate how deep learning libraries affect online adaptive lightweight time series anomaly detection, we implement two state-of-the-art anomaly detection approaches in three well-known deep learning libraries and evaluate how these approaches are affected by the three |
2309.06231 | Steady-state selection in multi-species driven diffusive systems | We introduce a general method to determine the large scale non-equilibrium
steady-state properties of one-dimensional multi-species driven diffusive
systems with open boundaries, generalizing thus the max-min current principle
known for systems with a single type of particles. This method is based on the
solution of the Riemann problem of the associated system of conservation laws.
We demonstrate that the effective density of a reservoir depends not only on
the corresponding boundary hopping rates but also on the dynamics of the entire
system, emphasizing the interplay between bulk and reservoirs. We highlight the
role of Riemann variables in establishing the phase diagram of such systems. We
apply our method to three models of multi-species interacting particle systems
and compare the theoretical predictions with numerical simulations. | Luigi Cantini, Ali Zahra | 2023-09-12T13:49:56 | http://arxiv.org/abs/2309.06231v1 | # Steady-state selection in multi-species driven diffusive systems
###### Abstract
We introduce a general method to determine the large scale non-equilibrium steady-state properties of one-dimensional multi-species driven diffusive systems with open boundaries, generalizing thus the max-min current principle known for systems with a single type of particles. This method is based on the solution of the Riemann problem of the associated system of conservation laws. We demonstrate that the effective density of a reservoir depends not only on the corresponding boundary hopping rates but also on the dynamics of the entire system, emphasizing the interplay between bulk and reservoirs. We highlight the role of Riemann variables in establishing the phase diagram of such systems. We apply our method to three models of multi-species interacting particle systems and compare the theoretical predictions with numerical simulations.
Driven diffusive systems appear in various areas across physics, chemistry, and theoretical biology [1; 2; 3] and are widely regarded as a fundamental playground for understanding the behavior of complex systems away from thermal equilibrium [4]. A classic illustration of such systems involves particles moving within a lattice and subject to hard-core exclusion. The introduction of a bias in their movement, simulating the influence of an external driving force, builds up macroscopic currents in the stationary state. A particularly relevant setting consists in putting a one-dimensional system in contact with boundary particle reservoirs, the interplay between boundary dynamics and bulk driving leading to genuinely out-of-equilibrium phenomena such as boundary-induced phase transitions [5]. In this case, when the system presents a single species of particles, a simple general principle known as the max-min current principle [5; 6; 7; 8] allows one to determine the phase diagram for the steady-state current and particle density as a function of the boundary reservoir densities. Despite the success of this principle in treating one-dimensional open boundary problems, its generalization to systems containing several different species of particles has been a long-standing challenge [9; 10; 11; 12].
The goal of the present paper is to put forward a scheme that makes it possible to determine the steady-state average particle densities and currents of one-dimensional multi-species driven systems with open boundaries. Such a scheme is based essentially on the sole knowledge of the bulk hydrodynamic behavior of the model. As a starting point, similarly to the max-min principle, one supposes the boundary densities to be known. In a system with \(n\) different particle species, these are denoted by \(\mathbf{\rho}^{L}=\{\rho_{1}^{L},\rho_{2}^{L},\ldots,\rho_{n}^{L}\}\) for the left boundary and \(\mathbf{\rho}^{R}=\{\rho_{1}^{R},\rho_{2}^{R},\ldots,\rho_{n}^{R}\}\) for the right boundary. Then the bulk density is determined by the solution of the associated Riemann problem at the origin (\(\mathrm{RP}_{0}\))
\[(\mathbf{\rho}^{L},\mathbf{\rho}^{R})\xrightarrow{\mathrm{RP}_{0}}\mathbf{\rho}^{B}. \tag{1}\]
As a first argument in support of this claim, we shall show that this principle is equivalent to Krug's max-min current principle when applied to the case of a single-species model. We shall moreover present a further heuristic justification of it based on a vanishing viscosity regularization of the associated conservation laws, which applies to the general multi-species case.
By itself the principle (1) is not enough to determine the bulk densities since one has at the same time to make sense of the boundary densities. If one supposes that the boundary currents are functions of the boundary densities alone, then current conservation through the entire system provides the missing conditions to completely determine both bulk and boundary densities. We apply this scheme to three models, where we have access to the particle currents as functions of the particle densities (which is necessary in order to solve numerically the associated Riemann problem): the 2-TASEP with arbitrary bulk hopping rates, the hierarchical 2-ASEP, and a 3-TASEP. In all three models we find good agreement with numerical simulations.
## I The scheme
The large scale behavior of driven diffusive system consisting of \(n\) species of particles is generally governed by a system of conservation laws
\[\partial_{t}\mathbf{\rho}+\partial_{x}\mathbf{J}=0 \tag{2}\]
where the \(n\) locally conserved quantities are the coarse-grained particle densities \(\mathbf{\rho}(x,t)=(\rho_{1}(x,t),...,\rho_{n}(x,t))\), with associated currents \(\mathbf{J}(\mathbf{\rho})=(J_{1}(\mathbf{\rho}),..,J_{n}(\mathbf{\rho}))\). When the system is defined on a finite interval \(x\in[0,L]\) and coupled to two reservoirs with densities \(\mathbf{\rho}^{L}\) and \(\mathbf{\rho}^{R}\) the system reaches in the limit \(t\to\infty\) a steady state with uniform bulk densities \(\mathbf{\rho}^{B}(\mathbf{\rho}^{L},\mathbf{\rho}^{R})\). We claim that for \(L\to\infty\), these bulk densities are determined by solving a Riemann problem. Such a problem is formulated on an infinite line \(x\in\mathbb{R}\) with an initial condition consisting of two regions of uniform densities, on the left and on the
right of the origin \(x=0\)
\[\mathbf{\rho}(x,0)=\mathbf{\rho}^{L}\mathds{1}_{x<0}(x)+\mathbf{\rho}^{R}\mathds{1}_{x>0}(x) \quad x\in\mathbb{R}.\]
The solution of the Riemann problem is invariant under the rescaling \((x,t)\rightarrow(\lambda x,\lambda t)\) and therefore takes the form \(\mathbf{\rho}(x,t)=\mathbf{\rho}(\frac{x}{t})\). In particular, for \(t>0\), \(\mathbf{\rho}(0,t)\) is independent of time, so we define: \(\mathbf{\rho}|_{0}(\mathbf{\rho}^{L},\mathbf{\rho}^{R}):=\mathbf{\rho}(0,t)\) and we call it _the solution to the Riemann problem at the origin_. Our claim is that the bulk densities for the open boundary problem with given boundary densities coincide with the solution at zero of the corresponding Riemann problem, namely:
\[\boxed{\mathbf{\rho}^{B}(\mathbf{\rho}^{L},\mathbf{\rho}^{R})=\mathbf{\rho}|_{0}(\mathbf{\rho}^{L},\mathbf{\rho}^{R})} \tag{3}\]
The exact meaning of the boundary conditions is a mathematically subtle issue [13; 14; 15]. We define them in an operative way as the densities of the first and last site of the lattice, meaning that the two boundary sites can be conceptually considered as part of their nearby reservoirs. Let us be more specific about the boundary dynamics we shall consider. At each boundary a particle can either enter or exit the system, or it can change its own species. If we identify empty sites with particles of a species \(0\), the dynamics is fully encoded in the rates \(\mathbf{\nu}^{L}=\{\nu^{L}_{i,j},0\leq i\neq j\leq n\}\) at the left and \(\mathbf{\nu}^{R}=\{\nu^{R}_{i,j},0\leq i\neq j\leq n\}\) at the right boundary
\[j\xrightarrow{\nu^{L}_{i,j}}i\qquad i\xrightarrow{\nu^{R}_{i,j}}j\]
The boundary densities \(\mathbf{\rho}^{L}\) and \(\mathbf{\rho}^{R}\), as well as the bulk ones are then functions of the boundary rates.
Since the boundary hopping rates are independent of the rest of the system, we can write the current on a given boundary as a function of the density of that boundary only
\[J^{L}_{i}(\mathbf{\rho}^{L}) =\sum_{j=1}^{n}\rho_{j}\nu^{L}_{ij}-\rho_{i}\sum_{j=1}^{n}\nu^{L}_ {ji} \tag{4}\] \[J^{R}_{i}(\mathbf{\rho}^{R}) =\rho_{i}\sum_{j=1}^{n}\nu^{R}_{ij}-\sum_{j=1}^{n}\rho_{j}\nu^{R}_ {ji}\]
In the steady state, we have
\[\mathbf{J}^{L}(\mathbf{\rho}^{L})=\mathbf{J}(\mathbf{\rho}^{B})=\mathbf{J}^{R}(\mathbf{\rho}^{R}) \tag{5}\]
In conclusion, eqs. (3) and (5) provide a system of equations enabling one to determine the bulk and boundary densities of the system.
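To make the bookkeeping concrete, here is a minimal NumPy sketch of the boundary currents (4); the array layout (a full density vector including the holes as species \(0\), and rate matrices indexed so that `nu_L[i, j]` is the rate of the transition \(j\to i\)) is our own convention for illustration, not something fixed by the paper.

```python
import numpy as np

def boundary_currents_left(rho, nu_L):
    """Left-boundary currents J^L_i of eq. (4).

    rho  : full density vector (rho_0, ..., rho_n), species 0 = holes,
           so rho_0 = 1 - sum of the particle densities.
    nu_L : (n+1) x (n+1) rate matrix with nu_L[i, j] = rate of j -> i.
    """
    gain = nu_L @ rho                # sum_j rho_j nu^L_{ij}
    loss = rho * nu_L.sum(axis=0)    # rho_i sum_j nu^L_{ji}
    return gain - loss

def boundary_currents_right(rho, nu_R):
    """Right-boundary currents J^R_i of eq. (4), where particles exit i -> j."""
    return rho * nu_R.sum(axis=1) - nu_R.T @ rho
```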
### Reformulation of the Max-Min Current Principle
A first argument in favor of the principle eq.(3) is the fact that in the case of a single species of particles it coincides with Krug's max-min current principle. According to this principle, the steady-state current is obtained as [5; 7; 8; 16]:
\[j=\begin{cases}\max_{\rho\in[\rho^{R},\rho^{L}]}J(\rho)&\text{if }\rho^{L}>\rho^{R}\\ \min_{\rho\in[\rho^{L},\rho^{R}]}J(\rho)&\text{if }\rho^{L}<\rho^{R}\end{cases} \tag{6}\]
Let's compare this result with what one would obtain by applying eq.(3). Let's start with the case where \(\rho^{R}>\rho^{L}\), which corresponds to a minimum current phase. When considering the associated Riemann problem we can assume the current \(J\) to be a convex function of the density in the interval \([\rho^{L},\rho^{R}]\); otherwise one has to replace it with its convex hull on the interval \([\rho^{L},\rho^{R}]\)[17]. The solution to the Riemann problem can be expressed as a function of \(u=\frac{x}{t}\):
\[\rho(u)=\rho^{L}\mathbf{1}_{u<v(\rho^{L})}+\rho^{R}\mathbf{1}_{u>v(\rho^{R})}+v^{-1}(u )\mathbf{1}_{v(\rho^{L})<u<v(\rho^{R})} \tag{7}\]
where \(v(\rho):=\frac{dJ}{d\rho}\). To compare the solution at zero with the density predicted by the minimum current phase, we can identify three cases:
1) If \(v(\rho^{L})>0\), then the solution at zero has a value of \(\rho^{L}\), and simultaneously, the minimum \(\min_{\rho\in[\rho^{L},\rho^{R}]}(J(\rho))\) is reached at \(\rho^{L}\). In this case, the bulk has the same density as the left boundary, which we refer to as the _left-induced phase_.
2) If \(v(\rho^{R})<0\), then the solution at zero has a value of \(\rho^{R}\), and simultaneously, the minimum \(\min_{\rho\in[\rho^{L},\rho^{R}]}(J(\rho))\) is attained at \(\rho^{R}\). This is referred to as a _right-induced phase_.
3) If neither of the two previous statements is true, there exists, due to the monotonicity of the derivative, a unique value \(\rho^{B}\in[\rho^{L},\rho^{R}]\) for which \(v(\rho^{B})=0\). This value corresponds to both the Riemann solution at zero and the minimum \(\min_{\rho\in[\rho^{L},\rho^{R}]}(J(\rho))\). We refer to this situation as the _bulk-induced phase_.
When \(\rho^{R}<\rho^{L}\), a similar reasoning can be applied, but we replace \(J(\rho)\) with its concave hull over the interval \([\rho^{R},\rho^{L}]\). So we conclude that the max-min current principle and the eq.(3) give the same answer.
As an example, in the case of a single-species TASEP, we have \(v(\rho^{B})=1-2\rho^{B}\). When \(v>0\), we have \(\rho^{B}<\frac{1}{2}\), which corresponds to the low-density phase, and the bulk is left-induced. The high-density regime corresponds to a right-induced bulk density. The maximal current phase, where \(\rho=\frac{1}{2}\), is not induced from either the left or the right.
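For the single-species case, the whole discussion above condenses into a few lines of code. The following sketch (ours, not part of the paper) evaluates the max-min prescription (6) on a density grid; for \(\rho^{L}\neq\rho^{R}\) the selected density is exactly the Riemann solution at the origin.

```python
import numpy as np

def bulk_density_maxmin(rho_L, rho_R, J, num=2001):
    """Bulk density selected by the max-min principle, eq. (6).

    J : vectorized current-density relation, e.g. lambda r: r * (1 - r)
        for the TASEP.  For rho_L < rho_R the minimum of J over
        [rho_L, rho_R] is selected, otherwise the maximum over
        [rho_R, rho_L]; the arg-min/arg-max is the bulk density.
    """
    lo, hi = sorted((rho_L, rho_R))
    grid = np.linspace(lo, hi, num)
    vals = J(grid)
    idx = np.argmin(vals) if rho_L < rho_R else np.argmax(vals)
    return grid[idx]

# TASEP: v(rho) = 1 - 2 rho vanishes at rho = 1/2 (maximal current phase)
J_tasep = lambda r: r * (1.0 - r)
print(bulk_density_maxmin(0.9, 0.2, J_tasep))  # -> 0.5, bulk-induced
print(bulk_density_maxmin(0.2, 0.3, J_tasep))  # -> 0.2, left-induced (low density)
```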
### Multiple Conserved Quantities
In this section we shall provide a plausibility argument for eq.(3). It is by no means a proof of that equation; further support will come from the comparison with simulations, discussed in the next section. Our argument is based on a vanishing viscosity approach. This involves adding a diffusive component to the current such that
the total current, which remains constant in the steady state, is given by:
\[\mathbf{J}^{total}=\mathbf{J}(\mathbf{\rho})-\epsilon D(\mathbf{\rho})\frac{\partial\mathbf{\rho}}{ \partial x} \tag{8}\]
Here, \(\epsilon>0\) and \(D(\mathbf{\rho})\) is a positive-definite matrix. Since the conservation laws become locally scalar in the directions of the eigenvectors of the Jacobian \(\frac{\partial J_{i}}{\partial\rho_{j}}\) we assume that this property extends to the viscous case, implying that \(D(\mathbf{\rho})\) commutes with the Jacobian. This assumption ensures a mathematically stable regularization scheme for the boundary problem.
For the rest of the argument we shall assume that the conservation laws eq.(2) admit \(n\) independent Riemann variables \(\mathbf{\xi}=(\xi_{1},\ldots,\xi_{n})\). These are functions of the densities \(\mathbf{\xi}(\mathbf{\rho})\), that "diagonalize" the conservation equations eq.(2), in the sense
\[\partial_{t}\xi_{i}(x,t)+v_{i}(\mathbf{\xi})\partial_{x}\xi_{i}(x,t)=0,\]
where it can be shown that the speeds \(v_{i}\) are the eigenvalues of the Jacobian matrix \(\frac{\partial J_{i}}{\partial\rho_{j}}(\mathbf{\rho})\). We remark that the existence of the Riemann variables is ensured for \(n=1,2\) (for \(n=1\) the Riemann variable is the density itself). Now, rewriting eq.(8) in terms of the Riemann variables we get the ordinary differential equation:
\[\frac{\partial\mathbf{\xi}}{\partial x}=\epsilon^{-1}M^{-1}D^{-1}(J(\mathbf{\xi})-J^ {total}):=F(\mathbf{\xi}) \tag{9}\]
where \(M_{ij}=\frac{\partial\rho_{i}}{\partial\xi_{j}}\). In the limit \(\epsilon\to 0\) we have as expected \(J(\mathbf{\xi})=J^{total}\) on all the system, with the possible exception of microscopic regions close to the boundaries. This means that the bulk value \(\mathbf{\xi}^{B}\) represents a stationary point of the ODE (9), \(F(\mathbf{\xi}^{B})=0\). In order to determine the relation between the bulk and boundary values of each Riemann variable, we linearize the ODE around the stationary point. It is not difficult to show that the Jacobian matrix \(\frac{\partial F}{\partial\xi}\) is diagonal at the stationary point, \(\frac{\partial F}{\partial\xi_{j}}(\mathbf{\xi}^{B})=\epsilon^{-1}d_{i}^{-1}v_{i}\delta_{ij}\), where \(d_{i}>0\) are the eigenvalues of the diffusion matrix \(D\). An illustrative example of the field associated with the ODE for a two-component system is shown in figure 2.
* When \(v_{i}<0\), then \(\xi_{i}(x)\) experiences exponential decay towards the stationary bulk value. The decay rate is given by \(\mu_{i}=\epsilon^{-1}d_{i}^{-1}v_{i}\). In this scenario, the bulk stationary value is attained on the left side after a boundary layer of typical size \(1/\mu_{i}\), which is proportional to \(\epsilon\). On the right boundary, the system simply extends the bulk behavior, indicating a right-induced phase.
* When \(v_{i}>0\), using a similar argument we can infer that \(\xi_{i}\) is induced from the left, and the boundary layer is located on the right.
* When \(v_{i}=0\): the size of the boundary layer diverges for finite \(\epsilon\). The flow of the ODE in the direction of the associated eigenvector indeed ceases to be exponential and becomes rather polynomial. The bulk is therefore not induced by any boundary; however, it belongs to the manifold \(v_{i}(\mathbf{\xi})=0\). We say that we are in a bulk-induced phase for \(\xi_{i}\).
This is the same result one would obtain by considering the solution of the Riemann problem at the origin. Let's point out that the idea of looking at the signs of the eigenvalues governing the phase transitions in multi-species driven diffusive systems has already been discussed in [18], however without reference to the Riemann variables.
## II Application to multi-components interacting particles systems
In this section we consider three different driven diffusive systems. The first two each contain two species of particles. More specifically, the first one is the 2-TASEP introduced in [19; 20], while the second one is a hierarchical 2-species ASEP. The third model is a particular case of a 3-species TASEP. For all these models we compare numerical simulations with the predictions of the system of equations eq.(3) and eq.(5).
This system of equations cannot be solved analytically; therefore we make use of an iterative procedure: we begin by selecting random initial densities for the boundaries. Then, we determine the bulk density using equation (3), which provides the current. Subsequently, we calculate the boundary densities by inverting equation (4). We continue this iteration process between the boundaries and the bulk until convergence is achieved. However, it is worth noticing that this algorithm may encounter cyclic trajectories. To prevent this issue, we introduce a damping parameter \(\gamma\), which should be chosen sufficiently small. The updated equation becomes: \(\mathbf{x}^{n+1}=\gamma\mathbf{f}(\mathbf{x}^{n})+(1-\gamma)\mathbf{x}^{n}\). Here, \(\mathbf{x}^{n}\) represents the set of variables after the \(n\)-th iteration, and \(\mathbf{f}\) represents the set of functions governing the iterations.
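A minimal sketch of this damped fixed-point iteration might look as follows; the map `f`, which chains the Riemann solution (3) with the inversion of the boundary relations (4), is left as a user-supplied callable, and the tolerance and iteration cap are our own illustrative choices.

```python
import numpy as np

def damped_iteration(f, x0, gamma=0.01, tol=1e-10, max_iter=100000):
    """Damped fixed-point iteration x_{n+1} = gamma*f(x_n) + (1-gamma)*x_n.

    f : one sweep of the scheme -- map the boundary densities to the bulk
        via the Riemann problem (eq. 3) and back to the boundaries by
        inverting eq. (4).  The damping gamma suppresses the cyclic
        trajectories mentioned in the text.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = gamma * f(x) + (1.0 - gamma) * x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("damped iteration did not converge")
```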
### 2-TASEP with arbitrary hopping rates
This first model is a two-species generalization of TASEP, it consists of two types of particles, denoted by \(\bullet\) and \(\circ\), (empty sites are denoted by \(*\)). The hopping rates in the bulk are :
\[\bullet*\xrightarrow{\beta}**\bullet\qquad\ast\circ\xrightarrow{\alpha}\circ* \qquad\bullet\circ\xrightarrow{1}\circ\bullet\]
while the only non vanishing boundary rates we consider are \(\nu_{\bullet\bullet}^{L/R},\nu_{\epsilon\circ}^{L/R},\nu_{\bullet}^{L/R}\). The currents for this model have been calculated in [21] and used in [22] in order to study its hydrodynamic behavior and in particular to solve the corresponding Riemann problem. Let's recall the expression of the currents:
\[J_{\circ}(\rho_{\circ},\rho_{\bullet}) =z_{\alpha}(z_{\beta}-1)+\rho_{\circ}(z_{\alpha}-z_{\beta}) \tag{10}\] \[J_{\bullet}(\rho_{\circ},\rho_{\bullet}) =z_{\beta}(1-z_{\alpha})+\rho_{\bullet}(z_{\alpha}-z_{\beta}) \tag{11}\]
where \(z_{\alpha}\in[0,\min(1,\alpha)]\) and \(z_{\beta}\in[0,\min(1,\beta)]\) are solution of the saddle point equations
\[\frac{\rho_{\circ}}{z_{\alpha}}+\frac{\rho_{\bullet}}{z_{\alpha}-1 }+\frac{1-\rho_{\circ}-\rho_{\bullet}}{z_{\alpha}-\alpha} =0 \tag{12}\] \[\frac{\rho_{\bullet}}{z_{\beta}}+\frac{\rho_{\circ}}{z_{\beta}-1 }+\frac{1-\rho_{\circ}-\rho_{\bullet}}{z_{\beta}-\beta} =0. \tag{13}\]
The variables \(z_{\alpha},z_{\beta}\) happen to be the Riemann variables for this model [22]. In figure 1 (left) we reported two examples of simulations of the 2-TASEP on a lattice of size \(L=100\) and with different values of the model parameters. We see that the numerical result agrees very well with the theoretical prediction obtained through the iterative solution of eqs.(3,5). The convergence of the iterative procedure is reported on the right of the same figure.
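For reference, the currents (10)-(11) can be evaluated numerically by bracketing the root of the saddle-point equations (12)-(13) between their poles. The sketch below is our own and assumes strictly positive densities with \(\rho_{\circ}+\rho_{\bullet}<1\), so that the bracketing signs hold; it uses SciPy's `brentq` root finder.

```python
import numpy as np
from scipy.optimize import brentq

def z_of(rho_1, rho_2, a, eps=1e-12):
    """Solve the saddle-point equation (12)/(13) for z in (0, min(1, a)).

    The left-hand side has simple poles at z = 0, 1, a; for strictly
    positive densities it goes to +inf near 0+ and to -inf near
    min(1, a)-, so a bracketed root search suffices.
    """
    f = lambda z: (rho_1 / z + rho_2 / (z - 1.0)
                   + (1.0 - rho_1 - rho_2) / (z - a))
    return brentq(f, eps, min(1.0, a) - eps)

def currents_2tasep(rho_o, rho_b, alpha, beta):
    """Currents of the 2-TASEP, eqs. (10)-(11)."""
    za = z_of(rho_o, rho_b, alpha)      # eq. (12)
    zb = z_of(rho_b, rho_o, beta)       # eq. (13): species roles swapped
    J_o = za * (zb - 1.0) + rho_o * (za - zb)
    J_b = zb * (1.0 - za) + rho_b * (za - zb)
    return J_o, J_b
```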
#### iii.1.1 Phase diagram
Following the discussion in Section I.2 we partition the phase space of the bulk densities of this model into phases, characterized by the signs of the functions \(v_{k}(\mathbf{z}^{B})\). This a priori results in 9 phases for a two-component system; however, hyperbolicity of the corresponding conservation laws implies that some phases are forbidden, as illustrated in the following table:
\[\begin{array}{c|c|c|c} & v_{\alpha}<0 & v_{\alpha}=0 & v_{\alpha}>0\\ \hline\hline v_{\beta}<0 & RR & BR & LR\\ \hline v_{\beta}=0 & \times & BB & LB\\ \hline v_{\beta}>0 & \times & \times & LL\\ \end{array}\]
In the preceding table the first letter represents the state of \(z_{\alpha}\): L: left induced, R: right induced, B: bulk induced. The second letter is for the state of \(z_{\beta}\). The symbol \(\times\) is for a forbidden phase. See figure 2 for the result of this partitioning for the values \(\alpha=0.8,\beta=0.9\) of the bulk exchange rates.
Numerical evidence for this diagram is reported in figure 3, where the results of simulations are shown together with theoretical predictions with varying parameter \(\nu^{L}_{\bullet\circ}\) and all the other parameters fixed. We notice that \(z_{\beta}^{B}\) coincides with \(z_{\beta}^{L}\) within the region where \(v_{\beta}<0\), and they split in the region where \(v_{\beta}=0\). At the same time \(z_{\alpha}^{L}\) coincides with \(z_{\alpha}^{B}\) for both regions since \(v_{\alpha}<0\).
### 2-species ASEP and 3-species TASEP
We have considered two other models for which we have access to the exact expressions of the hydrodynamic currents as functions of the densities.
Figure 1: On the left, two examples of Monte-Carlo simulation of the density profile for 2-TASEP (continuous lines) along with the corresponding Riemann variables (dashed lines) for a lattice of size \(L=100\). The horizontal segments represent the predicted values. On the right, the evolution of densities for the iterative algorithm with damping \(\gamma=0.01\) (up to \(1000\) iterations). Parameter values for top diagrams: \(\alpha=0.5\), \(\beta=1.5\), \((\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\bullet})=(0.2,0.08,0.07)\), \((\nu^{L}_{\circ},\nu^{L}_{\circ},\nu^{L}_{\bullet\bullet})=(0.24,0.04,0.12)\). For the bottom diagrams: \(\alpha=0.4\), \(\beta=0.7\), \((\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\bullet})=(0.5,0.1,0.8)\), \((\nu^{L}_{\bullet\circ},\nu^{L}_{\bullet\circ},\nu^{L}_{\bullet\bullet})=(0.1,0.2,0.5)\).

Figure 2: Phase diagram of a 2-species TASEP (\(\alpha=0.8,\beta=0.9\)). The signs on the left correspond to the velocities \(v_{\alpha}\) and \(v_{\beta}\) in order. On the right, we have an example of the ODE flow exhibiting a sink singularity in the left-induced phase and a saddle point in the mixed-induced phase.

Figure 3: Bulk and boundary densities (left) and the corresponding Riemann variables (right) of 2-TASEP as a function of \(\nu^{L}_{\bullet\bullet}\). The crosses represent the numerical simulations, while the lines are the theoretical predictions. For the green shaded region \(v_{\beta}>0\), while for the yellow shaded section \(v_{\beta}=0\) (in both regions \(v_{\alpha}<0\)).

The first model, a 2-species ASEP, contains two species of particles and the following bulk exchange rates:
\[\nu_{ij}=\begin{cases}1&\text{if}\quad\ i>j\\ q&\text{if}\quad\ i<j\end{cases} \tag{14}\]
where we have chosen the following order on the species: \(\bullet>*>\circ\).
Although the stationary measure for a uniform state is not a product measure, it is straightforward to write the current-density relations, since the dynamics of the \(\bullet\) and \(\circ\) particles can each be decoupled in the bulk:
\[\begin{split}& J_{\bullet}=(1-q)\rho_{\bullet}(1-\rho_{\bullet}) \\ & J_{\circ}=(q-1)\rho_{\circ}(1-\rho_{\circ}).\end{split} \tag{15}\]
From these equations it is immediate that the densities are also Riemann variables for this model. However, the dynamics of the two species cannot in general be decoupled on the boundaries, making the max-min principle not applicable in this case.
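Since the two species decouple in the bulk, the current-density relation (15) is a two-liner; the sketch below (ours) is only meant to emphasize that the inputs of the corresponding Riemann problem are the densities themselves, which are the Riemann variables of this model.

```python
def currents_2asep(rho_bullet, rho_circ, q):
    """Bulk currents of the hierarchical 2-ASEP, eq. (15); the two species
    decouple, so the densities themselves are the Riemann variables."""
    J_bullet = (1.0 - q) * rho_bullet * (1.0 - rho_bullet)
    J_circ = (q - 1.0) * rho_circ * (1.0 - rho_circ)
    return J_bullet, J_circ
```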
The last model we have considered, a 3-species TASEP, contains particles with labels \((1,2,3,4)\), where the type 4 can be seen as empty sites, and bulk hopping rates:
\[ij\xrightarrow{\nu_{ij}}ji\qquad\nu_{ij}=\begin{cases}0&\text{if}\quad i>j \\ \nu_{12}&\text{if}\quad(i,j)=(1,2)\\ \nu_{34}&\text{if}\quad(i,j)=(3,4)\\ 1&\text{otherwise}\end{cases} \tag{16}\]
The particle currents of this model can be derived from those of the 2-TASEP, \(J_{\circ/\bullet}(\rho_{\circ},\rho_{\bullet},\alpha,\beta)\), by making some particle identifications. Firstly, the particles 4 and 3 can be seen as \(\circ\), 1 as \(\bullet\) and 2 as \(*\), for \(\alpha=1,\beta=\nu_{12}\). Secondly, 1 and 2 can be seen as \(\bullet\), 3 as \(*\) and 4 as \(\circ\) with \(\alpha=\nu_{34},\beta=1\). Using densities of particles of species \(1,2\) and 4 as independent variables one finds
\[\begin{split}& J_{1}=J_{\bullet}(1-\rho_{1}-\rho_{2},\rho_{1},1, \nu_{12})\\ & J_{2}=J_{\bullet}(\rho_{4},\rho_{1}+\rho_{2},\nu_{34},1)-J_{1} \\ & J_{4}=J_{\circ}(\rho_{4},\rho_{1}+\rho_{2},\nu_{34},1).\end{split} \tag{17}\]
In figures 4 and 5 we report the results for the bulk and boundary densities of these models, obtained through simulations of a system of size \(L=100\), along with the theoretical predictions. One boundary parameter is varied (\(\nu_{\bullet*}^{L}\) in the 2-ASEP and \(\nu_{12}^{L}\) in the 3-TASEP) while all the other parameters are fixed. Similarly to the case of the 2-TASEP seen in the previous section, we find good agreement.
### Conclusion
In conclusion, this paper introduces a method to determine the steady-state average particle densities and currents of one-dimensional multi-species driven systems with open boundaries. The method, rooted in the bulk hydrodynamic behavior of the model, extends the max-min principle applicable to single-species models [5; 6; 7; 8]. By comparing our method's predictions with numerical simulations across three models, we observed good agreement. Our analysis of the bulk hydrodynamic conservation laws enables us to predict the phase diagram, which becomes more intelligible when considering the behavior of the Riemann variables of the model (when they exist).
The major open question pertains to the method's domain of validity, particularly in establishing precise definitions of boundary densities for more general boundary conditions. The heuristic argument in favor of our method rests on the existence of a complete set of Riemann variables in bulk dynamics. Therefore, exploring models with more than two species, lacking this completeness, and subjecting our method to such models, presents an intriguing avenue for future research.
###### Acknowledgements.
We thank Gunter Schutz for useful discussions. The work of A. Zahra has been partially funded by the ERC Starting Grant 101042293 (HEPIQ) and completed while he was a member of LPTM.
 | We introduce a general method to determine the large-scale non-equilibrium steady-state properties of one-dimensional multi-species driven diffusive systems with open boundaries, thus generalizing the max-min current principle known for systems with a single type of particles. This method is based on the solution of the Riemann problem of the associated system of conservation laws. We show that the effective density of a reservoir depends not only on the corresponding boundary hopping rates but also on the dynamics of the entire system, emphasizing the interplay between bulk and reservoirs. We highlight the role of the Riemann variables in establishing the phase diagram of such systems. We apply our method to three models of multi-species interacting particle systems and compare the theoretical predictions with numerical simulations. |
2309.16022 | GNNHLS: Evaluating Graph Neural Network Inference via High-Level
Synthesis | With the ever-growing popularity of Graph Neural Networks (GNNs), efficient
GNN inference is gaining tremendous attention. Field-Programming Gate Arrays
(FPGAs) are a promising execution platform due to their fine-grained
parallelism, low-power consumption, reconfigurability, and concurrent
execution. Even better, High-Level Synthesis (HLS) tools bridge the gap between
the non-trivial FPGA development efforts and rapid emergence of new GNN models.
In this paper, we propose GNNHLS, an open-source framework to comprehensively
evaluate GNN inference acceleration on FPGAs via HLS, containing a software
stack for data generation and baseline deployment, and FPGA implementations of
6 well-tuned GNN HLS kernels. We evaluate GNNHLS on 4 graph datasets with
distinct topologies and scales. The results show that GNNHLS achieves up to
50.8x speedup and 423x energy reduction relative to the CPU baselines. Compared
with the GPU baselines, GNNHLS achieves up to 5.16x speedup and 74.5x energy
reduction. | Chenfeng Zhao, Zehao Dong, Yixin Chen, Xuan Zhang, Roger D. Chamberlain | 2023-09-27T20:58:33 | http://arxiv.org/abs/2309.16022v1 | # GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis
###### Abstract
With the ever-growing popularity of Graph Neural Networks (GNNs), efficient GNN inference is gaining tremendous attention. Field-Programming Gate Arrays (FPGAs) are a promising execution platform due to their fine-grained parallelism, low-power consumption, reconfigurability, and concurrent execution. Even better, High-Level Synthesis (HLS) tools bridge the gap between the non-trivial FPGA development efforts and rapid emergence of new GNN models. In this paper, we propose GNNHLS, an open-source framework to comprehensively evaluate GNN inference acceleration on FPGAs via HLS, containing a software stack for data generation and baseline deployment, and FPGA implementations of 6 well-tuned GNN HLS kernels. We evaluate GNNHLS on 4 graph datasets with distinct topologies and scales. The results show that GNNHLS achieves up to \(50.8\times\) speedup and \(423\times\) energy reduction relative to the CPU baselines. Compared with the GPU baselines, GNNHLS achieves up to \(5.16\times\) speedup and \(74.5\times\) energy reduction.
field-programmable gate arrays, graph neural networks, high-level synthesis
## I Introduction
Graphs are widely adopted to model the relational-structured data in social networks, bioinformatics, etc [26]. Machine learning (ML) on graphs has experienced a surge of popularity in the past decade, since traditional ML models, which are designed to process Euclidean data with regular structures, are ineffective at performing prediction tasks on graphs. Due to their simplicity and superior representation learning ability, Graph Neural Networks (GNNs) [6, 12, 19, 23, 25] have achieved impressive performance on various graph learning tasks, such as node classification, graph classification, etc.
To implement GNNs, a set of widespread libraries, such as PyTorch Geometric (PYG) [8] and Deep Graph Library (DGL) [20], are built upon general-purpose ML frameworks (e.g., PyTorch [17]) targeting CPU and GPU platforms. However, the performance and energy consumption of GNN implementations are hindered by both hardware platforms and software frameworks: (1) Distinct from traditional NNs, GNNs combine the irregular communication-intensive patterns of graph processing and the regular computation-intensive patterns of NNs. This feature can lead to ineffectual computation on CPUs and GPUs. (2) Since these frameworks assemble functions in a sequential way, one function will not start until the previous one finishes. This execution model leads to extra memory accesses, a larger memory footprint, and implicit barriers for intermediate results, limiting the achievable performance and energy efficiency as well as the scale of graph datasets that can be processed.
Field-Programmable Gate Arrays (FPGAs) are potentially an attractive approach to GNN inference acceleration. FPGAs' massive fine-grained parallelism provides opportunities to exploit GNNs' inherent parallelism. They also deliver better performance per watt than general-purpose computing platforms. In addition, FPGAs' reconfigurability and concurrency provide great flexibility to solve the challenges of hybrid computing patterns and ineffectual execution. Most of the prior works investigating FPGAs focus on accelerating a specific GNN model implemented using Hardware Description Languages (HDL). AWB-GCN [9], as one of the earliest FPGA-based works, proposes a GCN accelerator using HDL to solve the workload imbalance problem due to the distinct sparsity of different components. BoostGCN [24] proposes a graph partition algorithm in a preprocessing step to address workload imbalance issues. Despite these promising results, the HDL design methodology is not suitable for widespread adoption for GNN implementations due to the conflict between the non-trivial development efforts with HDL and the rapid emergence of new GNN models. To address this challenge, High-Level Synthesis (HLS) tools are proposed to create GNN kernels using popular languages such as C/C++. With the help of HLS, development time is substantially shortened relative to HDL designs. Lin et al. [15], as one of the first works, proposes an HLS-based accelerator for GCN with separated sparse-dense matrix multiplication units and dense matrix multiplication units which are connected by shared memory and execute sequentially. GenGNN [1] proposes a framework to accelerate GNNs for real-time requirements where the whole graph and corresponding intermediate results are stored in on-chip resources on the FPGA. Despite these promising results, this work is limited to small-scale graphs with a low edge-to-node ratio due to on-chip memory usage being proportional to graph scale and feature dimensions.
Distinct from pure software programming, HLS developers need to adopt multiple optimization pragmas and follow certain coding styles to achieve the best performance and energy cost. As reported in [3], the performance difference between a well-optimized version and a non-optimized version of the same kernel can be two to three orders of magnitude. This invites an open question: _how effectively can modern HLS tools accelerate GNN inference?_
In this paper, we introduce GNNHLS1, an open-source framework for comprehensive evaluation of GNN kernels on FPGAs via HLS. GNNHLS contains a software stack extended from a prior GNN benchmark [7] based on PyTorch and DGL for input data generation and conventional platform baseline deployments (i.e., CPUs and GPUs). It also contains six well-optimized general-purpose GNN applications. These kernels can be classified into two classes: (1) isotropic GNNs, in which every neighbor contributes equally to the update of the target vertex, and (2) anisotropic GNNs, in which edges and neighbors contribute differently to the update due to the adoption of operations such as attention and gating mechanisms. In this paper, we make several contributions:
Footnote 1: Released as a benchmark suite [28] and also available at [https://github.com/Chernfeng/Zhao/GNNHLS](https://github.com/Chernfeng/Zhao/GNNHLS)
* We propose GNNHLS, a framework to evaluate GNN inference acceleration via HLS, containing: (a) a software stack based on PyTorch and DGL for data generation and baseline deployment, and (b) FPGA implementation including 6 well-tuned GNN HLS kernels with host and configuration files which can also be used as benchmarks.
* We characterize the GNN kernels in terms of locality scores and instruction mix to obtain insight into their memory access and computational properties.
* We provide a comprehensive evaluation of our GNN HLS implementations on 4 graph datasets, assessing both performance improvement and energy reduction.
Our evaluation results show that GNNHLS provides up to \(50.8\times\) speedup and \(423\times\) energy reduction relative to the multicore CPU baseline. Compared with the GPU baselines, GNNHLS achieves up to \(5.16\times\) speedup and \(74.5\times\) energy reduction.
## II Framework Description
### _GNNHLS Overview_
The GNNHLS framework, as depicted in Figure 1, comprises two primary components: data generation and HLS FPGA. The former is designed to generate input and output files and measure baselines on a CPU and a GPU, while the latter is designed to implement the optimized HLS applications on an FPGA. The data generation component mainly consists of the training system and the inference system, which are based on PyTorch and DGL. To account for the impact of graph topology on GNN model performance, it uses graph datasets with various topologies, including those from Open Graph Benchmarks [11]. In addition, six commonly used DGL GNN models obtained from a previous GNN benchmark [7] are incorporated. Thus, realistic model parameters, generated in the training phase, are utilized in inference.
The HLS FPGA component implements the GNN kernels on the FPGA. These kernels match the functionality of the DGL baselines and are optimized with several optimization techniques [4]. The optimized HLS kernels, with associated host files, data header files, and configuration files, are compiled by Vitis and executed on the FPGA. The optimization techniques applied in GNNHLS are described as follows:
* **Pipeline**: Enable instruction-level concurrent execution to improve overall throughput.
* **Loop Merge**: Optimize the finite state machine (FSM) of nested loops to remove the impact of inner-loop latency on the overall throughput.
* **Burst Memory Access & Memory Port Widening**: Access large chunks of data at contiguous addresses and increase the memory port width to improve memory bandwidth.
* **Loop Unroll**: Leverage instruction-level parallelism by executing multiple copies of loop iterations in parallel to increase throughput at the cost of resource utilization.
* **Dataflow**: Enable task-level parallelism by connecting multiple functions with FIFOs to form a pipeline-style architecture and executing them concurrently.
* **Multiple Compute Units (CUs)**: Execute multiple kernel instances as CUs in parallel on different data portions at the cost of resource usage.
Figure 2 illustrates the Dataflow diagrams of the GNNHLS kernels, in which memory and computation operations are divided and pipelined based on the complexity of each kernel. To mitigate the cost of Dataflow, we also (1) tune the location of FIFO accesses to achieve better throughput, (2) apply vectors for FIFO widening and associated operations, and (3) split loops to optimize the FIFO properties of loop indices.
### _Graph Convolutional Network (GCN)_
Graph Convolutional Network (GCN) [12] is one of the earliest GNN models and has a simple structure. It updates node features by aggregating neighboring node features and performing linear projection. The formula is given as follows:
\[h_{i}^{l+1}=\mathrm{ReLU}\left(U^{l}\sum_{j\in N_{i}}h_{j}^{l}\right) \tag{1}\]
Where \(U^{l}\in\mathbb{R}^{d\times d}\) is the learnable weight matrix of the linear projection, which performs vector-matrix multiplication. \(h_{i}^{l}\in\mathbb{R}^{d\times 1}\) is the feature vector of vertex \(i\) in layer \(l\), and \(N_{i}\) represents the neighboring vertices of vertex \(i\).
Fig. 1: Diagram of the GNNHLS framework.

Based on the above equation, we create the GCN HLS implementation, the Dataflow diagram of which is depicted in Figure 2(a). In addition to the memory access modules for input graphs and \(h\), we split the computation operations into two modules: aggregation of neighbor node vectors \(h_{j}\) and vector-matrix multiplication (VMM) for linear projection. We apply all the optimization techniques described previously to the GCN kernel. The memory burst length of vector \(h\) is \(d\), limited by the irregularity of the graph topology. The initiation interval (II) of the aggregation module is \(4\left|N_{i}\right|+2\). Since Vitis is not good at synthesizing tree-structured floating-point operations, we separate VMM into 2 functions in the Dataflow scope for grouped VMM and sum, respectively. The II of VMM is thereby reduced from \(d^{2}\) to \(d+36\). All these modules are reused in the following GNN models. Due to its simplicity, we create 2 CUs to process distinct vertices in parallel.
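As a functional reference (not the HLS code itself), the layer computed by this kernel can be written in a few lines of NumPy; the adjacency is represented here as per-vertex neighbor index arrays, which is our own choice for illustration.

```python
import numpy as np

def gcn_layer(h, neighbors, U):
    """Dense reference of eq. (1): aggregate the neighbors, then one VMM.

    h         : (N, d) node features
    neighbors : neighbors[i] = index array of the source neighbors N_i
    U         : (d, d) weight matrix of the linear projection
    """
    agg = np.stack([h[nbrs].sum(axis=0) for nbrs in neighbors])
    return np.maximum(agg @ U.T, 0.0)   # ReLU(U * sum_j h_j)
```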
### _GraphSage (GS)_
GraphSage (GS) [10] introduces an inductive framework to improve the scalability over GCN by aggregating information from the fixed-size set of neighbors via uniform sampling, explicitly incorporating feature vectors of both the target vertex and its source neighbors. The mathematical expression of GraphSage with a mean aggregator is formulated as follows:
\[\begin{split} h_{i}^{l+1}&=\mathrm{ReLU}\left(U^{l }\mathrm{Concat}\left(h_{i}^{l},\frac{1}{\left|N_{i}\right|}\sum_{j\in N_{i}}h _{j}^{l}\right)\right)\\ &=\mathrm{ReLU}\left(V^{l}h_{i}^{l}+W^{l}\frac{1}{\left|N_{i} \right|}\sum_{j\in N_{i}}h_{j}^{l}\right)\end{split} \tag{2}\]
Where \(N_{i}\) is the set of source neighbors of vertex \(i\), and \(h_{i}^{l}\in\mathbb{R}^{d\times 1}\) is the feature vector of vertex \(i\) in layer \(l\). The learnable weight matrix of the linear projection, \(U^{l}\in\mathbb{R}^{d\times 2d}\), is stored in on-chip memory. Given that distinct weight parameters are used for the target vertex and source neighbors, \(U^{l}\) is divided into \(V^{l}\in\mathbb{R}^{d\times d}\) and \(W^{l}\in\mathbb{R}^{d\times d}\), enabling parallel execution of both paths to hide the latency of linear projection for the target vertex. Figure 2(b) illustrates the Dataflow structure of GraphSage. The memory read accesses and linear projection of the target feature, and neighbors' feature aggregation are executed simultaneously, and then summed up to update \(h_{i}\).
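A corresponding NumPy reference of eq. (2), again with per-vertex neighbor lists and assuming every vertex has at least one neighbor, is:

```python
import numpy as np

def graphsage_layer(h, neighbors, V, W):
    """Dense reference of eq. (2); V acts on the target vertex and W on the
    mean of its neighbors, mirroring the two parallel paths in Figure 2(b)."""
    mean = np.stack([h[nbrs].mean(axis=0) for nbrs in neighbors])
    return np.maximum(h @ V.T + mean @ W.T, 0.0)
```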
### _Graph Isomorphism Network (GIN)_
Graph Isomorphism Network (GIN) [23] employs the Weisfeiler-Lehman Isomorphism Test [22] as its foundation to investigate the discriminative ability of GNNs. The formula of GIN is described as follows:
\[h_{i}^{l+1}=\mathrm{ReLU}\left(U^{l}\mathrm{ReLU}\left(V^{l}\left((1+\epsilon )h_{i}^{l}+\sum_{j\in N_{i}}h_{j}^{l}\right)\right)\right) \tag{3}\]
where \(\epsilon\) is a learnable scalar weight, \(U^{l}\) and \(V^{l}\in\mathbb{R}^{d\times d}\) denote learnable weight matrices of cascaded VMM modules, \(h_{i}^{l}\in\mathbb{R}^{d\times 1}\) again refers to the feature vector of vertex \(i\) in layer \(l\), and \(N_{i}\) is again the source neighbors of vertex \(i\). In contrast to GraphSage, GIN illustrated in Figure 2(c) first sums up the aggregated vector of neighbors \(h_{j}\) and the target vertex vector \(h_{i}\), hiding the latency of reading \(h_{i}\), then performs two cascaded VMM modules with weight matrices \(U^{l}\) and \(V^{l}\), respectively. This framework avoids the generation of long critical paths and achieves a higher clock frequency.
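The same style of reference for eq. (3); note how the sum with the scaled target vector precedes the two cascaded VMMs, mirroring the Dataflow order described above.

```python
import numpy as np

def gin_layer(h, neighbors, U, V, eps):
    """Dense reference of eq. (3): sum first, then two cascaded VMMs."""
    agg = np.stack([h[nbrs].sum(axis=0) for nbrs in neighbors])
    s = (1.0 + eps) * h + agg                  # (1 + eps) h_i + sum_j h_j
    return np.maximum(np.maximum(s @ V.T, 0.0) @ U.T, 0.0)
```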
### _Graph Attention Network (GAT)_
Graph Attention Network (GAT) [19] is an anisotropic GNN model that uses self-attention mechanisms to weight and learn representations of neighbor vertices unequally. The equation is described as follows:
Fig. 2: Dataflow diagrams of GNN HLS kernels in GNNHLS.
\[h_{i}^{l+1} =\mathrm{Concat}_{k=1}^{K}\left(\mathrm{ELU}\left(\sum_{j\in N_{i}} \alpha_{ij}^{k,l}U^{k,l}h_{j}^{l}\right)\right) \tag{4}\] \[\alpha_{ij}^{k,l} =\mathrm{Softmax}(e_{ij}^{k,l})=\frac{\exp(e_{ij}^{k,l})}{\sum_{j^ {\prime}\in N_{i}}\exp(e_{ij^{\prime}}^{k,l})}\] (5) \[e_{ij}^{k,l} =\mathrm{LeakyReLU}(\vec{a}^{T}Concat(U^{k,l}h_{i}^{l},U^{k,l}h_{ j}^{l}))\] \[=\mathrm{LeakyReLU}(a_{src}^{k,l}U^{k,l}h_{i}^{l}+a_{dest}^{k,l}U ^{k,l}h_{j}^{l}) \tag{6}\]
where \(\alpha_{ij}^{l}\in\mathbb{R}^{K}\) is the attention score between vertex \(i\) and vertex \(j\) of layer \(l\), and \(U^{k,l}\in\mathbb{R}^{d\times d}\) and \(\vec{a}\in\mathbb{R}^{2d}\) are learnable parameters. Note that the weight parameter \(\vec{a}^{T}\) is decomposed into \(a_{src}^{l}\) and \(a_{dest}^{l}\in\mathbb{R}^{d}\) in the DGL library, because it is more efficient in terms of performance and memory footprint to transfer the VMM between \(U^{k,l}\) and \(h^{l}\) from edge-wise to node-wise operations, especially for sparse graphs where the edge number is larger than the vertex number.
Figure 2(d) depicts the Dataflow framework of GAT. Due to the unbalanced workload of the numerator and the denominator in (5), the results of \(\exp(e_{ij})\), size \(O(|N_{i}|)\), need to be temporarily stored prior to being accumulated. Considering the irregularity and large maximum \(|N_{i}|\) of graphs, we divide the GAT model into 2 HLS kernels linked to the same memory banks for shared intermediate results: kernel 1 is designed to perform VMM with \(U\) and \(h\), and multi-headed element-wise multiplication (MHEWM) with \(a_{src}\) and \(a_{dest}\), respectively, in (6). After being optimized, the II of MHEWM is \(k+112\). The intermediate results are written back to memory and then read by kernel 2 to implement (4) and (5). Note that \(e_{ij}\) is computed twice in parallel to avoid performance degradation and deadlock issues. The II of aggregation, softmax, and MHEWM is \(k\cdot|N_{i}|+2k+38\), \(k\cdot|N_{i}|+k+17\), and \(k\cdot|N_{i}|+k+14\), respectively.
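A dense multi-head reference of eqs. (4)-(6) can be written as below; the \(0.2\) negative slope of the LeakyReLU and the max-subtraction in the softmax are standard numerical conventions rather than details stated in this paper.

```python
import numpy as np

def gat_layer(h, neighbors, U, a_src, a_dst):
    """Dense multi-head reference of eqs. (4)-(6); assumes |N_i| >= 1.

    h : (N, d_in);  U : (K, d_out, d_in);  a_src, a_dst : (K, d_out),
    the decomposed attention vectors (node-wise scores, as in DGL).
    """
    N = h.shape[0]
    K, d_out, _ = U.shape
    z = np.einsum('koi,ni->nko', U, h)          # U^k h for every node
    s_src = (z * a_src).sum(-1)                 # (N, K), target-side scores
    s_dst = (z * a_dst).sum(-1)                 # (N, K), source-side scores
    out = np.empty((N, K, d_out))
    for i, nbrs in enumerate(neighbors):
        e = s_src[i] + s_dst[nbrs]              # eq. (6) before LeakyReLU
        e = np.where(e > 0, e, 0.2 * e)         # LeakyReLU, slope 0.2
        w = np.exp(e - e.max(axis=0))           # eq. (5): softmax over N_i
        w = w / w.sum(axis=0)
        out[i] = np.einsum('jk,jko->ko', w, z[nbrs])   # eq. (4) inner sum
    out = np.where(out > 0, out, np.exp(np.minimum(out, 0.0)) - 1.0)  # ELU
    return out.reshape(N, K * d_out)            # eq. (4): concat the K heads
```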
### _Mixture Model Networks (MoNet)_
Mixture Model Networks (MoNet) [16] is a general anisotropic GNN framework designed for graph and node classification tasks using the Bayesian Gaussian Mixture Model (GMM) [5]. The model is formulated as follows:
\[h_{i}^{l+1} =\mathrm{ReLU}\left(\sum_{k=1}^{K}\sum_{j\in N_{i}}w_{k}(u_{ij})U ^{k,l}h_{j}^{l}\right)\] \[=\mathrm{ReLU}\left(\sum_{k=1}^{K}U^{k,l}\sum_{j\in N_{i}}w_{k}(u _{ij})h_{j}^{l}\right) \tag{7}\] \[w_{k}(u_{ij}) =\exp\left(-\frac{1}{2}(u_{ij}^{l}-\mu_{k}^{l})^{T}(\sum_{k}^{l} )^{-1}(u_{ij}^{l}-\mu_{k}^{l})\right)\] (8) \[u_{ij}^{l} =\mathrm{Tanh}(V^{l}pseudo_{ij}^{l}+v^{l})\] (9) \[pseudo_{ij}^{l} =\mathrm{Concat}(deg_{i}^{-0.5},deg_{j}^{0.5}) \tag{10}\]
where \(v^{l}\in\mathbb{R}^{2}\), \(V^{l}\in\mathbb{R}^{2\times 2}\), \(\mu\in\mathbb{R}^{K\times 2}\), \((\sum_{k}^{l})^{-1}\in\mathbb{R}^{K\times 2}\), and \(U^{k,l}\in\mathbb{R}^{d\times d}\) are learnable parameters of the GMM. \(v^{l}\) and \(V^{l}\) transform the pseudo-coordinates between the target vertex and its neighbors, \(\mu\) and \((\sum_{k}^{l})^{-1}\) denote the mean vectors and (diagonal) inverse covariance matrices of the \(K\) Gaussian kernels, and \(U^{k,l}\) is the weight matrix.
The Dataflow diagram of MoNet is depicted in Figure 2(e). In our HLS implementation, \(pseudo_{ij}\) of each edge is processed by a small VMM module with \(V^{l}\) and \(v^{l}\) in (9) and the Gaussian Weight Computation module with \(\mu\) and \((\sum_{k}^{l})^{-1}\) in (8). Meanwhile, \(h_{j}\) is read from memory for the subsequent MHEWM with aggregation, MHVMM with \(U\), and MH Aggregation modules. Note that we perform the MH VMM with \(U\) after aggregation in (7), transferring it from an edge-wise to node-wise operation to reduce its occurrence. After optimization, the II of the VMM for \(u_{ij}\), Gaussian computation, MHEWM with aggregation, MHVMM with \(U\), and MH Aggregation are 1, 1, 4, \(d+k+28\), and \(7k+10\), respectively. We create 2 CUs for the HLS kernel to process vertices with distinct indices.
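The following sketch mirrors eqs. (7)-(10) with the node-wise placement of the VMM with \(U\) after aggregation; we take both pseudo-coordinate entries as \(deg^{-1/2}\) (the usual convention in the benchmark these models come from) and treat \((\sum_{k})^{-1}\) as the diagonal entries of the inverse covariances, both our own reading rather than details spelled out here.

```python
import numpy as np

def monet_layer(h, neighbors, deg, U, V, v, mu, inv_sigma):
    """Dense reference of eqs. (7)-(10) with K Gaussian kernels.

    h : (N, d) features;  deg : (N,) vertex degrees (assumed >= 1)
    U : (K, d, d);  V : (2, 2);  v : (2,);  mu, inv_sigma : (K, 2),
    inv_sigma holding the diagonal of each inverse covariance.
    """
    N, d = h.shape
    K = mu.shape[0]
    agg = np.zeros((N, K, d))
    for i, nbrs in enumerate(neighbors):
        pseudo = np.stack([np.full(len(nbrs), deg[i] ** -0.5),
                           deg[nbrs] ** -0.5], axis=1)       # eq. (10)
        u = np.tanh(pseudo @ V.T + v)                        # eq. (9)
        diff = u[:, None, :] - mu                            # (|N_i|, K, 2)
        w = np.exp(-0.5 * (diff ** 2 * inv_sigma).sum(-1))   # eq. (8)
        agg[i] = np.einsum('jk,jd->kd', w, h[nbrs])          # weighted sum
    out = np.einsum('kod,nkd->no', U, agg)                   # node-wise VMM, eq. (7)
    return np.maximum(out, 0.0)
```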
### _Gated Graph ConvNet (GatedGCN)_
The Gated Graph ConvNet (GatedGCN) [2] is a type of anisotropic graph neural network (GNN) model that employs a gating mechanism to regulate the flow of information during message passing, allowing the model to emphasize relevant information and filter out irrelevant information. The gating mechanism utilizes gate functions (e.g., sigmoid) to control the flow of messages at each layer. The mathematical expression for GatedGCN is provided below:
\[h_{i}^{l+1} =\mathrm{ReLU}\left(A^{l}h_{i}^{l}+\frac{\sum_{j^{\prime}\in N_{i} }B^{l}h_{j^{\prime}}^{l}\odot\sigma(e_{ij^{\prime}}^{l+1})}{\sum_{j^{\prime} \in N_{i}}\sigma(e_{ij^{\prime}}^{l+1})+\epsilon}\right) \tag{11}\] \[e_{ij}^{l+1} =E^{l}h_{i}^{l}+D^{l}h_{j}^{l}+C^{l}e_{ij}^{l} \tag{12}\]
where \(A^{l}\), \(B^{l}\), \(D^{l}\), \(E^{l}\) and \(C^{l}\in\mathbb{R}^{d\times d}\) are learnable matrix parameters, \(e_{ij}^{l}\in\mathbb{R}^{1\times d}\) denote the edge features from vertex \(i\) to \(j\) layer \(l\), \(h_{i}^{l}\) represents node features of vertex \(i\) in layer \(l\), \(\odot\) denotes Hadamard product, \(\sigma\) denotes the sigmoid function, and \(\epsilon\) is a constant for numerical stability.
Since the soft attention of GatedGCN shown in (11) is distinct from GAT, performing accumulation operations for \(e_{ij}\) on both the numerator and denominator, we implement a single pipeline to build the HLS kernel. Figure 2(f) illustrates the Dataflow framework of GatedGCN. To hide the latency of multiple VMM modules in GatedGCN, we perform all of them in parallel with parameters \(A\), \(B\), \(D\), \(E\), and \(C\), respectively. Then the soft attention module is implemented to update \(h_{i}\). After optimization, the II of the soft attention and sum modules to generate \(h_{i}^{l+1}\) are \(10\cdot|N_{i}|+72\) and 31, respectively.
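Finally, a dense reference of eqs. (11)-(12); the incoming-edge representation (per target vertex, the pairs of edge id and source vertex) is our own encoding for illustration.

```python
import numpy as np

def gatedgcn_layer(h, e, edges_in, A, B, C, D, E, eps=1e-6):
    """Dense reference of eqs. (11)-(12).

    h        : (N, d) node features;  e : (M, d) edge features
    edges_in : per target vertex i, a list of (edge_id, src_id) pairs for
               the incoming edges j -> i (every edge has one target vertex)
    A..E     : (d, d) weight matrices; all five VMMs are applied up front,
               mirroring their parallel execution in the HLS pipeline
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    hA, hB, hD, hE = h @ A.T, h @ B.T, h @ D.T, h @ E.T
    h_new, e_new = np.empty_like(h), np.empty_like(e)
    for i, inc in enumerate(edges_in):
        eids = np.array([eid for eid, _ in inc], dtype=int)
        srcs = np.array([j for _, j in inc], dtype=int)
        e_new[eids] = hE[i] + hD[srcs] + e[eids] @ C.T     # eq. (12)
        gate = sigmoid(e_new[eids])                        # sigma(e_ij)
        soft = (hB[srcs] * gate).sum(0) / (gate.sum(0) + eps)
        h_new[i] = np.maximum(hA[i] + soft, 0.0)           # eq. (11)
    return h_new, e_new
```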
## III Experimental Methodology
**Datasets:** Table I shows the graph datasets used in our evaluation. All these graphs are collected from Open Graph Benchmark [11], a widely-used graph library for GNNs, and have a wide range of fields and scales. These graphs represent two classes of graphs with distinct topologies used in the GNN community: MH and MT consist of multiple small dense graphs, while AX and PT each consist of one single sparse
graph. The maximum and average degrees shown in Table I indicate their varying distributions, ranging from regular-like to powerlaw-like. In addition, we set feature dimensions for the kernels: GCN, GraphSage, and GIN have the same input and output dimensions at 128. The input, head, and output dimensions of GAT and MoNet are (128, 8, 16) and (64, 2, 64), respectively. All the dimensions of GatedGCN are 32.
**Evaluation methods:** To perform evaluation, we use a Xilinx Alveo U280 FPGA card, provided by the Open Cloud Testbed [13], to execute the HLS kernels. This FPGA card provides 8 GB of HBM2 with 32 memory banks at 460 GB/s total bandwidth, 32 GB of DDR memory at 38 GB/s, and 3 super logic regions (SLRs) with 1205K look-up tables (LUTs), 2478K registers, 1816 BRAMs, and 9020 DSPs. We adopt 32-bit floating point as the data format. We use Vitis 2020.2 for synthesis and hardware linkage with the power-profile option enabled to perform power profiling during runtime, and Vitis Analyzer to view resource utilization, execution time and power consumption. We compare our HLS implementation with CPU and GPU baselines with PyTorch and the highly-optimized DGL library. We perform CPU baseline runs on an Intel Xeon Silver 4114 at 2.2 GHz with 10 cores, 20 threads, and 13.75 MB L3 cache. The GPU baseline is implemented on an Nvidia RTX 2080 Ti with 2994 CUDA cores at 1.5 GHz and 8 GB GDDR6 at 448 GB/s total bandwidth. We measure the energy consumption of the CPU and GPU baselines using the same technique as prior work [15].
## IV Characterization
To capture insight into the properties of GNNHLS, we first characterize the GNN kernels using instruction mix, spatial locality, and temporal locality. We use Workload ISA-Independent Characterization (WIICA) [18], a workload characterization tool, to capture ISA-independent properties by generating and parsing a dynamic trace of runtime information. Due to the limits of disk space and processing time, profiling the full trace is impractical. Thus we use uniform random node sampling [14] to select a sequence of 500 nodes for evaluation.
Examining first the spatial locality, we observe that the irregularity of the graph topology induces non-contiguous memory references, limiting memory burst transfers and prefetching to the length of the feature size. Next examining the temporal locality, we observe that the score stays in the range of \(0.5-0.7\), indicating the potential performance benefit of caching mechanisms, regardless of the graph topology. In addition, we observe that anisotropic kernels show a higher temporal locality than isotropic kernels, due to them having more edge-wise operations.
## V Evaluation
### _Resource Utilization_
We first examine the resource utilization and clock frequency after place & route. FPGA resources include look-up tables (LUT), flip-flops (FF), BRAM, and digital-signal-processors (DSP). Table II shows these results. From the table, we observe that the frequency of all the kernels is lower than the target frequency, which is not unusual in FPGA designs. Among these kernels, GraphSage achieves a low frequency due to some critical paths which are unresolvable by the tool. In addition, we observe that the resources on the FPGA are not over-utilized.
### _Performance_
We next examine the performance improvement by showing the overall speedup, defined as the execution time of CPU-DGL (using all 10 cores on the CPU) divided by that of the GNN HLS kernels, in Figure 5. Table III shows the execution time of the baselines and HLS kernels. Note that GPU results of GAT, MN, and GGCN on PT cannot be obtained because of running out of memory (OoM). Examining each kernel in Figure 5, we observe that the HLS implementations do not always outperform the corresponding CPU baselines. Compared with DGL-CPU, the speedup ranges from \(0.47\times\) to \(50.8\times\).
Among isotropic GNN kernels, GCN achieves better performance than GraphSage and GIN, ranging from \(1.08\times\) to \(1.98\times\) because its simpler structure enables us to create two CUs to leverage spatial data parallelism. In contrast, we can only create one CU for GraphSage and GIN each because of their complex structure and heavy resource usage. In addition, we observe that the execution time of GraphSage and GIN are close. Thus, we conclude that the distinction on the structure of these two GNN models will not substantially affect HLS implementation results.
Among anisotropic kernels, MoNet achieves highest performance improvement ranging from \(6.04\times\) to \(50.8\times\) due to (1) its single pipeline structure with computation order optimization where the node-wise operations are placed behind the edge-wise operations, and (2) well-designed MHVMM modules with lower II, especially MHVMM whose II is \(O(d+k)\) instead of \(O(dk)\). In spite of the 2-pipeline structure of GAT, we observe that it still achieves \(4.31\times\) to \(6.61\times\) speedup relative to multi-core CPU baselines. In addition, since the feature size of GatedGCN is smaller, leading to more performance improvement for CPU baselines with time complexity of \(O(d^{2})\), its speedup is not comparable to other anisotropic kernels, ranging from \(0.5\times\) to \(1.16\times\).
Turning our attention to how the performance benefit of HLS implementations varies across graph datasets, we observe that the speedup of isotropic kernels relative to DGL-CPU on regular-like graphs (i.e., MT and MH) is higher than powerlaw-like graphs (i.e., AX and PT) because (1) the edge-wise operations are less computation-intensive than node-wise operations in these kernels, making the baselines more computationally efficient on powerlaw-like graphs containing more edges than nodes; and (2) the edge-wise aggregation operations in HLS implementations are executed sequentially without leveraging edge-level parallelism, making these HLS kernels less computationally efficient for powerlaw-like graphs. Distinct from isotropic kernels, the speedup of anisotropic kernels on powerlaw-like graphs is higher than regular-like graphs because the edge-wise operations of these kernels are more computation-intensive than isotropic kernels, making baselines less efficient on powerlaw-like graphs.
Focusing on the second and the third bar, we observe that DGL-GPU outperforms the HLS implementations in many cases, due to the high-performance fixed-function accelerators in the GPU. The speedup of HLS kernels relative to the GPU baselines ranges from \(0.13\times\) to \(5.16\times\). In spite of the promising GPU performance, there are still some drawbacks of the GPU compared with HLS implementations. For the execution of isotropic GNN models, DGL-GPU achieves lower speedup than HLS on small-scale graphs such as MT and AX. It is speculated that the GPU is designed to achieve high throughput at the cost of latency, which plays a more important role for small-scale graphs than for large-scale graphs. In addition, compared with HLS implementations on the FPGA, the GPU is also not suitable for the execution of anisotropic GNN models on large-scale, especially powerlaw-like graphs (e.g., PT) due to (1) the non-trivial memory footprint caused by its sequential execution paradigm, which stores intermediate results of edge-wise operations, and (2) insufficient memory capacity on the GPU board. That is why we failed to execute anisotropic GNNs on PT with the GPU. The HLS implementations avoid this issue because their pipeline structure does not store these intermediate results.
Since GenGNN [1] also discusses 3 of the GNN models included in this paper (GCN, GIN, and GAT), we can make a limited comparison of our GNN HLS implementations with theirs. The two are not directly comparable for a number of reasons: (1) the feature dimensions of our GNN HLS kernels are higher, (2) we use off-chip memory instead of on-chip memory, (3) our general-purpose GNN HLS kernels focus
more on throughput than real-time latency, and (4) the FPGAs are from the same family, but are not the same part. The performance of our HLS kernels exceeds that of GenGNN, achieving overall speedups of \(35\times\), \(5\times\), and \(6\times\) over GCN, GIN, and GAT on MT, respectively.
### _Optimization Techniques_
As described in Section II, we apply multiple optimization techniques to the HLS kernels. In order to evaluate the efficacy of these techniques, we use GraphSage on MT as a case study. Table IV presents the execution time of GraphSage with the combined impact of the optimization techniques applied. The reported execution time for each technique represents the effect of both the current technique and the techniques listed above it in the table. In the table, No Pragma means we do not intentionally apply any pragmas to the HLS code, except for those automatically applied by Vitis (i.e., Pipeline, Loop Merge, and Memory optimizations). Dataflow denotes that we apply the dataflow pragma and FIFO streams to exploit the task-level parallelism of each application. Loop Unroll means we apply loop unroll pragmas to completely or partially unroll for loops, keeping the II as low as possible while exploiting instruction-level parallelism. Vectorization means using vector data types to widen the FIFO streams and the corresponding operations to decrease the cost of FIFO accesses. Split Loops means splitting the outer-most node loop and putting it inside each function connected by streams to further optimize the FIFO properties inferred from loop indices.
We observe that Loop Unroll achieves the highest performance improvement. Therefore, exploiting instruction parallelism is still the primary choice for GNN HLS optimization. In order to further improve performance, exploiting task-level parallelism is necessary. Focusing on the first and second rows of the table, we observe that applying only the dataflow pragma and streams in a naive way already yields a \(1.99\times\) performance improvement. By applying Vectorization and Split Loops as complementary techniques to Dataflow, performance is further improved by \(2.5\times\) and \(3.9\times\), respectively. After applying all the optimization techniques together, we observe that the performance of GraphSage is improved by \(132\times\).
### _Energy Consumption_
We next present a quantitative analysis of the energy consumption. Figure 6 displays the energy reduction of both DGL-GPU and HLS implementations relative to DGL-CPU in logarithmic scale. Energy reduction is calculated as the energy consumption of DGL-CPU divided by that of DGL-GPU or HLS, i.e., \(\text{reduction}=E_{\text{DGL-CPU}}/E_{\text{GPU or HLS}}\), so higher is better. Examining the final bar of each application and dataset, we observe that HLS implementations consume less energy than the CPU and GPU baselines in all cases. The energy reduction ranges from \(2.95\times\) to \(423\times\) relative to DGL-CPU and from \(2.38\times\) to \(74.5\times\) relative to DGL-GPU. This is due to the low power of FPGA logic, the low clock frequency, and the efficient pipeline structure of the HLS implementations.
Fig. 5: Speedup of HLS kernels relative to DGL-CPU. The higher the better.
Focusing on the first and last bar, we observe a similar tendency in energy reduction as in performance: for isotropic GNN models, denser graphs result in lower energy reduction, whereas for anisotropic GNN models, denser graphs result in higher energy reduction. This leads us to conclude that improving GNN applications generally will require some degree of graph topology awareness.
## VI Conclusions
In this paper, we propose GNNHLS, an open-source framework to comprehensively evaluate GNN inference acceleration on FPGAs via HLS. GNNHLS consists of a software stack for data generation and baseline deployment, and 6 well-tuned GNN HLS kernels. We characterize the HLS kernels in terms of instruction mix and memory locality scores, and evaluate them on 4 graph datasets with various topologies and scales. Results show up to \(50.8\times\) speedup and \(423\times\) energy reduction relative to the multi-core CPU baselines. Compared with GPU baselines, GNNHLS achieves up to \(5.16\times\) speedup and \(74.5\times\) energy reduction. In the future, we will extend GNNHLS to more GNN models and graph datasets. It can also be useful as a benchmark or baseline for HLS researchers to explore the potential of HLS tools on GNN inference acceleration. GNNHLS has been released for use as a benchmark suite [28].
## Acknowledgment
This work is supported by NSF under grants CNS-1739643 and CNS-1763503 and a gift from BECS Technology, Inc. The authors are grateful for the use of the Open Cloud Testbed [13] as an experimentation platform.
| |
2309.10453 | Mean-Field Limit of Point Vortices for the Lake Equations | In this paper we study the mean-field limit of a system of point vortices for
the lake equations. These equations model the evolution of the horizontal
component of the velocity field of a fluid in a lake of non-constant depth,
when its vertical component can be neglected. As for the axisymmetric Euler
equations there are non-trivial self interactions of the vortices consisting in
the leading order of a transport term along the level sets of the depth
function. If the self-interactions are negligible, we show that the system of
point vortices converges to the lake equations as the number of points becomes
very large. If the self-interactions are of order one, we show that it
converges to a forced lake equations and if the self-interactions are
predominant, then up to time rescaling we show that it converges to a transport
equation. The proof is based on a modulated energy approach introduced by
Duerinckx and Serfaty in (Duke Math. J., 2020) that we adapt to deal with the
heterogeneity of the lake kernel. | Matthieu Ménard | 2023-09-19T09:12:24 | http://arxiv.org/abs/2309.10453v1 | # Mean-field limit of point vortices for the Lake equations.
###### Abstract.
In this paper we study the mean-field limit of a system of point vortices for the lake equations. These equations model the evolution of the horizontal component of the velocity field of a fluid in a lake of non-constant depth, when its vertical component can be neglected. As for the axisymmetric Euler equations, there are non-trivial self-interactions of the vortices, consisting to leading order of a transport term along the level sets of the depth function.
If the self-interactions are negligible, we show that the system of point vortices converges to the lake equations as the number of points becomes very large. If the self-interactions are of order one, we show that it converges to forced lake equations, and if the self-interactions are predominant, then up to a time rescaling we show that it converges to a transport equation.
The proof is based on a modulated energy approach introduced by Duerinckx and Serfaty in (Duke Math. J., 2020) that we adapt to deal with the heterogeneity of the lake kernel.
## 1. Introduction
### Lake equations
The purpose of this article is to investigate the mean-field limit of point vortices (which are Dirac masses in the vorticity field of a fluid) in a lake of non-constant depth. Namely, we want to establish the convergence of an empirical distribution of point vortices to a continuous density solving the lake equations, as the number of vortices becomes very large. These equations describe the evolution of the horizontal velocity field of an incompressible fluid in a lake, when:
* The depth is small with respect to the lengthscale of horizontal variations of the fluid velocity.
* The surface of the fluid is almost flat (small Froude number).
* The vertical velocity is small with respect to the horizontal velocity.
For a rigorous derivation of these equations from the shallow water system we refer to the work of Bresch, Gisclon and Lin in [7]. A more general introduction to depth-averaged models can be found in [28, Chapter 5] and a discussion of the three hypotheses above can be found in [55].
These equations are similar to the planar Euler equations, but they take into account the depth of the lake, given by a positive function \(b\). If \(b\) is constant, then one recovers the usual planar Euler equations. The well-posedness of the lake equations on bounded domains was first investigated by Levermore, Oliver and Titi in [43]. In this paper they studied an analogue of the Yudovich theorem for Euler equations (see [64]). This result
was extended later by Bresch and Metivier in [12] to include the case where the depth function \(b\) vanishes at the boundary and by Lacave, Nguyen and Pausader in [39] to deal with the case of rough bottoms. The existence and uniqueness of global classical solutions have been established by Al Taki and Lacave in [1].
In this paper we will study the case of an infinite lake modeled by the whole plane \(\mathbb{R}^{2}\). We are interested in the following vorticity form of the equations:
\[\begin{cases}\partial_{t}\omega+\operatorname{div}\left(\left(u-\alpha\frac{ \nabla^{\perp}b}{b}\right)\omega\right)=0\\ \operatorname{div}(bu)=0\\ \operatorname{curl}(u)=\omega\end{cases} \tag{1.1}\]
where
* \(\perp\) denotes the rotation by \(\frac{\pi}{2}\) (that is \((x_{1},x_{2})^{\perp}:=(-x_{2},x_{1})\)).
* \(\alpha\in[0,+\infty)\) is a forcing parameter.
* \(b:\mathbb{R}^{2}\longrightarrow(0,+\infty)\) is the depth function satisfying Assumption 1.5 below.
* \(u:[0,+\infty)\times\mathbb{R}^{2}\longrightarrow\mathbb{R}^{2}\) is the velocity field of the fluid.
* \(\omega:[0,+\infty)\times\mathbb{R}^{2}\longrightarrow\mathbb{R}\) is the vorticity field of the fluid, defined by \[\omega=\operatorname{curl}(u):=\partial_{1}u_{2}-\partial_{2}u_{1}.\]
The true lake equations have no forcing term (\(\alpha=0\)), but we will study this more general model as it could arise as a mean-field limit of point vortices (in the regime where the self-interactions of the vortices are not negligible). It is a particular case of a model studied by Duerinckx and Fischer (see [23, Equation (1.9)]). In this work the authors proved the global existence and uniqueness of weak solutions and the local well-posedness of strong solutions. We will consider the following definition of weak solutions:
**Definition 1.1**.: _Let \(T>0\) and \(\omega_{0}\in L^{\infty}(\mathbb{R}^{2})\) with compact support. We say that \((\omega,u)\) is a weak solution of (1.1) on \([0,T]\) with initial condition \(\omega_{0}\) if \(\omega\in L^{1}([0,T],L^{\infty}(\mathbb{R}^{2}))\cap\mathcal{C}^{0}([0,T],L^{\infty}(\mathbb{R}^{2})-w^{*})\) with compact support in space for all \(t\in[0,T]\), \(u\in L^{2}_{\operatorname{loc}}([0,T]\times\mathbb{R}^{2},\mathbb{R}^{2})\), for almost every \(t\in[0,T]\), \(\operatorname{div}(bu)=0\) and \(\operatorname{curl}(u)=\omega\) distributionally and for all \(\varphi\) smooth with compact support in \([0,T)\times\mathbb{R}^{2}\) and \(t\in[0,T)\),_
\[\iint_{[0,t]\times\mathbb{R}^{2}}\partial_{t}\varphi\omega+\nabla\varphi\cdot \left(u-\alpha\frac{\nabla^{\perp}b}{b}\right)\omega=\int_{\mathbb{R}^{2}} \varphi(t)\omega(t)-\int_{\mathbb{R}^{2}}\varphi(0)\omega_{0}. \tag{1.2}\]
In the regime where the self-interaction of the point vortices is predominant, the system of point vortices will converge in an accelerated timescale to a transport equation along the level sets of the topography:
\[\partial_{t}\overline{\omega}-\operatorname{div}\left(\frac{\nabla^{\perp}b} {b}\overline{\omega}\right)=0. \tag{1.3}\]
For this equation we will use the following definition of weak solutions:
**Definition 1.2**.: _Let \(T>0\) and \(\overline{\omega}_{0}\in L^{\infty}(\mathbb{R}^{2})\) with compact support. We say that \(\overline{\omega}\) is a weak solution of (1.3) on \([0,T]\) with initial condition \(\overline{\omega}_{0}\) if \(\overline{\omega}\in L^{1}([0,T],L^{\infty}(\mathbb{R}^{2}))\cap\mathcal{C}^{0}([0,T],L^{\infty}(\mathbb{R}^{2})-w^{*})\) with compact support in space for all \(t\in[0,T]\) and for all \(\varphi\) smooth with compact support in \([0,T)\times\mathbb{R}^{2}\) and \(t\in[0,T)\),_
\[\iint_{[0,t]\times\mathbb{R}^{2}}\partial_{t}\varphi\overline{\omega}-\nabla \varphi\cdot\frac{\nabla^{\perp}b}{b}\overline{\omega}=\int_{\mathbb{R}^{2}} \varphi(t)\overline{\omega}(t)-\int_{\mathbb{R}^{2}}\varphi(0)\overline{ \omega}_{0}. \tag{1.4}\]
### Point vortices for the lake equations
The forced lake equations (1.1) have been derived as the mean-field limit of complex Ginzburg-Landau vortices with forcing and pinning effects by Duerinckx and Serfaty in [24]. The dynamics of these vortices comes from the physics of superconductors or superfluids and is very different from the dynamics of vortices in a lake. In this paper we are interested in deriving Equations (1.1) as the mean-field limit of a model introduced by Richardson in [55]. In that work he established by a formal computation the equation followed by the center of vorticity \(q(t)\) of a small vortex of size \(\varepsilon\) in a lake of depth \(b\). To leading order in \(\varepsilon\), this equation gives
\[\dot{q}(t)\approx-\frac{\Gamma|\ln(\varepsilon)|}{4\pi}\frac{\nabla^{\perp}b(q(t))}{b(q(t))} \tag{1.5}\]
where \(\Gamma\) is the intensity of vorticity (that is \(\Gamma=\int_{B(q(0),\varepsilon)}\omega\)).
This means that to leading order in \(\varepsilon\), a very small vortex follows the level lines of the topography without seeing the interaction with other vortices remaining far from it. The latter equation was rigorously justified by Dekeyser and Van Schaftingen in [19] for the motion of a single vortex and this result was extended later to the case of a finite number of vortices by Hientzsch, Lacave and Miot in [33].
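As a simple illustration of (1.5) (our example, not taken from [55]): suppose the depth is radial, \(b(x)=\beta(|x|)\) with \(\beta\) smooth and positive. Then \(\nabla^{\perp}b(x)=\frac{\beta^{\prime}(|x|)}{|x|}x^{\perp}\) and (1.5) becomes

\[\dot{q}(t)\approx-\frac{\Gamma|\ln(\varepsilon)|}{4\pi}\frac{\beta^{\prime}(|q(t)|)}{|q(t)|\,\beta(|q(t)|)}\,q(t)^{\perp},\]

so that \(\frac{\mathrm{d}}{\mathrm{d}t}|q(t)|^{2}=2q(t)\cdot\dot{q}(t)\approx 0\): at leading order the vortex rotates on the circle \(\{|x|=|q(0)|\}\), which is a level set of \(b\).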
We want to investigate the behavior of \(N\) point vortices of intensity \(N^{-1}\) as \(N\) becomes large. We will see in Section 2 that the elliptic problem
\[\left\{\begin{aligned} \operatorname{div}(bu)&=0\\ \operatorname{curl}(u)&=\omega\end{aligned}\right.\]
has a unique solution given by the kernel
\[g_{b}(x,y):=\sqrt{b(x)b(y)}g(x-y)+S_{b}(x,y)\]
where \(S_{b}\) is a function solving a certain elliptic equation (see Equation (2.9)) and \(g(x):=-\frac{1}{2\pi}\ln|x|\) is the opposite of the Green kernel of the Laplacian on the plane \(\mathbb{R}^{2}\). More precisely, we have
\[u(x)=-\frac{1}{b(x)}\int_{\mathbb{R}^{2}}\nabla_{x}^{\perp}g_{b}(x,y)\omega(y )\,\mathrm{d}y.\]
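Let us note, as an immediate sanity check: if the depth is constant, \(b\equiv b_{0}>0\), then \(\Delta(1/\sqrt{b})=0\), so \(S_{b}\) is constant, and the normalization \(S_{b}(0,0)=0\) of Proposition 2.8 gives \(S_{b}\equiv 0\) and \(g_{b}(x,y)=b_{0}\,g(x-y)\). The formula above then reduces to the classical Biot-Savart law

\[u(x)=-\nabla^{\perp}(g*\omega)(x)=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}\frac{(x-y)^{\perp}}{|x-y|^{2}}\,\omega(y)\,\mathrm{d}y.\]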
Recall that a point vortex is asymptotically represented by a Dirac mass of vorticity. Therefore, using the kernel \(\nabla_{x}^{\perp}g_{b}\), we can compute the velocity field
generated by \(N-1\) vortices \(\delta_{q_{j}}\) of intensity \(\dfrac{1}{N}\) on a vortex \(\delta_{q_{i}}\):
\[-\dfrac{1}{N}{\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}}\dfrac{1}{b(q_{i})}\nabla_{x}^{\perp}g_{b}(q_{i},q_{ j}).\]
This term corresponds to the term \(u_{reg}\) given by Richardson in [55, Equation (2.90)]. Combining this equation with the self-interaction term of (1.5) we get the model of point vortices that we will study in this paper:
\[\dot{q}_{i}=-\alpha_{N}\dfrac{\nabla^{\perp}b(q_{i})}{b(q_{i})}-\dfrac{1}{N}{ \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}}\dfrac{1}{b(q_{i})}\nabla_{x}^{\perp}g_{b}(q_{i},q_ {j}) \tag{1.6}\]
where we have denoted
\[\alpha_{N}:=\dfrac{|\ln(\varepsilon_{N})|}{4\pi N}\]
where \(\varepsilon_{N}\) is the size of the vortices.
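Note (this is immediate from the definition of \(\alpha_{N}\)) that the regimes considered below correspond to explicit scalings of the vortex size: \(\alpha_{N}\underset{N\to+\infty}{\longrightarrow}\alpha>0\) exactly when \(|\ln(\varepsilon_{N})|\sim 4\pi\alpha N\), that is \(\varepsilon_{N}\approx e^{-4\pi\alpha N}\); \(\alpha_{N}\underset{N\to+\infty}{\longrightarrow}0\) when \(|\ln(\varepsilon_{N})|=o(N)\), for instance for vortices of polynomial size \(\varepsilon_{N}=N^{-\beta}\); and \(\alpha_{N}\underset{N\to+\infty}{\longrightarrow}+\infty\) when \(|\ln(\varepsilon_{N})|\gg N\).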
_Remark 1.3_.: Up to now there is no mathematical justification of Equation (1.6): we do not even expect this equation to describe precisely the motion of a fixed number of small vortices, since we have neglected all self-interaction terms of order smaller than \(|\ln(\varepsilon)|\). However, Theorem 1.8 will justify that this simplified model is statistically relevant when \(N\) becomes very large.
_Remark 1.4_.: There are several works establishing approximate analytical trajectories of vortices in a lake for some specific depth profiles, and also other numerical and experimental results related to vortex dynamics in lakes. For more details we refer to the results of [55] and the associated bibliography.
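To fix ideas, here is a minimal numerical sketch (ours, purely illustrative) of System (1.6) in the special case of a constant depth \(b\equiv 1\), where \(\nabla^{\perp}b=0\), \(g_{b}=g\) and (1.6) reduces to the classical point-vortex system; a genuinely variable depth would additionally require the drift term and the correction \(S_{b}\) solving (2.9). The time-stepping scheme and all names are hypothetical choices.

```cpp
// Minimal sketch (illustrative only): explicit-Euler integration of the
// point-vortex system (1.6) with constant depth b = 1, where the kernel
// g_b reduces to g(x) = -(1/2pi) ln|x| and the self-interaction drift
// vanishes. For variable depth one would also need S_b from (2.9).
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { double x, y; };

int main() {
    const double PI = 3.14159265358979323846;
    const int N = 100;      // number of vortices, each of intensity 1/N
    const double dt = 1e-3; // time step
    const int steps = 1000;

    // Initial positions on a circle (any configuration would do).
    std::vector<Vec2> q(N);
    for (int i = 0; i < N; i++) {
        double th = 2.0 * PI * i / N;
        q[i] = {std::cos(th), std::sin(th)};
    }

    std::vector<Vec2> v(N);
    for (int s = 0; s < steps; s++) {
        for (int i = 0; i < N; i++) {
            double vx = 0.0, vy = 0.0;
            for (int j = 0; j < N; j++) {
                if (j == i) continue;
                double dx = q[i].x - q[j].x, dy = q[i].y - q[j].y;
                double r2 = dx * dx + dy * dy;
                // -(1/N) grad_x^perp g(q_i - q_j)
                //   = (1/(2 pi N)) (q_i - q_j)^perp / |q_i - q_j|^2
                vx += -dy / r2;
                vy += dx / r2;
            }
            v[i] = {vx / (2.0 * PI * N), vy / (2.0 * PI * N)};
        }
        for (int i = 0; i < N; i++) {
            q[i].x += dt * v[i].x;
            q[i].y += dt * v[i].y;
        }
    }
    std::printf("q_0 after %d steps: (%f, %f)\n", steps, q[0].x, q[0].y);
    return 0;
}
```

An explicit Euler step is used only for brevity; since the classical point-vortex system is Hamiltonian, higher-order or symplectic integrators would be preferable in practice.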
Two quantities will be of interest for the study of this system. The interaction energy
\[E_{N}(t):=\dfrac{1}{N^{2}}\sum_{i=1}^{N}{\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}}g_{b}(q_{i}(t),q_{j}(t))\]
and the moment of inertia
\[I_{N}(t):=\dfrac{1}{N}\sum_{i=1}^{N}|q_{i}(t)|^{2}.\]
One could prove that the total energy
\[E_{N}^{tot}:=E_{N}+\dfrac{\alpha_{N}}{N}\sum_{i=1}^{N}b(q_{i})\]
is a conserved quantity for the point vortex system (1.6) or that if \(\omega\) is a solution of (1.1) with enough regularity and decay, the quantity
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(t,x)\omega(t,y) \,\mathrm{d}x\,\mathrm{d}y+\alpha\int_{\mathbb{R}^{2}}b(x)\omega(t,x)\, \mathrm{d}x\]
is conserved by the flow. The moment of inertia \(I_{N}\) and the interaction energy \(E_{N}\) are not conserved quantities but they are bounded in time, and
this will be useful both for our mean-field limit result and for the well-posedness of System (1.6) (see Section 3).
If \(\alpha_{N}\underset{N\to+\infty}{\longrightarrow}+\infty\) the self-interactions are predominant. In order to study this regime we will work in an accelerated timescale, as was done in [19] and [33] to study the motion of a finite number of vortices. Therefore we define:
\[\overline{q_{i}}(t):=q_{i}(\alpha_{N}^{-1}t).\]
This gives
\[\dot{\overline{q_{i}}}=-\frac{\nabla^{\perp}b(\overline{q_{i}})}{b(\overline{ q_{i}})}-\frac{1}{N\alpha_{N}}\underset{j\neq i}{\overset{N}{\underset{j=1} {\sum}}}\frac{1}{b(\overline{q_{i}})}\nabla_{x}^{\perp}g_{b}(\overline{q_{i}},\overline{q_{j}}) \tag{1.7}\]
We also define the rescaled interaction energy
\[\overline{E_{N}}(t):=E_{N}(\alpha_{N}^{-1}t)\]
and the rescaled moment of inertia
\[\overline{I_{N}}(t):=I_{N}(\alpha_{N}^{-1}t).\]
### Mean-field limits
Mean-field limits consist in studying the convergence of a system of ordinary differential equations modeling the evolution of a finite number of particles
\[\dot{x}_{i}=\frac{1}{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}K(x_{i}-x_{j}) \tag{1.8}\]
to a Euler-like equation modeling the evolution of a continuous density \(\mu(t,x)\):
\[\partial_{t}\mu+\operatorname{div}((K*\mu)\mu)=0 \tag{1.9}\]
when the number of particles becomes large (here \(K:\mathbb{R}^{d}\longrightarrow\mathbb{R}^{d}\) is an interaction kernel). For systems of order two we are interested in the convergence of a system of particles following Newton's second law
\[\ddot{x}_{i}=\frac{1}{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}K(x_{i}-x_{j})\]
to a Vlasov-like equation modeling the evolution of a continuous density \(f(t,x,v)\):
\[\begin{cases}\partial_{t}f+\operatorname{div}_{x}(fv)+\operatorname{div}_{v}( (K*\mu)f)=0\\ \mu(t,x)=\int_{\mathbb{R}^{d}}f(t,x,v)\,\mathrm{d}v.\end{cases} \tag{1.10}\]
A mean-field limit result consists in proving that if at time zero, the empirical distribution of the particles
\[\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}(t)}\qquad\left(\text{respectively}\quad \frac{1}{N}\sum_{i=1}^{N}\delta_{(x_{i}(t),\dot{x}_{i}(t))}\right)\]
converges to the initial datum \(\mu(0)\) of a solution of (1.9) (respectively \(f(0)\) of a solution of (1.10)), then at any finite time \(t\) the empirical distribution converges to \(\mu(t)\) (respectively \(f(t)\)).
When \(K\) is Lipschitz, the mean-field limit of the above system was established by compactness arguments in [6, 51] or by optimal transport theory and Wasserstein distances by Dobrushin in [21]. If \(K\) is singular there are numerous results establishing the mean-field limit of systems of order one:
Schochet proved in [61] the mean-field convergence of the point vortex system (that is \(K=\frac{1}{2\pi}\frac{x^{\perp}}{|x|^{2}}\) in dimension \(2\)) to a measure-valued solution of Euler equations up to a subsequence, using arguments previously developed in [20] and [60] to prove existence of such solutions.
For sub-coulombic interactions, that is \(|K(x)|,|x||\nabla K(x)|\leqslant C|x|^{-\alpha}\) with \(0<\alpha<d-1\), the mean-field limit of (1.8) was proved by Hauray in [30] assuming \(\operatorname{div}(K)=0\) and using a Dobrushin-type approach (following the idea of [31, 32]). It was also used by Carillo, Choi and Hauray to study the mean-field limit of some aggregation models in [16].
In [22] Duerinckx gave another proof of the mean-field limit of several Riesz interaction gradient flows using a "modulated energy" that was introduced by Serfaty in [62].
In [63], Serfaty used this modulated energy approach to prove the mean-field convergence of such systems where \(K\) was a kernel given by Coulomb, logarithmic or Riesz interaction, that is \(K=\nabla g\) for \(g(x)=|x|^{-s}\) with \(\max(d-2,0)\leqslant s<d\) for \(d\geqslant 1\) or \(g(x)=-\ln|x|\) for \(d=1\) or \(2\). For this purpose \(K*\mu\) was supposed to be Lipschitz.
Rosenzweig proved in [57] the mean-field convergence of the point vortex system without assuming Lipschitz regularity for the limit velocity field, using the same energy as in [63] with refined estimates. Remark that this ensures that the point vortex system converges to any Yudovich solution of the Euler equations (see [64]). This result was extended later to higher dimensional systems (\(d\geqslant 3\)) in [56] by the same author.
In [52] Nguyen, Rosenzweig and Serfaty extended the modulated energy approach to a more general class of potentials \(g\) using the commutator structure of the equations.
With a modulated energy approach, Bresch, Jabin and Wang defined a modulated entropy functional which allowed them to prove the mean-field limit of interacting particles with noise in [9, 10, 11]. This method was used later to obtain uniform in time convergence for Riesz-type flows by Rosenzweig and Serfaty in [59] and by Rosenzweig, Serfaty and Chodron de Courcel in [18].
For systems of order two, the mean-field limit has been established for several singular kernels:
In [31, 32], Hauray and Jabin dealt with the case of some sub-coulombian interactions (or more precisely \(|K(x)|\leqslant c|x|^{-s}\) with \(0<s<1\)) by using a Dobrushin-type approach.
In [37, 38], Jabin and Wang studied the case of bounded and \(W^{-1,\infty}\) gradients.
In [4, 34, 41, 42] the same kind of results is proved with some cutoff of the interaction kernel.
In the appendix of [63], Duerinckx and Serfaty studied the convergence of particles interacting with a Coulomb or a Riesz interaction kernel to the Vlasov equation in the monokinetic regime, that is, the pressureless Euler-Poisson equations. The same method has been used to study the mean-field limit of more general models coming from quantum physics, biology or fluid dynamics (see for example [15, 50, 54]).
In [29], Han-Kwan and Iacobelli proved the mean-field limit of particles following Newton's second law to the Euler equation in a quasineutral regime or in the gyrokinetic limit. This result was extended later by Rosenzweig in [58] to allow a larger choice of scaling between the number of particles and the coupling constant.
Recently, Bresch, Jabin and Soler were able in [8] to prove the mean-field limit derivation of the Vlasov-Fokker-Planck equation with the true Coulomb interactions using the BBGKY hierarchy and the diffusivity in the velocity variables to get estimates on the marginals.
Numerous other mean-field limit results were proved for interacting particles with noise with regular or singular kernels. See for example [3, 5, 17, 25, 37, 38, 40, 44, 52, 53]. For a more complete bibliography on the mean-field limit of interacting particles with noise we refer to the bibliography of [18].
For a general introduction to the subject of mean-field limits we refer to the reviews [27, 36].
### Notations and assumptions
#### 1.4.1. Notations
* For \(u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{2},\mathbb{R}^{2})\), we denote \(\mathrm{curl}(u)=\partial_{1}u_{2}-\partial_{2}u_{1}\).
* For \(h\in\dot{H}^{1}(\mathbb{R}^{2})\), we denote (1.11) \[[h,h]_{i,j}:=2\partial_{i}h\partial_{j}h-|\nabla h|^{2}\delta_{i,j}.\] It is the stress-energy tensor used in [63] to prove the mean-field limit of several singular ODEs. Remark that for \(h\) smooth enough, we have \[\mathrm{div}[h,h]=2\Delta h\nabla h\] (a one-line verification is given after this list).
* We denote \(<\!x\!>=(1+|x|^{2})^{\frac{1}{2}}\).
* \(g\) is the opposite of the Green function of the laplacian: \[g(x):=-\frac{1}{2\pi}\ln|x|.\]
* \(|\cdot|_{\mathcal{C}^{0,s}}\) is the semi-norm associated to the Holder space \(\mathcal{C}^{0,s}\): \[|f|_{\mathcal{C}^{0,s}}=\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|^{s}}.\]
* When \(1\leqslant p\leqslant+\infty\), \(p^{\prime}\) denotes the dual exponent of \(p\).
* If \(\nu\) is a probability measure on \(\mathbb{R}^{2}\), we will denote \(\nu^{\otimes 2}:=\nu\otimes\nu\).
* \(C\) is a generic constant. We will denote \(C_{A,B}\) when a constant depends on some quantities \(A\) and \(B\).
* \(\mathcal{P}(\mathbb{R}^{2})\) is the space of probability measures on \(\mathbb{R}^{2}\).
* For \(Q_{N}=(q_{1},...,q_{N})\in(\mathbb{R}^{2})^{N}\) we denote \(I(Q_{N})=\frac{1}{N}\sum_{i=1}^{N}|q_{i}|^{2}\).
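For completeness, here is the one-line verification of the identity \(\operatorname{div}[h,h]=2\Delta h\nabla h\) announced above: for \(j\in\{1,2\}\),

\[(\operatorname{div}[h,h])_{j}=\sum_{i=1}^{2}\partial_{i}\left(2\partial_{i}h\,\partial_{j}h-|\nabla h|^{2}\delta_{i,j}\right)=2\Delta h\,\partial_{j}h+2\sum_{i=1}^{2}\partial_{i}h\,\partial_{i}\partial_{j}h-\partial_{j}|\nabla h|^{2}=2\Delta h\,\partial_{j}h,\]

since \(\partial_{j}|\nabla h|^{2}=2\sum_{i=1}^{2}\partial_{i}h\,\partial_{i}\partial_{j}h\).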
#### 1.4.2. Assumptions
We will make the following assumption on the depth function \(b\):
**Assumption 1.5**.: _We assume that \(b\) is a smooth function, \(\inf b>0\), \(\sup b<+\infty\) and that there exists \(\gamma>0\) such that_
\[\sup_{x\in\mathbb{R}^{2}}(1+|x|)^{4+\gamma}\left(|\nabla b(x)|+|D^{2}b(x)|\right)<+\infty.\]
We will consider regular solutions of (1.1) and (1.3) in the following sense:
**Assumption 1.6**.: _We say that a function \(\omega(t,x)\) satisfies Assumption 1.6 if \(\omega\in L^{\infty}([0,T],L^{\infty}(\mathbb{R}^{2})\cap\mathcal{P}(\mathbb{ R}^{2}))\cap\mathcal{C}^{0}([0,T],L^{\infty}(\mathbb{R}^{2})-w^{*})\), if there exists a compact \(K\) such that for every \(t\in[0,T]\), \(\operatorname{supp}(\omega(t))\subset K\) and if \(\nabla G_{b}[\omega]\in L^{\infty}([0,T],W^{1,\infty})\) where \(G_{b}\) is the operator defined by Equation (2.11)._
_Remark 1.7_.: A weak solution of (1.1) in the sense of Definition 1.1 (or a weak solution of (1.3) in the sense of Definition 1.2) does not necessarily verify Assumption 1.6 because of the regularity we require for the velocity field \(\nabla G_{b}[\omega]\). This assumption will be crucial to apply Proposition 6.1 and prove the mean-field limit Theorem 1.8. The existence and uniqueness of sufficiently regular solutions of (1.1) locally in time is ensured by [23, Theorem 2]. One could also prove that \(\omega\in L^{\infty}([0,T],\mathcal{C}^{0,s})\) is sufficient to have \(\nabla G_{b}[\omega]\in L^{\infty}([0,T],W^{1,\infty})\).
### Main result and plan of the paper
The main result of this paper is the following theorem which gives the mean-field limit of the point vortex system (1.6) and its rescaled version (1.7) (we recall that the kernel \(g_{b}\) is defined by (2.10)):
**Theorem 1.8**.: _Assume that \(b\) satisfies Assumption 1.5. We have mean-field convergence of the point-vortex system in the two following regimes:_
1. _Let_ \(\omega\) _be a solution of (1.1) with initial datum_ \(\omega_{0}\) _in the sense of Definition 1.1, satisfying Assumption 1.6 and_ \((q_{1},...,q_{N})\) _be a solution of (1.6). Assume that:_
* \((I_{N}(0))_{N}\) _is bounded._
* \(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}^{0}}\overset{*}{\underset{N\rightarrow+\infty}{\longrightarrow}}\omega_{0}\) _for the weak-_\(*\) _topology of probability measures._
* \(\alpha_{N}\underset{N\rightarrow+\infty}{\longrightarrow}\alpha\)_._
* \(\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i}^{0},q_{j}^{0})\underset{N\rightarrow+\infty}{\longrightarrow}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega_{0}(x)\omega_{0}(y)\,\mathrm{d}x\,\mathrm{d}y\)_._

_Then for all_ \(t\in[0,T]\)_,_ \(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}(t)}\overset{*}{\underset{N\rightarrow+\infty}{\longrightarrow}}\omega(t)\) _for the weak-_\(*\) _topology of probability measures and_
\[\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i}(t),q_{j}(t))\underset{N\rightarrow+\infty}{\longrightarrow}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(t,x)\omega(t,y)\,\mathrm{d}x\,\mathrm{d}y.\]
2. _Let_ \(\overline{\omega}\) _be a solution of (1.3) with initial datum_ \(\omega_{0}\) _in the sense of Definition 1.2, satisfying Assumption 1.6 and_ \((\overline{q_{1}},...,\overline{q_{N}})\) _be a solution of (1.7). Assume that:_
* \((\overline{I_{N}}(0))_{N}\) _is bounded._
* \(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}^{0}}\overset{*}{\underset{N \rightarrow+\infty}{\longrightarrow}}\omega_{0}\) _for the weak-_\(*\) _topology of probability measures._
* \(\alpha_{N}\underset{N\rightarrow+\infty}{\longrightarrow}+\infty\)_._
* \(\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i}^{0},q_{j}^{0}) \underset{N\rightarrow+\infty}{\longrightarrow}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}g_{b}(x,y)\omega_{0}(x)\omega_{0}(y)\,\mathrm{d}x\,\mathrm{d}y\)_._
_Then for all \(t\in[0,T]\), \(\frac{1}{N}\sum_{i=1}^{N}\delta_{\overline{q}_{i}(t)}\overset{*}{ \underset{N\rightarrow+\infty}{\longrightarrow}}\overline{\omega}(t)\) for the weak-_\(*\) _topology of probability measures and_
\[\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(\overline{q}_{i}(t),\overline{q}_{j}(t))\underset{N\rightarrow+\infty}{\longrightarrow}\iint _{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\overline{\omega}(t,x) \overline{\omega}(t,y)\,\mathrm{d}x\,\mathrm{d}y.\]
Remark that in the case \(\alpha_{N}\underset{N\rightarrow+\infty}{\longrightarrow}0\) we recover the classical lake equations ((1.1) with \(\alpha=0\)).
The boundedness of \((I_{N}(0))\) is a technical assumption made to ensure that not too much vorticity is going to infinity. This assumption was not necessary in the original papers of Duerinckx in [22] and of Serfaty in [63] but we will need it to deal with the heterogeneity of the kernel \(g_{b}\) (defined in (2.10)).
The convergence of the interaction energy and the weak\(-*\) convergence of \((\omega_{N})\) to \(\omega\) ensure the convergence of \((\omega_{N})\) to \(\omega\) in a stronger sense: we will prove in Corollary 5.4 that, provided certain technical assumptions are satisfied, it is equivalent to the convergence to zero of a "modulated energy" functional. For an empirical measure of point vortices \((q_{1},...,q_{N})\) and a vorticity field \(\omega\in L^{\infty}\) with compact support, this modulated energy is defined by:
\[\mathcal{F}_{b}(Q_{N},\omega):=\\ \iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\backslash\Delta}g_{ b}(x,y)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}-\omega \right)(x)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}-\omega \right)(y) \tag{1.12}\]
where
\[\Delta:=\{(x,x)\;;\;x\in\mathbb{R}^{2}\}.\]
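Let us record the elementary expansion behind this equivalence (with \(q_{1},...,q_{N}\) pairwise distinct): since \(\omega\in L^{\infty}\) gives no mass to the diagonal \(\Delta\), removing \(\Delta\) only discards the self-interaction of the Dirac masses, and

\[\mathcal{F}_{b}(Q_{N},\omega)=\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i},q_{j})-\frac{2}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^{2}}g_{b}(q_{i},y)\omega(y)\,\mathrm{d}y+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(x)\omega(y)\,\mathrm{d}x\,\mathrm{d}y,\]

so the convergence of the interaction energy in Theorem 1.8 is exactly the convergence of the first term to the last one.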
We will use this energy to control the distance between solutions \(\omega\) and \(Q_{N}\) of (1.1) and (1.6) or solutions \(\overline{\omega}\) and \(\overline{Q_{N}}\) of (1.3) and (1.7) at any given time \(t\):
\[\mathcal{F}_{b,N}(t):=\mathcal{F}_{b}(Q_{N}(t),\omega(t)) \tag{1.13}\]
and
\[\overline{\mathcal{F}}_{b,N}(t):=\mathcal{F}_{b}(\overline{Q_{N}}(t), \overline{\omega}(t)). \tag{1.14}\]
The proof of Theorem 1.8 relies on Gronwall-type estimates on these two quantities. The paper is organised as follows:
* In Section 2 we prove the well-posedness of the elliptic problem linking a velocity field satisfying \(\operatorname{div}(bu)=0\) and its vorticity, the existence of a Green kernel for this elliptic problem and we establish several regularity estimates.
* In Section 3 we prove that the point-vortex system is well-posed and give some estimates on the interaction energy and on the moment of inertia of the system that we will need in Section 7.
* In Section 4 we compute the time derivative of \(\mathcal{F}_{b,N}\) and of \(\overline{\mathcal{F}}_{b,N}\).
* In Section 5 we state several properties of the modulated energy. We prove that it controls the convergence in \(H^{s}\) for \(s<-1\) (see Corollary 5.3) and that convergence of the modulated energy to zero is equivalent to the weak-\(*\) convergence of the point vortex system together with the convergence of its interaction energy (see Corollary 5.4).
* In Section 6 we bound the main term appearing in the derivatives of the modulated energies.
* In Section 7 we use the results of the other sections to prove Theorem 1.8.
The modulated energy \(\mathcal{F}_{b}\) is similar to the modulated energy defined in [63, Equation (1.16)] and the proofs of Sections 4 to 7 follow the same global ideas. The main difference between Theorem 1.8 and other mean-field limit results using modulated energies is that the kernel \(g_{b}\) is not of the form \(a(x,y)=a(x-y)\). Most of the difficulties addressed by this paper consist in dealing with the heterogeneity of the kernel \(g_{b}\).
### Funding
This work is supported by the French National Research Agency in the framework of the project "SINGFLOWS" (ANR-18-CE40-0027-01).
## 2. Velocity reconstruction
There exists a Biot-Savart type law to reconstruct a velocity field \(u\) satisfying \(\operatorname{div}(bu)=0\) from its vorticity. In this section we prove several results concerning this reconstruction. In Subsection 2.1 we prove that the elliptic equations linking \(u\) with its vorticity are well-posed. In Subsection 2.2 we prove some results related to the asymptotic behavior of the velocity field as \(|x|\longrightarrow\infty\). In Subsection 2.3, we give an analogue of the Biot-Savart law for a velocity field satisfying System (2.1). Finally, in Subsection 2.4 we define some regularisations of the Coulomb kernel and of the Dirac mass that we will need in Sections 5 and 6.
### Well-posedness of the elliptic problem
In this subsection we justify the well-posedness of the elliptic equations satisfied by the velocity field:
\[\left\{\begin{aligned} &\operatorname{div}(bu)=0\\ &\operatorname{curl}(u)=\omega.\end{aligned}\right. \tag{2.1}\]
As we will write \({u=-\frac{1}{b}\nabla^{\perp}\psi}\), we will also consider the "stream function" formulation of the above system:
\[-\operatorname{div}\left(\frac{1}{b}\nabla\psi\right)=\omega. \tag{2.2}\]
For this purpose we will consider the following weighted Sobolev spaces:
**Definition 2.1**.: _For \(1<p<\infty\) we consider the Banach space \(W^{2,p}_{-1}(\mathbb{R}^{2})\) defined by_
\[W^{2,p}_{-1}(\mathbb{R}^{2}):=\{u\in\mathcal{D}^{\prime}(\mathbb{R}^{2})\;;\; \forall\alpha\in\mathbb{N}^{2},|\alpha|\leqslant 2,<\cdot>^{|\alpha|-1}D^{ \alpha}u\in L^{p}(\mathbb{R}^{2})\}\]
_and equipped with its natural norm_
\[\|u\|_{W^{2,p}_{-1}}:=\left(\sum_{|\alpha|\leqslant 2}\left\|<\cdot>^{|\alpha|-1}D ^{\alpha}u\right\|_{L^{p}}^{p}\right)^{\frac{1}{p}}.\]
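Note, as an immediate check, that the constant function \(u\equiv 1\) belongs to \(W^{2,p}_{-1}(\mathbb{R}^{2})\) if and only if

\[\int_{\mathbb{R}^{2}}<\!x\!>^{-p}\,\mathrm{d}x<+\infty,\]

that is, if and only if \(p>2\). This is why, in this range of exponents, the elliptic problems below are naturally well-posed only modulo constants, in \(W^{2,p}_{-1}(\mathbb{R}^{2})/\mathbb{R}\).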
These weighted spaces were first introduced by Cantor in [14] and have been investigated to study elliptic equations on unbounded domains. For a more precise study of these spaces and further references we refer to [45, 48, 49]. The following proposition is a straightforward consequence of [45, Theorem 2] (which is the combination of two theorems proved in [48] and [49]) and states that Equations (2.1) and (2.2) are well-posed.
**Proposition 2.2**.: _Let \(2<p<+\infty\), assume that \(<\cdot>\omega\in L^{p}(\mathbb{R}^{2})\), then there exists a unique solution \(\psi\) of (2.2) in \(W^{2,p}_{-1}(\mathbb{R}^{2})/\mathbb{R}\). Moreover, if \(u\in L^{p}(\mathbb{R}^{2},\mathbb{R}^{2})\) is a solution of (2.1) in the sense of distributions, then_
\[u=-\frac{1}{b}\nabla^{\perp}\psi.\]
Proof.: We can rewrite Equation (2.2) as
\[-\Delta\psi-b\nabla\left(\frac{1}{b}\right)\cdot\nabla\psi=b\omega.\]
We have that:
* \(-\Delta\) is an elliptic operator with constant coefficients and homogeneous of degree \(2\).
* \(b\nabla\left(\frac{1}{b}\right)\in\mathcal{C}^{0}\) and \[\lim_{|x|\to+\infty}\left|<x\,>^{2-1+0}b(x)\nabla\left(\frac{1}{b}\right)(x) \right|=0\] since \(b\) satisfies Assumption 1.5.
* \(<\cdot>b\omega\in L^{p}\).
* \(-1\leqslant-\frac{2}{p}\) and \(1-\frac{2}{p}\notin\mathbb{N}\).
Therefore by [45, Theorem 2], there exists a unique solution \(\psi\) (up to a constant) of Equation (2.2) in \(W^{2,p}_{-1}(\mathbb{R}^{2})\).
Now if \(u\in L^{p}\) is a solution of (2.1), then
\[\left\|<\cdot>\operatorname{curl}(bu)\right\|_{L^{p}}\leqslant\left\|<\cdot>b\omega\right\|_{L^{p}}+\left\|<\cdot>\nabla^{\perp}b\cdot u\right\|_{L^{p}}\leqslant C_{b}(\left\|<\cdot>\omega\right\|_{L^{p}}+\left\|u\right\|_{L^{p}})\]
since \(b\) satisfies Assumption 1.5. Let us consider \(\pi\in W^{2,p}_{-1}(\mathbb{R}^{2})\) to be the unique solution (up to a constant) of \(-\Delta\pi=\operatorname{curl}(bu)\) given by [45, Theorem 1]. Then \(bu+\nabla^{\perp}\pi\) is a div-curl free vector field in \(L^{p}\) so it is zero.
Moreover,
\[-\operatorname{div}\left(\frac{1}{b}\nabla\pi\right)=-\operatorname{curl}\left( \frac{1}{b}\nabla^{\perp}\pi\right)=\operatorname{curl}(u)=\omega\]
so \(\nabla\pi=\nabla\psi\) by uniqueness of solutions of (2.2) in \(W^{2,p}_{-1}(\mathbb{R}^{2})/\mathbb{R}\).
Now we state several estimates for solutions of Equation (2.2), proved by Duerinckx in [23]:
**Lemma 2.3**.: _[From [23, Lemma 2.6]] Let \(p>2\), \(\omega\) be such that \(<\cdot>\omega\in L^{p}(\mathbb{R}^{2})\). If \(\psi\in W^{2,p}_{-1}(\mathbb{R}^{2})\) is the solution of (2.2) given by Proposition 2.2, then:_
1. _There exists_ \(p_{0}>2\) _depending only on_ \(b\) _such that for all_ \(2<p\leqslant p_{0}\)_,_ \[\left\|\nabla\psi\right\|_{L^{p}}\leqslant C_{p}\left\|\omega\right\|_{L^{\frac {2p}{p+2}}}.\]
2. _For all_ \(0<s<1\)_,_ \[\left|\nabla\psi\right|_{\mathcal{C}^{0,s}}\leqslant C_{s}\left\|\omega \right\|_{L^{\frac{2}{1-s}}}.\]
3. \(\left\|\nabla\psi\right\|_{L^{\infty}}\leqslant C\left\|\omega\right\|_{L^{1} \cap L^{\infty}}.\)__
_Remark 2.4_.: In [23], this lemma was stated for any solution of (2.2) with decreasing gradient (which is the case for a solution given by Proposition 2.2 since its gradient is in \(W^{1,p}\)) and for \(\omega\) smooth with compact support, but by density it can be extended to all \(\omega\) such that \(<\cdot>\omega\in L^{p}(\mathbb{R}^{2})\) and such that the above inequalities make sense.
### Asymptotic behavior of the velocity field
The main result of this subsection is the following proposition giving the asymptotic behavior of a velocity field satisfying (2.1).
**Proposition 2.5**.: _Let \(\omega\in L^{\infty}\) with compact support and \(u=-\frac{1}{b}\nabla^{\perp}\psi\) where \(\psi\) is the solution of (2.2) given by Proposition 2.2. There exists \(C>0\) depending only on \(b\) and \(\omega\) such that for all \(x\in\mathbb{R}^{2}\backslash\{0\}\),_
\[\left|u(x)-\frac{1}{2\pi}\left(\int_{\mathbb{R}^{2}}\omega\right)\frac{x^{ \perp}}{|x|^{2}}\right|\leqslant\frac{C}{|x|^{2}}. \tag{2.3}\]
_Moreover there exists \(\delta\in(0,1)\) and \(C\) such that_
\[|\psi(x)|\leqslant C(1+|x|^{\delta}). \tag{2.4}\]
To prove this proposition we will need to use the following result about the asymptotic behavior of a velocity field given by the usual Biot-Savart law:
**Lemma 2.6**.: _Let us assume that \(\mu\) is a measurable function such that \(\mu\in L^{1}((1+|x|^{2})\,\mathrm{d}x)\) and \(|\cdot|^{2}\mu\in L^{p}\) for some \(p>2\). Then there exists \(C,R>0\) depending only on \(\mu\) such that for all \(x\in\mathbb{R}^{2}\backslash\{0\}\),_
\[\left|\int_{\mathbb{R}^{2}}\frac{x-y}{|x-y|^{2}}\,\mathrm{d}\mu(y)-\left(\int _{\mathbb{R}^{2}}\,\mathrm{d}\mu(y)\right)\frac{x}{|x|^{2}}\right|\leqslant \frac{C}{|x|^{2}}.\]
_In particular if \(\int_{\mathbb{R}^{2}}\,\mathrm{d}\mu=0\), then_
\[\int_{\mathbb{R}^{2}}\frac{x-y}{|x-y|^{2}}\,\mathrm{d}\mu(y)=\underset{|x|\to+ \infty}{\mathcal{O}}(|x|^{-2}).\]
This lemma is a classical result in fluid dynamics (see for example [46, Proposition 3.3]) that we will prove for the sake of completeness.
Proof.: If \(x\neq 0\), we have
\[\int_{\mathbb{R}^{2}}\mu(y)\left(\frac{x-y}{|x-y|^{2}}-\frac{x}{|x|^{2}} \right)\,\mathrm{d}y=\frac{1}{|x|^{2}}\int_{\mathbb{R}^{2}}\mu(y)\frac{|x|^{2} (x-y)-x|x-y|^{2}}{|x-y|^{2}}\,\mathrm{d}y.\]
Now remark that
\[\begin{split}|x|^{2}(x-y)-x|x-y|^{2}&=|x|^{2}(x-y)-(x-y)(|x|^{2}+|y|^{2}-2x\cdot y)-y|x-y|^{2}\\ &=(x-y)(2(x-y)\cdot y+|y|^{2})-y|x-y|^{2}\\ &=-y|x-y|^{2}+2[(x-y)\cdot y](x-y)+|y|^{2}(x-y).\end{split}\]
Thus
\[\left|\int_{\mathbb{R}^{2}}\mu(y)\left(\frac{x-y}{|x-y|^{2}}-\frac{x}{|x|^{2}}\right)\,\mathrm{d}y\right|\leqslant\frac{C}{|x|^{2}}\left(\int_{\mathbb{R}^{2}}|y||\mu(y)|\,\mathrm{d}y+\int_{\mathbb{R}^{2}}\frac{|y|^{2}|\mu(y)|}{|x-y|}\,\mathrm{d}y\right).\]
Now we have that for any \(p>2\),
\[\int_{\mathbb{R}^{2}}\frac{|y|^{2}|\mu(y)|}{|x-y|}\,\mathrm{d}y\leqslant\big{|} |\,|\cdot|^{2}\mu\|_{L^{1}}^{\frac{p-2}{2p-2}}\,\big{|}|\,|\cdot|^{2}\mu\|_{L ^{p}}^{\frac{p}{2p-2}}\]
(see for example [35, Lemma 1]) and therefore we get the proof of Lemma 2.6.
With this result we can now study the asymptotic behavior of a velocity field satisfying System (2.1):
Proof of Proposition 2.5.: We write
\[\mu:=\mathrm{div}(u)=\mathrm{div}\left(\frac{1}{b}bu\right)=\nabla\left(\frac{ 1}{b}\right)\cdot bu=-\frac{\nabla b\cdot u}{b}. \tag{2.5}\]
By Helmholtz decomposition we can write
\[u=-\nabla g\ast\mu-\nabla^{\perp}g\ast\omega. \tag{2.6}\]
Let \(2<p<+\infty\), then by Assumption 1.5,
\[\int_{\mathbb{R}^{2}}(1+|y|^{2})|\mu(y)|\,\mathrm{d}y \leqslant C_{b}\int_{\mathbb{R}^{2}}\frac{1+|y|^{2}}{(1+|y|)^{4+ \gamma}}|u(y)|\,\mathrm{d}y\] \[\leqslant C_{b}\left\|(1+|\cdot|)^{-(2+\gamma)}\right\|_{L^{p^{ \prime}}}\|u\|_{L^{p}}<+\infty\]
and
\[\int_{\mathbb{R}^{2}}|y|^{2p}|\mu(y)|^{p}\,\mathrm{d}y \leqslant C_{b}\int_{\mathbb{R}^{2}}|y|^{2p}(1+|y|)^{-p(4+\gamma)} |u(y)|^{p}\,\mathrm{d}y\] \[\leqslant C_{b}\int_{\mathbb{R}^{2}}|u(y)|^{p}\,\mathrm{d}y<+\infty.\]
If we apply Lemma 2.6 on each term of (2.6) we only need to show that \(\int\mu=0\) to obtain (2.3). We define
\[b_{\infty}:=\lim_{|x|\to+\infty}b(x).\]
Remark that the existence of this limit is guaranteed by Assumption 1.5. Let us prove by induction that for any integer \(n\),
\[\sum_{k=0}^{n}\frac{\ln^{k}(b_{\infty})}{k!}\int_{\mathbb{R}^{2}}\mu=\frac{1} {n!}\int_{\mathbb{R}^{2}}\ln^{n}(b)\mu. \tag{2.7}\]
If \(n=0\) then this equality reduces to \({\int\mu=\int\mu}\). Now let us assume that it holds for some \(n\geqslant 0\). Using Equation (2.5), we get
\[\ln^{n}(b)\mu=-\frac{1}{n+1}\nabla\ln^{n+1}(b)\cdot u.\]
Inserting Equation (2.6), we get
\[\ln^{n}(b)\mu=\frac{1}{n+1}\nabla\ln^{n+1}(b)\cdot(\nabla g*\mu+\nabla^{\perp} g*\omega).\]
Integrating over a ball of center \(0\) and radius \(R\) and integrating by parts we get
\[\begin{split}\int_{B(0,R)}\ln^{n}(b)\mu=&\frac{1}{n +1}\bigg{(}\int_{\partial B(0,R)}\ln^{n+1}(b)(\nabla g*\mu+\nabla^{\perp}g* \omega)\cdot\,\mathrm{d}\vec{S}\\ &-\int_{B(0,R)}\ln^{n+1}(b)\operatorname{div}(\nabla g*\mu+ \nabla^{\perp}g*\omega)\bigg{)}\\ =&\frac{1}{n+1}\bigg{(}\int_{\partial B(0,R)}\ln^{n +1}(b)(\nabla g*\mu+\nabla^{\perp}g*\omega)\cdot\,\mathrm{d}\vec{S}\\ &+\int_{B(0,R)}\ln^{n+1}(b)\mu\bigg{)}\end{split} \tag{2.8}\]
where \(\,\mathrm{d}\vec{S}(x)=2\pi x\,\mathrm{d}\sigma(x)\) and \(\sigma\) is the uniform probability measure on \(\partial B(0,R)\). Using Lemma 2.6, we get that for \(x\in\partial B(0,R)\),
\[(\nabla g*\mu +\nabla^{\perp}g*\omega)(x)\cdot x\] \[=-\frac{1}{2\pi}\left(\left(\int_{\mathbb{R}^{2}}\mu\right)\frac {x}{|x|^{2}}+\left(\int_{\mathbb{R}^{2}}\omega\right)\frac{x^{\perp}}{|x|^{2} }+\mathcal{O}(R^{-2})\right)\cdot x\] \[=-\frac{1}{2\pi}\left(\int_{\mathbb{R}^{2}}\mu\right)+\mathcal{O }(R^{-1}).\]
Thus we get that
\[\frac{1}{n+1}\int_{\partial B(0,R)}\ln^{n+1}(b)(\nabla g*\mu+\nabla^{\perp}g* \omega)\cdot\,\mathrm{d}\vec{S}\underset{R\to+\infty}{\longrightarrow}-\frac{ \ln^{n+1}(b_{\infty})}{n+1}\int_{\mathbb{R}^{2}}\mu.\]
Combining the above equality with Equations (2.7) and (2.8) we get that
\[\sum_{k=0}^{n+1}\frac{\ln^{k}(b_{\infty})}{k!}\int_{\mathbb{R}^{2}}\mu=\frac{1 }{(n+1)!}\int_{\mathbb{R}^{2}}\ln^{n+1}(b)\mu\]
which ends the proof of Equality (2.7). Now if \(n\) goes to infinity, this gives
\[e^{\ln(b_{\infty})}\int_{\mathbb{R}^{2}}\mu=0\]
and thus
\[\int_{\mathbb{R}^{2}}\mu=0.\]
Now by Lemma 2.3 and Morrey's inequality (see for example [13, Theorem 9.12]), for any \(2<p\leqslant p_{0}\),
\[|\psi(x)| \leqslant|\psi(x)-\psi(0)|+|\psi(0)|\] \[\leqslant C_{p}\left\|\nabla\psi\right\|_{L^{p}}|x|^{1-\frac{2}{p }}+|\psi(0)|.\]
Taking \(\delta=1-\dfrac{2}{p}\) we obtain (2.4).
### Construction of the Green kernel
The main result of this subsection is a Biot-Savart type law for the lake equations, given by Proposition 2.8. Let us begin by giving the definition and some estimates on the function \(S_{b}\) that appears in the definition of the kernel \(g_{b}\) (see Equation (2.10)):
**Lemma 2.7**.: _For \(y\in\mathbb{R}^{2}\), let \(S_{b}(\cdot,y)\) be a solution of_
\[-\operatorname{div}\left(\frac{1}{b}\nabla S_{b}(\cdot,y)\right)=-g(\cdot-y) \sqrt{b(y)}\Delta\left(\frac{1}{\sqrt{b}}\right) \tag{2.9}\]
_given by Proposition 2.2 applied to \(\omega=-g(\cdot-y)\sqrt{b(y)}\Delta\left(\frac{1}{\sqrt{b}}\right)\) and \(\psi=S_{b}(\cdot,y)\). Then:_
1. _For any_ \(y\in\mathbb{R}^{2}\) _and_ \(2<p\leqslant+\infty\)_,_ \(\nabla_{x}S_{b}(\cdot,y)\in L^{p}\) _and_ \[\left\|\nabla_{x}S_{b}(\cdot,y)\right\|_{L^{p}}\leqslant C_{b,p}(1+|y|).\]
2. _There exists_ \(s_{0}\in(0,1)\) _such that for all_ \(0<s<s_{0}\)_,_ \[|\nabla_{x}S_{b}(x,\cdot)|_{\mathcal{C}^{0,s}(B(y,1))} \leqslant C_{b,s}(1+|y|)\] \[|\nabla_{x}S_{b}(\cdot,y)|_{\mathcal{C}^{0,s}(\mathbb{R}^{2})} \leqslant C_{b,s}(1+|y|).\]
Proof.: For any \(p\) such that \(1\leqslant p<+\infty\), we have
\[\left\|\sqrt{b(y)}<\cdot>\Delta\left(\frac{1}{\sqrt{b}}\right)g(\cdot-y) \right\|_{L^{p}}\leqslant\left\|b\right\|_{L^{\infty}}^{\frac{1}{2}}\left\|g( \cdot-y)<\cdot>\Delta\left(\frac{1}{\sqrt{b}}\right)\right\|_{L^{p}}\]
and
\[\left\|<\cdot>g(\cdot-y)\Delta\left(\frac{1}{\sqrt{b}}\right) \right\|_{L^{p}}^{p}\] \[\leqslant \int_{B(y,1)}<\!x\!>^{p}\left|g(x-y)\right|^{p}\left|\Delta \left(\frac{1}{\sqrt{b}}\right)(x)\right|^{p}\,\mathrm{d}x\] \[+\int_{B(y,1)^{c}}<\!x\!>^{p}\left|g(x-y)\right|^{p}\left|\Delta \left(\frac{1}{\sqrt{b}}\right)(x)\right|^{p}\,\mathrm{d}x\] \[\leqslant C\left\|g\right\|_{L^{p}(B(0,1))}^{p}\left\|<\!\cdot>\Delta \left(\frac{1}{\sqrt{b}}\right)\right\|_{L^{\infty}}^{p}\]
\[+\int_{B(y,1)^{c}}(1+|x|^{2})^{\frac{p}{2}}(|x|+|y|)^{p}\left|\Delta \left(\frac{1}{\sqrt{b}}\right)(x)\right|^{p}\,\mathrm{d}x.\]
By Assumption 1.5, we have that
\[\int_{B(y,1)^{c}}(1+|x|^{2})^{\frac{p}{2}}(|x|+|y|)^{p} \left|\Delta\left(\frac{1}{\sqrt{b}}\right)(x)\right|^{p}\, \mathrm{d}x\] \[\leqslant\int_{\mathbb{R}^{2}}\frac{(1+|x|^{2})^{\frac{p}{2}}(|x| +|y|)^{p}}{(1+|x|)^{(4+\gamma)p}}\,\mathrm{d}x\] \[\leqslant C_{b}(1+|y|)^{p}.\]
Therefore we can apply Proposition 2.2 to show that there exists a solution \(S_{b}(\cdot,y)\) of (2.9) in \(W^{2,p}_{-1}(\mathbb{R}^{2})\), unique up to a constant. Since \(<\!x\!>\geqslant 1\) we also have that
\[\left\|\sqrt{b(y)}g(\cdot-y)\Delta\left(\frac{1}{\sqrt{b}}\right)\right\|_{L^ {p}}\leqslant C_{b,p}(1+|y|).\]
By Lemma 2.3, there exists \(p_{0}\) such that for any \(2<p\leqslant p_{0}\) and \(0<s<1\):
\[\left\|\nabla_{x}S_{b}(\cdot,y)\right\|_{L^{p}} \leqslant\left\|\sqrt{b(y)}\Delta\left(\frac{1}{\sqrt{b}}\right) g(\cdot-y)\right\|_{L^{\frac{2p}{p+2}}}\] \[\leqslant C_{b,p}(1+|y|)\]
and
\[\left|\nabla_{x}S_{b}(\cdot,y)\right|_{\mathcal{C}^{0,s}} \leqslant C_{s}\left\|\sqrt{b(y)}\Delta\left(\frac{1}{\sqrt{b}} \right)g(\cdot-y)\right\|_{L^{\frac{2}{1-s}}}\] \[\leqslant C_{b,s}(1+|y|)\]
that is the second inequality of Claim (2). Using that
\[\left\|\cdot\right\|_{L^{\infty}}\leqslant C(\left\|\cdot\right\|_{L^{p}}+| \cdot|_{\mathcal{C}^{0,s}})\]
(see for example the proof of Morrey's embedding theorem in [13, Theorem 9.12]), we get the bound we want on \(\nabla_{x}S_{b}\):
\[\left\|\nabla_{x}S_{b}(\cdot,y)\right\|_{L^{\infty}}\leqslant C_{b}(1+|y|).\]
If we interpolate the inequalities on \(\left\|\nabla_{x}S_{b}(\cdot,y)\right\|_{L^{\infty}}\) and \(\left\|\nabla_{x}S_{b}(\cdot,y)\right\|_{L^{p}}\) for \(2<p\leqslant p_{0}\) we find that for any \(p>2\),
\[\left\|\nabla_{x}S_{b}(\cdot,y)\right\|_{L^{p}}\leqslant C_{b,p}(1+|y|).\]
For the first inequality of Claim (2), let us consider \(z\) such that \(|z|\) is small and remark that \(S_{b}(x,y+z)-S_{b}(x,y)\) solves
\[\mathrm{div} \left(\frac{1}{b}(\nabla_{x}S_{b}(\cdot,y+z)-\nabla_{x}S_{b}( \cdot,y))\right)\] \[=\left(\sqrt{b(y+z)}g(y+z-\cdot)-\sqrt{b(y)}g(y-\cdot)\right) \Delta\left(\frac{1}{\sqrt{b}}\right).\]
Let us find a bound for the second member in \(L^{p}\):
\[\left(\sqrt{b(y+z)}g(y+z-x)-\sqrt{b(y)}g(y-x)\right)\Delta\left(\frac{1}{\sqrt{b}}\right)(x)=(\sqrt{b(y+z)}-\sqrt{b(y)})g(y-x)\Delta\left(\frac{1}{\sqrt{b}}\right)(x)+\sqrt{b(y+z)}\left(g(y+z-x)-g(y-x)\right)\Delta\left(\frac{1}{\sqrt{b}}\right)(x).\]

Since \(\sqrt{b}\) is Lipschitz, the first term is bounded in \(L^{p}\) by \(C_{b}(1+|y|)|z|\), arguing as above. For the second term, fix \(0<\alpha<1\); we first compare the integrals of \(g(\cdot+z)^{p}\) and \(g^{p}\) over \(B(0,|z|^{\alpha})\). Since \(g\geqslant-\frac{1}{2\pi}\ln(|z|^{\alpha})>0\) on \(B(0,|z|^{\alpha})\) for \(|z|\) small,
\[\int_{B(0,|z|^{\alpha})}g(x)^{p}(\mathbf{1}_{B(z,|z|^{\alpha})}(x)-1)\, \mathrm{d}x\] \[\leqslant-\frac{1}{2\pi}\ln^{p}(|z|^{\alpha})\int_{B(0,|z|^{\alpha} )}(\mathbf{1}_{B(z,|z|^{\alpha})}(x)-1)\,\mathrm{d}x\] \[\leqslant-\frac{1}{2\pi}\ln^{p}(|z|^{\alpha})(|B(0,|z|^{\alpha}) \cap B(z,|z|^{\alpha})|-|B(0,|z|^{\alpha})|)\]
and on \(B(0,|z|^{\alpha})^{c}\), \(g(x)\leqslant-\frac{1}{2\pi}\ln(|z|^{\alpha})\) so
\[\int_{B(0,|z|^{\alpha})^{c}\cap B(z,|z|^{\alpha})}g(x)^{p}\, \mathrm{d}x\leqslant-\frac{1}{2\pi}\ln^{p}(|z|^{\alpha})|B(0,|z|^{\alpha})^{c} \cap B(z,|z|^{\alpha})|.\]
We get
\[\int_{B(0,|z|^{\alpha})}g(x+z)^{p}-\int_{B(0,|z|^{\alpha})}g(x)^{p}\,\mathrm{ d}x\leqslant 0\]
and therefore
\[\int_{B(0,|z|^{\alpha})}|g(x+z)-g(x)|^{p}\,\mathrm{d}x \leqslant 2\int_{B(0,|z|^{\alpha})}g(x)^{p}\,\mathrm{d}x\] \[\leqslant C|z|^{2\alpha}\int_{B(0,1)}g(|z|^{\alpha}y)^{p}\,\mathrm{d}y\] \[\leqslant C|z|^{2\alpha}\int_{B(0,1)}(\alpha g(z)+g(y))^{p}\, \mathrm{d}y\] \[\leqslant C_{b}|z|^{2\alpha}g(z)^{p}.\]
Now if \(|z|\) is small enough,
\[C_{b}\int_{B(0,|z|^{\alpha})^{c}}|g(x+z)-g(x)|^{p}\left|\Delta\left(\frac{1}{\sqrt{b}}\right)(y-x)\right|^{p}\,\mathrm{d}x\leqslant C_{b}\left(|z|\frac{C}{|z|^{\alpha}}\right)^{p}\int_{\mathbb{R}^{2}}\left|\Delta\left(\frac{1}{\sqrt{b}}\right)(y-x)\right|^{p}\,\mathrm{d}x\leqslant C_{b}|z|^{p(1-\alpha)}\]
by Assumption 1.5. Finally, using Lemma 2.3 as for the first claim, we get that for any \(0<\alpha<1\) and some \(p>2\),
\[|\nabla_{x}S_{b}(x,y+z)-\nabla_{x}S_{b}(x,y)|\leqslant C_{b}(1+|y|)|z|+C_{b}( |z|^{\frac{2\alpha}{p}}g(z)+|z|^{1-\alpha}).\]
Dividing both sides by \(|z|^{s}\) for \(s\) small enough proves the first inequality of Claim (2).
With this lemma we are now able to construct the lake kernel. The construction is similar to the one established in [19, Proposition 3.1] for bounded domains.
**Proposition 2.8**.: _There exists a symmetric solution \(S_{b}\) of Equation (2.9) such that \(S_{b}(0,0)=0\). We define \(g_{b}\) as_
\[g_{b}(x,y):=\sqrt{b(x)b(y)}g(x-y)+S_{b}(x,y). \tag{2.10}\]
_Let \(\omega\in L^{\infty}\) with compact support. We define_
\[G_{b}[\omega](x)=\int_{\mathbb{R}^{2}}g_{b}(x,y)\,\mathrm{d}\omega(y). \tag{2.11}\]
_Then \(G_{b}[\omega]\) is a distributional solution of (2.2)._
_Moreover for \(2<p<+\infty\), \(G_{b}[\omega]\) is the unique solution (up to a constant) of (2.2) in \(W^{2,p}_{-1}(\mathbb{R}^{2})\) given by Proposition 2.2._
Proof of Proposition 2.8.: Let us first define
\[g_{b}(x,y):=\sqrt{b(x)b(y)}g(x-y)+S_{b}(x,y)\]
where \(S_{b}\) is a solution of Equation (2.9) given by Proposition 2.2 (not necessarily symmetric). Then we have the following result:
_Claim 2.9_.: If \(\varphi\) is smooth with compact support, then
\[-\int_{\mathbb{R}^{2}}g_{b}(x,y)\,\mathrm{div}\left(\frac{1}{b}\nabla\varphi \right)(x)\,\mathrm{d}x=\varphi(y).\]
Proof of the Claim.: We have
\[-\int_{\mathbb{R}^{2}}g_{b}(x,y)\operatorname{div}\left(\frac{1}{b} \nabla\varphi\right)(x)\operatorname{d}\!x\] \[\quad=-\int_{\mathbb{R}^{2}}\sqrt{b(x)b(y)}g(x-y)\operatorname{div }\left(\frac{1}{b}\nabla\varphi\right)(x)\operatorname{d}\!x\] \[\quad\quad-\int_{\mathbb{R}^{2}}S_{b}(x,y)\operatorname{div} \left(\frac{1}{b}\nabla\varphi\right)(x)\operatorname{d}\!x\] \[=: T_{1}+T_{2}.\]
We have
\[T_{1}= -\sqrt{b(y)}\int_{\mathbb{R}^{2}}\sqrt{b(x)}g(x-y)\operatorname{ div}\left(\frac{1}{b}\nabla\varphi\right)(x)\operatorname{d}\!x\] \[= \sqrt{b(y)}\int_{\mathbb{R}^{2}}g(x-y)\frac{1}{2b(x)\sqrt{b(x)}} \nabla b(x)\cdot\nabla\varphi(x)\operatorname{d}\!x\] \[+\sqrt{b(y)}\int_{\mathbb{R}^{2}}\frac{1}{\sqrt{b(x)}}\nabla g(x -y)\cdot\nabla\varphi(x)\operatorname{d}\!x\] \[=: L_{1}+L_{2}.\]
Integrating by parts in the first integral we get
\[L_{1}= -\sqrt{b(y)}\int_{\mathbb{R}^{2}}\varphi(x)\frac{1}{2b(x)\sqrt{b( x)}}\nabla g(x-y)\cdot\nabla b(x)\operatorname{d}\!x\] \[-\sqrt{b(y)}\int_{\mathbb{R}^{2}}\varphi(x)g(x-y)\operatorname{ div}\left(\frac{1}{2b\sqrt{b}}\nabla b\right)(x)\operatorname{d}\!x.\]
For \(L_{2}\), we use
\[\nabla\left(\frac{1}{\sqrt{b}}\varphi\right)=\frac{1}{\sqrt{b}}\nabla\varphi- \varphi\frac{1}{2b\sqrt{b}}\nabla b\]
to get
\[L_{2}= \sqrt{b(y)}\int_{\mathbb{R}^{2}}\varphi(x)\frac{1}{2b(x)\sqrt{b(x )}}\nabla b(x)\cdot\nabla g(x-y)\operatorname{d}\!x\] \[+\sqrt{b(y)}\int_{\mathbb{R}^{2}}\nabla\left(\frac{1}{\sqrt{b(x )}}\varphi(x)\right)\cdot\nabla g(x-y)\operatorname{d}\!x\] \[= \sqrt{b(y)}\int_{\mathbb{R}^{2}}\varphi(x)\frac{1}{2b(x)\sqrt{b( x)}}\nabla b(x)\cdot\nabla g(x-y)\operatorname{d}\!x\] \[+\varphi(y)\]
since \(-\Delta_{x}g(x-y)=\delta_{y}\) distributionally. Now let us compute \(T_{2}\):
\[T_{2} =-\int_{\mathbb{R}^{2}}S_{b}(x,y)\operatorname{div}\left(\frac{1 }{b}\nabla\varphi\right)(x)\operatorname{d}\!x\] \[=-\int_{\mathbb{R}^{2}}\operatorname{div}\left(\frac{1}{b}\nabla _{x}S_{b}(\cdot,y)\right)(x)\varphi(x)\operatorname{d}\!x\] \[=-\sqrt{b(y)}\int_{\mathbb{R}^{2}}g(x-y)\Delta\left(\frac{1}{ \sqrt{b}}\right)(x)\varphi(x)\operatorname{d}\!x\]
where we used that \(S_{b}\) is a solution of (2.9) in the last line. Now just remark that
\[\Delta\left(\frac{1}{\sqrt{b}}\right)=-\operatorname{div}\left(\frac{1}{2b\sqrt{b }}\nabla b\right)\]
and thus, adding \(L_{1}\), \(L_{2}\) and \(T_{2}\), the remaining terms cancel and we get
\[-\int_{\mathbb{R}^{2}}g_{b}(x,y)\operatorname{div}\left(\frac{1}{b}\nabla \varphi\right)(x)\,\mathrm{d}x=\varphi(y)\]
and we get the proof of Claim 2.9.
Now let \(\omega\in L^{\infty}(\mathbb{R}^{2})\) with compact support. We have
\[-\int_{\mathbb{R}^{2}} \left(\int_{\mathbb{R}^{2}}g_{b}(x,y)\omega(y)\,\mathrm{d}y\right) \operatorname{div}\left(\frac{1}{b}\nabla\varphi\right)(x)\,\mathrm{d}x\] \[=-\int_{\mathbb{R}^{2}}\left(\int_{\mathbb{R}^{2}}g_{b}(x,y) \operatorname{div}\left(\frac{1}{b}\nabla\varphi\right)(x)\,\mathrm{d}x \right)\omega(y)\,\mathrm{d}y\] \[=\int_{\mathbb{R}^{2}}\varphi(y)\omega(y)\,\mathrm{d}y\]
where we used Claim 2.9 in the last equality. Therefore \(G_{b}[\omega]\) is a distributional solution of (2.2).
Now we prove that with this kernel we recover solutions in the sense of Proposition 2.2:
_Claim 2.10_.: Let \(\omega\in L^{\infty}\) with compact support, then for all \(p\in(2,+\infty)\), we have that \(\nabla G_{b}[\omega]\in L^{p}\). Moreover, if \(\psi\) is the solution of (2.2) given by Proposition 2.2, then \(\psi=G_{b}[\omega]\) up to a constant.
Proof of the claim.: We have:
\[\nabla G_{b}[\omega](x)= \int_{\mathbb{R}^{2}}\frac{\nabla b(x)}{2\sqrt{b(x)}}\sqrt{b(y)}g (x-y)\omega(y)\,\mathrm{d}y\] \[+\int_{\mathbb{R}^{2}}\sqrt{b(x)b(y)}\nabla g(x-y)\omega(y)\, \mathrm{d}y\] \[+\int_{\mathbb{R}^{2}}\nabla_{x}S_{b}(x,y)\omega(y)\,\mathrm{d}y\] \[=: T_{1}+T_{2}+T_{3}.\]
Now,
\[|T_{1}|\leqslant C_{b}|\nabla b(x)|\bigg{(}\int_{B(x,1)}|(\ln|x-y|)\omega(y)|\, \mathrm{d}y\] \[+\int_{\operatorname{supp}(\omega)\setminus B(x,1)}(|x|+|y|)| \omega(y)|\,\mathrm{d}y\bigg{)}\] \[\leqslant C_{b}\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{ \infty}}(1+|x|)^{-(3+\gamma)}\]
by Assumption 1.5. Hence \(T_{1}\in L^{p}\). For the second term, we have
\[T_{2}=\sqrt{b(x)}\nabla g*(\sqrt{b}\omega)\]
and therefore \(T_{2}\in L^{p}\) by the Hardy-Littlewood-Sobolev inequality (see for example [2, Theorem 1.7]). For the third term,
\[|T_{3}|\leqslant\left(\int_{\mathbb{R}^{2}}|\omega|\right)\int_{\mathbb{R}^{2}}| \nabla_{x}S_{b}(x,y)|\frac{|\omega(y)|\,\mathrm{d}y}{\int|\omega|}\]
and thus by Jensen's inequality
\[\|T_{3}\|_{L^{p}}^{p}\leqslant\left(\int_{\mathbb{R}^{2}}|\omega|\right)^{p-1 }\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}|\nabla_{x}S_{b}(x,y)|^{p}|\omega(y )|\,\mathrm{d}y\,\mathrm{d}x.\]
We have that
\[\|\nabla S_{b}(\cdot,y)\|_{L^{p}}\leqslant C_{b}(1+|y|)\]
by Claim (1) of Lemma 2.7. Therefore
\[\|T_{3}\|_{L^{p}}^{p}\leqslant C_{b}\left(\int_{\mathbb{R}^{2}}|\omega|\right) ^{p-1}\int_{\mathbb{R}^{2}}(1+|y|)^{p}|\omega(y)|\,\mathrm{d}y\]
and it follows that \(\nabla G_{b}[\omega]\in L^{p}\). By Proposition 2.2 we get that \(G_{b}[\omega]=\psi\) up to a constant.
It remains to justify that there exists a symmetric solution of (2.9). Consider two smooth functions \(\omega_{1},\omega_{2}\) with average zero; then by Claim 2.10, we have
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega_{1}(x )\omega_{2}(y)\,\mathrm{d}x\,\mathrm{d}y =\int_{\mathbb{R}^{2}}(\psi_{2}(x)+C)\omega_{1}(x)\,\mathrm{d}x\] \[=-\int_{\mathbb{R}^{2}}\psi_{2}(x)\,\mathrm{div}\left(\frac{1}{b} \nabla\psi_{1}\right)(x)\,\mathrm{d}x\]
where \(\psi_{i}\) is the solution of
\[-\,\mathrm{div}\left(\frac{1}{b}\nabla\psi_{i}\right)=\omega_{i}\]
given by Proposition 2.2. If \(R>0\), we have that
\[-\int_{B(0,R)}\psi_{2}(x)\,\mathrm{div}\left(\frac{1}{b}\nabla \psi_{1}\right)(x)\,\mathrm{d}x= -\int_{\partial B(0,R)}\frac{1}{b}\psi_{2}\nabla\psi_{1}\cdot \,\mathrm{d}\vec{S}\] \[+\int_{B(0,R)}\frac{1}{b}\nabla\psi_{2}\cdot\nabla\psi_{1}.\]
Using Proposition 2.5, we obtain
\[\left|\int_{\partial B(0,R)}\frac{1}{b}\psi_{2}\nabla\psi_{1}\cdot\,\mathrm{d }\vec{S}\right|\leqslant 2\pi R\left\|b^{-1}\right\|_{L^{\infty}}C(1+R^{\delta}) \frac{C}{R^{2}}\underset{R\to+\infty}{\longrightarrow}0\]
and therefore
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega_{1}(x)\omega_{2}(y )\,\mathrm{d}x\,\mathrm{d}y=\int_{\mathbb{R}^{2}}\frac{1}{b}\nabla\psi_{2} \cdot\nabla\psi_{1}\]
which is a symmetric expression of \(\psi_{1}\) and \(\psi_{2}\). It follows that
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega_{1}(x)\omega_{2}(y )\,\mathrm{d}x\,\mathrm{d}y=\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(y, x)\omega_{1}(x)\omega_{2}(y)\,\mathrm{d}x\,\mathrm{d}y.\]
Since \(\sqrt{b(x)b(y)}g(x-y)\) is symmetric we get that
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}S_{b}(x,y)\omega_{1}(x)\omega_{2}(y) \,\mathrm{d}x\,\mathrm{d}y=\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}S_{b}(y,x) \omega_{1}(x)\omega_{2}(y)\,\mathrm{d}x\,\mathrm{d}y\]
for any \(\omega_{1},\omega_{2}\) smooth with compact support and average zero. Let us define
\[A(x,y):=S_{b}(x,y)-S_{b}(y,x).\]
Now we fix \(\chi,\omega_{1},\omega_{2}\) smooth functions with compact support such that \(\int_{\mathbb{R}^{2}}\omega_{2}=0\) and \(\int_{\mathbb{R}^{2}}\chi=1\). Remark that we no longer assume that \(\int_{\mathbb{R}^{2}}\omega_{1}=0\). We define
\[A_{2}(x):=\int_{\mathbb{R}^{2}}A(x,y)\omega_{2}(y)\,\mathrm{d}y.\]
We have
\[\int_{\mathbb{R}^{2}}A_{2}\omega_{1} =\int_{\mathbb{R}^{2}}A_{2}\left(\omega_{1}-\left(\int_{\mathbb{ R}^{2}}\omega_{1}\right)\chi\right)+\left(\int_{\mathbb{R}^{2}}\omega_{1} \right)\int_{\mathbb{R}^{2}}A_{2}\chi\] \[=0+\left(\int_{\mathbb{R}^{2}}\omega_{1}\right)\int_{\mathbb{R}^ {2}}A_{2}\chi.\]
Thus \(A_{2}\) is constant so for every \(x\in\mathbb{R}^{2}\),
\[\int_{\mathbb{R}^{2}}\nabla_{x}A(x,y)\omega_{2}(y)\,\mathrm{d}y=0\]
for all \(\omega_{2}\) with mean zero, and therefore \(\nabla_{x}A(x,y)=U(x)\) for some vector field \(U\) independent of \(y\). It follows that \(A(x,y)=c(x)+d(y)\). Since \(A(x,y)=-A(y,x)\), we have \(d=-c\). Now let us set \(\widetilde{S}_{b}(x,y):=S_{b}(x,y)+c(y)\). We have:
\[\widetilde{S}_{b}(x,y)-\widetilde{S}_{b}(y,x) =S_{b}(x,y)-S_{b}(y,x)+c(y)-c(x)\] \[=c(x)-c(y)+c(y)-c(x)\] \[=0\]
which proves that \(\widetilde{S}_{b}\) is a symmetric solution of (2.9). Up to adding a constant we can also assume that \(\widetilde{S}_{b}(0,0)=0\).
The symmetry of \(S_{b}\) allows us to obtain more regularity estimates:
**Lemma 2.11**.: _Let \(S_{b}\) be the symmetric solution of Equation (2.9) given by Proposition 2.8, then_
1. \(S_{b}\) _is smooth on_ \(\mathbb{R}^{2}\times\mathbb{R}^{2}\backslash\{(x,x)\ ;\ x\in\mathbb{R}^{2}\}\)_._
2. \(|S_{b}(x,y)|\leqslant C_{b}(1+|x|^{2}+|y|^{2})\)_._
Proof.: For \(0<r<R\), we define \(C(y,r,R):=B(y,R)\backslash B(y,r)\). We have that \(S_{b}(\cdot,y)\) is a solution of
\[\left\{\begin{aligned} &\operatorname{div}\left(\frac{1}{b}\nabla S_{b}( \cdot,y)\right)=g(\cdot-y)\sqrt{b(y)}\Delta\left(\frac{1}{\sqrt{b}}\right) \qquad\text{in}\ C(y,r,R)\\ & S_{b}(\cdot,y)\ \text{equal to its own}\ \mathcal{C}^{0,s}\ \text{trace}\qquad\text{on}\ \partial C(y,r,R).\end{aligned}\right.\]
Thus by elliptic regularity (see for example [26, Theorem 6.13]) we obtain that \(S_{b}(\cdot,y)\in\mathcal{C}^{2,s}(\mathring{C}(y,r,R))\) for all \(y\in\mathbb{R}^{2}\) and \(0<r<R\). By symmetry we get that \(S_{b}\) is \(\mathcal{C}^{2,s}\) on
\[\mathbb{R}^{2}\times\mathbb{R}^{2}\backslash\{(x,x)\ ;\ x\in\mathbb{R}^{2}\}.\]
We can iterate the argument by writing the elliptic system satisfied by the derivatives of \(S_{b}\) to show that \(S_{b}\) is smooth on
\[\mathbb{R}^{2}\times\mathbb{R}^{2}\backslash\{(x,x)\;;\;x\in\mathbb{R}^{2}\}.\]
The second claim is just a consequence of Lemma 2.7, since
\[|S_{b}(x,y)| \leqslant|S_{b}(0,0)-S_{b}(x,0)|+|S_{b}(x,0)-S_{b}(x,y)|\] \[\leqslant|S_{b}(0,0)-S_{b}(x,0)|+|S_{b}(0,x)-S_{b}(y,x)|\] \[\leqslant\|\nabla_{x}S_{b}(\cdot,0)\|_{L^{\infty}}\,|x|+\|\nabla_{ x}S_{b}(\cdot,x)\|_{L^{\infty}}\,|y|\] \[\leqslant C_{b}|x|+C_{b}(1+|x|)|y|\] \[\leqslant C_{b}(1+|x|^{2}+|y|^{2}).\]
We finish this subsection by giving a straightforward consequence of Proposition 2.5 and [23, Lemma 2.7], which will be useful to deal with the regularisation of the Dirac mass that we introduce in Subsection 2.4 and use in Sections 5 and 6.
**Lemma 2.12**.: \(\mu\mapsto\nabla G_{b}[\mu]\) _extends into a bounded operator from \(\dot{H}^{-1}\) to \(L^{2}\)._
Proof.: Let \(\mu\) be a smooth function with compact support and average zero. By Proposition 2.5, \(\nabla G_{b}[\mu]\in L^{2}\) and therefore it follows by [23, Lemma 2.7] that
\[\|\nabla G_{b}[\mu]\|_{L^{2}}\leqslant C_{b}\,\|\mu\|_{\dot{H}^{-1}}\]
and the lemma follows from the density of smooth functions with compact support and average zero in \(\dot{H}^{-1}\).
### Regularisations of the Coulomb kernel and the Dirac mass
To study our modulated energy we will need suitable regularisations of \(g\) and of the Dirac mass \(\delta_{y}\). For that purpose, let us first define \(g^{(\eta)}\) for any \(0<\eta<1\) as
\[g^{(\eta)}(x):=\begin{cases}-\dfrac{1}{2\pi}\ln(\eta)&\text{if }|x|\leqslant \eta\\ g(x)&\text{if }|x|\geqslant\eta\end{cases} \tag{2.12}\]
and we define \(\delta_{y}^{(\eta)}\) as the uniform probability measure on the circle \(\partial B(y,\eta)\). We also define
\[\widetilde{\delta}_{y}^{(\eta)}:=m_{b}(y,\eta)\frac{\mathrm{d}\delta_{y}^{( \eta)}}{\sqrt{b}} \tag{2.13}\]
where
\[m_{b}(y,\eta):=\left(\int\frac{\mathrm{d}\delta_{y}^{(\eta)}}{\sqrt{b}}\right) ^{-1}. \tag{2.14}\]
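(Note that the two branches of \(g^{(\eta)}\) agree at \(|x|=\eta\), so \(g^{(\eta)}=\min(g,g(\eta))\) is continuous: it is \(g\) truncated at the value \(g(\eta)\). Moreover, for a constant depth \(b\equiv b_{0}\), one gets \(m_{b}(y,\eta)=\sqrt{b_{0}}\) and \(\widetilde{\delta}_{y}^{(\eta)}=\delta_{y}^{(\eta)}\), so the construction reduces to the usual smeared Dirac mass.)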
In the following proposition we state several properties related to these regularisations.
**Proposition 2.13**.: _For any \(0<\eta<1\) and \(y\in\mathbb{R}^{2}\), we have_
\[\int g(x-z)\,\mathrm{d}\delta^{(\eta)}_{y}(z)=g^{(\eta)}(x-y) \tag{2.15}\]
_and_
\[|m_{b}(y,\eta)-\sqrt{b(y)}|\leqslant C_{b}\eta. \tag{2.16}\]
Proof.: By a change of variable we may assume that \(y=0\). The function
\[f(x):=\int_{\partial B(0,\eta)}g(x-z)\,\mathrm{d}\delta^{(\eta)}_{0}(z)\]
is locally bounded and satisfies \(\Delta f=-\delta^{(\eta)}_{0}=\Delta g^{(\eta)}\). Now if \(|x|\geqslant\eta\), we have
\[\int_{\partial B(0,\eta)}g(x-z)\,\mathrm{d}\delta^{(\eta)}_{0}(z) -g^{(\eta)}(x) =\int_{\partial B(0,\eta)}\left(g(x-z)-g(x)\right)\mathrm{d} \delta^{(\eta)}_{0}(z)\] \[=\int_{\partial B(0,\eta)}g\bigg{(}\frac{x}{|x|}-\frac{z}{|x|} \bigg{)}\,\mathrm{d}\delta^{(\eta)}_{0}(z)\] \[\underset{|x|\to\infty}{\longrightarrow}\int_{\partial B(0,\eta) }-\frac{1}{2\pi}\ln(1)=0\]
by the dominated convergence theorem. Thus \(f-g^{(\eta)}\) is a bounded harmonic function, so it is constant. Since \(f(z)=g(\eta)=g^{(\eta)}(z)\) for any \(z\) of norm \(\eta\), the constant is zero and \(f=g^{(\eta)}\).
Let us now prove (2.16):
\[m_{b}(y,\eta)-\sqrt{b(y)}=m_{b}(y,\eta)\sqrt{b(y)}\left(\frac{1}{\sqrt{b(y)}} -\int\frac{\mathrm{d}\delta^{(\eta)}_{y}(z)}{\sqrt{b(z)}}\right)\]
and thus
\[|m_{b}(y,\eta)-\sqrt{b(y)}|\leqslant C_{b}\eta\]
by Assumption 1.5.
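As a sanity check of (2.15) (our own illustration, not part of the proof), one can verify numerically that the circle average of the Coulomb kernel reproduces the truncated kernel \(g^{(\eta)}\). A minimal Python sketch, with arbitrarily chosen \(\eta\) and test points:

```python
import numpy as np

# Check (2.15): the average of g(x - z) = -ln|x - z| / (2*pi) over the uniform
# probability measure on the circle |z| = eta equals g^(eta)(x), i.e. the
# kernel g truncated at the value g(eta) = -ln(eta) / (2*pi).
eta = 0.3
thetas = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
circle = eta * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

for x in [np.array([0.1, 0.0]), np.array([0.5, 0.4]), np.array([2.0, -1.0])]:
    avg = np.mean(-np.log(np.linalg.norm(x - circle, axis=1)) / (2.0 * np.pi))
    g_eta = -np.log(max(np.linalg.norm(x), eta)) / (2.0 * np.pi)
    print(f"|x| = {np.linalg.norm(x):.3f}  circle average = {avg:.6f}  g^(eta) = {g_eta:.6f}")
```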
## 3. Point vortices
To prove Theorem 1.8 we will need to control the evolution of the interaction energy and of the moment of inertia. We recall that the moment of inertia is not conserved for the lake equations, nor for the point vortex system. Due to the self-interactions, the interaction energy \(E_{N}\) is also not conserved.
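For a constant depth \(b\equiv 1\), one has \(S_{b}=0\) and \(g_{b}(x,y)=g(x-y)\), the \(\nabla^{\perp}b\) drift disappears from (1.6), and both \(E_{N}\) and \(I_{N}\) are conserved; the growth allowed by the proposition below really comes from the variations of \(b\). The following minimal Python sketch (our own illustration, assuming the classical point-vortex reduction of (1.6) when \(b\equiv 1\)) checks this conservation numerically:

```python
import numpy as np

# For b = 1, (1.6) reduces to the classical point-vortex system with equal
# intensities 1/N, for which E_N and I_N are conserved. We integrate it with
# a midpoint scheme and report the drift of both quantities.
def velocity(q):
    d = q[:, None, :] - q[None, :, :]              # d[i, j] = q_i - q_j
    r2 = np.sum(d**2, axis=-1)
    np.fill_diagonal(r2, np.inf)                   # remove the self-interaction
    perp = np.stack([-d[..., 1], d[..., 0]], axis=-1)
    return np.sum(perp / (2.0 * np.pi * r2[..., None]), axis=1) / len(q)

def energy_inertia(q):
    r = np.linalg.norm(q[:, None, :] - q[None, :, :], axis=-1)
    np.fill_diagonal(r, 1.0)                       # ln(1) = 0 masks the diagonal
    E = -np.sum(np.log(r)) / (2.0 * np.pi * len(q) ** 2)
    I = np.mean(np.sum(q**2, axis=-1))
    return E, I

rng = np.random.default_rng(0)
q, dt = rng.normal(size=(20, 2)), 1e-3
E0, I0 = energy_inertia(q)
for _ in range(2000):                              # explicit midpoint (RK2) stepping
    q = q + dt * velocity(q + 0.5 * dt * velocity(q))
E1, I1 = energy_inertia(q)
print(f"drift of E_N: {abs(E1 - E0):.2e}  drift of I_N: {abs(I1 - I0):.2e}")
```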
The following proposition gives bounds on the interaction energy and on the moment of inertia, and establishes the global well-posedness of the lake point-vortex system (1.6).
**Proposition 3.1**.: _Let \(T>0\) and \((q^{0}_{1},...,q^{0}_{N})\) be such that \(q^{0}_{i}\neq q^{0}_{j}\) if \(i\neq j\). There exists a unique smooth solution of (1.6) on \([0,T]\). Moreover, we have the following estimates:_
\[|E_{N}(t)|\leqslant e^{C_{b}(1+\alpha_{N})t}(|E_{N}(0)|+I_{N}(0)+1) \tag{3.1}\]
\[I_{N}(t)\leqslant e^{C_{b}(1+\alpha_{N})t}(|E_{N}(0)|+I_{N}(0)+1). \tag{3.2}\]
_We also have similar estimates for the rescaled moment of inertia and for the interaction energy:_
\[|\overline{E_{N}}(t)|\leqslant e^{C_{b}(1+\alpha_{N}^{-1})t}(|\overline{E_{N}}(0 )|+\overline{I_{N}}(0)+1) \tag{3.3}\]
\[\overline{I_{N}}(t)\leqslant e^{C_{b}(1+\alpha_{N}^{-1})t}(|\overline{E_{N}}(0 )|+\overline{I_{N}}(0)+1). \tag{3.4}\]
Proof.: Since \(b\) is regular (see Assumption 1.5) and \(S_{b},g,\nabla g\) are regular outside of the diagonal (see Claim (1) of Lemma 2.11), System (1.6) is well-posed up to the first collision time by the Cauchy-Lipschitz theorem. We will first prove the bounds on \(E_{N}\) and \(I_{N}\) and then deduce that there is no collision between the points (this is the classical strategy used to prove that the Euler point-vortex system is well-posed when all the vorticities are positive, as explained for example in [47, Chapter 4.2]). Let us assume that there is no collision up to some time \(T^{*}\leqslant T\).
We first compute the time derivative of \(E_{N}\). Since \(g_{b}\) is symmetric, we have
\[\dot{E}_{N}= \frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\big{(}\dot{q}_{i}\cdot\nabla_{x}g_{b}(q_{i},q_{j})+\dot{q}_{j}\cdot\nabla_{y}g_{b}(q_{i},q_{j})\big{)}=\frac{2}{N^{2}}\sum_{i=1}^{N}\dot{q}_{i}\cdot\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\nabla_{x}g_{b}(q_{i},q_{j}).\]
Substituting
\[\dot{q}_{i}=-\alpha_{N}\frac{\nabla^{\perp}b(q_{i})}{b(q_{i})}-\frac{1}{Nb(q_{i})}\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\nabla_{x}^{\perp}g_{b}(q_{i},q_{k}),\]
the interaction part of \(\dot{q}_{i}\) is orthogonal to \(\sum_{j\neq i}\nabla_{x}g_{b}(q_{i},q_{j})\) (since \(w^{\perp}\cdot w=0\) for \(w=\sum_{j\neq i}\nabla_{x}g_{b}(q_{i},q_{j})\)), so only the \(\alpha_{N}\) term contributes:
\[\dot{E}_{N}=-\frac{2\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{b(q_{i})}\cdot\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\left(\frac{\sqrt{b(q_{j})}}{2\sqrt{b(q_{i})}}g(q_{i}-q_{j})\nabla b(q_{i})+\sqrt{b(q_{i})b(q_{j})}\nabla g(q_{i}-q_{j})+\nabla_{x}S_{b}(q_{i},q_{j})\right).\]
Since \(\nabla^{\perp}b(q_{i})\cdot\nabla b(q_{i})=0\), the first term in the bracket gives no contribution
and thus we get that
\[\dot{E}_{N}=-\frac{2\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{ i})}{b(q_{i})}\cdot\bigg{(}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\sqrt{b(q_{i})b(q_{j})}\nabla g(q_{i}-q_{j})+ \nabla_{x}S_{b}(q_{i},q_{j})\bigg{)}. \tag{3.5}\]
Let us now bound the right-hand side of the above equality. Using Claim (1) of Lemma 2.7 and Assumption 1.5, we have
\[\bigg{|}\frac{\nabla^{\perp}b(q_{i})}{b(q_{i})}\cdot\nabla_{x}S_{b}(q_{i},q_{ j})\bigg{|}\leqslant C_{b}(1+|q_{j}|)\]
and thus
\[\left|\frac{2\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{b( q_{i})}\cdot\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\nabla_{x}S_{b}(q_{i},q_{j})\right|\leqslant C_ {b}\alpha_{N}(1+I_{N}). \tag{3.6}\]
Now remark that
\[\frac{2\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{b (q_{i})}\cdot\left(\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\!\sqrt{b(q_{i})b(q_{j})}\nabla g(q_{i}-q_{j})\right)\] \[\qquad=\frac{\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray} {c}j=1\\ j\neq i\end{subarray}}^{N}\left(\sqrt{\frac{b(q_{j})}{b(q_{i})}}\nabla^{\perp}b (q_{i})-\sqrt{\frac{b(q_{i})}{b(q_{j})}}\nabla^{\perp}b(q_{j})\right)\cdot \nabla g(q_{i}-q_{j}).\]
Moreover,
\[\sqrt{\frac{b(q_{j})}{b(q_{i})}}\nabla^{\perp}b(q_{i})-\sqrt{\frac {b(q_{i})}{b(q_{j})}}\nabla^{\perp}b(q_{j})= \sqrt{\frac{b(q_{j})}{b(q_{i})}}(\nabla^{\perp}b(q_{i})-\nabla^{ \perp}b(q_{j}))\] \[+\frac{b(q_{j})-b(q_{i})}{\sqrt{b(q_{i})b(q_{j})}}\nabla^{\perp} b(q_{j})\]
and thus using the Lipschitz regularity of \(b\) and \(\nabla b\) (see Assumption 1.5) and \(|\nabla g(q_{i}-q_{j})|=C|q_{i}-q_{j}|^{-1}\) we get that
\[\left|\frac{2\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{b (q_{i})}\cdot\left[\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\!\sqrt{b(q_{i})b(q_{j})}\nabla g(q_{i}-q_{j}) \right]\right|\leqslant C_{b}\alpha_{N}. \tag{3.7}\]
Combining inequalities (3.6) and (3.7) we get that
\[|\dot{E}_{N}|\leqslant C_{b}(1+I_{N})\alpha_{N}. \tag{3.8}\]
Now we compute the time derivative of \(I_{N}\):
\[\dot{I}_{N}= \frac{2}{N}\sum_{i=1}^{N}q_{i}\cdot\dot{q}_{i}\] \[= -\frac{2\alpha_{N}}{N}\sum_{i=1}^{N}q_{i}\cdot\frac{\nabla^{\perp}b(q_{i})}{b(q_{i})}\] \[-\frac{2}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{\sqrt{b(q_{j})}}{2b(q_{i})\sqrt{b(q_{i})}}g(q_{i}-q_{j})\,q_{i}\cdot\nabla^{\perp}b(q_{i})\] \[-\frac{2}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{\sqrt{b(q_{i})b(q_{j})}}{b(q_{i})}q_{i}\cdot\nabla^{\perp}g(q_{i}-q_{j})\] \[-\frac{2}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{1}{b(q_{i})}q_{i}\cdot\nabla_{x}^{\perp}S_{b}(q_{i},q_{j})\] \[=: 2(T_{1}+T_{2}+T_{3}+T_{4}).\]
Using Assumption 1.5 we have
\[|T_{1}|\leqslant C_{b}\alpha_{N}. \tag{3.9}\]
For the second term, using Assumption 1.5 we have
\[|T_{2}| \leqslant\frac{C_{b}}{N^{2}}\sum_{i=1}^{N}\bigg{(}\sum_{\begin{subarray} {c}j=1\\ j\neq i\end{subarray}}^{N}\lvert g(q_{i}-q_{j})\rvert\bigg{)}\] \[\leqslant\frac{C_{b}}{N^{2}}\sum_{i=1}^{N}\bigg{(}\sum_{ \begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}g(q_{i}-q_{j})\mathbf{1}_{\lvert q_{i}-q_{j}\rvert \leqslant 1}+\lvert q_{i}\rvert^{2}+\lvert q_{j}\rvert^{2}\bigg{)}\] \[\leqslant C_{b}I_{N}+\frac{C_{b}}{N^{2}}\sum_{\begin{subarray}{c }1\leqslant i\neq j\leqslant N\\ \lvert q_{i}-q_{j}\rvert\leqslant 1\end{subarray}}g(q_{i}-q_{j}).\]
Now by Assumption 1.5, we have that
\[\frac{1}{N^{2}}\sum_{\begin{subarray}{c}1\leqslant i\neq j \leqslant N\\ \lvert q_{i}-q_{j}\rvert\leqslant 1\end{subarray}}g(q_{i}-q_{j})\leqslant \frac{C_{b}}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\bigg{(} \sqrt{b(q_{i})b(q_{j})}g(q_{i}-q_{j})\] \[+S_{b}(q_{i},q_{j})\bigg{)}+\frac{C_{b}}{N^{2}}\sum_{ \begin{subarray}{c}1\leqslant i\neq j\leqslant N\\ \lvert q_{i}-q_{j}\rvert\geqslant 1\end{subarray}}\lvert g(q_{i}-q_{j})\rvert\] \[+\frac{C_{b}}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\lvert S _{b}(q_{i},q_{j})\rvert\] \[\leqslant C_{b}\bigg{(}E_{N}+\frac{1}{N^{2}}\sum_{\begin{subarray}{c}1 \leqslant i\neq j\leqslant N\\ \lvert q_{i}-q_{j}\rvert\geqslant 1\end{subarray}}\lvert g(q_{i}-q_{j})\rvert\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\lvert S_{b} (q_{i},q_{j})\rvert\bigg{)}.\]
Moreover,
\[\frac{C_{b}}{N^{2}}\sum_{\begin{subarray}{c}1\leqslant i\neq j \leqslant N\\ \lvert q_{i}-q_{j}\rvert\geqslant 1\end{subarray}}\lvert g(q_{i}-q_{j})\rvert \leqslant\frac{C_{b}}{N^{2}}\sum_{\begin{subarray}{c}1 \leqslant i\neq j\leqslant N\\ \lvert q_{i}-q_{j}\rvert\geqslant 1\end{subarray}}\lvert q_{i}\rvert^{2}+\lvert q _{j}\rvert^{2}\] \[\leqslant C_{b}I_{N}\]
and using Claim (2) of Lemma 2.11,
\[\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\lvert S_{b}(q_{i},q_{j}) \rvert\leqslant\frac{C_{b}}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(1+ \lvert q_{i}\rvert^{2}+\lvert q_{j}\rvert^{2})\leqslant C_{b}(1+I_{N}).\]
Therefore
\[\lvert T_{2}\rvert\leqslant C_{b}(1+\lvert E_{N}\rvert+I_{N}). \tag{3.10}\]
For the third term we write
\[T_{3}= -\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\sqrt{\frac{b(q_{j})}{b(q_{i})}}\,q_{i}\cdot\nabla^{\perp}g(q_{i}-q_{j})\] \[= -\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{\sqrt{b(q_{j})}-\sqrt{b(q_{i})}}{\sqrt{b(q_{i})}}\,q_{i}\cdot\nabla^{\perp}g(q_{i}-q_{j})\] \[\quad-\frac{1}{2N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}(q_{i}-q_{j})\cdot\nabla^{\perp}g(q_{i}-q_{j}),\]
where the second sum comes from symmetrising \(-\frac{1}{N^{2}}\sum_{i\neq j}q_{i}\cdot\nabla^{\perp}g(q_{i}-q_{j})\) using the oddness of \(\nabla^{\perp}g\); it vanishes since \(\nabla^{\perp}g(x)\cdot x=0\),
and thus using the Lipschitz regularity of \(b\) (see Assumption 1.5) we get
\[|T_{3}|\leqslant C_{b}(1+I_{N}). \tag{3.11}\]
For the fourth term, using Claim (1) of Lemma 2.7 we get
\[|T_{4}| =\left|-\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j= 1\\ j\neq i\end{subarray}}^{N}\frac{1}{b(q_{i})}q_{i}\cdot\nabla_{x}^{\perp}S_{b}( q_{i},q_{j})\right|\] \[\leqslant C_{b}\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}|q_{i}| (1+|q_{j}|)\] \[\leqslant C_{b}(1+I_{N}). \tag{3.12}\]
Combining with inequalities (3.9), (3.10), (3.11) and (3.12) we get that
\[|\dot{I}_{N}|\leqslant C_{b}(1+\alpha_{N}+|I_{N}|+|E_{N}|). \tag{3.13}\]
Let us write \(U_{N}:=(E_{N},I_{N})\). By equations (3.8) and (3.13) we have
\[|\dot{U}_{N}|\leqslant C_{b}(1+\alpha_{N})(1+|U_{N}|)\]
therefore by Gronwall's lemma we have
\[|U_{N}(t)|\leqslant e^{C_{b}(1+\alpha_{N})t}(|U_{N}(0)|+1)-1\]
from which (3.1) and (3.2) follow.
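(In detail: setting \(h(t):=1+|U_{N}(t)|\), the differential inequality reads \(h^{\prime}\leqslant C_{b}(1+\alpha_{N})h\) almost everywhere, hence \(h(t)\leqslant e^{C_{b}(1+\alpha_{N})t}h(0)\), which is the displayed bound.)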
Let us use these bounds to prove that there is no collision (and it will follow that System (1.6) is globally well-posed). If \(i\neq j\), then
\[g(|q_{i}-q_{j}|)\leqslant C_{b}\bigg{(}E_{N}+\frac{1}{N^{2}}\sum_{1\leqslant k\neq l \leqslant N}|S_{b}(q_{k},q_{l})|\] \[-\frac{1}{N^{2}}\sum_{\begin{subarray}{c}1\leqslant k\neq l \leqslant N\\ (k,l)\neq(i,j)\end{subarray}}g(q_{k}-q_{l})\bigg{)}\] \[\leqslant C_{b}\bigg{(}E_{N}+\frac{1}{N^{2}}\sum_{1\leqslant k\neq l \leqslant N}(1+|q_{k}|^{2}+|q_{l}|^{2})\bigg{)}.\]
where we used Claim (2) of Lemma 2.11 and \(\ln|x-y|\leqslant|x|+|y|\). Thus by inequalities (3.1) and (3.2) we get
\[g(|q_{i}-q_{j}|)\leqslant C_{b}(e^{C_{b}(1+\alpha_{N})t}(|E_{N}(0)|+I_{N}(0)+1)+1)\]
and therefore
\[|q_{i}(t)-q_{j}(t)|\geqslant\exp\bigg{(}-2\pi C_{b}(e^{C_{b}(1+\alpha_{N})t}(| E_{N}(0)|+I_{N}(0)+1)+1)\bigg{)}>0.\]
It follows that there is no collision on \([0,T]\). The bounds on \(\overline{E_{N}}\) and \(\overline{I_{N}}\) follow directly from Inequalities (3.1) and (3.2) applied to \(t=\alpha_{N}^{-1}\tau\).
## 4. Time derivatives of the modulated energies
The time derivatives of \(\mathcal{F}_{b,N}\) and of \(\overline{\mathcal{F}}_{b,N}\), defined in (1.13) and (1.14), are given by the two following propositions:
**Proposition 4.1**.: _Let \(\omega\) be a weak solution of (1.1) in the sense of Definition 1.1, \((q_{1},...,q_{N})\) be solutions of (1.6). We denote_
\[\omega_{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}(t)}.\]
_Assume that \(\omega\) satisfies Assumption 1.6. Then \(\mathcal{F}_{b,N}\) is Lipschitz and for almost every \(t\in[0,T]\),_
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{b,N}(t)=\] \[2\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta} \left(u(t,x)-\alpha\frac{\nabla^{\perp}b(x)}{b(x)}\right)\cdot\nabla_{x}g_{b} (x,y)\,\mathrm{d}(\omega(t)-\omega_{N}(t))^{\otimes 2}(x,y)\] \[+2(\alpha_{N}-\alpha)\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2}) \setminus\Delta}\frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\, \mathrm{d}\omega_{N}(t,x)\,\mathrm{d}(\omega(t)-\omega_{N}(t))(y).\]
**Proposition 4.2**.: _Let \((\overline{q_{1}},...,\overline{q_{N}})\) be solutions of (1.7) and \(\overline{\omega}\) be a solution of (1.3) in the sense of Definition 1.2._
_We denote_
\[\overline{\omega}_{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{\overline{q_{i}}(t)}.\]
_Assume that \(\overline{\omega}\) satisfies Assumption 1.6. Denote \(v=\nabla G_{b}[\overline{\omega}]\). Then \(\overline{\mathcal{F}}_{b,N}\) is Lipschitz and for almost every \(t\in[0,T]\), we have_
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathcal{F}}_{b,N}(t)=-2 \iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta}\frac{\nabla^{ \perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}(\overline{\omega}(t)- \overline{\omega}_{N}(t))^{\otimes 2}(x,y)\\ +\frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{v(t,\overline{q_{i}})}{b(\overline{q_{i}})}\cdot\nabla_{x}g_{b}(\overline{q_{i}},\overline{q_{j}}).\]

Proof of Proposition 4.1.: We split \(\mathcal{F}_{b,N}\) in three terms:

\[\mathcal{F}_{b,N}=\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(t,x)\omega(t,y)\,\mathrm{d}x\,\mathrm{d}y-\frac{2}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^{2}}g_{b}(x,q_{i})\omega(t,x)\,\mathrm{d}x+E_{N}=:T_{1}+T_{2}+E_{N}.\]

To justify the computations below, we first regularise the kernel:

_Claim 4.3_.: For any \(0<\eta<1\) there exists a smooth function \(g_{b}^{\eta}\) on \(\mathbb{R}^{2}\times\mathbb{R}^{2}\) such that:
* \(|g_{b}^{\eta}(x,y)|\leqslant C_{b}(|g(x-y)|+1+|x|^{2}+|y|^{2})\)
* \(|\nabla_{x}g_{b}^{\eta}(x,y)|,|\nabla_{y}g_{b}^{\eta}(x,y)|\leqslant C_{b}(|x- y|^{-1}+1+|x|+|y|)\).
* For any \((x,y)\in(\mathbb{R}^{2})^{2}\) such that \(x\neq y\), \[g_{b}^{\eta}(x,y) \underset{\eta\to 0}{\longrightarrow}g_{b}(x,y)\] \[\nabla_{x}g_{b}^{\eta}(x,y) \underset{\eta\to 0}{\longrightarrow}\nabla_{x}g_{b}(x,y)\] \[\nabla_{y}g_{b}^{\eta}(x,y) \underset{\eta\to 0}{\longrightarrow}\nabla_{y}g_{b}(x,y).\]
Proof of the claim.: We define
\[g_{b}^{\eta}(x,y)=\sqrt{b(x)b(y)}g^{\eta}(x-y)+S_{b}^{\eta}(x,y)\]
where \(g^{\eta}\) is a smooth function satisfying:
* \(g^{\eta}(x)=g(x)\) for \(|x|\geqslant\eta\),
* \(|g^{\eta}(x)|\leqslant|g(x)|\),
* \(|\nabla g^{\eta}(x)|\leqslant C|x|^{-1}\).
which we can obtain by extending the restriction of \(\ln\) to \([\eta,+\infty)\) into a smooth function on \(\mathbb{R}^{+}\). We define \(S_{b}^{\eta}:=S_{b}*\chi_{\eta}\), where \(\chi_{\eta}\) is a mollifier on \(\mathbb{R}^{4}\). Since \(S_{b}\) is locally Lipschitz (see Lemma 2.7), \(S_{b}^{\eta}\) is smooth and we get from Claim (1) of Lemma 2.7 and Claim (2) of Lemma 2.11 that
* \(|S_{b}^{\eta}(x,y)|\leqslant C_{b}(1+|x|^{2}+|y|^{2})\),
* \(|\nabla_{x}S_{b}^{\eta}(x,y)|,|\nabla_{y}S_{b}^{\eta}(x,y)|\leqslant C_{b}(1 +|x|+|y|)\).
Since \(S_{b}\) is locally Lipschitz, \(S_{b}^{\eta}\) and \(\nabla S_{b}^{\eta}\) converge locally uniformly to \(S_{b}\) and \(\nabla S_{b}\) (see for example [13, Proposition 4.21]) and therefore we get the convergence of \(g_{b}^{\eta}(x,y)\) and \(\nabla g_{b}^{\eta}(x,y)\) to \(g_{b}(x,y)\) and \(\nabla g_{b}(x,y)\) for any \(x\neq y\).
With this regularisation we can compute the time derivative of \(T_{1}\):
_Claim 4.4_.: \(T_{1}\in W^{1,\infty}([0,T])\) and for almost every \(t\in[0,T]\), we have
\[\frac{\mathrm{d}T_{1}}{\mathrm{d}t}=2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2 }}\left(u(t,x)-\alpha\frac{\nabla^{\perp}b(x)}{b(x)}\right)\cdot\nabla_{x}g_{ b}(x,y)\omega(t,x)\omega(t,y)\,\mathrm{d}x\,\mathrm{d}y.\]
Proof of the claim.: For \(0\leqslant s,t\leqslant T\) and \(0<\eta<1\) we have:
\[T_{1}(t)-T_{1}(s)=\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)(\omega( t,x)\omega(t,y)-\omega(s,x)\omega(s,y))\,\mathrm{d}x\,\mathrm{d}y.\]
Now for almost all \(x\) and \(y\) such that \(x\neq y\),
\[|g_{b}^{\eta}(x,y)| |\omega(t,x)\omega(t,y)-\omega(s,x)\omega(s,y)|\] \[\leqslant C_{b}(|g(x-y)|+1+|x|^{2}+|y|^{2})|\omega(t,x)\omega(t,y )-\omega(s,x)\omega(s,y)|\]
and
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}(|g(x-y)|+1+|x|^{2}+|y| ^{2})\\ \times|\omega(t,x)\omega(t,y)-\omega(s,x)\omega(s,y)|\,\mathrm{d}x \,\mathrm{d}y<+\infty\]
because \(\omega\in L^{\infty}\) has compact support. Therefore, by the dominated convergence theorem, we get that
\[T_{1}(t)-T_{1}(s)=\lim_{\eta\to 0}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}^ {\eta}(x,y)(\omega(t,x)\omega(t,y)-\omega(s,x)\omega(s,y))\,\mathrm{d}x\,\mathrm{ d}y. \tag{4.1}\]
Since \(g_{b}^{\eta}\) is smooth and \(\omega\) has compact support, we can use (1.2) to get that
\[\int_{\mathbb{R}^{2}}g_{b}^{\eta}(x,y)(\omega(t,y)-\omega(s,y))\, \mathrm{d}y=\\ \int_{s}^{t}\int_{\mathbb{R}^{2}}\nabla_{y}g_{b}^{\eta}(x,y)\cdot \left(u(\tau,y)-\alpha\frac{\nabla^{\perp}b(y)}{b(y)}\right)\omega(\tau,y)\, \mathrm{d}y\,\mathrm{d}\tau.\]
Let us write
\[\varphi(t,x):=\int_{\mathbb{R}^{2}}g_{b}^{\eta}(x,y)\omega(t,y)\,\mathrm{d}y.\]
Since \(g_{b}^{\eta}\) is smooth we have that for any compact \(K\subset\mathbb{R}^{2}\),
\[(t,x)\mapsto\\ \int_{\mathbb{R}^{2}}\nabla_{y}g_{b}^{\eta}(x,y)\cdot\left(u(t,y) -\alpha\frac{\nabla^{\perp}b(y)}{b(y)}\right)\omega(t,y)\,\mathrm{d}y\in L^{ \infty}([0,T],\mathcal{C}^{\infty}(K))\]
and thus \(\varphi\in W^{1,\infty}([0,T],\mathcal{C}^{\infty}(K))\) and for almost every \(t\in[0,T]\),
\[\partial_{t}\varphi(t,x)=\int_{\mathbb{R}^{2}}\nabla_{y}g_{b}^{\eta}(x,y)\cdot\left(u(t,y)-\alpha\frac{\nabla^{\perp}b(y)}{b(y)}\right)\omega(t,y)\,\mathrm{d}y.\]
Therefore we can use \(\varphi\) as a test function in (1.2) (remark that we defined (1.2) for smooth functions only but by density we can extend it to functions which are only \(W^{1,\infty}\) in time) and we get that
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}^{\eta}(x,y)( \omega(t,x)\omega(t,y)-\omega(s,x)\omega(s,y))\,\mathrm{d}x\,\mathrm{d}y\\ = \int_{s}^{t}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\nabla_{y}g_ {b}^{\eta}(x,y)\cdot\left(u(\tau,y)-\alpha\frac{\nabla^{\perp}b(y)}{b(y)} \right)\omega(\tau,y)\omega(\tau,x)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}\tau\] \[+\int_{s}^{t}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\nabla_{x} g_{b}^{\eta}(x,y)\cdot\left(u(\tau,x)-\alpha\frac{\nabla^{\perp}b(x)}{b(x)} \right)\omega(\tau,x)\omega(\tau,y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}\tau.\]
Now we have that for almost every \(x\) and \(y\) such that \(x\neq y\) and almost every \(\tau\in[0,T]\),
\[\left|\nabla_{x}g_{b}^{\eta}(x,y)\cdot\left(u(\tau,x)-\alpha\frac{ \nabla^{\perp}b(x)}{b(x)}\right)\omega(\tau,x)\omega(\tau,y)\right|\\ \leqslant C_{b}(|x-y|^{-1}+1+|x|^{2}+|y|^{2})\left|u(\tau,x)- \alpha\frac{\nabla^{\perp}b(x)}{b(x)}\right|\left|\omega(\tau,y)\right|\left| \omega(\tau,x)\right|\]
and
\[\int_{s}^{t}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}(|x-y|^{-1} +1+|x|^{2}+|y|^{2})\\ \times\left|u(\tau,x)-\alpha\frac{\nabla^{\perp}b(x)}{b(x)} \right|\left|\omega(\tau,y)\right|\left|\omega(\tau,x)\right|\mathrm{d}x\, \mathrm{d}y\,\mathrm{d}\tau<+\infty.\]
Applying the dominated convergence theorem, we find that
\[\int_{s}^{t}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\nabla_{x}g_{b} ^{\eta}(x,y)\cdot\left(u(\tau,x)-\alpha\frac{\nabla^{\perp}b(x)}{b(x)}\right) \omega(\tau,x)\omega(\tau,y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}\tau\] \[\underset{\eta\to 0}{\longrightarrow}\int_{s}^{t}\iint_{\mathbb{R}^{2} \times\mathbb{R}^{2}}\nabla_{x}g_{b}(x,y)\cdot\left(u(\tau,x)-\alpha\frac{ \nabla^{\perp}b(x)}{b(x)}\right)\omega(\tau,x)\omega(\tau,y)\,\mathrm{d}x\, \mathrm{d}y\,\mathrm{d}\tau.\]
We can do the same for the first term to get that
\[\int_{s}^{t}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\nabla_{y}g _{b}^{\eta}(x,y)\cdot\left(u(\tau,y)-\alpha\frac{\nabla^{\perp}b(y)}{b(y)} \right)\omega(\tau,y)\omega(\tau,x)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}\tau\] \[\underset{\eta\to 0}{\longrightarrow}\int_{s}^{t}\iint_{\mathbb{R}^{2} \times\mathbb{R}^{2}}\nabla_{y}g_{b}(x,y)\cdot\left(u(\tau,y)-\alpha\frac{ \nabla^{\perp}b(y)}{b(y)}\right)\omega(\tau,y)\omega(\tau,x)\,\mathrm{d}y\, \mathrm{d}x\,\mathrm{d}\tau.\]
Using that \(\nabla_{y}g_{b}(x,y)=\nabla_{x}g_{b}(y,x)\) and (4.1), we obtain that \(T_{1}\in W^{1,\infty}([0,T])\) and that the formula of Claim 4.4 holds for almost every \(t\in[0,T]\).
We know by Equation (3.5) that
\[\dot{E}_{N}=-\frac{2\alpha_{N}}{N^{2}}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{ i})}{b(q_{i})}\cdot\underset{j\neq i}{\sum_{j=1}^{N}}\nabla_{x}g_{b}(q_{i},q_{j})\]
and therefore
\[\dot{E}_{N}=-2\alpha_{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}\setminus \Delta}\frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d} \omega_{N}(x)\,\mathrm{d}\omega_{N}(y). \tag{4.2}\]
Now we compute the derivative of the second term:
_Claim 4.5_.: \(T_{2}\) is Lipschitz and for almost every \(t\in[0,T]\), we have
\[\frac{\mathrm{d}}{\mathrm{d}t}T_{2}(t)\] \[= -2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\left(u(t,x)-\alpha \frac{\nabla^{\perp}b(x)}{b(x)}\right)\cdot\nabla_{x}g_{b}(x,y)\omega(t,x)\, \mathrm{d}x\,\mathrm{d}\omega_{N}(t,y)\] \[+2\alpha_{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\frac{ \nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\omega_{N}(x) \omega(t,y)\,\mathrm{d}y\] \[+2\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta}u(t,x)\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\omega_{N}(x)\,\mathrm{d}\omega_{N}(y).\]
Proof of the Claim.: Using the regularisation \(g_{b}^{\eta}\) defined in Claim 4.3, Equation (1.2), and letting \(\eta\) tend to zero as in the proof of Claim 4.4, we can show that \(T_{2}\) is Lipschitz and that for almost every \(t\in[0,T]\), we have
\[\frac{\mathrm{d}T_{2}}{\mathrm{d}t}=T_{2,1}+T_{2,2}\]
where
\[T_{2,1} :=-\frac{2}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^{2}}\left(u(t,x)- \alpha\frac{\nabla^{\perp}b(x)}{b(x)}\right)\cdot\nabla_{x}g_{b}(x,q_{i})\omega( t,x)\,\mathrm{d}x\] \[=-2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\left(u(t,x)-\alpha \frac{\nabla^{\perp}b(x)}{b(x)}\right)\cdot\nabla_{x}g_{b}(x,y)\omega(t,x)\, \mathrm{d}x\,\mathrm{d}\omega_{N}(t,y) \tag{4.3}\]
and
\[T_{2,2} := -\frac{2}{N}\sum_{i=1}^{N}\dot{q}_{i}\cdot\int_{\mathbb{R}^{2}} \nabla_{y}g_{b}(x,q_{i})\omega(t,x)\,\mathrm{d}x\] \[= -\frac{2}{N}\sum_{i=1}^{N}\dot{q}_{i}\cdot\int_{\mathbb{R}^{2}} \nabla_{x}g_{b}(q_{i},x)\omega(t,x)\,\mathrm{d}x\] \[= \bigg{[}\frac{2\alpha_{N}}{N}\sum_{i=1}^{N}\frac{\nabla^{\perp}b (q_{i})}{b(q_{i})}\cdot\int_{\mathbb{R}^{2}}\nabla_{x}g_{b}(q_{i},x)\omega(t,x )\,\mathrm{d}x\bigg{]}\] \[+\bigg{[}\frac{2}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j =1\\ j\neq i\end{subarray}}^{N}\!\frac{1}{b(q_{i})}\nabla_{x}^{\perp}g_{b}(q_{i},q_{ j})\cdot\int_{\mathbb{R}^{2}}\nabla_{x}g_{b}(q_{i},x)\omega(t,x)\,\mathrm{d}x\bigg{]}\] \[=: T_{2,2,1}+T_{2,2,2}.\]
Now we have
\[T_{2,2,1} =\frac{2\alpha_{N}}{N}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i}) }{b(q_{i})}\cdot\int_{\mathbb{R}^{2}}\nabla_{x}g_{b}(q_{i},x)\omega(t,x)\, \mathrm{d}x\] \[=2\alpha_{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\frac{ \nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\omega_{N}(x) \omega(t,y)\,\mathrm{d}y \tag{4.4}\]
and using \(\int_{\mathbb{R}^{2}}\nabla_{x}g_{b}(q_{i},y)\omega(y)\,\mathrm{d}y=b(q_{i})u ^{\perp}(t,q_{i})\) (see Proposition 2.8), we get
\[T_{2,2,2}= \frac{2}{N^{2}}\sum_{i=1}^{N}u(t,q_{i})\cdot\sum_{\begin{subarray} {c}j=1\\ j\neq i\end{subarray}}^{N}\!\nabla_{x}g_{b}(q_{i},q_{j})\] \[= 2\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta}u(t, x)\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\omega_{N}(x)\,\mathrm{d}\omega_{N}(y).\]
Combining the above equality with (4.3) and (4.4), we get the proof of Claim 4.5.
Now remark that
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b}(x,y)\, \mathrm{d}\omega_{N}(x)\,\mathrm{d}\omega(y)=\int_{\mathbb{R}^{2}}u\cdot bu^{ \perp}\,\mathrm{d}\omega_{N}=0.\]
Thus combining Claim 4.4, Equation (4.2) and Claim 4.5 we obtain Proposition 4.1.
We now compute the derivative of the rescaled modulated energy:
Proof of Proposition 4.2.: We split \(\overline{\mathcal{F}}_{b,N}\) in three terms:
\[\overline{\mathcal{F}}_{b,N}= \iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\overline{ \omega}(t,x)\overline{\omega}(t,y)\,\mathrm{d}x\,\mathrm{d}y\] \[-\frac{2}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^{2}}g_{b}(x,\overline{ q_{i}})\overline{\omega}(t,x)\,\mathrm{d}x+\overline{E_{N}}\] \[=: T_{1}+T_{2}+\overline{E_{N}}.\]
Let us compute the time derivative of the first term. Using the regularisation \(g_{b}^{\eta}\) defined in Claim 4.3, Equation (1.2), and letting \(\eta\) tend to zero as in the proof of Claim 4.4, one can show that \(T_{1}\) is Lipschitz and that for almost every \(t\in[0,T]\), we have
\[\frac{\mathrm{d}T_{1}}{\mathrm{d}t}=-2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{ 2}}\frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\overline{\omega}( t,x)\overline{\omega}(t,y)\,\mathrm{d}x\,\mathrm{d}y. \tag{4.5}\]
For the derivative of \(\overline{E_{N}}\) we rescale Equation (4.2) to get
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{E_{N}}=-2\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}\setminus\Delta}\frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g _{b}(x,y)\,\mathrm{d}\overline{\omega}_{N}(x)\,\mathrm{d}\overline{\omega}_{N} (y). \tag{4.6}\]
Now let us compute the derivative of the second term:
_Claim 4.6_.: (4.7) \[\begin{split}\frac{\mathrm{d}T_{2}}{\mathrm{d}t}=& 2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\frac{ \nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\overline{\omega}(t,x)\, \mathrm{d}x\,\mathrm{d}\overline{\omega}_{N}(t,y)\\ &+2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\frac{\nabla^{\perp} b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\overline{\omega}_{N}(x)\, \mathrm{d}\overline{\omega}(y)\\ &+\frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\sum_{\begin{subarray} {c}j=1\\ j\neq i\end{subarray}}^{N}\frac{v(t,\overline{q_{i}})}{b(\overline{q_{i}})} \cdot\nabla_{x}g_{b}(\overline{q_{i}},\overline{q_{j}}).\end{split}\]
Proof of (4.7).: Using the regularisation \(g_{b}^{\eta}\) defined in Claim 4.3, Equation (1.2), and letting \(\eta\) tend to zero as in the proof of Claim 4.4, one can show that \(T_{2}\) is Lipschitz and that for almost every \(t\in[0,T]\), we have
\[\frac{\mathrm{d}T_{2}}{\mathrm{d}t}=T_{2,1}+T_{2,2}\]
where
\[T_{2,1} :=\frac{2}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^{2}}\frac{\nabla^{ \perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,\overline{q_{i}})\overline{\omega}(t,x)\,\mathrm{ d}x\] \[=2\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\frac{\nabla^{\perp} b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\overline{\omega}(t,x)\,\mathrm{d}x\, \mathrm{d}\overline{\omega}_{N}(t,y)\]
and
\[T_{2,2}:= -\frac{2}{N}\sum_{i=1}^{N}\dot{\overline{q_{i}}}\cdot\int_{ \mathbb{R}^{2}}\nabla_{y}g_{b}(x,\overline{q_{i}})\overline{\omega}(t,x)\,\mathrm{d}x\]
\[= -\frac{2}{N}\sum_{i=1}^{N}\dot{\overline{q}_{i}}\cdot v(t,\overline{q _{i}})\] \[= \frac{2}{N}\sum_{i=1}^{N}v(t,\overline{q_{i}})\cdot\left[\frac{ \nabla^{\perp}b(\overline{q_{i}})}{b(\overline{q_{i}})}+\frac{1}{N\alpha_{N}} \underset{j\neq i}{\overset{N}{\underset{j=1}{\sum}}}\frac{1}{b(\overline{q_{i} })}\nabla_{x}g_{b}(\overline{q_{i}},\overline{q_{j}})\right]\] \[= 2\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta} \frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\overline{ \omega}_{N}(x)\,\mathrm{d}\overline{\omega}(y)\] \[+\frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\underset{j\neq i}{ \overset{N}{\underset{j=1}{\sum}}}\frac{v(t,\overline{q_{i}})}{b(\overline{q_ {i}})}\cdot\nabla_{x}g_{b}(\overline{q_{i}},\overline{q_{j}})\]
and thus we have (4.7).
Combining Equations (4.5), (4.6) and (4.7), we get the formula of Proposition 4.2.
## 5. Properties of the modulated energy
For \(0<\eta<1\), we denote
\[H_{N,\eta}:=G_{b}\left[\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{( \eta)}-\omega\right].\]
If \(b=1\) this quantity is the electric potential introduced by Serfaty in [63, Equation (3.12)] divided by \(N\).
**Proposition 5.1**.: _Let \(\omega\in\mathcal{P}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) with compact support and \(q_{1},...,q_{N}\in\mathbb{R}^{2}\) be such that \(q_{i}\neq q_{j}\) if \(i\neq j\). Then the following inequality holds:_
\[\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla H_{N,\eta}|^{2}+\frac{C_{ b}}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(g(q_{i}-q_{j})-g^{(\eta)}(q_{i}-q_{j}))\] \[\leqslant\mathcal{F}_{b}(Q_{N},\omega)+C_{b}\bigg{(}\frac{g(\eta )}{N}+I(Q_{N})(\eta+N^{-1})+\|\omega\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{ \infty}}\,g(\eta)\eta\bigg{)}\]
_where \(g^{(\eta)}\) is defined by (2.12)._
From this proposition we see that, although it is not necessarily positive, the modulated energy is bounded from below by a negative power of \(N\) (provided that \((I(Q_{N}))\) is bounded). We will also prove the three following corollaries:
**Corollary 5.2**.: _If \(\omega\) and \(Q_{N}\) satisfy the hypothesis of Proposition 5.1, then there exists \(c>0\) such that_
\[\frac{c}{N^{2}}|\{(q_{i},q_{j});|q_{i}-q_{j}|\leqslant\varepsilon\}| \leqslant \mathcal{F}_{b}(Q_{N},\omega)+C_{b}\bigg{(}\frac{g(\varepsilon)}{N }+I(Q_{N})(\varepsilon+N^{-1})\] \[+\|\omega\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}\,g( \varepsilon)\varepsilon\bigg{)}.\]
**Corollary 5.3**.: _Let \(\alpha\in(0,1)\) and \(\xi\) be a test function (for example smooth with compact support or in the Schwartz space), then if \(\omega\) and \(Q_{N}\) satisfy the hypothesis of Proposition 5.1 we have_
\[\left|\int_{\mathbb{R}^{2}}\xi\left(\frac{1}{N}\sum_{i=1}^{N}\delta _{q_{i}}-\omega\right)\right|\leqslant C_{b}|\xi|_{\mathcal{C}^{0,\alpha}}N^{-\alpha}+C_{b}\left(\int_{ \mathbb{R}^{2}}\frac{1}{b}|\nabla\xi|^{2}\right)^{\frac{1}{2}}\left(\mathcal{F }_{b}(\omega,Q_{N})\right.\] \[+\frac{\ln(N)}{N}+I(Q_{N})N^{-1}\] \[+\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty }}\frac{\ln(N)}{N}\right)^{\frac{1}{2}}.\]
_In particular, there exists \(\beta>0\) such that for all \(s<-1\),_
\[\left\|\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}-\omega\right\|_{H^ {s}}\leqslant C_{b}((1+I(Q_{N})+\|\omega\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{ \infty}})N^{-\beta}\] \[+\mathcal{F}_{b}(\omega,Q_{N})).\]
**Corollary 5.4**.: _If \(\omega\) and \(Q_{N}\) satisfy the hypothesis of Proposition 5.1 and if \((I(Q_{N}))\) is bounded, then the two following assertions are equivalent:_
1. \(\mathcal{F}_{b}(\omega,Q_{N})\underset{N\to+\infty}{\longrightarrow}0\)_._
2. \(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}\overset{*}{\underset{N\to+\infty}{\longrightarrow}}\omega\) _for the weak-_\(*\) _topology of probability measures and_ \[\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i},q_{j}) \longrightarrow\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(x) \omega(y)\,\mathrm{d}x\,\mathrm{d}y.\]
Proposition 5.1 and Corollaries 5.2, 5.3 and 5.4 are analogues of other results obtained in [22, 52, 63]. Proposition 5.1 is an analogue of [63, Proposition 3.3] or [52, Proposition 2.2] and the proof will follow the same steps: regularise the modulated energy and control the remainders. Some terms are very similar to the ones obtained in the Coulomb case, whereas other terms are specific to the lake kernel and will be handled using the estimates proved in Section 2.
Corollary 5.2 is an analogue of [52, Corollary 2.3] and Corollary 5.3 is an analogue of [63, Proposition 3.6]. Both can be deduced from Proposition 5.1 in the same way that [52, Corollary 2.3] and [63, Proposition 3.6] are deduced from [63, Proposition 3.3] or [52, Proposition 2.2].
Corollary 5.4 is an analogue of [22, Lemma 2.6] and its proof proceeds in the same way. Thanks to the bound we assumed on the moment of inertia, tightness issues will be easier to handle.
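As an illustration of assertion (2) of Corollary 5.4 (our own sketch, not needed for the proofs), take \(b\equiv 1\), so that \(g_{b}(x,y)=g(x-y)\), and let \(\omega\) be the uniform probability measure on the unit disk, whose logarithmic energy is the classical value \(\iint g(x-y)\,\mathrm{d}\omega(x)\,\mathrm{d}\omega(y)=\frac{1}{8\pi}\). Sampling i.i.d. points from \(\omega\), the discrete interaction energy approaches this value:

```python
import numpy as np

# Monte Carlo illustration of Corollary 5.4 (2) for b = 1: for i.i.d. samples
# q_1, ..., q_N from the uniform measure on the unit disk, the discrete energy
# (1/N^2) sum_{i != j} g(q_i - q_j), with g(x) = -ln|x| / (2*pi), tends to 1/(8*pi).
rng = np.random.default_rng(1)

def disk_samples(n):
    r = np.sqrt(rng.uniform(size=n))               # sqrt gives a uniform area density
    t = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.stack([r * np.cos(t), r * np.sin(t)], axis=1)

for N in [100, 500, 2000]:
    q = disk_samples(N)
    d = np.linalg.norm(q[:, None, :] - q[None, :, :], axis=-1)
    np.fill_diagonal(d, 1.0)                       # ln(1) = 0 masks the diagonal
    E_N = -np.sum(np.log(d)) / (2.0 * np.pi * N**2)
    print(f"N = {N:5d}  E_N = {E_N:.5f}  1/(8*pi) = {1.0 / (8.0 * np.pi):.5f}")
```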
Let us begin by proving the main proposition of this section:
Proof of Proposition 5.1.: Let us regularise the modulated energy (1.12) using the regularisation of the Dirac mass \(\widetilde{\delta}\) defined in (2.13). We have
\[\mathcal{F}_{b}(Q_{N},\omega)=\]
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\,\mathrm{d} \left(\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{(\eta)}-\omega \right)(x)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i} }^{(\eta)}-\omega\right)(y)\] \[+\frac{1}{N^{2}}\!\!\sum_{1\leqslant i\neq j\leqslant N}\iint_{ \mathbb{R}^{2}\times\mathbb{R}^{2}}(\sqrt{b(q_{i})b(q_{j})}g(q_{i}-q_{j})\] \[-\sqrt{b(x)b(y)}g(x-y))\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{( \eta)}(x)\,\mathrm{d}\widetilde{\delta}_{q_{j}}^{(\eta)}(y)\] \[+\frac{1}{N^{2}}\!\!\sum_{1\leqslant i\neq j\leqslant N}\iint_{ \mathbb{R}^{2}\times\mathbb{R}^{2}}(S_{b}(q_{i},q_{j})-S_{b}(x,y))\,\mathrm{d} \widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta}_{q_{j}}^ {(\eta)}(y)\] \[-\frac{1}{N^{2}}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}g_{b}(x,y)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\] \[+\frac{2}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{ 2}}\big{(}\sqrt{b(x)b(y)}g(x-y)\] \[-\sqrt{b(x)b(q_{i})}g(x-q_{i})\big{)}\omega(x)\,\mathrm{d}x\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\] \[+\frac{2}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{ 2}}(S_{b}(x,y)-S_{b}(x,q_{i}))\omega(x)\,\mathrm{d}x\,\mathrm{d}\widetilde{ \delta}_{q_{i}}^{(\eta)}(y)\] \[=: T_{1}+T_{2}+T_{3}+T_{4}+T_{5}+T_{6}.\]
_Claim 5.5_.: We have
\[T_{1}=\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla H_{N,\eta}|^{2}.\]
Proof of the claim.: Let us first fix \(\mu\) smooth with compact support and average zero and write \(H_{\mu}=G_{b}[\mu]\). By Proposition 2.8, we have
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\mu(x)\mu(y) \,\mathrm{d}x\,\mathrm{d}y =\int_{\mathbb{R}^{2}}H_{\mu}(x)\mu(x)\,\mathrm{d}x\] \[=-\int_{\mathbb{R}^{2}}H_{\mu}(x)\,\mathrm{div}\left(\frac{1}{b} \nabla H_{\mu}\right)(x)\,\mathrm{d}x.\]
Let \(R>0\), then integrating by parts we get
\[-\int_{B(0,R)}H_{\mu}\,\mathrm{div}\left(\frac{1}{b}\nabla H_{\mu}\right)=- \int_{\partial B(0,R)}\frac{1}{b}H_{\mu}\nabla H_{\mu}\cdot\,\mathrm{d} \vec{S}+\int_{B(0,R)}\frac{1}{b}|\nabla H_{\mu}|^{2}.\]
Using Proposition 2.5 applied to \(\omega=\mu\), \(u=-\frac{1}{b}\nabla^{\perp}H_{\mu}\) and \(\psi=H_{\mu}\), we have
\[\left|\int_{\partial B(0,R)}\frac{1}{b}H_{\mu}\nabla H_{\mu}\cdot\,\mathrm{d} \vec{S}\right|\leqslant\frac{C}{R^{2}}(1+R^{\delta})R\underset{R\to+\infty}{ \longrightarrow}0\]
and therefore
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\mu(x)\mu(y)\,\mathrm{d}x \,\mathrm{d}y=\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla H_{\mu}|^{2}.\]
Now consider a sequence \((\mu_{k})\) of smooth functions with compact support and average zero converging to \(m:=\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{(\eta)}-\omega\) in \(\dot{H}^{-1}\), then by Lemma 2.12,
\[\nabla H_{\mu_{k}}\underset{k\to+\infty}{\longrightarrow}\nabla H_{N,\eta}\text { in }L^{2}.\]
and therefore
\[\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla H_{\mu_{k}}|^{2}\underset{k\to+\infty }{\longrightarrow}\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla H_{N,\eta}|^{2}\]
and
\[\bigg{|}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}} g_{b}(x,y)\mu_{k}(x)\mu_{k}(y)\,\mathrm{d}x\,\mathrm{d}y-\iint_{ \mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\,\mathrm{d}m(x)\,\mathrm{d}m(y) \bigg{|}\] \[=\bigg{|}\int_{\mathbb{R}^{2}}G_{b}[\mu_{k}-m]\,\mathrm{d}\mu_{k} +\int_{\mathbb{R}^{2}}G_{b}[m]\,\mathrm{d}(\mu_{k}-m)\bigg{|}\] \[\leqslant C\left\|\nabla G_{b}[\mu_{k}-m]\right\|_{L^{2}}\left\| \mu_{k}\right\|_{\dot{H}^{-1}}+C\left\|\nabla G_{b}[m]\right\|_{L^{2}}\left\| \mu_{k}-m\right\|_{\dot{H}^{-1}}\] \[\leqslant C\left\|\mu_{k}-m\right\|_{\dot{H}^{-1}}\]
by Lemma 2.12 so we get Claim 5.5.
Now let us bound the fourth term:
_Claim 5.6_.: \[|T_{4}|\leqslant\frac{C_{b}}{N}(g(\eta)+I(Q_{N})).\]
Proof.: We write
\[T_{4}= -\frac{1}{N^{2}}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}\sqrt{b(x)b(y)}g(x-y)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{( \eta)}(x)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\] \[-\frac{1}{N^{2}}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}S_{b}(x,y)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\] \[=: T_{4,1}+T_{4,2}.\]
Using the definition of \(\widetilde{\delta}_{q}\) (2.13) and Equality (2.15) we get
\[T_{4,1} =-\frac{1}{N^{2}}\sum_{i=1}^{N}m_{b}(q_{i},\eta)^{2}\iint_{ \mathbb{R}^{2}\times\mathbb{R}^{2}}g(x-y)\,\mathrm{d}\delta_{q_{i}}^{(\eta)}(x )\,\mathrm{d}\delta_{q_{i}}^{(\eta)}(y)\] \[=-\frac{1}{N^{2}}\sum_{i=1}^{N}m_{b}(q_{i},\eta)^{2}\int_{ \mathbb{R}^{2}}g^{(\eta)}(x-q_{i})\,\mathrm{d}\delta_{q_{i}}^{(\eta)}(x).\]
Therefore,
\[|T_{4,1}|\leqslant\frac{C_{b}g(\eta)}{N}.\]
Now by Claim (2) of Lemma 2.11, we have
\[|T_{4,2}|\leqslant\frac{C_{b}}{N^{2}}\sum_{i=1}^{N}(1+|q_{i}|^{2}).\]
We get that
\[|T_{4}|\leqslant\frac{C_{b}}{N}(1+I(Q_{N})+g(\eta))\leqslant\frac{C_{b}}{N}(g( \eta)+I(Q_{N})).\]
Now we bound the third and the sixth term:
_Claim 5.7_.: \[|T_{3}|+|T_{6}|\leqslant C_{b}(\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d} x)}+I(Q_{N}))\eta.\]
Proof.: For \(x\in\partial B(q_{i},\eta),y\in\partial B(q_{j},\eta)\), we use Claim (1) of Lemma 2.7 and the symmetry of \(S_{b}\) to get
\[|S_{b}(q_{i},q_{j})-S_{b}(x,y)| \leqslant|S_{b}(q_{i},q_{j})-S_{b}(x,q_{j})|+|S_{b}(x,q_{j})-S_{b} (x,y)|\] \[\leqslant C_{b}(1+|q_{j}|)\eta+C_{b}(1+|q_{i}|)\eta\] \[\leqslant C_{b}(1+|q_{i}|+|q_{j}|)\eta.\]
Thus we can bound the third term:
\[|T_{3}|\leqslant C_{b}(1+I(Q_{N}))\eta. \tag{5.1}\]
The sixth term can be bounded in the same way:
\[|T_{6}|\leqslant\frac{C_{b}}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}(1+|x|+|q_{i}|)\eta\omega(x)\,\mathrm{d}x\,\mathrm{d}\widetilde {\delta}_{q_{i}}^{(\eta)}(y).\]
We get that
\[|T_{6}|\leqslant C_{b}(\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)}+I( Q_{N}))\eta. \tag{5.2}\]
and combining (5.1) with (5.2) we get Claim 5.7.
Now let us bound the fifth term:
_Claim 5.8_.: \[|T_{5}|\leqslant C_{b}\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}\eta g(\eta)+C_{b}(1+I(Q_{N}))\eta.\]
Proof.: Using Proposition 2.13 we write \(T_{5}\) as
\[T_{5}= \frac{2}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^{2}}(m_{b}(q_{i},\eta) g^{(\eta)}(x-q_{i})-\sqrt{b(q_{i})}g(x-q_{i}))\sqrt{b(x)}\omega(x)\,\mathrm{d}x\] \[= \frac{2}{N}\sum_{i=1}^{N}(m_{b}(q_{i},\eta)-\sqrt{b(q_{i})})\int_ {\mathbb{R}^{2}}g^{(\eta)}(x-q_{i})\sqrt{b(x)}\omega(x)\,\mathrm{d}x\] \[+\frac{2}{N}\sum_{i=1}^{N}\sqrt{b(q_{i})}\int_{\mathbb{R}^{2}}(g^ {(\eta)}(x-q_{i})-g(x-q_{i}))\sqrt{b(x)}\omega(x)\,\mathrm{d}x.\]
Thus, by (2.16) and since \(|g^{(\eta)}(x-q_{i})|\leqslant C(g(\eta)+|x|+|q_{i}|)\), we have
\[|T_{5}|\leqslant C_{b}\left\|\omega\right\|_{L^{1}}\eta g(\eta)+C_{b}\left\|\omega \right\|_{L^{1}(|x|\,\mathrm{d}x)}\eta+C_{b}\left\|\omega\right\|_{L^{1}}(1+I( Q_{N}))\eta\] \[+C_{b}\left\|\omega\right\|_{L^{\infty}}\int_{B(0,\eta)}|g^{(\eta )}(x)-g(x)|\,\mathrm{d}x.\]
We get that
\[|T_{5}|\leqslant C_{b}\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L ^{\infty}}\eta g(\eta)+C_{b}(1+I(Q_{N}))\eta\]
since \(\omega\) is a probability density.
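(The last integral in the bound for \(|T_{5}|\) can in fact be computed explicitly: \(\int_{B(0,\eta)}|g^{(\eta)}(x)-g(x)|\,\mathrm{d}x=\frac{1}{2\pi}\int_{B(0,\eta)}\ln\frac{\eta}{|x|}\,\mathrm{d}x=\int_{0}^{\eta}r\ln\frac{\eta}{r}\,\mathrm{d}r=\frac{\eta^{2}}{4}\), which is indeed bounded by \(C\eta g(\eta)\) for \(\eta\leqslant\frac{1}{2}\).)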
It remains to estimate the second term from below:
_Claim 5.9_.: \[T_{2}\geqslant\frac{C_{b}}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(g(q_{i}-q_{j })-g^{(\eta)}(q_{i}-q_{j}))-C_{b}\eta g(\eta).\]
Proof.: We also split \(T_{2}\) in two terms:
\[T_{2}= \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\sqrt{b(q_{i} )b(q_{j})}g(q_{i}-q_{j})\] \[-m_{b}(q_{i},\eta)m_{b}(q_{j},\eta)\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}g(x-y)\operatorname{d}\!\delta^{(\eta)}_{q_{i}}(x)\operatorname {d}\!\delta^{(\eta)}_{q_{j}}(y)\] \[= \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\sqrt{b(q_{i} )b(q_{j})}g(q_{i}-q_{j})\] \[-m_{b}(q_{i},\eta)m_{b}(q_{j},\eta)\int_{\mathbb{R}^{2}}g^{(\eta) }(q_{i}-y)\operatorname{d}\!\delta^{(\eta)}_{q_{j}}(y)\] \[= \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(\sqrt{b(q_{i })b(q_{j})}-m_{b}(q_{i},\eta)m_{b}(q_{j},\eta))\] \[\times\int_{\mathbb{R}^{2}}g^{(\eta)}(q_{i}-y)\operatorname{d} \!\delta^{(\eta)}_{q_{j}}(y)\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\sqrt{b(q_{i })b(q_{j})}\left(g(q_{i}-q_{j})-\int_{\mathbb{R}^{2}}g^{(\eta)}(q_{i}-y) \operatorname{d}\!\delta^{(\eta)}_{q_{j}}(y)\right)\] \[= T_{2,1}+T_{2,2}.\]
Writing
\[\sqrt{b(q_{i})b(q_{j})}-m_{b}(q_{i},\eta)m_{b}(q_{j},\eta) =\sqrt{b(q_{i})}(\sqrt{b(q_{j})}-m_{b}(q_{j},\eta))\] \[+m_{b}(q_{j},\eta)(\sqrt{b(q_{i})}-m_{b}(q_{i},\eta))\]
and using (2.16) we get that
\[|T_{2,1}|\leqslant C_{b}\eta g(\eta). \tag{5.3}\]
Now by (2.15),
\[g(q_{i}-q_{j})-\int_{\mathbb{R}^{2}}g^{(\eta)}(q_{i}-y) \operatorname{d}\!\delta^{(\eta)}_{q_{j}}(y)\] \[=g(q_{i}-q_{j})-g^{(\eta)}(q_{i}-q_{j})+\int_{\mathbb{R}^{2}}(g(q _{i}-y)-g^{(\eta)}(q_{i}-y))\operatorname{d}\!\delta^{(\eta)}_{q_{j}}(y)\] \[\geqslant g(q_{i}-q_{j})-g^{(\eta)}(q_{i}-q_{j})\]
since \(g\geqslant g^{(\eta)}\) pointwise,
and thus
\[T_{2,2}\geqslant\frac{C_{b}}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(g(q_{i }-q_{j})-g^{(\eta)}(q_{i}-q_{j})). \tag{5.4}\]
We get Claim 5.9 by combining (5.3) and (5.4).
Combining Claims 5.5, 5.6, 5.7, 5.8 and 5.9 we get the proof of Proposition 5.1.
Now we prove the "counting close particles" Corollary:
Proof of Corollary 5.2.: The proof is exactly the same as the proof of [57, Lemma 3.7]. If \(|q_{i}-q_{j}|\leqslant\varepsilon\) then
\[g(q_{i}-q_{j})-g^{(2\varepsilon)}(q_{i}-q_{j}) =-\frac{1}{2\pi}\ln|q_{i}-q_{j}|+\frac{1}{2\pi}\ln(2\varepsilon)\] \[\geqslant-\frac{1}{2\pi}\ln(\varepsilon)+\frac{1}{2\pi}\ln(2 \varepsilon)=\frac{1}{2\pi}\ln(2)>0.\]
Thus, since \(g-g^{(2\varepsilon)}\geqslant 0\),
\[\frac{1}{2\pi N^{2}}\ln(2)|\{(q_{i},q_{j});|q_{i}-q_{j}|\leqslant\varepsilon\}|\] \[\leqslant \frac{1}{N^{2}}\sum_{\begin{subarray}{c}1\leqslant i\neq j\leqslant N \\ |q_{i}-q_{j}|\leqslant\varepsilon\end{subarray}}(g(q_{i}-q_{j})-g^{(2 \varepsilon)}(q_{i}-q_{j}))\] \[\leqslant \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(g(q_{i}-q_{j} )-g^{(2\varepsilon)}(q_{i}-q_{j}))\] \[\leqslant \mathcal{F}_{b}(Q_{N},\omega)\] \[+C_{b}\bigg{(}\frac{g(\varepsilon)}{N}+I(Q_{N})(\varepsilon+N^{- 1})+\|\omega\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}\,g(\varepsilon) \varepsilon\bigg{)}.\]
where we used Proposition 5.1 in the last inequality.
Now we prove the coercivity result:
Proof of Corollary 5.3.: We have
\[\int_{\mathbb{R}^{2}}\xi\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_ {i}}-\omega\right)= \frac{1}{N}\int_{\mathbb{R}^{2}}\xi\left(\sum_{i=1}^{N}\delta_{q_ {i}}-\widetilde{\delta}_{q_{i}}^{(\eta)}\right)\] \[+\int_{\mathbb{R}^{2}}\xi\left(\frac{1}{N}\sum_{i=1}^{N}\widetilde {\delta}_{q_{i}}^{(\eta)}-\omega\right)\] \[=:T_{1}+T_{2}.\]
Now,
\[T_{1} =\frac{1}{N}\sum_{i=1}^{N}\left(\xi(q_{i})-m_{b}(q_{i},\eta)\int_{ \partial B(q_{i},\eta)}\frac{\xi(x)}{\sqrt{b(x)}}\,\mathrm{d}\delta_{q_{i}}^{ (\eta)}(x)\right)\] \[=\frac{1}{N}\sum_{i=1}^{N}m_{b}(q_{i},\eta)\int_{\partial B(q_{i},\eta)}\frac{\xi(q_{i})-\xi(x)}{\sqrt{b(x)}}\,\mathrm{d}\delta_{q_{i}}^{(\eta )}(x),\]
where we used the definition (2.14) of \(m_{b}(q_{i},\eta)\) in the second equality.
Thus
\[|T_{1}|\leqslant C_{b}|\xi|_{\mathcal{C}^{0,\alpha}}\eta^{\alpha}.\]
Using a sequence \((\mu_{k})\) of smooth functions with compact support and average \(0\) converging to \(\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{(\eta)}-\omega\) as we have done for Claim 5.5 we can show that
\[T_{2}=\int_{\mathbb{R}^{2}}\frac{1}{b}\nabla\xi\cdot\nabla H_{N,\eta}\]
and therefore
\[|T_{2}|\leqslant \left(\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla\xi|^{2}\right)^{ \frac{1}{2}}\left(\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla H_{N,\eta}|^{2} \right)^{\frac{1}{2}}\] \[\leqslant C_{b}\left(\int_{\mathbb{R}^{2}}\frac{1}{b}|\nabla\xi|^{2} \right)^{\frac{1}{2}}\left(\mathcal{F}_{b}(Q_{N},\omega)+\frac{g(\eta)}{N}\right.\] \[\left.+I(Q_{N})(\eta+N^{-1})+\|\omega\|_{L^{1}((1+|x|)\,\mathrm{d }x)\cap L^{\infty}}\,g(\eta)\eta\right)^{\frac{1}{2}}\!.\]
by Proposition 5.1. We conclude by taking \(\eta=N^{-1}\). The bound on
\[\left\|\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}-\omega\right\|_{H^{s}}\]
follows from Sobolev embeddings.
We finish this section by proving the weak-\(*\) convergence result:
Proof of Corollary 5.4.: Let us denote \(\omega_{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}\) and prove that \((\omega_{N})\) is a tight sequence of probability measures. Let \(R>1\), then
\[|\{i\in[1,N]\;;\;|q_{i}|\geqslant R\}|R^{2}\leqslant \sum_{\begin{subarray}{c}i=1\\ |q_{i}|\geqslant R\end{subarray}}^{N}|q_{i}|^{2}\] \[\leqslant NI(Q_{N}). \tag{5.5}\]
Dividing both sides of the inequality by \(NR^{2}\), we get
\[\int_{B(0,R)^{c}}\,\mathrm{d}\omega_{N}\leqslant I(Q_{N})R^{-2}\]
and since \((I(Q_{N}))\) is bounded we get that \((\omega_{N})\) is tight. We will now prove the following Claim:
_Claim 5.10_.: Assume that \((\omega_{N})\) converges to \(\omega\) for the weak-\(*\) topology of probability measures and that \((I(Q_{N}))\) is bounded. Then \(\mathcal{F}_{b}(Q_{N},\omega)\underset{N\to+\infty}{\longrightarrow}0\) if and only if we have
\[\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i},q_{j}) \longrightarrow\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(x )\omega(y)\,\mathrm{d}x\,\mathrm{d}y.\]
Proof.: Let \(\varepsilon>0\). We write the modulated energy as the sum of three terms:
\[\mathcal{F}_{b}(Q_{N},\omega)= -\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(x) \omega(y)\,\mathrm{d}x\,\mathrm{d}y\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i},q _{j})\] \[-2\int_{\mathbb{R}^{2}}\psi(y)\,\mathrm{d}(\omega_{N}-\omega)(y) \tag{5.6}\]
where \(\psi=G_{b}[\omega]\). Let \(R\geqslant 1\) be such that \(\operatorname{supp}(\omega)\subset B(0,R)\). We have
\[\int_{\mathbb{R}^{2}}\psi\operatorname{d}(\omega-\omega_{N})=-\int_{B(0,R)^{c} }\psi\operatorname{d}\!\omega_{N}+\int_{B(0,R)}\psi\operatorname{d}\!(\omega- \omega_{N}).\]
We bound the first term as we did to obtain (5.5):
\[\left|\int_{B(0,R)^{c}}\psi\operatorname{d}\!\omega_{N}\right| \leqslant\frac{1}{N}\sum_{\begin{subarray}{c}i=1\\ |q_{i}|\geqslant R\end{subarray}}^{N}|\psi(q_{i})|\] \[\leqslant\frac{C_{b}}{N}\sum_{\begin{subarray}{c}i=1\\ |q_{i}|\geqslant R\end{subarray}}^{N}(1+|q_{i}|^{\delta})\] \[\leqslant C_{b}(R^{-2}I(Q_{N})+R^{\delta-2}I(Q_{N}))\]
for some \(0<\delta<1\) (by Proposition 2.5). Therefore,
\[\left|\int_{B(0,R)^{c}}\psi\operatorname{d}\!\omega_{N}\right|\leqslant\varepsilon\]
if \(R\) is big enough. Now let \(\chi_{R,\beta}\) be a smooth function such that \(0\leqslant\chi_{R,\beta}\leqslant 1\), \(\chi_{R,\beta}(x)=1\) if \(|x|\leqslant R\) and \(\chi_{R,\beta}(x)=0\) if \(|x|\geqslant R+\beta\). Then
\[\int_{B(0,R)}\psi\operatorname{d}\!(\omega-\omega_{N})=\int\chi_{R,\beta}\psi \operatorname{d}\!(\omega-\omega_{N})-\int_{R\leqslant|x|\leqslant R+\beta} \chi_{R,\beta}\psi\operatorname{d}\!(\omega-\omega_{N}).\]
Choosing \(\beta\) small enough we have
\[\left|\int_{R\leqslant|x|\leqslant R+\beta}\chi_{R,\beta}\psi\operatorname{d }\!(\omega-\omega_{N})\right|\leqslant\varepsilon.\]
Now \(\psi\) is continuous (see Lemma 2.3) so by weak-\(*\) convergence of \((\omega_{N})\) to \(\omega\) we get that
\[\int\psi\chi_{R,\beta}\operatorname{d}\!(\omega-\omega_{N})\mathop{\longrightarrow }\limits_{N\to+\infty}0\]
and therefore
\[\limsup_{N\to+\infty}\left|\int_{\mathbb{R}^{2}}\psi\operatorname{d}\!( \omega-\omega_{N})\right|\leqslant 2\varepsilon.\]
for all \(\varepsilon>0\), so we get
\[\int_{\mathbb{R}^{2}}\psi\operatorname{d}\!(\omega-\omega_{N})\mathop{ \longrightarrow}\limits_{N\to+\infty}0.\]
Using (5.6) we get that \(\mathcal{F}_{b}(Q_{N},\omega)\mathop{\longrightarrow}\limits_{N\to+\infty}0\) if and only if we have
\[\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}g_{b}(q_{i},q_{j}) \longrightarrow\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}g_{b}(x,y)\omega(x) \omega(y)\operatorname{d}\!x\operatorname{d}\!y.\]
It follows directly from the Claim that \((2)\implies(1)\). Now if we have \((1)\), using Corollary 5.3 we have convergence of \((\omega_{N})\) to \(\omega\) in \(H^{s}\) for any \(s<-1\). Since \((\omega_{N})\) is tight, it follows by Prokhorov's theorem that \((\omega_{N})\) converges to \(\omega\) for the weak-\(*\) topology of probability measures. By the Claim we also have convergence of the interaction energy, and therefore \((1)\implies(2)\).
## 6. Proof of the main Proposition 6.1
Let us recall that for \(q\in\mathbb{R}^{2}\), \(Q_{N}=(q_{1},...,q_{N})\in(\mathbb{R}^{2})^{N}\) and \(0<\eta<1\), we have denoted
\[I(Q_{N})=\frac{1}{N}\sum_{i=1}^{N}|q_{i}|^{2},\]
\[\widetilde{\delta}_{q}^{(\eta)}=m_{b}(q,\eta)\frac{\mathrm{d}\delta_{q}^{(\eta )}}{\sqrt{b}}\]
and
\[m_{b}(q,\eta)=\left(\int_{\mathbb{R}^{2}}\frac{\mathrm{d}\delta_{q}^{(\eta)}}{ \sqrt{b}}\right)^{-1}\]
where \(\delta_{q}^{(\eta)}\) is the uniform probability measure on the circle \(\partial B(q,\eta)\).
In this Section, we prove the following result:
**Proposition 6.1**.: _Let \(Q_{N}=(q_{1},...,q_{N})\in(\mathbb{R}^{2})^{N}\) such that \(q_{i}\neq q_{j}\) if \(i\neq j\), \(u\in W^{1,\infty}(\mathbb{R}^{2},\mathbb{R}^{2})\) and \(\omega\in\mathcal{P}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) with compact support such that \(\nabla G_{b}[\omega]\) is continuous and bounded. There exists \(\beta\in(0,1)\) (independent of \(\omega\), \(u\) and \(Q_{N}\)) such that_
\[\bigg{|}\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus \Delta} u(x)\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N }\delta_{q_{i}}-\omega\right)^{\otimes 2}(x,y)\bigg{|}\] \[\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}|\mathcal{F}_{b}(Q_{N},\omega)|\] \[+C_{b}(1+\left\|u\right\|_{W^{1,\infty}})\left\|\omega\right\|_{L ^{1}((1+\left|x\right|)\,\mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N}))N^{-\beta}.\]
This proposition is the analogue of [63, Proposition 1.1] or [52, Proposition 4.1] and the proof will follow the same steps: regularise the Dirac masses, use the structure of the lake kernel to bound the regular part and control the remainders. Some terms are very similar to the ones obtained in the Coulomb case and we will use both the properties of our regularisation (see Subsection 2.4) and some estimates proved in [52] to bound them. As in the proof of Proposition 5.1, some terms are specific to the lake kernel and we will use results of Section 2 to bound them.
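Throughout the proof, we will repeatedly use the decomposition (2.10) of the lake kernel into its Coulomb part and a smoother remainder, which we recall here for the reader's convenience:
\[g_{b}(x,y)=\sqrt{b(x)b(y)}\,g(x-y)+S_{b}(x,y).\]
Every error term below is split according to this decomposition, exactly as in the proof of Proposition 5.1.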
Proof.: Let us fix \(0<\eta<\frac{1}{8}\) and write
\[\begin{split}&\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus \Delta}u(x)\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N} \delta_{q_{i}}-\omega\right)^{\otimes 2}(x,y)\\ =&\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x) \cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N}\widetilde {\delta}_{q_{i}}^{(\eta)}-\omega\right)^{\otimes 2}(x,y)\\ &+\bigg{(}-\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b}(x,y)\bigg{[}\,\mathrm{d}\omega(x)\, \mathrm{d}(\delta_{q_{i}}-\widetilde{\delta}_{q_{i}}^{(\eta)})(y)\\ &+\,\mathrm{d}(\delta_{q_{i}}-\widetilde{\delta}_{q_{i}}^{(\eta) })(x)\,\mathrm{d}\omega(y)\bigg{]}\bigg{)}\\ &+\left(\frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{( \mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta}u(x)\cdot\nabla_{x}g_{b}(x,y)\\ &\,[\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)- \,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{ \delta}_{q_{j}}^{(\eta)}(y)]\right)\\ =:& T_{1}+T_{2}+T_{3}.\end{split} \tag{6.1}\]
Let us bound the first term. As in Section 5 we write
\[H_{N,\eta}:=G_{b}\left[\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{( \eta)}-\omega\right].\]
We claim:
_Claim 6.2_.: \[T_{1}= -\int_{\mathbb{R}^{2}}u(x)\cdot\nabla H_{N,\eta}(x)\nabla\left( \frac{1}{b}\right)\cdot\nabla H_{N,\eta}(x)\,\mathrm{d}x\] \[+\int_{\mathbb{R}^{2}}\nabla\left(\frac{1}{2b}u\right):[H_{N,\eta },H_{N,\eta}]\]
Proof of the Claim.: This claim is similar to [63, Lemma 4.3] and we proceed in the same way: let us first fix \(\mu\) smooth with compact support and zero average, and write \(H_{\mu}=G_{b}[\mu]\). Then
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b }(x,y)\,\mathrm{d}\mu^{\otimes 2}(x,y)\] \[= -\int_{\mathbb{R}^{2}}u(x)\cdot\nabla H_{\mu}(x)\,\mathrm{div} \left(\frac{1}{b}\nabla H_{\mu}\right)(x)\,\mathrm{d}x\] \[= -\int_{\mathbb{R}^{2}}u(x)\cdot\nabla H_{\mu}(x)\nabla\left( \frac{1}{b}\right)\cdot\nabla H_{\mu}(x)\,\mathrm{d}x\] \[-\int_{\mathbb{R}^{2}}\frac{1}{b}u\cdot\nabla H_{\mu}\Delta H_{ \mu}.\]
For the second integral on the right-hand side we proceed as in [63] and use the stress-energy tensor defined by (1.11) (for more details, see [63, Equality (1.25)] and the associated references):
\[\int_{\mathbb{R}^{2}}\frac{1}{b}u\cdot\nabla H_{\mu}\Delta H_{\mu}=\int_{ \mathbb{R}^{2}}\frac{1}{2b}u\cdot\operatorname{div}([H_{\mu},H_{\mu}]).\]
Integrating over a ball of radius \(R\) and integrating by parts we get
\[\int_{B(0,R)}\frac{1}{2b}u\cdot\operatorname{div}([H_{\mu},H_{ \mu}])= \int_{\partial B(0,R)}\frac{1}{2b}[H_{\mu},H_{\mu}]u\cdot\,\mathrm{ d}\vec{S}\] \[-\int_{B(0,R)}\nabla\left(\frac{1}{2b}u\right):[H_{\mu},H_{\mu}].\]
Using Proposition 2.5 (applied to \(\omega=\mu\) and \(\psi=H_{\mu}\)) we have
\[\left|\int_{\partial B(0,R)}\frac{1}{2b}[H_{\mu},H_{\mu}]u\cdot\,\mathrm{d} \vec{S}\right|\leqslant\frac{C_{b,\mu}\left\|u\right\|_{L^{\infty}}}{R^{4}}R.\]
Letting \(R\longrightarrow\infty\) we obtain
\[\int_{\mathbb{R}^{2}}\frac{1}{2b}u\cdot\operatorname{div}([H_{\mu},H_{\mu}]) =-\int_{\mathbb{R}^{2}}\nabla\left(\frac{1}{2b}u\right):[H_{\mu},H_{\mu}].\]
Now if \((\mu_{k})\) is a sequence of smooth functions with compact support and average zero such that
\[\mu_{k}-\left(\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{(\eta)}- \omega\right)\underset{k\to+\infty}{\longrightarrow}0\text{ in }\dot{H}^{-1}\]
then by Lemma 2.12 we have
\[\nabla H_{\mu_{k}}\underset{k\to+\infty}{\longrightarrow}\nabla H_{N,\eta}\text{ in }L^{2}\]
and therefore since \(u\in W^{1,\infty}\) and since \([H_{\mu_{k}},H_{\mu_{k}}]\) (defined by Equation (1.11)) is quadratic in the derivatives of \(H_{\mu_{k}}\) we get that
\[-\int_{\mathbb{R}^{2}}u(x)\cdot\nabla H_{\mu_{k}}(x)\nabla\left(\frac{1}{b} \right)\cdot\nabla H_{\mu_{k}}(x)\,\mathrm{d}x+\int_{\mathbb{R}^{2}}\nabla \left(\frac{1}{2b}u\right):[H_{\mu_{k}},H_{\mu_{k}}]\]
converges to
\[-\int_{\mathbb{R}^{2}}u(x)\cdot\nabla H_{N,\eta}(x)\nabla\left(\frac{1}{b} \right)\cdot\nabla H_{N,\eta}(x)\,\mathrm{d}x+\int_{\mathbb{R}^{2}}\nabla \left(\frac{1}{2b}u\right):[H_{N,\eta},H_{N,\eta}]\]
as \(k\longrightarrow+\infty\). We are only left to justify that
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b }(x,y)\,\mathrm{d}\mu_{k}^{\otimes 2}(x,y)\] \[\underset{k\rightarrow+\infty}{\longrightarrow}\iint_{\mathbb{R }^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\left(\frac{ 1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{(\eta)}-\omega\right)^{\otimes 2}(x,y).\]
We define
\[m=\frac{1}{N}\sum_{i=1}^{N}\widetilde{\delta}_{q_{i}}^{(\eta)}.\]
Let us consider a sequence \((\nu_{k})\) of smooth probability densities with support included in a ball \(B(0,R)\) independent of \(k\) (containing \(\operatorname{supp}(m)\)), such that
\((\nu_{k}-m)\) converges to zero in \(\dot{H}^{-1}\) and for the weak-\(*\) topology of probability measures. If we set \(\mu_{k}=\nu_{k}-\omega\), then
\[\mu_{k}-(m-\omega)\underset{k\to+\infty}{\longrightarrow}0\text{ in }\dot{H}^{-1}.\]
Now we write
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b }(x,y)\,\mathrm{d}\mu_{k}^{\otimes 2}(x,y)\] \[-\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{ b}(x,y)\,\mathrm{d}(m-\omega)^{\otimes 2}(x,y)\] \[= \iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{ b}(x,y)\omega(x)\,\mathrm{d}x\,\mathrm{d}(m-\nu_{k})(y)\] \[+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{ b}(x,y)\omega(y)\,\mathrm{d}y\,\mathrm{d}(m-\nu_{k})(x)\] \[+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_ {b}(x,y)\,\mathrm{d}(\nu_{k}\otimes\nu_{k}-m\otimes m)(x,y)\] \[=: I_{1}+I_{2}+I_{3}.\]
We have
\[|I_{1}|=\bigg{|}\int_{\mathbb{R}^{2}}u\cdot\nabla G_{b}[m-\nu_{k} ]\omega\bigg{|} \leqslant\left\|u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{2 }}\left\|\nabla G_{b}[m-\nu_{k}]\right\|_{L^{2}}\] \[\leqslant C\left\|u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^ {2}}\left\|m-\nu_{k}\right\|_{\dot{H}^{-1}}\]
by Lemma 2.12 and therefore \(I_{1}\underset{k\to+\infty}{\longrightarrow}0\). Recall that \((m-\nu_{k})\) converges to zero for the weak-\(*\) topology of probability measures. Therefore
\[I_{2}=\int_{\mathbb{R}^{2}}u\cdot\nabla G_{b}[\omega]\,\mathrm{d}(m-\nu_{k}) \underset{k\to+\infty}{\longrightarrow}0\]
since \(u\) and \(\nabla G_{b}[\omega]\) are continuous and bounded by assumption. Now we want to show that \(I_{3}\) converges to zero. Remark that writing \(\mu_{k}=\nu_{k}-\omega\) and proving that \(I_{1}\) and \(I_{2}\) converge to zero allowed us to restrict ourselves to studying the convergence of
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}g_{b}(x,y)\, \mathrm{d}\nu_{k}(x)\,\mathrm{d}\nu_{k}(y)\]
for \(\nu_{k}\) nonnegative (which will be crucial for using Delort's argument below). We use the definition of \(g_{b}\) (2.10) to write
\[I_{3}= \iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla\sqrt{b }(x)\sqrt{b(y)}g(x-y)\,\mathrm{d}(\nu_{k}\otimes\nu_{k}-m\otimes m)(x,y)\] \[+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\sqrt{b(x)b(y)}u(x) \cdot\nabla g(x-y)\,\mathrm{d}(\nu_{k}\otimes\nu_{k}-m\otimes m)(x,y)\] \[+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}S_{ b}(x,y)\,\mathrm{d}(\nu_{k}\otimes\nu_{k}-m\otimes m)(x,y)\] \[=: I_{3,1}+I_{3,2}+I_{3,3}.\]
We write
\[\begin{split} I_{3,1}=&\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}u(x)\cdot\nabla\sqrt{b}(x)\sqrt{b(y)}g(x-y)\,\mathrm{d}(\nu_{k}-m) (x)\,\mathrm{d}\nu_{k}(y)\\ &+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla\sqrt{ b}(x)\sqrt{b(y)}g(x-y)\,\mathrm{d}m(x)\,\mathrm{d}(\nu_{k}-m)(y)\\ =&\int_{\mathbb{R}^{2}}(u\cdot\nabla\sqrt{b})(g*[ \sqrt{b}\nu_{k}])\,\mathrm{d}(\nu_{k}-m)\\ &+\int_{\mathbb{R}^{2}}(u\cdot\nabla\sqrt{b})(g*[\sqrt{b}(\nu_{k} -m)])\,\mathrm{d}m.\end{split} \tag{6.2}\]
Recall that \(B(0,R)\) is a ball containing the supports of \(m\) and \(\nu_{k}\). Consider a smooth probability density \(\rho\) with support in \(B(0,R)\). We define
\[\begin{split}\chi_{k}&=\left(\int_{\mathbb{R}^{2}} \sqrt{b}\nu_{k}\right)\rho,\\ \chi_{\infty}&=\left(\int_{\mathbb{R}^{2}}\sqrt{b} \,\mathrm{d}m\right)\rho\end{split}\]
and write
\[\begin{split}\nabla g*(\sqrt{b}(\nu_{k}-m))=&\nabla g *(\sqrt{b}\nu_{k}-\chi_{k}+\chi_{\infty}-\sqrt{b}m)\\ &+\left(\int_{\mathbb{R}^{2}}\sqrt{b}\nu_{k}-\int_{\mathbb{R}^{2} }\sqrt{b}\,\mathrm{d}m\right)\nabla g*\rho.\end{split} \tag{6.3}\]
Now
\[\begin{split}\left\|\nabla g*(\sqrt{b}\nu_{k}-\chi_{k}+\chi_{ \infty}-\sqrt{b}m)\right\|_{L^{2}}^{2}\\ &\qquad\qquad=C\int_{\mathbb{R}^{2}}\frac{1}{|\xi|^{2}}|\widehat {\sqrt{b}\nu_{k}}(\xi)-\widehat{\chi_{k}}(\xi)+\widehat{\chi_{\infty}}(\xi)- \widehat{\sqrt{b}m}(\xi)|^{2}\,\mathrm{d}\xi.\end{split}\]
Remark that \(\alpha_{k}=\sqrt{b}\nu_{k}-\chi_{k}+\chi_{\infty}-\sqrt{b}m\) is a Radon measure with support included in \(B(0,R)\) such that \(\widehat{\alpha_{k}}(0)=0\). Therefore
\[\begin{split}\left|\int_{\mathbb{R}^{2}}e^{-ix\cdot\xi}\,\mathrm{d}\alpha_{k}(x)\right|&=\left|\int_{\mathbb{R}^{2}}(e^{-ix\cdot\xi}-1)\,\mathrm{d}\alpha_{k}(x)\right|\\ &\leqslant 2\int_{\mathbb{R}^{2}}\left|\sin\left(\frac{x\cdot\xi}{2}\right)\right|\mathrm{d}|\alpha_{k}|(x)\\ &\leqslant CR|\xi|\int_{\mathbb{R}^{2}}\,\mathrm{d}|\alpha_{k}|(x)\\ &\leqslant C_{b,R}|\xi|,\end{split}\]
since \(|e^{-i\theta}-1|=2|\sin(\theta/2)|\) for all \(\theta\in\mathbb{R}\).
It follows that for \(\varepsilon>0\)
\[\int_{|\xi|\leqslant\varepsilon}\frac{1}{|\xi|^{2}}|\widehat{\alpha_{k}}(\xi )|^{2}\,\mathrm{d}\xi\leqslant C_{b,R}\varepsilon^{2}.\]
Moreover,
\[\begin{split}&\int_{|\xi|\geqslant\varepsilon}\frac{1}{|\xi|^{2}}| \widehat{\sqrt{b}\nu_{k}}(\xi)-\widehat{\chi_{k}}(\xi)+\widehat{\chi_{\infty}} (\xi)-\widehat{\sqrt{b}m}(\xi)|^{2}\,\mathrm{d}\xi\\ &\qquad\leqslant C_{\varepsilon}\left(\int_{\mathbb{R}^{2}}| \widehat{\chi_{k}}(\xi)-\widehat{\chi_{\infty}}(\xi)|^{2}\,\mathrm{d}\xi+ \int_{\mathbb{R}^{2}}\frac{1}{1+|\xi|^{2}}|\widehat{\sqrt{b}\nu_{k}}(\xi)- \widehat{\sqrt{b}m}(\xi)|^{2}\right)\end{split}\]
\[\leqslant C_{\varepsilon}\left(\left\|\chi_{k}-\chi_{\infty}\right\|_{L^{2}}^{2}+ \left\|\sqrt{b}\nu_{k}-\sqrt{b}m\right\|_{H^{-1}}^{2}\right)\mathop{\longrightarrow} \limits_{k\to+\infty}0\]
since \(b\) is smooth. Therefore
\[\limsup_{k\to+\infty}\left\|\nabla g\ast(\sqrt{b}\nu_{k}-\chi_{k}+\chi_{ \infty}-\sqrt{b}m)\right\|_{L^{2}}^{2}\leqslant C_{b,R}\varepsilon^{2} \tag{6.4}\]
for all \(\varepsilon>0\) so
\[\nabla g\ast(\sqrt{b}\nu_{k}-\chi_{k}+\chi_{\infty}-\sqrt{b}m)\mathop{ \longrightarrow}\limits_{k\to+\infty}^{L^{2}}0. \tag{6.5}\]
By Hardy-Littlewood-Sobolev inequality (see for example [2, Theorem 1.7]), \(\nabla g\ast\rho\in L^{p}\) for all \(2<p<+\infty\) so
\[\left(\int_{\mathbb{R}^{2}}\sqrt{b}\nu_{k}-\int_{\mathbb{R}^{2}}\sqrt{b}m \right)\nabla g\ast\rho\mathop{\longrightarrow}\limits_{k\to+\infty}0\text{ in }L^{2}(B(0,R)).\]
Combining the above limit with (6.3) and (6.5) we get that
\[\nabla g\ast(\sqrt{b}\nu_{k})\mathop{\longrightarrow}\limits_{k\to+\infty} \nabla g\ast(\sqrt{b}m)\text{ in }L^{2}(B(0,R)).\]
Now, by Young's convolution inequality, we have
\[\left\|g\ast[\sqrt{b}\nu_{k}]\right\|_{L^{2}(B(0,R))}\leqslant C_{b}\left\|g \right\|_{L^{2}(B(0,2R))}\left\|\nu_{k}\right\|_{L^{1}}\leqslant C_{b}\left\| g\right\|_{L^{2}(B(0,2R))} \tag{6.6}\]
so \((g\ast[\sqrt{b}\nu_{k}])\) is bounded in \(H^{1}(B(0,R))\) which is compactly embedded in \(L^{2}(B(0,R))\). Therefore by (6.5), up to extraction, \((g\ast[\sqrt{b}\nu_{k}])\) converges to \(g\ast[\sqrt{b}m]+C\) where \(C\) is a constant. If \(x_{0}\in B(0,R)\) is at a positive distance from the supports of \(\nu_{k}\) and \(m\) then \(g(x_{0}-\cdot)\) is continuous on the supports of \(\nu_{k}\) and \(m\) and therefore
\[g\ast[\sqrt{b}\nu_{k}](x_{0})\mathop{\longrightarrow}\limits_{k\to+\infty}g \ast[\sqrt{b}m](x_{0})\]
by the dominated convergence theorem. It follows that \(C=0\), thus
\[g\ast[\sqrt{b}\nu_{k}]\mathop{\longrightarrow}\limits_{k\to+\infty}g\ast[ \sqrt{b}m]\text{ in }H^{1}(B(0,R)).\]
We recall that since \(b\) is smooth,
\[\sqrt{b}\nu_{k}\mathop{\longrightarrow}\limits_{k\to+\infty}\sqrt{b}m\text{ in }H^{-1}.\]
Moreover, \(m\in H^{-1}\) has compact support and \(u\cdot\nabla\sqrt{b}\in W^{1,\infty}\), so it follows from the decomposition (6.2) that
\[I_{3,1}\mathop{\longrightarrow}\limits_{k\to+\infty}0.\]
Since \(\nabla g\) is antisymmetric we can write
\[I_{3,2}= \frac{1}{2}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}H_{u}(x,y) \operatorname{d}(\sqrt{b}\nu_{k})(x)\operatorname{d}(\sqrt{b}\nu_{k})(y)\] \[-\frac{1}{2}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}H_{u}(x,y) \operatorname{d}(\sqrt{b}m)(x)\operatorname{d}(\sqrt{b}m)(y)\]
where
\[H_{u}(x,y)=(u(x)-u(y))\cdot\nabla g(x-y).\]
We recall that \((\sqrt{b}\nu_{k})\) is a sequence of nonnegative functions with supports in \(B(0,R)\) converging to \(\sqrt{b}m\) in \(H^{-1}\) and for the weak-\(*\) topology of measures with finite mass. Moreover, since \(u\) is Lipschitz, \(H_{u}\) is continuous outside of the diagonal and bounded. Therefore we can use Delort's argument (see [20, Proposition 1.2.6] or [60, Inequalities (3.4) and (3.5)]) to prove that
\[I_{3,2}\underset{k\to+\infty}{\longrightarrow}0.\]
Finally we write
\[I_{3,3}= \int_{\mathbb{R}^{2}}u(x)\cdot\left(\int_{\mathbb{R}^{2}}\nabla_{x}S_{b}(x,y)\,\mathrm{d}\nu_{k}(y)\right)\,\mathrm{d}(\nu_{k}-m)(x)\] \[+\int_{\mathbb{R}^{2}}\left(\int_{\mathbb{R}^{2}}u(x)\cdot\nabla_{x}S_{b}(x,y)\,\mathrm{d}m(x)\right)\mathrm{d}(\nu_{k}-m)(y).\]
By Proposition 2.7, \(u(x)\cdot\nabla_{x}S_{b}(x,y)\) is locally Hölder with respect to both variables and therefore, since \(\nu_{k}\otimes\nu_{k}-m\otimes m\) has compact support, we have that \(I_{3,3}\underset{k\to+\infty}{\longrightarrow}0\).
It follows from Claim 6.2 that
\[|T_{1}|\leqslant C_{b}\,\|u\|_{W^{1,\infty}}\int_{\mathbb{R}^{2}}|\nabla H_{N,\eta}|^{2}.\]
Hence by Proposition 5.1 we get
\[\begin{split}|T_{1}|\leqslant& C_{b}\,\|u\|_{W^{1,\infty}}\left(|\mathcal{F}_{b}(Q_{N},\omega)|+\frac{g(\eta)}{N}+I(Q_{N})( \eta+N^{-1})\right.\\ &+\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{ \infty}}g(\eta)\eta\right)\!.\end{split} \tag{6.7}\]
Now let us split \(T_{2}\) into three parts:
\[\begin{split} T_{2}=&-\frac{1}{N}\sum_{i=1}^{N} \iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\big{(}u(x)\cdot\nabla_{x}g_{b}(x,y )\\ &+u(y)\cdot\nabla_{x}g_{b}(y,x)\big{)}\omega(x)\,\mathrm{d}x\, \mathrm{d}(\delta_{q_{i}}-\widetilde{\delta}_{q_{i}}^{(\eta)})(y)\\ =&-\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2} \times\mathbb{R}^{2}}\big{(}u(x)\cdot\nabla\sqrt{b}(x)\sqrt{b(y)}\\ &+u(y)\cdot\nabla\sqrt{b}(y)\sqrt{b(x)}\big{)}g(x-y)\omega(x)\, \mathrm{d}x\,\mathrm{d}(\delta_{q_{i}}-\widetilde{\delta}_{q_{i}}^{(\eta)})( y)\\ &-\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R} ^{2}}\sqrt{b(x)b(y)}(u(x)-u(y))\\ &\cdot\nabla g(x-y)\omega(x)\,\mathrm{d}x\,\mathrm{d}(\delta_{q _{i}}-\widetilde{\delta}_{q_{i}}^{(\eta)})(y)\\ &-\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R} ^{2}}\big{(}u(x)\cdot\nabla_{x}S_{b}(x,y)\\ &+u(y)\cdot\nabla_{x}S_{b}(y,x)\big{)}\omega(x)\,\mathrm{d}x\, \mathrm{d}(\delta_{q_{i}}-\widetilde{\delta}_{q_{i}}^{(\eta)})(y)\\ =:&-(T_{2,1}+T_{2,2}+T_{2,3}).\end{split}\]
We will bound the three terms separately:
_Claim 6.3_.: There exists \(0<s<1\) such that
\[|T_{2,1}|\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\left\|\omega\right\|_{L^ {1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N}))\eta^{s}.\]
Proof of the claim.: Since \(\widetilde{\delta}_{q_{i}}^{(\eta)}\) is a probability measure, we can write
\[T_{2,1}= \frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2 }}\nabla\sqrt{b}(x)\cdot u(x)\omega(x)(\sqrt{b(q_{i})}g(x-q_{i})\] \[-\sqrt{b(y)}g(x-y))\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta )}(y)\,\mathrm{d}x\] \[+\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^ {2}}\sqrt{b(x)}\omega(x)(\nabla\sqrt{b}(q_{i})\cdot u(q_{i})g(x-q_{i})\] \[-\nabla\sqrt{b}(y)\cdot u(y)g(x-y))\,\mathrm{d}\widetilde{\delta }_{q_{i}}^{(\eta)}(y)\,\mathrm{d}x\] \[= \frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^ {2}}(\nabla\sqrt{b}\cdot u\omega)(x)(\sqrt{b(q_{i})}-\sqrt{b(y)})g(x-y)\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\,\mathrm{d}x\] \[+\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^ {2}}(\nabla\sqrt{b}\cdot u\omega)(x)\sqrt{b(q_{i})}\] \[\times\left(g(x-q_{i})-g(x-y)\right)\mathrm{d}\widetilde{\delta}_ {q_{i}}^{(\eta)}(y)\,\mathrm{d}x\] \[+\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^ {2}}\sqrt{b(x)}\omega(x)(\nabla\sqrt{b}(q_{i})\cdot u(q_{i})\] \[-\nabla\sqrt{b}(y)\cdot u(y))g(x-y)\,\mathrm{d}\widetilde{\delta }_{q_{i}}^{(\eta)}(y)\,\mathrm{d}x\] \[+\frac{1}{N}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{R}^ {2}}\sqrt{b(x)}\omega(x)\nabla\sqrt{b}(q_{i})\cdot u(q_{i})\] \[\times\left(g(x-q_{i})-g(x-y)\right)\mathrm{d}\widetilde{\delta} _{q_{i}}^{(\eta)}(y)\,\mathrm{d}x.\]
For the first integral, we use the Lipschitz regularity of \(\sqrt{b}\) to bound
\[\left|\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}(\nabla\sqrt{b} \cdot u\omega)(x)(\sqrt{b(q_{i})}-\sqrt{b(y)})g(x-y)\,\mathrm{d}\widetilde{ \delta}_{q_{i}}^{(\eta)}(y)\,\mathrm{d}x\right|\] \[\leqslant C_{b}\eta\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}|( \nabla\sqrt{b}\cdot u\omega)(x)g(x-y)|\,\mathrm{d}\widetilde{\delta}_{q_{i}}^ {(\eta)}(y)\,\mathrm{d}x.\]
Moreover for \(y\in\partial B(q_{i},\eta)\), we have
\[\int_{\mathbb{R}^{2}} |(\nabla\sqrt{b}\cdot u\omega)(x)g(x-y)|\,\mathrm{d}x\] \[\leqslant \int_{B(y,1)}|(\nabla\sqrt{b}\cdot u\omega)(x)g(x-y)|\,\mathrm{d}x\] \[+\int_{B(y,1)^{c}}|(\nabla\sqrt{b}\cdot u\omega)(x)g(x-y)|\, \mathrm{d}x\] \[\leqslant \left\|\nabla\sqrt{b}\cdot u\omega\right\|_{L^{\infty}}\left\|g \right\|_{L^{1}(B(0,1))}+\int_{B(y,1)^{c}}|(\nabla\sqrt{b}\cdot u\omega)(x)|( |x|+|y|)\,\mathrm{d}x\] \[\leqslant C_{b}\left\|u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{ \infty}}(1+|q_{i}|)\]
since \(b\) satisfies Assumption 1.5. Therefore
\[\Big{|}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}(\nabla\sqrt{b}\cdot u \omega)(x)(\sqrt{b(q_{i})}-\sqrt{b(y)})g(x-y)\,\mathrm{d}\widetilde{\delta}_{q_ {i}}^{(\eta)}(y)\,\mathrm{d}x\Big{|}\\ \leqslant C_{b}\left\|u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{ \infty}}(1+|q_{i}|)\eta.\]
The third integral can be bounded in the same way:
\[\Big{|}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}} \sqrt{b(x)}\omega(x)(\nabla\sqrt{b}(q_{i})\cdot u(q_{i})-\nabla \sqrt{b}(y)\cdot u(y))g(x-y)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y) \,\mathrm{d}x\Big{|}\] \[\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\eta\iint_{\mathbb{R}^{2} \times\mathbb{R}^{2}}|\sqrt{b(x)}\omega(x)g(x-y)|\,\mathrm{d}\widetilde{\delta }_{q_{i}}^{(\eta)}(y)\,\mathrm{d}x\] \[\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\left\|\omega\right\|_{L^{ 1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}(1+|q_{i}|)\eta.\]
Summing over \(i\) we get that both the first and the third integrals can be bounded by
\[C_{b}\left\|u\right\|_{W^{1,\infty}}\left\|\omega\right\|_{L^{1}((1+|x|)\, \mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N}))\eta. \tag{6.8}\]
Now the second integral is equal to
\[\frac{1}{N}\sum_{i=1}^{N}\sqrt{b(q_{i})}\int_{\mathbb{R}^{2}}(g*(\nabla\sqrt{b }\cdot u\omega)(q_{i})-g*(\nabla\sqrt{b}\cdot u\omega)(y))\,\mathrm{d}\delta _{q_{i}}^{(\eta)}(y)\]
and thus by Morrey's inequality (see [13, Theorem 9.12]) its absolute value can be bounded by
\[C_{b,p}\eta^{1-\frac{2}{p}}\left\|\nabla g*(\nabla\sqrt{b}\cdot u\omega)\right\| _{L^{p}}\]
for any finite \(p>2\). The fourth integral can be bounded in the same way by
\[C_{b,p}\eta^{1-\frac{2}{p}}\left\|\nabla g*(\sqrt{b}\omega)\right\|_{L^{p}}.\]
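In both of these bounds, Morrey's inequality is used in the form
\[|f(q_{i})-f(y)|\leqslant C_{p}\,|q_{i}-y|^{1-\frac{2}{p}}\left\|\nabla f\right\|_{L^{p}(\mathbb{R}^{2})},\qquad p>2,\]
applied to \(f=g*(\nabla\sqrt{b}\cdot u\omega)\) and \(f=g*(\sqrt{b}\omega)\) respectively, with \(|q_{i}-y|=\eta\) for \(y\in\partial B(q_{i},\eta)\).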
Using Hardy-Littlewood-Sobolev inequality (see for example [2, Theorem 1.7]) we have
\[C_{b}\eta^{1-\frac{2}{p}}\left\|\nabla g*(\sqrt{b}\omega)\right\|_{L^{p}} \leqslant C_{b}\eta^{1-\frac{2}{p}}\left\|\omega\right\|_{L^{\frac{2p}{p+2}}}. \tag{6.9}\]
Combining (6.8) and (6.9) we get that
\[|T_{2,1}|\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\left\|\omega\right\|_ {L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N}))\eta^{s}\]
for some \(0<s<1\).
Now we bound \(T_{2,2}\):
_Claim 6.4_.: \[|T_{2,2}|\leqslant C_{b}\left\|\nabla u\right\|_{L^{\infty}}\left\|\omega \right\|_{L^{1}\cap L^{\infty}}\eta.\]
Proof of the claim.: Let us recall that
\[\widetilde{\delta}_{q}^{(\eta)}=m_{b}(q,\eta)\frac{\mathrm{d}\delta_{q}^{( \eta)}}{\sqrt{b}}\]
and thus
\[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\sqrt{b(x)b(y)}(u(x)-u(y))\cdot \nabla g(x-y)\omega(x)\,\mathrm{d}x\,\mathrm{d}(\delta_{q_{i}}-\widetilde{ \delta}_{q_{i}}^{(\eta)})(y)\]
\[= m_{b}(q_{i},\eta)\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\sqrt{b(x)} (u(x)-u(y))\cdot\nabla g(x-y)\omega(x)\,\mathrm{d}x\,\mathrm{d}(\delta_{q_{i}}- \delta_{q_{i}}^{(\eta)})(y)\] \[+\left(1-\frac{m_{b}(q_{i},\eta)}{\sqrt{b(q_{i})}}\right)\int_{ \mathbb{R}^{2}}\sqrt{b(x)b(q_{i})}(u(x)-u(q_{i}))\cdot\nabla g(x-q_{i})\omega( x)\,\mathrm{d}x.\]
The first integral is exactly the term defined in [52, Equation (4.10)] with \(s=0\) and \(m=0\) (remark that we can choose \(m=0\) since no extension procedure is needed for \(s=0\) and \(d=2\); for more details we refer to the introduction of [52, Section 4]). It can be bounded by the right-hand side of [52, Equation (4.24)]:
\[C\left\|\nabla u\right\|_{L^{\infty}}\left\||\nabla|^{-1}(\sqrt{b}\omega)\right\|_{L^{\infty}}\eta \leqslant C_{b}\left\|\nabla u\right\|_{L^{\infty}}\left\||\nabla|^{-1}\omega\right\|_{L^{\infty}}\eta\] \[\leqslant C_{b}\left\|\nabla u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{1}\cap L^{\infty}}\eta.\]
A proof of the last inequality can be found for example in [35, Lemma 1]. Now by (2.16) and the Lipschitz regularity of \(u\) we can bound the second line by
\[C_{b}\left\|\nabla u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{1}}\eta.\]
Combining the two bounds above we get
\[\left|T_{2,2}\right|\leqslant C_{b}\left\|\nabla u\right\|_{L^{\infty}}\left\| \omega\right\|_{L^{1}\cap L^{\infty}}\eta.\]
_Claim 6.5_.: \[\left|T_{2,3}\right|\leqslant C_{b,s}\left\|u\right\|_{W^{1,\infty}}\left\| \omega\right\|_{L^{1}((1+\left|x\right|)\,\mathrm{d}x)}(1+I(Q_{N}))\eta^{s}.\]
Proof of the claim.: We write \(T_{2,3}\) as
\[T_{2,3}= \frac{1}{N}\sum_{i=1}^{N}\bigg{(}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}\omega(x)u(x)\cdot(\nabla_{x}S_{b}(x,q_{i})-\nabla_{x}S_{b}(x,y) )\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\,\mathrm{d}x\] \[+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\omega(x)(u(q_{i})-u(y ))\cdot\nabla_{x}S_{b}(q_{i},x)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta) }(y)\,\mathrm{d}x\] \[+\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\omega(x)u(y)\cdot( \nabla_{x}S_{b}(q_{i},x)-\nabla_{x}S_{b}(y,x))\,\mathrm{d}\widetilde{\delta}_{ q_{i}}^{(\eta)}(y)\,\mathrm{d}x\bigg{)}.\]
Using Claims (1) and (2) of Lemma 2.7, we get that for some \(0<s<1\),
\[\left|T_{2,3}\right|\leqslant \frac{1}{N}\sum_{i=1}^{N}\bigg{(}C_{b,s}\left\|u\right\|_{L^{ \infty}}\left\|\omega\right\|_{L^{1}}(1+\left|q_{i}\right|)\eta^{s}\] \[+\left\|\nabla u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{1}( (1+\left|x\right|)\,\mathrm{d}x)}\eta\] \[+\left\|u\right\|_{L^{\infty}}\left\|\omega\right\|_{L^{1}((1+ \left|x\right|)\,\mathrm{d}x)}\eta^{s}\bigg{)}\] \[\leqslant C_{b,s}\left\|u\right\|_{W^{1,\infty}}\left\|\omega\right\|_{L^{1} ((1+\left|x\right|)\,\mathrm{d}x)}(1+I(Q_{N}))\eta^{s}.\]
Combining Claims 6.3, 6.4 and 6.5 we get that
\[\left|T_{2}\right|\leqslant C_{b,s}\left\|u\right\|_{W^{1,\infty}}\left\| \omega\right\|_{L^{1}((1+\left|x\right|)\,\mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N }))\eta^{s}. \tag{6.10}\]
Now let us write \(T_{3}\) as
\[T_{3}= \frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{(\mathbb{R}^{2} \times\mathbb{R}^{2})\setminus\Delta}u(x)\cdot\nabla_{x}g_{b}(x,y)\] \[(\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)-\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta }_{q_{j}}^{(\eta)}(y))\] \[= \frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{(\mathbb{R} ^{2}\times\mathbb{R}^{2})\setminus\Delta}u(x)\cdot\nabla\sqrt{b}(x)\sqrt{b(y)} g(x-y)\] \[(\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)-\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta }_{q_{j}}^{(\eta)}(y))\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{(\mathbb{ R}^{2}\times\mathbb{R}^{2})\setminus\Delta}\sqrt{b(x)b(y)}u(x)\cdot\nabla g (x-y)\] \[(\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)-\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta }_{q_{j}}^{(\eta)}(y))\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{(\mathbb{ R}^{2}\times\mathbb{R}^{2})\setminus\Delta}u(x)\cdot\nabla_{x}S_{b}(x,y)\] \[(\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)-\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta }_{q_{j}}^{(\eta)}(y))\] \[=: T_{3,1}+T_{3,2}+T_{3,3}.\]
We bound the first term:
_Claim 6.6_.: \[\left|T_{3,1}\right|\leqslant C_{b}\left\|u\right\|_{L^{\infty}} \left|\mathcal{F}_{b}(Q_{N},\omega)\right|+C_{b}\left\|u\right\|_{W^{1,\infty} }\bigg{(}\frac{g(\eta)}{N}\\ +I(Q_{N})(\eta+N^{-1})+\left\|\omega\right\|_{L^{1}((1+\left|x \right|)\,\mathrm{d}x)\cap L^{\infty}}g(\eta)\eta\bigg{)}.\]
Proof of the claim.: We write
\[T_{3,1}= -\frac{1}{N^{2}}\sum_{i=1}^{N}\iint_{\mathbb{R}^{2}\times\mathbb{ R}^{2}}u(x)\cdot\nabla\sqrt{b}(x)\sqrt{b(y)}g(x-y)\,\mathrm{d}\widetilde{ \delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(y)\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\iint_{( \mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta}u(x)\cdot\nabla\sqrt{b}(x) \sqrt{b(y)}g(x-y)\] \[(\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)-\, \mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta }_{q_{j}}^{(\eta)}(y))\] \[=: T_{3,1,1}+T_{3,1,2}.\]
By the definition of \(\widetilde{\delta}_{q_{i}}^{(\eta)}\) (2.13) we have
\[T_{3,1,1}= -\frac{1}{N^{2}}\sum_{i=1}^{N}m_{b}(q_{i},\eta)^{2}\iint_{\mathbb{ R}^{2}\times\mathbb{R}^{2}}u(x)\cdot\frac{\nabla\sqrt{b}(x)}{\sqrt{b(x)}}g(x-y)\, \mathrm{d}\delta_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\delta_{q_{i}}^{(\eta)}(y)\] \[= -\frac{1}{N^{2}}\sum_{i=1}^{N}m_{b}(q_{i},\eta)^{2}\int_{\mathbb{ R}^{2}}\frac{u(x)\cdot\nabla\sqrt{b}(x)}{\sqrt{b}(x)}g^{(\eta)}(x-q_{i})\, \mathrm{d}\delta_{q_{i}}^{(\eta)}(x)\]
by Claim (2.15). It follows by Assumption 1.5 that
\[\left|T_{3,1,1}\right|\leqslant\frac{C_{b}\left\|u\right\|_{L^{\infty}}g(\eta) }{N}. \tag{6.11}\]
Now we write
\[T_{3,1,2}= \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\left((u\cdot \nabla\sqrt{b})(q_{i})\sqrt{b(q_{j})}g(q_{i}-q_{j})\right.\] \[-\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}(u\cdot\nabla\sqrt{b}) (x)\sqrt{b(y)}g(x-y)\operatorname{d}\!\widetilde{\delta}_{q_{i}}^{(\eta)}(x) \operatorname{d}\!\widetilde{\delta}_{q_{j}}^{(\eta)}(y)\right)\] \[= \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\left((u\cdot \nabla\sqrt{b})(q_{i})\sqrt{b(q_{j})}g(q_{i}-q_{j})\right.\] \[-m_{b}(q_{j},\eta)\int_{\mathbb{R}^{2}}(u\cdot\nabla\sqrt{b})(x) g^{(\eta)}(x-q_{j})\operatorname{d}\!\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\right)\]
by the definition of \(\widetilde{\delta}_{q_{i}}^{(\eta)}\) (2.13) and Claim (2.15). Now,
\[T_{3,1,2}= \frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(u\cdot\nabla \sqrt{b})(q_{i})\sqrt{b(q_{j})}(g(q_{i}-q_{j})-g^{(\eta)}(q_{i}-q_{j}))\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(u\cdot \nabla\sqrt{b})(q_{i})\sqrt{b(q_{j})}\] \[\times\int_{\mathbb{R}^{2}}(g^{(\eta)}(q_{i}-q_{j})-g^{(\eta)}(x- q_{j}))\operatorname{d}\!\delta_{q_{i}}^{(\eta)}(x)\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(u\cdot \nabla\sqrt{b})(q_{i})\sqrt{b(q_{j})}\int_{\mathbb{R}^{2}}g^{(\eta)}(x-q_{j}) \operatorname{d}\!(\delta_{q_{i}}^{(\eta)}-\widetilde{\delta}_{q_{i}}^{(\eta )})(x)\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}\sqrt{b(q_{j})}\] \[\times\int_{\mathbb{R}^{2}}((u\cdot\nabla\sqrt{b})(q_{i})-(u \cdot\nabla\sqrt{b})(x))g^{(\eta)}(x-q_{j})\operatorname{d}\!\widetilde{ \delta}_{q_{i}}^{(\eta)}(x)\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i\neq j\leqslant N}(\sqrt{b(q_{ j})}-m_{b}(q_{j},\eta))\] \[\times\int_{\mathbb{R}^{2}}(u\cdot\nabla\sqrt{b})(x)g^{(\eta)}(x -q_{j})\operatorname{d}\!\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\] \[= :S_{1}+S_{2}+S_{3}+S_{4}+S_{5}.\]
Since \(g-g^{(\eta)}\) is nonnegative we can bound
\[|S_{1}|\leqslant C_{b}\left\|u\right\|_{L^{\infty}}\frac{1}{N^{2}}\sum_{1 \leqslant i\neq j\leqslant N}(g(q_{i}-q_{j})-g^{(\eta)}(q_{i}-q_{j}))\] \[\leqslant C_{b}\left\|u\right\|_{L^{\infty}}\left|\mathcal{F}_{b}(Q_{N}, \omega)\right|+C_{b}\left\|u\right\|_{L^{\infty}}\left(\frac{g(\eta)}{N}\right.\] \[+I(Q_{N})(\eta+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x|) \operatorname{d}\!x)\cap L^{\infty}}g(\eta)\eta\right) \tag{6.12}\]
by Proposition 5.1. Now remark that if \(|q_{i}-q_{j}|\geqslant 2\eta\) and \(x\in\partial B(q_{i},\eta)\),
\[|q_{j}-x|\geqslant|q_{i}-q_{j}|-|q_{i}-x|\geqslant 2\eta-\eta\geqslant\eta\]
and it follows by Claim (2.15) that
\[\int_{\mathbb{R}^{2}}g^{(\eta)}(x-q_{j})\,\mathrm{d}\delta^{(\eta)}_{ q_{i}}(x) =\int_{\mathbb{R}^{2}}g(x-q_{j})\,\mathrm{d}\delta^{(\eta)}_{q_{i}}(x)\] \[=g^{(\eta)}(q_{i}-q_{j}).\]
Hence we can write
\[S_{2}=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}1\leqslant i\neq j\leqslant N\\ |q_{i}-q_{j}|\leqslant 2\eta\end{subarray}}(u\cdot\nabla\sqrt{b})(q_{i})\sqrt{b(q_{j})} \int_{\mathbb{R}^{2}}(g^{(\eta)}(q_{i}-q_{j})-g^{(\eta)}(x-q_{j}))\,\mathrm{d }\delta^{(\eta)}_{q_{i}}(x).\]
Notice that if \(|q_{i}-q_{j}|\leqslant 2\eta\) and \(x\in\partial B(q_{i},\eta)\), then
\[|g^{(\eta)}(q_{i}-q_{j})-g^{(\eta)}(x-q_{j})|\leqslant\left\|\nabla g^{(\eta )}\right\|_{L^{\infty}}\eta=C\eta^{-1}\eta\leqslant C.\]
Therefore,
\[\begin{split}|S_{2}|\leqslant&\frac{C_{b}\left\|u \right\|_{L^{\infty}}}{N^{2}}|\{(q_{i},q_{j});|q_{i}-q_{j}|\leqslant 2\eta\}| \\ \leqslant& C_{b}\left\|u\right\|_{L^{\infty}}| \mathcal{F}_{b}(Q_{N},\omega)|+C_{b}\left\|u\right\|_{L^{\infty}}\left(\frac{ g(2\eta)}{N}\right.\\ &+I(Q_{N})(2\eta+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x|)\, \mathrm{d}x)\cap L^{\infty}}g(2\eta)2\eta\right)\end{split} \tag{6.13}\]
by Corollary 5.2 applied to \(\varepsilon=2\eta\).
By definition of \(\widetilde{\delta}^{(\eta)}_{q_{i}}\) (2.13) we can write
\[\begin{split} S_{3}=&\frac{1}{N^{2}}\sum_{1\leqslant i \neq j\leqslant N}(u\cdot\nabla\sqrt{b})(q_{i})\sqrt{b(q_{j})}\\ &\times\int_{\mathbb{R}^{2}}g^{(\eta)}(x-q_{j})\left(1-\frac{m_{ b}(q_{i},\eta)}{\sqrt{b(x)}}\right)\,\mathrm{d}\delta^{(\eta)}_{q_{i}}(x)\end{split}\]
and therefore
\[|S_{3}|\leqslant\frac{C_{b}\left\|u\right\|_{L^{\infty}}g(\eta)}{N}\sum_{i=1 }^{N}\int_{\mathbb{R}^{2}}\left|\frac{m_{b}(q_{i},\eta)}{\sqrt{b(x)}}-1\right| \,\mathrm{d}\delta^{(\eta)}_{q_{i}}(x).\]
For \(x\in\partial B(q_{i},\eta)\), we have
\[\begin{split}\left|\frac{m_{b}(q_{i},\eta)}{\sqrt{b(x)}}-1\right| &\leqslant C_{b}\left|m_{b}(q_{i},\eta)^{-1}-\frac{1}{\sqrt{b(x)} }\right|\\ &\leqslant C_{b}\left|\int_{\mathbb{R}^{2}}\frac{\mathrm{d} \delta^{(\eta)}_{q_{i}}(y)}{\sqrt{b(y)}}-\frac{1}{\sqrt{b(x)}}\right|\\ &\leqslant C_{b}\eta\end{split}\]
since \(b\) is Lipschitz by Assumption 1.5. It follows that
\[|S_{3}|\leqslant C_{b}\left\|u\right\|_{L^{\infty}}g(\eta)\eta. \tag{6.14}\]
Now by the regularity of \(u\) and \(b\) and by Proposition 2.13, we have
\[|S_{4}|+|S_{5}|\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\eta g(\eta).\]
Combining the above inequality with (6.11), (6.12), (6.13) and (6.14) we obtain Claim 6.6.
For the third term we have the following bound:
_Claim 6.7_.: For \(s\) small enough, we have
\[|T_{3,3}|\leqslant C_{b,s}\left\|u\right\|_{W^{1,\infty}}(1+I(Q_{N}))\eta^{s}.\]
Proof.: We write
\[T_{3,3}= \frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}u(q_{i})\cdot \nabla_{x}S_{b}(q_{i},q_{j})\] \[-\frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{\mathbb{R }^{2}\times\mathbb{R}^{2}}u(x)\cdot\nabla_{x}S_{b}(x,y)\,\mathrm{d}\widetilde{ \delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\widetilde{\delta}_{q_{j}}^{(\eta)}(y)\] \[= \frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{\mathbb{R }^{2}\times\mathbb{R}^{2}}\bigg{(}u(q_{i})\cdot\nabla_{x}S_{b}(q_{i},q_{j})-u (q_{i})\cdot\nabla_{x}S_{b}(q_{i},y)\] \[+u(q_{i})\cdot\nabla_{x}S_{b}(q_{i},y)-u(q_{i})\cdot\nabla_{x}S_{ b}(x,y)\] \[+u(q_{i})\cdot\nabla_{x}S_{b}(x,y)-u(x)\cdot\nabla_{x}S_{b}(x,y) \bigg{)}\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d} \widetilde{\delta}_{q_{j}}^{(\eta)}(y).\]
Therefore,
\[|T_{3,3}|\leqslant \frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\left\|u\right\|_{L^{ \infty}}|\nabla_{x}S_{b}(q_{i},\cdot)|_{\mathcal{C}^{0,s}(B(q_{j},1))}\eta^{s}\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\left\|u\right\| _{L^{\infty}}\eta^{s}\int_{\mathbb{R}^{2}}|\nabla_{x}S_{b}(\cdot,y)|_{ \mathcal{C}^{0,s}}\,\mathrm{d}\widetilde{\delta}_{q_{j}}^{(\eta)}(y)\] \[+\frac{1}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{\mathbb{R }^{2}\times\mathbb{R}^{2}}\left\|u\right\|_{W^{1,\infty}}\eta|\nabla_{x}S_{b} (x,y)|\,\mathrm{d}\widetilde{\delta}_{q_{i}}^{(\eta)}(x)\,\mathrm{d} \widetilde{\delta}_{q_{j}}^{(\eta)}(y).\]
By Proposition 2.7, for \(s\) small enough we have
\[|T_{3,3}|\leqslant \frac{C_{b,s}}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\left\|u \right\|_{L^{\infty}}(1+|q_{j}|)\eta^{s}\] \[+\frac{C_{b,s}}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\left\|u \right\|_{L^{\infty}}\eta^{s}(1+|q_{j}|)\] \[+\frac{C_{b}}{N^{2}}\sum_{1\leqslant i,j\leqslant N}\left\|u \right\|_{W^{1,\infty}}\eta(1+|q_{j}|)\] \[\leqslant C_{b,s}\left\|u\right\|_{W^{1,\infty}}(1+I(Q_{N}))\eta^{s}.\]
It only remains to bound \(T_{3,2}\):
_Claim 6.8_.: For \(\varepsilon>2\eta\) small enough, we have
\[|T_{3,2}|\leqslant \frac{C_{b}}{N}\left\|\nabla u\right\|_{L^{\infty}}+\frac{C_{b} \eta\left\|\nabla u\right\|_{L^{\infty}}}{\varepsilon}+C_{b}\left\|\nabla u \right\|_{L^{\infty}}\bigg{(}|\mathcal{F}_{b}(Q_{N},\omega)|+\frac{g( \varepsilon)}{N}+\eta\] \[+I(Q_{N})(\varepsilon+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x| )\,\mathrm{d}x)\cap L^{\infty}}g(\varepsilon)\varepsilon\bigg{)}.\]
Proof.: Let us denote
\[k_{u}(x,y)=(u(x)-u(y))\cdot\nabla g(x-y)\]
and remark that
\[\left|k_{u}(x,y)\right|\leqslant C\left\|\nabla u\right\|_{L^{\infty}}. \tag{6.15}\]
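For the reader's convenience, (6.15) follows from the Lipschitz bound \(|u(x)-u(y)|\leqslant\left\|\nabla u\right\|_{L^{\infty}}|x-y|\) together with the decay \(|\nabla g(x-y)|\leqslant C|x-y|^{-1}\) of the two-dimensional Coulomb kernel:
\[|k_{u}(x,y)|\leqslant\left\|\nabla u\right\|_{L^{\infty}}|x-y|\,|\nabla g(x-y)|\leqslant C\left\|\nabla u\right\|_{L^{\infty}}.\]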
Since \(\nabla g\) is antisymmetric we can write \(T_{3,2}\) as
\[T_{3,2}=\\ \frac{1}{2N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{(\mathbb{R} ^{2}\times\mathbb{R}^{2})\backslash\Delta}\sqrt{b(x)b(y)}k_{u}(x,y)\,\mathrm{d }(\delta_{q_{i}}+\widetilde{\delta}_{q_{i}}^{(\eta)})(x)\,\mathrm{d}(\delta_{q _{j}}-\widetilde{\delta}_{q_{j}}^{(\eta)})(y).\]
Using the definition of \(\widetilde{\delta}_{q_{i}}^{(\eta)}\) (2.13) we can write
\[\mathrm{d}(\delta_{q_{i}}+\widetilde{\delta}_{q_{i}}^{(\eta)})(x)\,\mathrm{d}(\delta_{q_{j}}-\widetilde{\delta}_{q_{j}}^{(\eta)})(y)\] \[= \,\mathrm{d}\bigg{(}\delta_{q_{i}}+\frac{m_{b}(q_{i},\eta)}{\sqrt{b}}\delta_{q_{i}}^{(\eta)}\bigg{)}(x)\,\mathrm{d}\bigg{(}\delta_{q_{j}}-\frac{m_{b}(q_{j},\eta)}{\sqrt{b}}\delta_{q_{j}}^{(\eta)}\bigg{)}(y)\] \[= \frac{m_{b}(q_{i},\eta)m_{b}(q_{j},\eta)}{\sqrt{b(x)b(y)}}\,\mathrm{d}(\delta_{q_{i}}+\delta_{q_{i}}^{(\eta)})(x)\,\mathrm{d}(\delta_{q_{j}}-\delta_{q_{j}}^{(\eta)})(y)\] \[+\left(1-\frac{m_{b}(q_{i},\eta)m_{b}(q_{j},\eta)}{\sqrt{b(q_{i})b(q_{j})}}\right)\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)\] \[+\frac{m_{b}(q_{i},\eta)}{\sqrt{b(x)}}\left(1-\frac{m_{b}(q_{j},\eta)}{\sqrt{b(q_{j})}}\right)\,\mathrm{d}\delta_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\delta_{q_{j}}(y)\] \[+\frac{m_{b}(q_{j},\eta)}{\sqrt{b(y)}}\left(\frac{m_{b}(q_{i},\eta)}{\sqrt{b(q_{i})}}-1\right)\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}^{(\eta)}(y).\]
We will use some inequalities proved in [52] and Corollary 5.2 to control the first line, but let us begin by controlling the last three remainders. Using the bound (6.15) and (2.16) we can bound
\[T_{3,2,2}:= \frac{1}{2N^{2}}\sum_{1\leqslant i,j\leqslant N}\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\backslash\Delta}\sqrt{b(x)b(y)}k_{u}(x,y)\] \[\bigg{(}\left(1-\frac{m_{b}(q_{i},\eta)m_{b}(q_{j},\eta)}{\sqrt{b(q_{i})b(q_{j})}}\right)\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}(y)\] \[+\frac{m_{b}(q_{i},\eta)}{\sqrt{b(x)}}\left(1-\frac{m_{b}(q_{j},\eta)}{\sqrt{b(q_{j})}}\right)\,\mathrm{d}\delta_{q_{i}}^{(\eta)}(x)\,\mathrm{d}\delta_{q_{j}}(y)\] \[+\frac{m_{b}(q_{j},\eta)}{\sqrt{b(y)}}\left(\frac{m_{b}(q_{i},\eta)}{\sqrt{b(q_{i})}}-1\right)\,\mathrm{d}\delta_{q_{i}}(x)\,\mathrm{d}\delta_{q_{j}}^{(\eta)}(y)\bigg{)}\]
by
\[\left|T_{3,2,2}\right|\leqslant C_{b}\left\|\nabla u\right\|_{L^{\infty}}\eta. \tag{6.16}\]
It remains to bound
\[T_{3,2,1}:= \frac{1}{2N^{2}}\sum_{1\leqslant i,j\leqslant N}m_{b}(q_{i},\eta)m_{b}(q_{j},\eta)\] \[\times\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\backslash\Delta} k_{u}(x,y)\,\mathrm{d}(\delta_{q_{i}}+\delta_{q_{i}}^{(\eta)})(x)\,\mathrm{d}(\delta_{q_{j}}-\delta_{q_{j}}^{(\eta)})(y).\]
Using decomposition (4.26) and inequalities (4.27), (4.28) and (4.31) of [52] with \(s=0\) and \(m=0\) (remark that we can choose \(m=0\) since no extension procedure is needed for \(s=0\) and \(d=2\), for more details we refer to the introduction of [52, Section 4]), we get that for any small parameter \(\varepsilon>2\eta\),
\[|T_{3,2,1}|\leqslant\frac{C_{b}}{N}\left\|\nabla u\right\|_{L^{\infty}}+\frac{ C_{b}\left\|\nabla u\right\|_{L^{\infty}}}{N^{2}}|\{(q_{i},q_{j});|q_{i}-q_{j}| \leqslant\varepsilon\}|+\frac{C\eta\left\|\nabla u\right\|_{L^{\infty}}}{ \varepsilon}.\]
Using Corollary 5.2, we get that
\[\begin{split}|T_{3,2,1}|\leqslant&\frac{C_{b}}{N}\left\|\nabla u\right\|_{L^{\infty}}+\frac{C_{b}\eta\left\|\nabla u\right\|_{L^{\infty}}}{\varepsilon}+C_{b}\left\|\nabla u\right\|_{L^{\infty}}\bigg{(}|\mathcal{F}_{b}(Q_{N},\omega)|\\ &+\frac{g(\varepsilon)}{N}+I(Q_{N})(\varepsilon+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}g(\varepsilon)\varepsilon\bigg{)}.\end{split} \tag{6.17}\]
And we get Claim 6.8 by combining (6.17) with (6.16).
We finish the proof of Proposition 6.1 using Decomposition (6.1), Inequalities (6.7), (6.10) and Claims 6.6, 6.7 and 6.8. That gives
\[\bigg{|}\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\backslash\Delta} u(x)\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}-\omega\right)^{\otimes 2}(x,y)\bigg{|}\] \[\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\bigg{(}|\mathcal{F}_{b}(Q_{N},\omega)|+\frac{g(\eta)}{N}+I(Q_{N})(\eta+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}g(\eta)\eta\bigg{)}\] \[+C_{b,s}\left\|u\right\|_{W^{1,\infty}}\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N}))\eta^{s}\] \[+C_{b}\left\|u\right\|_{L^{\infty}}|\mathcal{F}_{b}(Q_{N},\omega)|+C_{b}\left\|u\right\|_{W^{1,\infty}}\bigg{(}\frac{g(\eta)}{N}+I(Q_{N})(\eta+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}g(\eta)\eta\bigg{)}\] \[+C_{b,s}\left\|u\right\|_{W^{1,\infty}}(1+I(Q_{N}))\eta^{s}\] \[+\frac{C_{b}}{N}\left\|\nabla u\right\|_{L^{\infty}}+\frac{C_{b}\eta\left\|\nabla u\right\|_{L^{\infty}}}{\varepsilon}+C_{b}\left\|\nabla u\right\|_{L^{\infty}}\bigg{(}|\mathcal{F}_{b}(Q_{N},\omega)|+\frac{g(\varepsilon)}{N}+\eta+I(Q_{N})(\varepsilon+N^{-1})+\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}g(\varepsilon)\varepsilon\bigg{)}.\]
Choosing \(\varepsilon=N^{-1}\) and \(\eta=N^{-2}\), and since \(\left\|\omega\right\|_{L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty}}\) is bounded from below (because \(\omega\) is a probability density), we get that
\[\bigg{|}\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\backslash\Delta}u(x)\cdot \nabla_{x}g_{b}(x,y)\,\mathrm{d}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{q_{i}}- \omega\right)^{\otimes 2}(x,y)\bigg{|}\]
\[\leqslant C_{b}\left\|u\right\|_{W^{1,\infty}}\left|\mathcal{F}_{b}(Q_{N}, \omega)\right|\] \[+C_{b}(1+\left\|u\right\|_{W^{1,\infty}})\left\|\omega\right\|_{L^ {1}((1+\left|x\right|)\,\mathrm{d}x)\cap L^{\infty}}(1+I(Q_{N}))N^{-\beta}\]
for some \(0<\beta<1\).
## 7. Mean-field limit
In this section we prove the mean-field limit Theorem 1.8. For this purpose let us first prove the following estimates:
**Theorem 7.1**.: _If \(\omega\) is a weak solution of (1.1) with initial datum \(\omega_{0}\) (in the sense of Definition 1.1) that satisfies Assumption 1.6 and if \(I_{N}(0)\) is bounded, there exists a constant_
\[A:=A\left(b,T,\|u\|_{L^{\infty}([0,T],W^{1,\infty})}\,,\|\omega\|_{L^{\infty}( [0,T],L^{1}((1+\left|x\right|)\,\mathrm{d}x)\cap L^{\infty})}\,,\sup_{N}I_{N}( 0)\right)\]
_such that for every \(t\in[0,T]\),_
\[\left|\mathcal{F}_{b,N}(t)\right|\leqslant A(\left|\mathcal{F}_{b,N}(0) \right|+(1+\left|E_{N}(0)\right|)(N^{-\beta}+\left|\alpha_{N}-\alpha\right|)). \tag{7.1}\]
_If \(\overline{\omega}\) is a weak solution of (1.3) with initial datum \(\omega_{0}\) (in the sense of Definition 1.2) that satisfies Assumption 1.6 and if \(\overline{I_{N}}(0)\) is bounded, there exists a constant_
\[B:= B\bigg{(}b,T,\left\|\overline{\omega}\right\|_{L^{\infty}([0,T],L^{1}((1+ \left|x\right|)\,\mathrm{d}x)\cap L^{\infty})},\] \[\left\|\nabla g\ast\overline{\omega}\right\|_{L^{\infty}([0,T],W^ {1,\infty})},\sup_{N}\overline{I_{N}}(0)\bigg{)}\]
_such that for every \(t\in[0,T]\),_
\[\left|\overline{\mathcal{F}}_{b,N}(t)\right|\leqslant B(\left|\overline{ \mathcal{F}}_{b,N}(0)\right|+(1+\left|\overline{E_{N}}(0)\right|)(N^{-\beta}+ \alpha_{N}^{-1})). \tag{7.2}\]
Proof.: By Proposition 4.1, we have that for almost every \(t\in[0,T]\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{b,N}(t)\] \[= 2\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta} \left(u(t,x)-\alpha\frac{\nabla^{\perp}b(x)}{b(x)}\right)\cdot\nabla_{x}g_{b}( x,y)\,\mathrm{d}(\omega(t)-\omega_{N}(t))^{\otimes 2}(x,y)\] \[+2(\alpha_{N}-\alpha)\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2}) \setminus\Delta}\frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\, \mathrm{d}\omega_{N}(t,x)\,\mathrm{d}(\omega(t)-\omega_{N}(t))(y)\] \[=: L_{1}+2(\alpha_{N}-\alpha)L_{2}.\]
Using Proposition 6.1, we have
\[\left|L_{1}\right|\leqslant C_{b}\left\|u-\alpha\frac{\nabla^{\perp}b}{b}\right\|_{L^{\infty}([0,T],W^ {1,\infty})}\left|\mathcal{F}_{b}(Q_{N},\omega)\right|\] \[+C_{b}\left(1+\left\|u-\alpha\frac{\nabla^{\perp}b}{b}\right\|_{L^ {\infty}([0,T],W^{1,\infty})}\right)\] \[\times\left\|\omega\right\|_{L^{\infty}([0,T],L^{1}((1+\left|x \right|)\,\mathrm{d}x)\cap L^{\infty})}(1+I_{N}(t))N^{-\beta}.\]
By Proposition 3.1, we have
\[I_{N}(t)\leqslant C_{b,T}(1+\left|E_{N}(0)\right|+I_{N}(0))\]
since \((\alpha_{N})\) is bounded (here we consider the case \(\alpha_{N}\underset{N\rightarrow+\infty}{\longrightarrow}\alpha\)). Therefore,
\[|L_{1}|\leqslant C_{b}\big{(}1+\|u\|_{L^{\infty}([0,T],W^{1,\infty})}\,\big{)}| \mathcal{F}_{b}(Q_{N},\omega)|+C_{b,T}\left(1+\|u\|_{L^{\infty}([0,T],W^{1, \infty})}\right)\] \[\times\|\omega\|_{L^{\infty}([0,T],L^{1}((1+|x|)\,\mathrm{d}x) \cap L^{\infty})}\,(1+I_{N}(0)+|E_{N}(0)|)N^{-\beta}. \tag{7.3}\]
Now
\[L_{2}= \frac{1}{N}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{b(q_{i})}\] \[\cdot\bigg{[}\int_{\mathbb{R}^{2}\setminus\{q_{i}\}}\sqrt{b(q_{i })b(y)}\nabla g(q_{i}-y)\,\mathrm{d}\bigg{(}\omega(t)-\frac{1}{N}\sum_{j=1}^{ N}\delta_{q_{j}(t)}\bigg{)}\] \[+\int_{\mathbb{R}^{2}\setminus\{q_{i}\}}\nabla_{x}S_{b}(q_{i},y) \,\mathrm{d}\bigg{(}\omega(t)-\frac{1}{N}\sum_{j=1}^{N}\delta_{q_{j}(t)} \bigg{)}\bigg{]}\] \[=: L_{2,1}+L_{2,2}+L_{2,3}\]
with
\[|L_{2,1}| =\bigg{|}\frac{1}{N}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{ \sqrt{b(q_{i})}}\cdot\int_{\mathbb{R}^{2}}\nabla g(q_{i}-y)\sqrt{b(y)}\omega(t,y)\,\mathrm{d}y\bigg{|}\] \[\leqslant C_{b}\,\|\omega\|_{L^{\infty}([0,T],L^{1}\cap L^{\infty})} \tag{7.4}\]
(for the last inequality see for example [35, Lemma 1]). For the second term
\[L_{2,2}=-\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\sqrt{b(q_{i})b(q_{j})}\frac{\nabla^{\perp}b(q_{i})}{b(q_{i})}\cdot\nabla g(q_{i}-q_{j}).\]
We can bound it as in (3.7) to get
\[|L_{2,2}|\leqslant C_{b}. \tag{7.5}\]
For the last term, we use Claim (1) of Lemma 2.7 to get
\[|L_{2,3}| =\bigg{|}\frac{1}{N}\sum_{i=1}^{N}\frac{\nabla^{\perp}b(q_{i})}{ b(q_{i})}\cdot\int_{\mathbb{R}^{2}}\nabla_{x}S_{b}(q_{i},y)\,\mathrm{d}\bigg{(} \omega(t)-\frac{1}{N}\!\!\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\!\delta_{q_{j}(t)}\bigg{)}(y)\bigg{|}\] \[\leqslant C_{b}\int_{\mathbb{R}^{2}}(1+|y|)\,\mathrm{d}\bigg{(} \omega(t)+\frac{1}{N}\!\!\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\!\!\!\delta_{q_{j}(t)}\bigg{)}(y)\] \[\leqslant C_{b}(\|\omega\|_{L^{\infty}([0,T],L^{1}((1+|x|)\, \mathrm{d}x))}+I_{N}(t))\] \[\leqslant C_{b,T}(\|\omega\|_{L^{\infty}([0,T],L^{1}((1+|x|)\, \mathrm{d}x))}+1+I_{N}(0)+|E_{N}(0)|)\]
by Proposition 3.1. Combining the above inequality with (7.3), (7.4) and (7.5) we get that for almost every \(t\in[0,T]\),
\[\bigg{|}\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{b,N}(t)\bigg{|}\] \[\leqslant C_{b}(1+\|u\|_{L^{\infty}([0,T],W^{1,\infty})})|\mathcal{F}_{b,N}( t)|+C_{b}\left(1+\|u\|_{L^{\infty}([0,T],W^{1,\infty})}\right)\]
\[\times\|\omega\|_{L^{\infty}([0,T],L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{ \infty})}\left(1+I_{N}(0)+|E_{N}(0)|\right)N^{-\beta}\] \[+C_{b,T}|\alpha_{N}-\alpha|\bigg{(}\left\|\omega\right\|_{L^{ \infty}([0,T],L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty})}+1+I_{N}(0)+|E_{N}(0 )|\bigg{)}.\]
Therefore there exists a constant \(A\) depending only on the quantities \(b\), \(T\), \(\|u\|_{L^{\infty}([0,T],W^{1,\infty})}\), \(\|\omega\|_{L^{\infty}([0,T],L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty})}\) and \(I_{N}(0)\) (which is uniformly bounded in \(N\) by assumption) such that for almost every \(t\in[0,T]\),
\[\left|\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{b,N}(t)\right|\leqslant A(| \mathcal{F}_{b,N}(t)|+(1+|E_{N}(0)|)(N^{-\beta}+|\alpha_{N}-\alpha|)).\]
By Gronwall's lemma (up to taking another constant \(A\) depending on the same quantities) we get (7.1).
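For completeness, the Grönwall step is the standard one: writing \(c_{N}=(1+|E_{N}(0)|)(N^{-\beta}+|\alpha_{N}-\alpha|)\), the differential inequality above integrates to
\[|\mathcal{F}_{b,N}(t)|\leqslant e^{At}|\mathcal{F}_{b,N}(0)|+(e^{At}-1)c_{N},\qquad t\in[0,T],\]
which is (7.1) after enlarging the constant \(A\).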
Now let us study the rescaled regime where \(\alpha_{N}\underset{N\rightarrow+\infty}{\longrightarrow}+\infty\). By Proposition 4.2 we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathcal{F}}_{b,N}(t)= -2\iint_{(\mathbb{R}^{2}\times\mathbb{R}^{2})\setminus\Delta} \frac{\nabla^{\perp}b(x)}{b(x)}\cdot\nabla_{x}g_{b}(x,y)\,\mathrm{d}(\overline {\omega}(t)-\overline{\omega}_{N}(t))^{\otimes 2}(x,y)\] \[+\frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c} j=1\\ j\neq i\end{subarray}}^{N}\frac{v(t,\overline{q_{i}})}{b(\overline{q_{i}})}\cdot \nabla_{x}g_{b}(\overline{q_{i}},\overline{q_{j}})\] \[=: L_{1}+L_{2}.\]
The first term can be bounded by Proposition 6.1:
\[\begin{split}|L_{1}|\leqslant& C_{b}\left\|\frac{ \nabla b}{b}\right\|_{W^{1,\infty}}|\overline{\mathcal{F}}_{b,N}(t)|+C_{b} \bigg{(}1+\left\|\frac{\nabla b}{b}\right\|_{W^{1,\infty}}\bigg{)}\\ &\times\|\overline{\omega}\|_{L^{\infty}([0,T],L^{1}((1+|x|)\, \mathrm{d}x)\cap L^{\infty})}\left(1+I(\overline{Q_{N}})\right)N^{-\beta}\\ \leqslant& C_{b}|\overline{\mathcal{F}}_{b,N}(t)|+C_{b,T}\left\|\overline{\omega}\right\|_{L^{\infty}([0,T],L^{1}((1+|x|)\,\mathrm{ d}x)\cap L^{\infty})}\\ &\times(1+\overline{I_{N}}(0)+|\overline{E_{N}}(0)|)N^{-\beta} \end{split} \tag{7.6}\]
where we used Proposition 3.1 in the last inequality. We split the second term into three parts:
\[L_{2}= \frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{v(t,\overline{q_{i}})}{b(\overline{q_{i}})}\cdot\frac{\nabla b(\overline{q_{i}})}{2\sqrt{b(\overline{q_{i}})}}\sqrt{b(\overline{q_{j}})}g(\overline{q_{i}}-\overline{q_{j}})\] \[+\frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{v(t,\overline{q_{i}})}{b(\overline{q_{i}})}\cdot\nabla g(\overline{q_{i}}-\overline{q_{j}})\sqrt{b(\overline{q_{i}})b(\overline{q_{j}})}\] \[+\frac{2}{N^{2}\alpha_{N}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{v(t,\overline{q_{i}})}{b(\overline{q_{i}})}\cdot\nabla_{x}S_{b}(\overline{q_{i}},\overline{q_{j}})\] \[=: L_{2,1}+L_{2,2}+L_{2,3}.\]
We can bound the first term by
\[\left|L_{2,1}\right|\leqslant\frac{C_{b}}{N^{2}\alpha_{N}}\left\|v\right\|_{L^{ \infty}}\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\lvert g(\overline{q_{i}}-\overline{q_{j}})\rvert\]
and applying Lemma 2.3 we get
\[\left\|v\right\|_{L^{\infty}}=\left\|\nabla G_{b}[\overline{\omega}]\right\|_ {L^{\infty}}\leqslant C_{b}\left\|\overline{\omega}\right\|_{L^{\infty}([0,T],L^{1}\cap L^{\infty})}.\]
We can bound \(\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\lvert g(\overline{q_{i}}-\overline{q_{j}})\rvert\) as we did for Inequality (3.10) to get
\[\left|L_{2,1}\right|\leqslant C_{b}\left\|\overline{\omega}\right\|_{L^{ \infty}([0,T],L^{1}\cap L^{\infty})}(1+\lvert\overline{E_{N}}\rvert+ \overline{I_{N}})\alpha_{N}^{-1}.\]
The second term \(L_{2,2}\) can be bounded as in (3.7) to get
\[\left|L_{2,2}\right|\leqslant C_{b}(1+\left\|v\right\|_{L^{\infty}([0,T],W^{1,\infty})})\alpha_{N}^{-1}\]
and the last term can be bounded directly using Claim (1) of Lemma 2.7 by \(C_{b}\left\|v\right\|_{L^{\infty}}(1+\overline{I_{N}})\alpha_{N}^{-1}\).
Combining these three inequalities with (7.6) and using Proposition 3.1 to bound \(\overline{I_{N}}\) we get that for almost every \(t\in[0,T]\),
\[\left|\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathcal{F}}_{b,N}( t)\right|\leqslant C_{b}\lvert\overline{\mathcal{F}}_{b,N}(t)\rvert+C_{b}(\left\| \overline{\omega}\right\|_{L^{\infty}([0,T],L^{1}((1+\lvert x\rvert)\, \mathrm{d}x)\cap L^{\infty})}\] \[+\left\|v\right\|_{L^{\infty}([0,T],W^{1,\infty})})\] \[\times(1+\lvert\overline{I_{N}}(0)\rvert+\lvert\overline{E_{N}}( 0)\rvert)(N^{-\beta}+\alpha_{N}^{-1}).\]
Therefore there exists a constant \(B\) depending only on the quantities \(b\), \(T\), \(\left\|\overline{\omega}\right\|_{L^{\infty}([0,T],L^{1}((1+|x|)\,\mathrm{d}x)\cap L^{\infty})}\), \(\left\|v\right\|_{L^{\infty}([0,T],W^{1,\infty})}\) and \(\overline{I_{N}}(0)\) (which is uniformly bounded in \(N\) by assumption) such that for almost every \(t\in[0,T]\),
\[\left|\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathcal{F}}_{b,N}( t)\right|\leqslant B(\lvert\overline{\mathcal{F}}_{b,N}(t)\rvert+(1+\lvert\overline{E_{N}}( 0)\rvert)(N^{-\beta}+\alpha_{N}^{-1})).\]
By Gronwall's lemma (up to taking another constant \(B\) depending on the same quantities) we get (7.2).
Proof of Theorem 1.8.: By Corollary 5.4, weak-\(*\) convergence and convergence of the interaction energy give that \((\mathcal{F}_{b,N}(0))\) and \((\overline{\mathcal{F}_{b,N}}(0))\) converge to zero. Using convergence of the interaction energy we also get that \(|E_{N}(0)|\) and \(|\overline{E_{N}}(0)|\) are bounded. Thus by Inequalities (7.1) and (7.2) we get that for any \(t\in[0,T]\), \((\mathcal{F}_{b,N}(t))\) and \((\overline{\mathcal{F}_{b,N}}(t))\) converge to zero, and the theorem follows by Corollary 5.4.
2301.13778 | Differentially Private Distributed Bayesian Linear Regression with MCMC | We propose a novel Bayesian inference framework for distributed differentially private linear regression. We consider a distributed setting where multiple parties hold parts of the data and share certain summary statistics of their portions in privacy-preserving noise. We develop a novel generative statistical model for privately shared statistics, which exploits a useful distributional relation between the summary statistics of linear regression. Bayesian estimation of the regression coefficients is conducted mainly using Markov chain Monte Carlo algorithms, while we also provide a fast version to perform Bayesian estimation in one iteration. The proposed methods have computational advantages over their competitors. We provide numerical results on both real and simulated data, which demonstrate that the proposed algorithms provide well-rounded estimation and prediction. | Barış Alparslan, Sinan Yıldırım, Ş. İlker Birbil | 2023-01-31T17:27:05 | http://arxiv.org/abs/2301.13778v2 |
# Differentially Private Distributed Bayesian Linear Regression with MCMC
###### Abstract
We propose a novel Bayesian inference framework for distributed differentially private linear regression. We consider a distributed setting where multiple parties hold parts of the data and share certain summary statistics of their portions in privacy-preserving noise. We develop a novel generative statistical model for privately shared statistics, which exploits a useful distributional relation between the summary statistics of linear regression. Bayesian estimation of the regression coefficients is conducted mainly using Markov chain Monte Carlo algorithms, while we also provide a fast version to perform Bayesian estimation in one iteration. The proposed methods have computational advantages over their competitors. We provide numerical results on both real and simulated data, which demonstrate that the proposed algorithms provide well-rounded estimation and prediction.
**Keywords:** Differential privacy, linear regression, distributed learning, MCMC
## 1 Introduction
Linear regression is a mathematical method that lies at the core of statistical research. Many researchers have been working on linear regression since the 19th century, and hence, many well-known solution methods exist. On a separate note, privacy-preserving statistical learning has gained popularity and importance in recent years, with _differential privacy_ prevailing as the most commonly used definition for privacy (Dwork, 2006; Dwork et al., 2014; Dankar and El Emam, 2013). As a result, there is a recent but growing interest in differentially private linear regression.
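For reference, we recall the standard definition: a randomised mechanism \(M\) is \((\epsilon,\delta)\)-differentially private if, for every pair of neighbouring datasets \(D,D^{\prime}\) that differ in a single individual's record and every measurable set \(A\) of outputs,
\[\mathbb{P}(M(D)\in A)\leq e^{\epsilon}\,\mathbb{P}(M(D^{\prime})\in A)+\delta.\]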
Many works in the data privacy literature do not mainly focus on regression but are motivated by or can be applied to regression. As an example, differentially private empirical risk minimisation (Chaudhuri et al., 2009; Bassily et al., 2014; Abadi et al., 2016; Kuru et al., 2022) can be applied to regression once it is cast as a data-driven optimisation problem. Many general-purpose Bayesian differentially private estimation methods can also be used in regression problems. Williams and Mcsherry (2010) is one of the first works that considered a hierarchical model for the privatised data and Bayesian estimation for the model parameters. Zhang et al. (2016) analyse several differential privacy mechanisms for posterior sampling and suggest using these mechanisms also
for linear regression. Dimitrakakis et al. (2017) developed a posterior sampling query algorithm to combine differential privacy and Bayesian inference. Contrary to those one-sample approaches, general-purpose differentially private Markov chain Monte Carlo (MCMC) algorithms, which aim to identify the posterior distribution via iterative sampling, can also be applied to regression (Wang et al., 2015; Foulds et al., 2016; Wang et al., 2015; Yildirim and Ermis, 2019; Heikkila et al., 2019; Gong, 2022; Alparslan and Yildirim, 2022; Ju et al., 2022).
Several works in the literature are somewhat more directly related to differentially private regression. Zhang et al. (2012) suggested a functional mechanism method, which is based on perturbing polynomial objective functions with privacy-preserving noise. As an alternative, Dwork et al. (2014b); Wang (2018) considered perturbation of summary statistics. Alabi et al. (2022) provide a technical discussion on different point estimation methods for differentially private simple linear regression, that is when we have a single feature. Ferrando et al. (2022) present a method to compute confidence intervals for the coefficients of linear regression. Cai et al. (2021) study the rates of convergence for parameter estimation with differential privacy via output perturbation, where a non-private estimator is perturbed. All those works consider point estimation of the linear regression parameters.
In this paper, we focus on differentially private distributed Bayesian inference for the parameters of linear regression. We use a novel hierarchical model that relies on a distributional relationship (Proposition 1) between the summary statistics of linear regression, which, to the best of our knowledge, has not been exploited so far. We propose Bayesian inference algorithms that take perturbations of summary statistics as observations. The general inferential tool we pick in this paper is MCMC, a well-known framework for iterative sampling from posterior distributions. As we shall see, the proposed MCMC algorithms in this paper already have lower computational complexities per iteration than their closest competitors in Bernstein and Sheldon (2019). Additionally, we also propose much faster Bayesian estimation methods that perform estimation in one iteration. Finally, we assume a distributed setting where the total dataset is shared among multiple parties (data nodes), who want to collaborate for the inference of a common parameter, see _e.g._, Heikkila et al. (2017) for such a setting. The non-distributed setting is just a special case (single data holder) for our methodology.
This paper has connections with several works in the literature, yet it has significant differences from each of those, as we shall explain below.
For the privacy-preserving mechanism, we consider adding noise to summary statistics of linear regression, similarly to Wang (2018); Bernstein and Sheldon (2019). The adaSSP framework of Wang (2018) motivates the fast Bayesian estimation methods developed in this paper. However, adaSSP is a point estimation method while we aim for a posterior distribution. The latter work, Bernstein and Sheldon (2019), is particularly related to this paper as they also study Bayesian linear regression with differential privacy using perturbed statistics of data. However, there are some important differences between our work and that of Bernstein and Sheldon (2019). These differences stem from the choice of summary statistics and the consequent hierarchical structure used for modelling linear regression. Those modelling differences lead to significant differences in the inference methods as well as significant computational advantages for our methods. Specifically, the computational complexity of our methods is \(\mathcal{O}(d^{3})\), where \(d\) is the number of features. This order is much less than the \(\mathcal{O}(d^{6})\) of Bernstein and Sheldon (2019). Finally, neither Wang (2018) nor Bernstein and Sheldon (2019) has considered a distributed learning setting like we do in
this paper, although both works can be modified for the distributed setting after moderate modifications.
Foulds et al. (2016); Heikkila et al. (2017) are other differentially private Bayesian inference methods that target posterior distributions given perturbed summary statistics of sensitive data. The one by Heikkila et al. (2017) is particularly interesting because they consider a distributed setting and present linear regression as their showcase example. However, we differ from those works in the way we model the perturbed statistics and in the choice of inference methods. Specifically, Foulds et al. (2016); Heikkila et al. (2017) treat the perturbed statistics as if they were not perturbed, while we incorporate the effect of perturbation in our model.
Recently, Alparslan and Yildirim (2022) and Ju et al. (2022) employ data augmentation for modelling sensitive and privatised data and propose MCMC for Bayesian inference, the latter work having linear regression as a major application. Their methods have \(\mathcal{O}(n)\) complexity per iteration in general where \(n\) is the number of instances in the data set, which can be slow when \(n\) is large. In contrast, our methods are scalable in data size since their computational complexities do not depend on \(n\). We note that Alparslan and Yildirim (2022, Section 4.2) also present an MCMC method scalable with \(n\) that exploits the approximate normality of additive summary statistics. However, a direct application of that would lead to an algorithm with \(\mathcal{O}(d^{6})\) computational complexity (per iteration), like in Bernstein and Sheldon (2019).
The paper is organised as follows: In Section 2 we review differential privacy. In Section 3 we lay out the hierarchical model for differentially private distributed linear regression with perturbed summary statistics. In Section 4, we present and discuss the aspects of the proposed inference algorithms. In Section 5, we provide numerical experiments. We conclude in Section 6.
Notation:Matrices and vectors are shown in bold-face notation. For a matrix \(\mathbf{A}\), its transpose, trace, and determinant (whenever they exist) are \(\mathbf{A}^{T}\), \(\operatorname{tr}(\mathbf{A})\), and \(|\mathbf{A}|\), respectively. For any sequence \(\{a_{i}\}_{i\geq 0}\), we let \(a_{i:j}=(a_{i},\dots,a_{j})\). We write \(x\sim P\) to mean the random variable \(x\) has distribution \(P\). \(\mathcal{N}(\mathbf{m},\mathbf{\Sigma})\) stands for the multivariate normal distribution with mean \(\mathbf{m}\) and covariance \(\mathbf{\Sigma}\). Wishart and inverse-Wishart distributions with scale matrix \(\Lambda\) and \(\kappa\) degrees of freedom are shown as \(\mathcal{W}(\mathbf{\Lambda},\kappa)\) and \(\mathcal{IW}(\mathbf{\Lambda},\kappa)\), respectively. \(\mathcal{IG}(a,b)\) stands for the inverse-gamma distribution with shape and scale parameters \(a\) and \(b\). We augment those notations with \(\mathbf{x}\) to denote the respective probability density functions (pdf), _e.g._, as \(\mathcal{N}(\mathbf{x};\mathbf{m},\mathbf{\Sigma})\).
## 2 Differential Privacy
Differential privacy (Dwork, 2006, 2008) concerns randomised algorithms that run on sensitive, or usually private, data. A randomised algorithm takes an input data set \(D\in\mathcal{D}\) and returns a random output in \(\mathcal{O}\), where the randomness is intrinsic to the algorithm. A differentially private algorithm constrains the difference between the probability distributions of the outputs obtained from neighbouring data sets. We say two data sets are neighbours if they differ by one individual's piece of data.
**Definition 1** (Differential privacy).: A randomised algorithm \(M:\mathcal{D}\mapsto\mathcal{O}\) is \((\epsilon,\delta)\)-differentially private (DP) if for any pair of neighbouring data sets \(D,D^{\prime}\in\mathcal{D}\) and for any subset \(O\subseteq\mathcal{O}\) of the output domain, it satisfies
\[\mathbb{P}[M(D)\in O]\leq e^{\epsilon}\mathbb{P}[M(D^{\prime})\in O]+\delta.\]
The definition implies that smaller \((\epsilon,\delta)\) leads to more privacy.
Privacy-preserving algorithms often use noise-adding mechanisms. A popular noise-adding mechanism is the _Gaussian mechanism_(Dwork et al., 2006), which perturbs a function \(f:\mathcal{D}\mapsto\mathbb{R}^{k}\) of the sensitive data, for some \(k\geq 1\), with a random noise drawn from the Gaussian distribution. The amount of the added noise depends on the \(L_{2}\)-_sensitivity_ of the function, given by
\[\Delta_{f}=\max_{\text{neighbouring }D_{1},D_{2}\in\mathcal{D}}\lVert f(D_{1})-f(D_{2})\rVert_{2}.\]
An \((\epsilon,\delta)\)-DP Gaussian mechanism returns
\[f(D)+\Delta_{f}\sigma(\epsilon,\delta)\mathbf{v},\quad\mathbf{v}\sim\mathcal{N}(\mathbf{ 0},\mathbf{I}_{k}) \tag{1}\]
upon taking \(D\) as the input, where the quantity \(\sigma(\epsilon,\delta)\) ensures \((\epsilon,\delta)\)-DP. In this work, we take \(\sigma(\epsilon,\delta)\) as the analytical solution given in Balle and Wang (2018, Algorithm 1) due to its tightness. The Gaussian mechanism is also central to other forms of privacy, such as zero-concentrated DP (Bun and Steinke, 2016) and Gaussian DP (Dong et al., 2022).
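As a concrete illustration, the following Python sketch perturbs a statistic with the Gaussian mechanism in (1). For simplicity it uses the classical calibration \(\sigma=\sqrt{2\ln(1.25/\delta)}/\epsilon\) (valid for \(\epsilon\leq 1\)) rather than the tighter analytical calibration of Balle and Wang (2018) used in the paper; the function name is ours.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release a numpy array `value` under (epsilon, delta)-DP.

    Uses the classical calibration sigma = sqrt(2 log(1.25/delta)) / epsilon
    (valid for epsilon <= 1); the paper instead uses the tighter analytical
    calibration of Balle and Wang (2018).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # add isotropic Gaussian noise scaled by the L2-sensitivity
    return value + sensitivity * sigma * rng.standard_normal(value.shape)
```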
In this paper, we consider \((\epsilon,\delta)\)-DP as the type of privacy and the Gaussian mechanism to generate noisy observations. Moreover, the proposed methods in this paper never use the sensitive data once given the noisy observations generated using the Gaussian mechanism, hence exploiting the _post-processing_ property of differential privacy (Dwork and Roth, 2014).
**Theorem 1** (Post-processing).: _Let \(M:\mathcal{D}\mapsto\mathcal{O}\) be \((\epsilon,\delta)\)-DP and let \(f:\mathcal{O}\to\mathcal{O}^{\prime}\) be another mapping that is independent of \(D\) given \(M(D)\). Then \(f_{M}:\mathcal{D}\mapsto\mathcal{O}^{\prime}\) with \(f_{M}(D)=f(M(D))\) is \((\epsilon,\delta)\)-DP._
## 3 Differentially Private Distributed Linear Regression
In this section, we present a new hierarchical model for differentially private distributed linear regression. For ease of exposition, we first present a model with a single data holder, then generalise the model for the distributed setting.
### Basic Model and Privacy Setup
Suppose we have a sequence of random variables \(\{(\mathbf{x}_{i},y_{i}):i=1,\ldots,n\}\), where \(\mathbf{x}_{i}\in\mathcal{X}\subseteq\mathbb{R}^{d\times 1}\) are the feature vectors and \(y_{i}\in\mathcal{Y}\subseteq\mathbb{R}\) is the \(i\)'th response variable. We consider the normal linear regression to model the dependency between \(\mathbf{x}_{i}\) and \(y_{i}\). Specifically,
\[y_{i}=\mathbf{x}_{i}^{T}\mathbf{\theta}+e_{i},\quad e_{i}\stackrel{{\text {i.i.d.}}}{{\sim}}\mathcal{N}(0,\sigma_{y}^{2}),\quad i=1,\ldots,n,\]
where \(\mathbf{\theta}\in\mathbb{R}^{d}\) is the vector of the linear regression coefficients. We assume that the feature vectors \(\mathbf{x}_{i}\)'s are i.i.d. with distribution \(P_{x}\). Below, we will particularly focus on the case when \(P_{x}\) can be assumed to be a normal distribution. However, we will also present algorithms for general \(P_{x}\).
In matrix notation, the above can shortly be expressed as
\[\mathbf{y}=\mathbf{X}\mathbf{\theta}+\mathbf{e},\quad\mathbf{e}\sim\mathcal{N}(\mathbf{0},\sigma_{y}^{ 2}\mathbf{I}_{n}),\]
where \(\mathbf{X}=\begin{bmatrix}\mathbf{x}_{1}^{T}&\ldots&\mathbf{x}_{n}^{T}\end{bmatrix}^{T}\) is the so-called design matrix, \(\mathbf{y}=\begin{bmatrix}y_{1}&\ldots&y_{n}\end{bmatrix}^{T}\). Additionally, we also define the summary statistics of \(\mathbf{X}\) and \(\mathbf{y}\) given by
\[\mathbf{S}:=\mathbf{X}^{T}\mathbf{X},\quad\mathbf{z}:=\mathbf{X}^{T}\mathbf{y},\]
respectively. We assume a setup where \(\mathbf{S}\) and \(\mathbf{z}\) are privately released: the noisy summary statistics \(\hat{\mathbf{S}}\) and \(\hat{\mathbf{z}}\) are constructed as
\[\hat{\mathbf{S}} =\mathbf{S}+\sigma_{s}\mathbf{M}, \tag{2}\] \[\hat{\mathbf{z}} =\mathbf{z}+\sigma_{z}\mathbf{v},\quad\mathbf{v}\sim\mathcal{N}(\mathbf{0},\mathbf{I }_{d}), \tag{3}\]
where \(\mathbf{M}\) is a \(d\times d\) symmetric matrix with its upper triangular elements drawn from \(\mathcal{N}(0,1)\). Dwork et al. (2014) arrange \(\sigma_{s}\) and \(\sigma_{z}\) so that both (2) and (3) are \((\epsilon/2,\delta/2)\) differentially private, leading to \((\epsilon,\delta)\)-DP overall. Differently from Dwork et al. (2014), we set
\[\sigma_{s}=\sigma_{z}=\Delta_{sz}\sigma(\epsilon,\delta),\]
where \(\sigma(\epsilon,\delta)\) is given in Balle and Wang (2018, Algorithm 1), and \(\Delta_{sz}\) is the overall \(L_{2}\) sensitivity of \([\mathbf{S},\mathbf{z}]\), given by
\[\Delta_{sz}=\sqrt{\|X\|^{4}+\|X\|^{2}\|Y\|^{2}}\]
with \(\|X\|=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{2}\) and \(\|Y\|=\max_{y\in\mathcal{Y}}|y|\).
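The release mechanism in (2)-(3) can be sketched in Python as follows; this is an illustrative implementation (the function name is ours), with the noise multiplier \(\sigma(\epsilon,\delta)\) passed in as an input rather than computed via Balle and Wang (2018).

```python
import numpy as np

def release_noisy_stats(X, y, x_bound, y_bound, sigma_eps_delta, rng):
    """Privately release S = X^T X and z = X^T y as in (2)-(3).

    `x_bound`, `y_bound` are the bounds ||X|| and ||Y||; `sigma_eps_delta`
    is the noise multiplier sigma(epsilon, delta), e.g. obtained from the
    analytical Gaussian mechanism of Balle and Wang (2018).
    """
    d = X.shape[1]
    S = X.T @ X
    z = X.T @ y
    # overall L2-sensitivity of [S, z]
    delta_sz = np.sqrt(x_bound**4 + x_bound**2 * y_bound**2)
    sigma = delta_sz * sigma_eps_delta
    # symmetric noise matrix with N(0,1) upper-triangular entries
    U = np.triu(rng.standard_normal((d, d)))
    M = U + np.triu(U, 1).T
    S_hat = S + sigma * M
    z_hat = z + sigma * rng.standard_normal(d)
    return S_hat, z_hat
```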
Based on the above relations, we shall present a hierarchical model that enables Bayesian inference of \(\mathbf{\theta}\) given \(\hat{\mathbf{S}}\) and \(\hat{\mathbf{z}}\). One important element of our modelling approach is the following result, which establishes the conditional distribution of \(\mathbf{z}\) given \(\mathbf{S}\), \(\mathbf{\theta}\), and \(\sigma_{y}^{2}\).
**Proposition 1**.: For the normal linear regression model, we have
\[\mathbf{z}|\mathbf{S},\mathbf{\theta},\sigma_{y}^{2}\sim\mathcal{N}(\mathbf{S}\mathbf{\theta},\bm {S}\sigma_{y}^{2}).\]
Proof.: First, note that,
\[\mathbb{E}[\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2}] =\mathbb{E}[\mathbf{X}^{T}\mathbf{X}\mathbf{\theta}+\mathbf{X}^{T}\mathbf{e}]=\mathbf{S} \mathbf{\theta}, \tag{4}\] \[\mathrm{Cov}(\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2}) =\mathbf{X}^{T}\mathbf{X}\sigma_{y}^{2}=\mathbf{S}\sigma_{y}^{2}, \tag{5}\]
and observe that both moments depend on \(\mathbf{X}\) through its statistic \(\mathbf{S}\). Therefore, the conditional density of \(\mathbf{z}\) given \(\mathbf{S}\), \(\mathbf{\theta}\), and \(\sigma_{y}^{2}\) is
\[p(\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2})=\mathcal{N}(\mathbf{z};\mathbf{S}\mathbf{ \theta},\mathbf{S}\sigma_{y}^{2}).\]
Next, define the function \(f:\mathbb{R}^{n\times d}\mapsto[0,\infty)\) with \(f(\mathbf{X})=p(\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2})\) and let \(\mathcal{C}_{\mathbf{S}}=\{\mathbf{X}:\mathbf{X}^{T}\mathbf{X}=\mathbf{S}\}\). Since the function \(f\) is constant over \(\mathcal{C}_{\mathbf{S}}\), we can write

\[p(\mathbf{z}|\mathbf{S},\mathbf{\theta},\sigma_{y}^{2})=\int_{\mathcal{C}_{\mathbf{S}}}f\,\mathrm{d}P_{x}=\mathcal{N}(\mathbf{z};\mathbf{S}\mathbf{\theta},\mathbf{S}\sigma_{y}^{2}),\]

where the second equality follows from the moment equations (4) and (5) above. This concludes the proof.
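A quick Monte Carlo check of Proposition 1 (illustrative, with arbitrary problem sizes): for a fixed design \(\mathbf{X}\), replicates of \(\mathbf{z}=\mathbf{X}^{T}\mathbf{y}\) should have mean \(\mathbf{S}\mathbf{\theta}\) and covariance \(\sigma_{y}^{2}\mathbf{S}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_y = 200, 3, 0.5
theta = rng.standard_normal(d)
X = rng.standard_normal((n, d))      # fixed design, hence fixed S
S = X.T @ X

# draw many replicates of z = X^T y with y = X theta + e
reps = 100_000
E = sigma_y * rng.standard_normal((n, reps))
Z = (X.T @ (X @ theta))[:, None] + X.T @ E

# both deviations should be small relative to the entries of S
print(np.abs(Z.mean(axis=1) - S @ theta).max())   # empirical mean vs. S theta
print(np.abs(np.cov(Z) - sigma_y**2 * S).max())   # empirical cov vs. sigma_y^2 S
```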
Finally, we assign prior distributions for \(\mathbf{\theta}\), \(\sigma_{y}^{2}\) as
\[\mathbf{\theta}\sim\mathcal{N}(\mathbf{m},\mathbf{C}),\quad\sigma_{y}^{2}\sim\mathcal{IG} (a,b). \tag{6}\]
At this point, it is worth discussing some important modelling differences between our work and Bernstein and Sheldon (2019). In Bernstein and Sheldon (2019), the central limit theorem (CLT) is applied to \(\big{[}\mathbf{S},\mathbf{z},\mathbf{y}^{T}\mathbf{y}\big{]}\), leading to a normality assumption for the whole vector. In contrast, we use the _exact_ conditional distribution \(p(\mathbf{z}|\mathbf{S},\mathbf{\theta},\sigma_{y}^{2})\) thanks to Proposition 1. Moreover, unlike Bernstein and Sheldon (2019), we do _not_ require a noisy version of \(\mathbf{y}^{T}\mathbf{y}\), and hence have the slight advantage of using less privacy-preserving noise. In summary, our model has a different hierarchical structure and requires less privacy-preserving noise.
### Distributed Setting
Next, we extend our model to the distributed setting, where the total data are shared among \(J\geq 1\) data holders as
\[(\mathbf{X},\mathbf{y})=\{(\mathbf{X}_{j},\mathbf{y}_{j});j=1,\ldots,J\}. \tag{7}\]
We let \(n_{j}\) be the number of rows in each \(\mathbf{X}_{j}\), so that \(n=n_{1}+\ldots+n_{J}\). Each data holder \(j\) shares their own summary statistics \(\mathbf{S}_{j}=\mathbf{X}_{j}^{T}\mathbf{X}_{j}\), \(\mathbf{z}_{j}=\mathbf{X}_{j}^{T}\mathbf{y}_{j}\) with privacy-preserving noise
\[\begin{split}\hat{\mathbf{S}}_{j}&=\mathbf{S}_{j}+\sigma_{s}\mathbf{M}_{j},\\ \hat{\mathbf{z}}_{j}&=\mathbf{z}_{j}+\sigma_{z}\mathbf{v}_{j},\quad\mathbf{v}_{j}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d}).\end{split} \tag{8}\]
Note that, to preserve a given \((\epsilon,\delta)\)-DP overall, each party must provide that level of privacy for their data, hence \(\sigma_{s}\) and \(\sigma_{z}\) are the same as before. The hierarchical structure of the overall model (specified for normally distributed \(\mathbf{x}_{i}\)'s) is shown in Figure 1.
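In code, the distributed release amounts to each node running the single-holder mechanism on its own partition; the sketch below reuses the hypothetical `release_noisy_stats` helper from Section 3.1.

```python
def release_distributed(X_parts, y_parts, x_bound, y_bound, sigma_eps_delta, rng):
    """Each node j releases (S_hat_j, z_hat_j) as in (8); each release is
    (epsilon, delta)-DP on that node's data, so the collection is too."""
    return [
        release_noisy_stats(Xj, yj, x_bound, y_bound, sigma_eps_delta, rng)
        for Xj, yj in zip(X_parts, y_parts)
    ]
```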
The distributed setting deserves separate consideration from the single data holder case for a couple of reasons: Firstly, the node-specific observations \((\hat{\mathbf{S}}_{1},\hat{\mathbf{z}}_{1}),\ldots,(\hat{\mathbf{S}}_{J},\hat{\mathbf{z}}_{J})\) are altogether statistically _more informative_ on \(\mathbf{\theta}\) than their aggregates \(\sum_{j=1}^{J}\hat{\mathbf{S}}_{j}\) and \(\sum_{j=1}^{J}\hat{\mathbf{z}}_{j}\). This is because the aggregate versions are _not_ sufficient statistics of the node-specific observations \((\hat{\mathbf{S}}_{1},\hat{\mathbf{z}}_{1}),\ldots,(\hat{\mathbf{S}}_{J},\hat{\mathbf{z}}_{J})\) with respect to \(\mathbf{\theta}\) (even when \(\sigma_{y}^{2}\) is known). Therefore, when the node-specific observations are available, one should not, in principle, trivially aggregate them and apply an inference method designed for \(J=1\) using those aggregates.
Secondly, the partitioning of data as in (7) can be relevant to data privacy applications even _outside_ the distributed learning framework, rendering the methodology in Section 4 useful in a broader sense. For example, batches of \((\mathbf{x},y)\)-type of data may be donated to a common data collector as in (8). At this point, a particular and interesting relation exists with pan-privacy applications (Dwork et al., 2010). Imagine that sensitive data from individuals are collected sequentially in time, and the data holder is concerned about possible intrusions into the memory where the sensitive data are stored. Then, one possible way to ensure the privacy of the data against such possible intrusions, which is the promise of pan-privacy, is to store the noisy statistics of every new batch of data and erase the original sensitive data. Then, at any time the data collector has data of the form \((\hat{\mathbf{S}}_{1},\hat{\mathbf{z}}_{1}),\ldots,(\hat{\mathbf{S}}_{J},\hat{\mathbf{z}}_{J})\), each pair corresponding to a batch. As a result, inference algorithms as in Section 4 can be applied.

Figure 1: Differentially private distributed linear regression model (specified for normally distributed \(\mathbf{x}_{i}\)’s.)
## 4 Algorithms for Bayesian Inference
Bayesian inference targets the posterior distribution of the latent variables of the model, in particular \(\mathbf{\theta}\), given the observations \(\hat{\mathbf{S}}_{1:J}\) and \(\hat{\mathbf{z}}_{1:J}\). We present several Bayesian inference algorithms for the hierarchical model described in the previous section. In addition to other concerns like computational budget, the choice among those approaches mainly depends on the specification of \(P_{x}\) as the distribution of \(\mathbf{S}\) directly depends on it. In this paper, we have considered the following two cases and devised algorithms for each of them:
1. In some cases it may be adequate to specify \(P_{x}=\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{x})\). This leads to \(\mathbf{S}|\mathbf{\Sigma}_{x}\sim\mathcal{W}(\mathbf{\Sigma}_{x},n)\). Further, to account for the uncertainty about the covariance \(\mathbf{\Sigma}_{x}\), one can treat it as a random variable with \(\mathbf{\Sigma}_{x}\sim\mathcal{IW}(\mathbf{\Lambda},\kappa)\). Figure 1 shows the hierarchical structure of the distributed setting with those specifications. We defer discussing the conflict between the normality and boundedness assumptions to Remark 1 towards the end of Section 4.1.
2. As the second case, we assume a general (non-normal) \(P_{x}\). A normal approximation, based on the CLT, could be considered for the distribution of \(\mathbf{S}\) (Wilson and Ghahramani, 2011). However, this would require the knowledge (or accurate estimation) of up to the fourth moments of \(P_{x}\) as well as expensive computations for sampling \(\mathbf{S}\). We circumvent those difficulties by plugging in a point estimate of \(\mathbf{S}\) given \(\hat{\mathbf{S}}\) and using it during the sampling process as if it were the true \(\mathbf{S}\) itself. Then, we develop two different algorithms for inference of \(\mathbf{\theta}\), one being an MCMC algorithm and the other providing a closed-form solution for the posterior of \(\mathbf{\theta}\) following a rough point-wise estimation of \(\sigma_{y}^{2}\). Note that these algorithms with fixed \(\mathbf{S}\) do not require a distribution for \(\mathbf{x}\).
Next, we provide the details of our approaches and the resulting algorithms.
### Normally Distributed Features
In this section, we present an MCMC algorithm for Bayesian inference for the differentially private distributed linear regression model when \(P_{x}=\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{x})\) and \(\mathbf{\Sigma}_{x}\sim\mathcal{IW}(\Lambda,\kappa)\). The latent variables involved in this variant are \(\mathbf{\theta},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{S}_{1:J},\mathbf{z}_{1:J}\). Their posterior distribution given \(\hat{\mathbf{S}}_{1:J},\hat{\mathbf{z}}_{1:J}\) can be written as
\[p(\mathbf{\theta},\sigma_{y}^{2},\mathbf{\Sigma}_{x},\mathbf{z}_{1:J},\mathbf{S}_{1:J}|\hat{ \mathbf{z}}_{1:J},\hat{\mathbf{S}}_{1:J})\propto p(\mathbf{\theta})p(\sigma_{y}^{2})p(\bm {\Sigma}_{x})\prod_{j=1}^{J}p(\mathbf{z}_{j}|\mathbf{\theta},\sigma_{y}^{2},\mathbf{S})p( \mathbf{S}_{j}|\mathbf{\Sigma}_{x})p(\hat{\mathbf{S}}_{j}|\mathbf{S}_{j})p(\hat{\mathbf{z}}_{j}| \mathbf{z}_{j}). \tag{9}\]
One could design an MCMC algorithm for this posterior distribution that updates \(\mathbf{\theta}\), \(\sigma_{y}^{2}\), \(\mathbf{\Sigma}_{x}\), \(\mathbf{z}_{1:J}\), \(\mathbf{S}_{1:J}\) in turn based on their full conditional distributions. However, such an algorithm suffers from poor convergence because of a high posterior correlation between \(\mathbf{\theta}\) and \(\mathbf{z}_{1:J}\) (as verified in our numerical studies). It is well known that highly correlated variables result in poor convergence if they are updated one conditional on the other. To alleviate that problem, we work with the reduced model where \(\mathbf{z}_{1:J}\) are integrated out. The reduced model has \(\mathbf{\theta},\mathbf{\Sigma}_{x},\sigma_{y}^{2}\) as its latent variables, whose joint posterior distribution can be written as
\[p(\mathbf{\theta},\sigma_{y}^{2},\mathbf{\Sigma}_{x},\mathbf{S}_{1:J}|\hat{\mathbf{z}}_{1:J},\hat{\mathbf{S}}_{1:J})\propto p(\mathbf{\theta})p(\sigma_{y}^{2})p(\mathbf{\Sigma}_{x})\prod_{j=1}^{J}p(\mathbf{S}_{j}|\mathbf{\Sigma}_{x})p(\hat{\mathbf{S}}_{j}|\mathbf{S}_{j})p(\hat{\mathbf{z}}_{j}|\mathbf{S}_{j},\mathbf{\theta},\sigma_{y}^{2}), \tag{10}\]
where \(p(\hat{\mathbf{z}}_{j}|\mathbf{S}_{j},\mathbf{\theta},\sigma_{y}^{2})=\mathcal{N}(\hat{\mathbf{z}}_{j};\mathbf{S}_{j}\mathbf{\theta},\sigma_{y}^{2}\mathbf{S}_{j}+\sigma_{z}^{2}\mathbf{I}_{d})\).
We would like to sample from the posterior distribution in (10) via MCMC that updates \(\mathbf{\theta}\), \(\sigma_{y}^{2}\), \(\mathbf{\Sigma}_{x}\), \(\mathbf{S}_{1:J}\) in turn based on their full conditional distributions. The variables \(\mathbf{\theta}\) and \(\mathbf{\Sigma}_{x}\) enjoy closed-form full conditional distributions (see Appendix A for the derivations):
\[\mathbf{\Sigma}_{x}|\mathbf{S}_{1:J},\hat{\mathbf{S}}_{1:J},\hat{\mathbf{z}}_{1:J} \sim\mathcal{IW}\left(\mathbf{\Lambda}+\sum_{j=1}^{J}\mathbf{S}_{j},\kappa+n\right), \tag{11}\] \[\mathbf{\theta}|\sigma_{y}^{2},\hat{\mathbf{z}}_{1:J},\mathbf{S}_{1:J} \sim\mathcal{N}(\mathbf{m}_{p},\mathbf{\Sigma}_{p}), \tag{12}\]
where the posterior moments for \(\mathbf{\theta}\) are
\[\mathbf{\Sigma}_{p}^{-1}=\sum_{j=1}^{J}\mathbf{S}_{j}(\sigma_{y}^{2}\mathbf{S}_{j}+\sigma _{z}^{2}I)^{-1}\mathbf{S}_{j}+\mathbf{C}^{-1},\quad\mathbf{m}_{p}=\mathbf{\Sigma}_{p}\left( \sum_{j=1}^{J}\mathbf{S}_{j}(\sigma_{y}^{2}\mathbf{S}_{j}+\sigma_{z}^{2}I)^{-1}\hat{ \mathbf{z}}_{j}+\mathbf{C}^{-1}\mathbf{m}\right).\]
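For illustration, a Python sketch of the Gibbs step for \(\mathbf{\theta}\) implied by (12) might look as follows (the helper name and interface are ours; a small symmetrisation guards against numerical asymmetry).

```python
import numpy as np

def sample_theta(S_list, z_hat_list, sigma_y2, sigma_z2, m, C, rng):
    """Draw theta from its full conditional (12)."""
    d = m.shape[0]
    prec = np.linalg.inv(C)          # accumulates Sigma_p^{-1}
    rhs = prec @ m                   # accumulates Sigma_p^{-1} m_p
    for S, z_hat in zip(S_list, z_hat_list):
        A = S @ np.linalg.inv(sigma_y2 * S + sigma_z2 * np.eye(d))
        prec += A @ S                # S (sigma_y^2 S + sigma_z^2 I)^{-1} S
        rhs += A @ z_hat             # S (sigma_y^2 S + sigma_z^2 I)^{-1} z_hat
    Sigma_p = np.linalg.inv(prec)
    Sigma_p = 0.5 * (Sigma_p + Sigma_p.T)   # enforce symmetry numerically
    m_p = Sigma_p @ rhs
    return rng.multivariate_normal(m_p, Sigma_p)
```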
The full-conditional distributions of \(\mathbf{S}_{1:J}\) and \(\sigma_{y}^{2}\) have no closed form; hence we design Metropolis-Hastings (MH) moves to update them. For \(\sigma_{y}^{2}\), one can simply use a random-walk MH move targeting \(p(\sigma_{y}^{2}|\mathbf{\theta},\mathbf{S}_{1:J},\hat{\mathbf{z}}_{1:J})\). For \(\mathbf{S}_{1:J}\), their full conditional distribution can be factorised as
\[p(\mathbf{S}_{1:J}|\hat{\mathbf{S}}_{1:J},\hat{\mathbf{z}}_{1:J},\mathbf{\Sigma}_{x},\sigma_{y }^{2},\mathbf{\theta})=\prod_{j=1}^{J}p(\mathbf{S}_{j}|\hat{\mathbf{S}}_{j},\hat{\mathbf{z}}_ {j},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{\theta}),\]
where each factor is given by
\[p(\mathbf{S}_{j}|\hat{\mathbf{S}}_{j},\hat{\mathbf{z}}_{j},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{\theta})\propto p(\hat{\mathbf{z}}_{j}|\mathbf{S}_{j},\mathbf{\theta},\sigma_{y}^{2}) p(\mathbf{S}_{j}|\mathbf{\Sigma}_{x})p(\hat{\mathbf{S}}_{j}|\mathbf{S}_{j}).\]
Thanks to that factorised form, each \(\mathbf{S}_{j}\) can be updated with an MH move independently and in parallel. For the MH algorithm to update one \(\mathbf{S}_{j}\), we propose a new value from a Wishart distribution as \(\mathbf{S}_{j}^{\prime}\sim\mathcal{W}(\mathbf{S}_{j}/\alpha,\alpha)\), which has mean \(\mathbf{S}_{j}\) and variance determined by \(\alpha\). In our experiments, we adjust \(\alpha\) using ideas from the adaptive MCMC framework (Andrieu and Thoms, 2008) to target an acceptance rate of around \(0.2\).
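The following sketch shows one such MH move in Python, assuming the per-node prior \(\mathbf{S}_{j}|\mathbf{\Sigma}_{x}\sim\mathcal{W}(\mathbf{\Sigma}_{x},n_{j})\) and dropping additive constants from the log-densities; it is an illustration of the update described above, not the authors' implementation.

```python
import numpy as np
from scipy.stats import wishart, multivariate_normal

def mh_update_Sj(S_j, S_hat_j, z_hat_j, Sigma_x, theta, sigma_y2,
                 sigma_s2, sigma_z2, n_j, alpha, rng):
    """One MH move for S_j with proposal S' ~ W(S_j / alpha, alpha)."""
    d = S_j.shape[0]
    iu = np.triu_indices(d)

    def log_target(S):
        # prior W(Sigma_x, n_j), likelihood of z_hat_j, and likelihood of the
        # upper-triangular entries of S_hat_j (each N(S[i,k], sigma_s2));
        # additive constants are dropped (they cancel in the MH ratio)
        lp = wishart.logpdf(S, df=n_j, scale=Sigma_x)
        lp += multivariate_normal.logpdf(
            z_hat_j, S @ theta, sigma_y2 * S + sigma_z2 * np.eye(d))
        lp += -0.5 * np.sum((S_hat_j[iu] - S[iu]) ** 2) / sigma_s2
        return lp

    S_prop = wishart.rvs(df=alpha, scale=S_j / alpha, random_state=rng)
    log_acc = (log_target(S_prop) - log_target(S_j)
               + wishart.logpdf(S_j, df=alpha, scale=S_prop / alpha)
               - wishart.logpdf(S_prop, df=alpha, scale=S_j / alpha))
    return S_prop if np.log(rng.uniform()) < log_acc else S_j
```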
Algorithm 1 represents the overall MCMC algorithm for the hierarchical model for differentially private Bayesian distributed linear regression when \(P_{x}\) is a normal distribution with a random covariance matrix having an inverse-Wishart distribution. We call this algorithm MCMC-normalX.
```
Input: Current values of \(\mathbf{S}_{1:J}\), \(\mathbf{\theta}\), \(\sigma_{y}^{2}\), \(\mathbf{\Sigma}_{x}\); observations \(\hat{\mathbf{S}}_{1:J}\), \(\hat{\mathbf{z}}_{1:J}\); noise variances \(\sigma_{s}^{2}\), \(\sigma_{z}^{2}\); proposal parameters \(\alpha\), \(\sigma_{q}^{2}\); hyperparameters \(a\), \(b\), \(\kappa\), \(\mathbf{\Lambda}\), \(\mathbf{m}\), \(\mathbf{C}\).
Output: New sample of \(\mathbf{\Sigma}_{x}\), \(\mathbf{S}_{1:J}\), \(\sigma_{y}^{2}\), \(\mathbf{\theta}\).
1. Sample \(\mathbf{\Sigma}_{x}\) using (11).
2. for \(j=1,2,\ldots,J\) do
3.     Update \(\mathbf{S}_{j}\) via an MH move targeting \(p(\mathbf{S}_{j}|\hat{\mathbf{S}}_{j},\hat{\mathbf{z}}_{j},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{\theta})\).
4. Sample \(\mathbf{\theta}\) using (12).
5. Update \(\sigma_{y}^{2}\) via an MH move targeting \(p(\sigma_{y}^{2}|\mathbf{\theta},\mathbf{S}_{1:J},\hat{\mathbf{z}}_{1:J})\).
```
**Algorithm 1** MCMC-normalX - one iteration
_Remark 1_.: Admittedly, a potential concern is the conflict between the normality and boundedness assumptions (both for \(\mathbf{x}\) and \(y\)). However, we also note that collected data often happen to have some natural boundaries (which can be exploited to determine the sensitivity of the shared statistics), and yet the normal distribution is still used for modelling and subsequent inference, mainly for the sake of tractability. With the normality assumption, one can implement computationally efficient algorithms at the expense of minor modelling inaccuracies. While we acknowledge the methodologies in Alparslan and Yildirim (2022, Section 4.2) and Ju et al. (2022) that can correctly incorporate the effect of truncation into inference, we remark that those methods pay for exactness by having \(\mathcal{O}(n)\) computational complexity per iteration.
### Features with a General Distribution
The normality assumption for the \(\mathbf{x}_{i}\)'s in Section 4.1 may not be adequate for some data sets. Moreover, when \(d\) is large, updating the \(\mathbf{S}_{j}\)'s can be the bottleneck of MCMC-normalX in Algorithm 1 in terms of computation time and convergence. We propose two algorithms to address both of those concerns. As it turns out, those algorithms provide accurate estimations even for the case of normally distributed features; see Section 5.1.
Our approach for \(\mathbf{x}_{i}\)'s with a general distribution is based on estimating \(\mathbf{S}_{j}\)'s from the beginning, using some principled estimation method, and fixing \(\mathbf{S}_{j}\)'s to those estimates during the whole course of the inference procedure. In that way, we obtain a faster MCMC algorithm at the expense of targeting an approximate posterior distribution. Moreover, we have observed in our experiments that this variant is quite competitive in terms of accuracy, especially when the total number of nodes \(J\) increases. We call this variant MCMC-fixedS and present it in Algorithm 2.
As for estimating the \(\mathbf{S}_{j}\)'s, one could simply consider taking the privately shared \(\hat{\mathbf{S}}_{j}\) as an estimator for \(\mathbf{S}_{j}\), but \(\hat{\mathbf{S}}_{j}\) is not necessarily a positive (semi-)definite matrix. Instead, we propose the nearest positive semi-definite matrix to \(\hat{\mathbf{S}}_{j}\) in terms of the Frobenius norm as the estimator of \(\mathbf{S}_{j}\). (The nearest positive _definite_ matrix to \(\hat{\mathbf{S}}_{j}\) does not exist.) To find the nearest positive semi-definite matrix, we follow Higham (1988) and apply the following procedure for each \(j=1,\ldots,J\): (i) Calculate the eigendecomposition \(\hat{\mathbf{S}}_{j}=\mathbf{E}\mathbf{D}\mathbf{E}^{T}\), where \(\mathbf{E}\) is a matrix of eigenvectors, and \(\mathbf{D}\) is a diagonal matrix consisting of the eigenvalues \(\lambda_{i}\). (ii) The nearest symmetric positive semi-definite matrix is \(\widetilde{\mathbf{S}}_{j}=\mathbf{E}\mathbf{D}_{+}\mathbf{E}^{T}\), where \(\mathbf{D}_{+}\) is a diagonal matrix with \(\mathbf{D}_{+}(i,i)=\max\{\mathbf{D}(i,i),0\}\).
Note that \(\widetilde{\mathbf{S}}_{j}\) found above is the maximum likelihood estimator of \(\mathbf{S}_{j}\) given \(\hat{\mathbf{S}}_{j}\) (over the set of positive semi-definite matrices) since the conditional distribution of \(\hat{\mathbf{S}}_{j}\) given \(\mathbf{S}_{j}\) is a normal
distribution with mean \(\mathbf{S}_{j}\).
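The projection is a few lines in Python; the helper name is ours.

```python
import numpy as np

def nearest_psd(S_hat):
    """Project a symmetric matrix onto the PSD cone (Frobenius norm),
    following Higham (1988): clip negative eigenvalues at zero."""
    lam, E = np.linalg.eigh(S_hat)        # S_hat is symmetric by construction
    return (E * np.maximum(lam, 0.0)) @ E.T
```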
```
Input: Current values of \(\mathbf{\theta}\), \(\sigma_{y}^{2}\); estimates \(\widetilde{\mathbf{S}}_{1:J}\); observations \(\hat{\mathbf{z}}_{1:J}\); noise variance \(\sigma_{z}^{2}\); hyperparameters \(a\), \(b\), \(\mathbf{m}\), \(\mathbf{C}\).
Output: New sample of \(\sigma_{y}^{2}\), \(\mathbf{\theta}\).
1. Use \(\mathbf{S}_{1:J}=\widetilde{\mathbf{S}}_{1:J}\) throughout.
2. Sample \(\mathbf{\theta}\) using (12).
3. Update \(\sigma_{y}^{2}\) via an MH move targeting \(p(\sigma_{y}^{2}|\mathbf{\theta},\mathbf{S}_{1:J},\hat{\mathbf{z}}_{1:J})\).
```
**Algorithm 2** MCMC-fixedS - one iteration
```
Input: \(\hat{\mathbf{S}}_{1:J}\), \(\hat{\mathbf{z}}_{1:J}\); noise variance \(\sigma_{z}^{2}\); estimate \(\tilde{\sigma}_{y}^{2}\) of \(\sigma_{y}^{2}\); hyperparameters \(\mathbf{m}\), \(\mathbf{C}\).
Output: Estimate \(\hat{\mathbf{\theta}}\).
1. for \(j=1,2,\ldots,J\) do
2.     Calculate the estimate \(\widetilde{\mathbf{S}}_{j}\) for \(\mathbf{S}_{j}\) using \(\hat{\mathbf{S}}_{j}\).
3.     Calculate \(\mathbf{\Sigma}_{j}=\widetilde{\mathbf{S}}_{j}(\tilde{\sigma}_{y}^{2}\widetilde{\mathbf{S}}_{j}+\sigma_{z}^{2}\mathbf{I})^{-1}\widetilde{\mathbf{S}}_{j}\).
4.     Calculate \(\mathbf{m}_{j}=\widetilde{\mathbf{S}}_{j}(\tilde{\sigma}_{y}^{2}\widetilde{\mathbf{S}}_{j}+\sigma_{z}^{2}\mathbf{I})^{-1}\hat{\mathbf{z}}_{j}\).
5. return Posterior moments of \(\mathbf{\theta}\): \(\mathbf{\Sigma}_{\text{post}}^{-1}=\sum_{j=1}^{J}\mathbf{\Sigma}_{j}+\mathbf{C}^{-1}\), \(\mathbf{m}_{\text{post}}=\mathbf{\Sigma}_{\text{post}}\left(\mathbf{C}^{-1}\mathbf{m}+\sum_{j=1}^{J}\mathbf{m}_{j}\right)\).
```
**Algorithm 3** Bayes-fixedS-fast
MCMC-fixedS in Algorithm 2 is faster than MCMC-normalX in Algorithm 1, since it avoids the step that updates the \(\mathbf{S}_{j}\)'s, which constitutes the main computational burden of Algorithm 1. However, MCMC-fixedS can be made even faster by fixing \(\sigma_{y}^{2}\) as well. As a crude estimator, we used \(\tilde{\sigma}_{y}^{2}=\|\mathcal{Y}\|/3\) throughout the experiments. When \(\sigma_{y}^{2}\) is fixed in addition to \(\mathbf{S}_{1:J}\), we end up with a non-iterative method where the posterior distribution of \(\mathbf{\theta}\) is calculated in closed form. We call the resulting algorithm Bayes-fixedS-fast and present it in Algorithm 3. Algorithm 3 does nothing but return the moments of the posterior distribution of \(\mathbf{\theta}\) given the \(\widetilde{\mathbf{S}}_{j}\)'s, the \(\hat{\mathbf{z}}_{j}\)'s, \(\tilde{\sigma}_{y}^{2}\), and the prior parameters for \(\mathbf{\theta}\).
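A sketch of Algorithm 3 in Python (reusing the hypothetical `nearest_psd` helper above; function and variable names are ours):

```python
import numpy as np

def bayes_fixeds_fast(S_hat_list, z_hat_list, sigma_z2, sigma_y2_tilde, m, C):
    """Closed-form posterior moments of theta as in Algorithm 3."""
    d = m.shape[0]
    prec = np.linalg.inv(C)      # accumulates Sigma_post^{-1}
    rhs = prec @ m               # accumulates C^{-1} m + sum_j m_j
    for S_hat, z_hat in zip(S_hat_list, z_hat_list):
        S_t = nearest_psd(S_hat)             # nearest-PSD estimate of S_j
        B = S_t @ np.linalg.inv(sigma_y2_tilde * S_t + sigma_z2 * np.eye(d))
        prec += B @ S_t                      # Sigma_j
        rhs += B @ z_hat                     # m_j
    Sigma_post = np.linalg.inv(prec)
    return Sigma_post @ rhs, Sigma_post      # (m_post, Sigma_post)
```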
### Computational Cost
All our methods described in this section require \(\mathcal{O}(d^{3})\) computation (per iteration for the iterative ones in Algorithms 1 and 2, or as a whole for the fast version in Algorithm 3) since they deal with \(d\times d\) matrices. In contrast, as Bernstein and Sheldon (2019) apply CLT to the vector \([\mathbf{S},\mathbf{z},\mathbf{y}^{T}\mathbf{y}]\), their methods deal with covariance matrices of size \((d^{2}+d+1)\)_explicitly_, which leads to \(\mathcal{O}(d^{6})\) computation per MCMC iteration. For even moderate \(d\), this computational difference becomes dramatic and the latter may be prohibitive. Moreover, the complexity of our methods does not depend on \(n\). This is in contrast to the \(\mathcal{O}(n)\) complexity of general-purpose methods, such as Alparslan and Yildirim (2022, Section 4.3) and Ju et al. (2022), that can be applied to linear regression.
### Extensions
We mention two other variants of our methodology, deferring the details to Appendix B.
Another solution for dealing with non-normal \(P_{x}\) could be to average the feature vectors in \(\mathbf{X}\) (and the corresponding response variables in \(\mathbf{y}\)), so that the averaged rows of \(\mathbf{X}\) can be modelled as approximately normal, due to CLT. This enables using the methods devised for normally distributed features. For the details of this approach, see Appendix B.1.
Secondly, if the features are normally distributed but the data are not centred, we need to include the intercept parameter, which corresponds to appending \(\mathbf{x}_{i}\) with a one from the left, and MCMC-normalX does not directly apply. In that case, we can modify the hierarchical model to accommodate the non-centralised features and the intercept parameter and still benefit from the sampling techniques involved in MCMC-normalX in Algorithm 1. Appendix B.2 contains the details of the modified hierarchical model.
## 5 Numerical Experiments
We present several numerical evaluations of the proposed methods, MCMC-normalX, MCMC-fixedS, and Bayes-fixedS-fast with simulated and real data. We compare our algorithms with two methods: adaSSP of Wang (2018) and the MCMC method of Bernstein and Sheldon (2019) for differentially private linear regression that we call MCMC-B&S. Note that adaSSP and MCMC-B&S are originally proposed for the non-distributed setting, that is, \(J=1\). For a comprehensive comparison, we have implemented their extensions for \(J\geq 1\). The details of those extensions are provided in Appendix C. In particular, we have carefully generalised the model in Bernstein and Sheldon (2019) for \(J\geq 1\) similarly as we have done for our model in Section 3.2. What we call MCMC-B&S is the adaptation of Bernstein and Sheldon (2019, Algorithm 1) for this generalised model (and \((\epsilon,\delta)\)-DP). The code to replicate all of the experiments in this section can be found at [https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git](https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git).
### Experiments with Simulated Data
We have considered two different configurations, \((n=10^{5},d=2)\) and \((n=10^{5},d=5)\), for the problem size. For each \((n,d)\), we have simulated the data as follows: We have generated \(\mathbf{\theta}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\), \(\mathbf{x}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{x})\) where \(\mathbf{\Sigma}_{x}\sim\mathcal{IW}(\mathbf{\Lambda},\kappa)\) with \(\kappa=d+1\) and selected the scale matrix randomly as \(\mathbf{\Lambda}=\mathbf{V}^{T}\mathbf{V}\), where \(\mathbf{V}\) is a \(d\times d\) matrix of i.i.d. variables from \(\mathcal{N}(0,1)\). The response variables \(\mathbf{y}\) have been generated with \(\sigma_{y}^{2}=1\). For inference, we have used the same \(\mathbf{\Lambda}\), \(\kappa\) as above and \(a=20\), \(b=0.5\), \(\mathbf{m}=\mathbf{0}_{d\times 1}\), \(\mathbf{C}=(a-1)/b\mathbf{I}_{d}\) for the other hyperparameters.
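For reference, this data-generation recipe might be coded as follows (the seed is arbitrary):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
n, d = 10**5, 5
V = rng.standard_normal((d, d))
Lam = V.T @ V                                    # random scale matrix
Sigma_x = invwishart.rvs(df=d + 1, scale=Lam, random_state=rng)
theta = rng.standard_normal(d)                   # theta ~ N(0, I_d)
X = rng.multivariate_normal(np.zeros(d), Sigma_x, size=n)
y = X @ theta + rng.standard_normal(n)           # sigma_y^2 = 1
```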
We have evaluated the methods at all combinations of \(J\in\{1,5,10\}\) and \(\epsilon\in\{0.1,0.2,0.5,1,2,5,10\}\). All the MCMC algorithms have been run for \(10^{4}\) iterations. For each \((J,\epsilon)\) pair, we have tried each method \(50\) times (each with different noisy observations) to obtain average performances.
For performance metrics, we have looked at the mean squared errors (MSE) of (i) the estimates \(\hat{\mathbf{\theta}}\) and (ii) the predictions \(\hat{y}(\mathbf{x}_{\text{test}})\) generated by the methods. For the Bayesian methods, \(\hat{\mathbf{\theta}}\) is taken as the posterior mean, which can be numerically estimated for the MCMC algorithms. For prediction performance, we have calculated \(\mathbb{E}[\hat{y}(\mathbf{x}_{\text{test}})-y_{\text{test}}]^{2}\). For the Bayesian methods, \(\hat{y}(\mathbf{x}_{\text{test}})\) is the posterior predictive expectation of \(y_{\text{test}}\) at \(\mathbf{x}_{\text{test}}\). For adaSSP, we simply take \(\hat{y}(\mathbf{x}_{\text{test}})=\mathbf{x}_{\text{test}}^{T}\hat{\mathbf{\theta}}\).
The results are summarised in Figure 2. We observe that MCMC-fixedS and Bayes-fixedS-fast outperform adaSSP and MCMC-B&S in almost all cases both in terms of estimation and prediction. Comparing the full-scale algorithms MCMC-normalX and MCMC-B&S (that involve updates of \(\mathbf{S}\)), we observe a clear advantage of MCMC-normalX at \(d=2\), but MCMC-B&S becomes more competitive at \(d=5\). This can be attributed to the fact that MCMC-B&S requires the extra statistic \(\mathbf{y}^{T}\mathbf{y}\), unlike MCMC-normalX, which causes MCMC-B&S to use more noisy statistics. This difference becomes more significant at small \(d\), where the relative effect of the presence of \(\mathbf{y}^{T}\mathbf{y}\) on the sensitivity is more significant. Finally, all methods improve as \(\epsilon\) grows, which is expected.
We also compare the computation times of the MCMC algorithms MCMC-normalX, MCMC-fixedS, and MCMC-B&S. Figure 3 shows the run-times of the algorithms vs \(d\). The drastic difference in computational loads explained in Section 4.3 is also visible in the figure. While MCMC-B&S may improve in terms of accuracy as \(d\) increases, its \(\mathcal{O}(d^{6})\) complexity dramatically slows it down.
Figure 2: Averaged prediction and estimation performances (over 50 runs). Top row: \(n=10^{5},d=2\), Bottom row: \(n=10^{5},d=5\).

Figure 3: Run times per iteration for MCMC algorithms
### Experiments with Real Data
For the real data case, we have used four different data sets from the UCI Machine Learning Repository. We have disregarded the columns containing string data or key values (ID, name, date, _etc._), and we have taken the rightmost column as \(\mathbf{y}\). The finalised data sets are summarised below.

\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**data set** & \(n\) & \(d\) & **hyperlink** \\ \hline
power plant energy & 7655 & 4 & view link \\
bike sharing & 13904 & 14 & view link \\
air quality & 7486 & 12 & view link \\
3d road & 347900 & 3 & view link \\ \hline \hline
\end{tabular}
\end{table}
For prediction, we have taken 80% of the data for training and the rest for testing. We present the average prediction performances (out of 50 runs) in Table 1 for each data set and \(J\), with \(\epsilon=1\). We observe that the prediction performances of the compared methods are close, while MCMC-fixedS and Bayes-fixedS-fast are arguably the most stable ones. When \(J>1\) (the distributed data setting), those two methods beat adaSSP and MCMC-B&S by a clearer margin.

Table 1: Averaged prediction performances (over 50 runs) for the real datasets - \(\epsilon=1\)
## 6 Conclusion
We propose a novel Bayesian inference framework, with MCMC as its main workhorse, for a differentially private distributed linear regression setting where the data are partitioned among the data holders. We provide several Bayesian inference algorithms suited to the developed hierarchical model for linear regression. One of those algorithms may be preferred over another depending on the computational budget, the model specifics, or how much is known about the underlying statistics of the data. We exploit the conditional structure between the summary statistics of linear regression, given in Proposition 1, which leads to feasible algorithms with computational advantages over their competitors. The numerical experiments show that the proposed methods are competitive with their state-of-the-art alternatives in terms of accuracy.
The extensions mentioned in Section 4.4 indicate potential future directions. There is also room for improvement of MCMC-normalX. We chose the most common MH moves to update \(\sigma_{y}^{2}\) and the \(\mathbf{S}_{j}\)'s, without paying much attention to their efficiencies. Especially for large \(d\), more advanced techniques, such as those stemming from Hamiltonian Monte Carlo (Neal, 2001) or pseudo-marginal MCMC (Andrieu and Roberts, 2009), may be employed to facilitate the mixing of the algorithm.
## 7 Acknowledgement
The study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) ARDEB Grant No 120E534.
Supplementary material: The code to replicate the experiments in Section 5 can be found at [https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git](https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git).
| We propose a novel Bayesian inference framework applicable to distributed differentially private linear regression. We consider a setting in which multiple parties hold portions of the data and each shares certain summary statistics of its portion while preserving privacy. We develop a novel generative statistical model for the statistics shared under privacy-preserving noise, which exploits a useful distributional relation between the summary statistics of linear regression. Bayesian estimation of the regression coefficients is carried out mainly with Markov chain Monte Carlo algorithms, and a fast version that performs Bayesian estimation in a single iteration is also provided. The proposed methods are computationally more efficient than their competitors. We present numerical results on both real and simulated data, which show that the proposed algorithms provide well-rounded estimation and prediction.